Privacy concerns with navigator.pay

2012-08-16 Thread Jonas Sicking
Hi All,

While looking at the navigator.pay API, there's one privacy concern that I had.

As the API is currently intended to be used, the flow is something like
the following:

1. User visits website.
2. Website calls navigator.pay and provides a JWT-encoded request
which contains information about what is being paid for, how much
money is being paid, the currency, etc. The request is signed with the
developer's private key (which means that it must be generated
server-side).
3. Gaia automatically sends the JWT-encoded data to BlueVia. This
request includes user-identifying information (except for the first
time the user uses BlueVia payments).
4. BlueVia returns an HTML page which contains UI describing to
the user the information encoded in the JWT request, i.e. the details
of the payment.
5. The user clicks an "accept payment" button.
6. Gaia displays UI which allows the user to log in to BlueVia.
7. Once the user has logged in, BlueVia sends a server-to-server
request to the application server indicating that payment has been
received.
8. The webpage is notified that the payment went through.
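
To make step 2 concrete, here is a minimal sketch of the website side;
the exact shape of navigator.pay (arguments, callbacks) is an assumption
on my part, not the finalized API:

// Hypothetical website-side code for step 2. The signed JWT has to come
// from the server, since the developer's private key can't live in page
// script.
function buyItem(itemId) {
  var xhr = new XMLHttpRequest();
  // Ask our server to generate and sign a payment-request JWT.
  xhr.open("POST", "/payment-request?item=" + encodeURIComponent(itemId));
  xhr.onload = function() {
    var jwt = xhr.responseText;
    // Assumed API shape: navigator.pay() takes the signed JWT plus
    // success/error callbacks. The real signature may differ.
    navigator.pay(jwt, function() {
      console.log("payment flow completed");
    }, function(err) {
      console.log("payment failed or was cancelled: " + err);
    });
  };
  xhr.send();
}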

My concern here is step 3. It seems like a privacy leak to me that
with no action from the user, details about something that the user is
considering buying, or that the user accidentally clicked, are sent to
BlueVia. Just because I trust BlueVia with handling my money doesn't
mean that I'm comfortable with BlueVia knowing which websites I visit.
If I decide that I actually want to make a payment to the website
using BlueVia, then obviously I have to let BlueVia know, but until
then it doesn't seem like something that we should be telling BlueVia
about.

It seems like we can get a very similar user experience with the same
number of clicks using a flow like:

1. User visits website.
2. Website calls navigator.pay and provides a JWT-encoded request
which contains information about what is being paid for, how much
money is being paid, the currency, etc. The request is signed with the
developer's private key (which means that it must be generated
server-side).
3. Gaia decodes the JWT data and displays the information encoded in
the JWT request as well as a button that says "Pay with BlueVia".
4. The user clicks the "Pay with BlueVia" button.
5. Gaia displays UI which allows the user to log in to BlueVia.
6. Once the user has logged in, the JWT data is sent to BlueVia.
7. BlueVia sends a server-to-server request to the application server
indicating that payment has been received.
8. The webpage is notified that the payment went through.
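
For concreteness, here is a minimal Node.js sketch of the server-side
signing in step 2, assuming an HS256 (HMAC-SHA256) JWT and a shared
developer secret; the payload field names are illustrative, not any
payment provider's actual schema:

var crypto = require("crypto");

// base64url-encode a string (JWT uses base64url without '=' padding)
function b64url(s) {
  return Buffer.from(s).toString("base64")
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

function signPaymentRequest(secret) {
  var header  = { alg: "HS256", typ: "JWT" };
  var payload = {
    iss: "my-app-id",                            // assumed issuer field
    price: "0.99",                               // illustrative values
    currency: "USD",
    description: "Bowl of virtual chicken soup"  // human-readable
  };
  var input = b64url(JSON.stringify(header)) + "." +
              b64url(JSON.stringify(payload));
  var sig = crypto.createHmac("sha256", secret)
    .update(input).digest("base64")
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
  return input + "." + sig;  // header.payload.signature
}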

Did we do a privacy review of this API? Did this come up during that review?

/ Jonas


Re: [b2g] Privacy concerns with navigator.pay

2012-08-16 Thread Jonas Sicking
On Thu, Aug 16, 2012 at 3:06 AM, DANIEL JESUS COLOMA BAIGES
dcol...@tid.es wrote:
 Hi,

  The bottom line is that we need a decision today, whatever that decision is. 
 We are moving back and forth continuously (the protocol was suggested late 
 June) and that is something we cannot afford.

Given that we haven't even done a security review of the API, I don't
see how we could possibly commit to an API today.

It would also be very helpful to know deadlines like this earlier.

/ Jonas


Re: [b2g] Privacy concerns with navigator.pay

2012-08-16 Thread Jonas Sicking
On Thu, Aug 16, 2012 at 1:29 AM, Andreas Gal g...@mozilla.com wrote:

 On Aug 16, 2012, at 1:13 AM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 While looking at the navigator.pay API, there's one privacy concern that I 
 had.

 As the API is currently intended to be used, the flow is something like
 the following:

 1. User visits website.
 2. Website calls navigator.pay and provides a JWT-encoded request
 which contains information about what is being paid for, how much
 money is being paid, the currency, etc. The request is signed with the
 developer's private key (which means that it must be generated
 server-side).
 3. Gaia automatically sends the JWT-encoded data to BlueVia. This
 request includes user-identifying information (except for the first
 time the user uses BlueVia payments).

 Two comments here. First, as a payment agent BlueVia has a certain trusted 
 position. Second, the user explicitly opted into BlueVia being registered as 
 a payment method.

As I said below, just because a user is comfortable with doing
payments through BlueVia doesn't mean he/she is comfortable sending
arbitrary browsing information to them.

I'm personally fine with funneling most of my payments through both
the Chase bank and the Visa credit card company. But I wouldn't
trust either of them not to try to use my browsing history to make
money in ways that I'm not ok with, for example by sending that
information to third parties.

 4. BlueVia returns an HTML page which contains UI describing to
 the user the information encoded in the JWT request, i.e. the details
 of the payment.
 5. The user clicks an "accept payment" button.
 6. Gaia displays UI which allows the user to log in to BlueVia.

 Note that this is automatic (e.g. via BrowserID).

Indeed. I don't have any issues with this step. It's exactly the same
in my alternative proposal.

 7. Once the user has logged in, BlueVia sends a server-to-server
 request to the application server indicating that payment has been
 received.
 8. The webpage is notified that the payment went through.

 My concern here is step 3. It seems like a privacy leak to me that
 with no action from the user, details about something that the user is
 considering buying, or that the user accidentally clicked, are sent to
 BlueVia. Just because I trust BlueVia with handling my money doesn't
 mean that I'm comfortable with BlueVia knowing which websites I visit.
 If I decide that I actually want to make a payment to the website
 using BlueVia, then obviously I have to let BlueVia know, but until
 then it doesn't seem like something that we should be telling BlueVia
 about.

 Note that for this to be a privacy leak, the website has to intentionally 
 request a payment, or in other words, the website has to intentionally 
 leak to BlueVia that it's being visited. There are 
 ample other ways of doing so that are much easier. I don't see this as 
 being worse than allowing sites to include images from other origins.

The problem is that if we define the navigator.pay API as "only call
this function once you have made absolutely sure that the user is
willing to pay for this item/service", then websites
would have to be responsible for popping up an extra dialog asking
users "are you sure that you want to pay for this?". This is exactly
the type of extra step that I think we're trying to avoid.

It's to our users' benefit if we can tell web developers that we will
ensure not to share the data with anyone without consulting the user
first.

 Before people start throwing out Mozilla proxying payment requests and shims 
 as a rescue here, that approach is roughly as bad. If we consider this a 
 leak, it leaks to Mozilla, instead of BlueVia.

This is not what I'm proposing.

 It seems like we can get a very similar user experience with the same
 number of clicks using a flow like:

 1. User visits website.
 2. Website calls navigator.pay and provides a JWT-encoded request
 which contains information about what is being paid for, how much
 money is being paid, the currency, etc. The request is signed with the
 developer's private key (which means that it must be generated
 server-side).
 3. Gaia decodes the JWT data and displays the information encoded in
 the JWT request as well as a button that says "Pay with BlueVia".
 4. The user clicks the "Pay with BlueVia" button.

 I guess this is equivalent to the planned drop-down box. I have no opinion on 
 the UX here.

I'm happy to leave this up to UX too. I just wanted to demonstrate
that it's possible to create UX which doesn't require additional
clicks.

/ Jonas


Re: [b2g] Privacy concerns with navigator.pay

2012-08-16 Thread Jonas Sicking
On Thu, Aug 16, 2012 at 2:16 AM, Fernando Jiménez ferjmor...@gmail.com wrote:
 It seems like we can get a very similar user experience with the same
 number of clicks using a flow like:

 1. User visits website.
 2. Website calls navigator.pay and provides a JWT-encoded request
 which contains information about what is being paid for, how much
 money is being paid, the currency, etc. The request is signed with the
 developer's private key (which means that it must be generated
 server-side).
 3. Gaia decodes the JWT data and displays the information encoded in
 the JWT request as well as a button that says "Pay with BlueVia".

 I agree that this would be nice to have and, as I mentioned before, it should 
 be easy to implement. Anyway, I have a few comments about this.

 First of all, the JWT content is defined by the payment provider. For the 
 BlueVia use case, the JWT contains human-readable information about the 
 digital good being sold. But other payment providers might decide not to 
 request that kind of information within the JWT. Actually, BlueVia only needs 
 the application identifier. That means that we might end up showing 
 non-human-understandable information, something like "You are about to pay 
 for 12345677". To solve this, Gecko might force the JWT to have at least a 
 clear product price and a human-readable description.

Don't we have the freedom to impose which JWTs we are willing to
funnel between the website and the payment provider? I.e. can't we
require that the JWT contains human-readable descriptions?
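
As a sketch of the kind of check Gecko could enforce before funneling a
request to the provider (the required field names here are hypothetical,
not a defined schema):

// Reject payment JWTs that lack a human-readable description and a
// clear price. Field names are assumptions for illustration.
function validatePaymentPayload(payload) {
  if (typeof payload.description !== "string" ||
      payload.description.trim() === "") {
    throw new Error("payment request lacks a human-readable description");
  }
  if (!payload.price || !payload.currency) {
    throw new Error("payment request lacks a clear price");
  }
  return payload;
}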

 Apart from that, the JWT request is signed with the developer's key, *only* 
 shared by the developer and the payment provider. Gecko doesn't know anything 
 about the developer's key, so the JWT can't be verified before showing its 
 content to the user. I am not saying that this is a problem, but I just want 
 to be sure that you are aware of this.

Yup. I'm aware of this. It's definitely somewhat worrying, but it
seems to me like it should work as long as the JWT request is signed
rather than encrypted with the developer key.

It could result in a situation where the user is shown a UI
saying "a payment of $10 is requested for this bowl of virtual
chicken soup", but once the user clicks "Pay with BlueVia" and logs
in, he/she is faced with a dialog saying that the payment request is
invalid.

Not ideal, but also likely not going to happen terribly often since
there is no incentive for the website to do so.
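
This works because a signed (not encrypted) JWT carries its payload as
plain base64url-encoded JSON, so Gaia can display it without knowing the
developer key; only the signature check has to be deferred to the
payment provider. A sketch:

// Decode the payload of a signed JWT without verifying the signature.
// Safe for display purposes only; the payment provider still has to
// verify the signature before any money moves.
function decodeJwtPayload(jwt) {
  var parts = jwt.split(".");
  if (parts.length !== 3) throw new Error("not a JWT");
  var b64 = parts[1].replace(/-/g, "+").replace(/_/g, "/");
  while (b64.length % 4 !== 0) b64 += "=";  // restore base64 padding
  return JSON.parse(atob(b64));
}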

 4. The user clicks the "Pay with BlueVia" button.

 I guess this is equivalent to the planned drop-down box. I have no opinion 
 on the UX here.

 5. Gaia displays UI which allows the user to log in to BlueVia.
 6. Once the user has logged in, the JWT data is sent to BlueVia.
 7. BlueVia sends a server-to-server request to the application server
 indicating that payment has been received.
 8. The webpage is notified that the payment went through.


 Why would the user need to be logged in before sending any JWT data to the 
 payment provider? That would mean that Gecko would also need to know about 
 how the payment provider handles user identification (for ex. login endpoint).

 If we send the JWT data and no user information, the payment provider would 
 only know that someone anonymous wants to buy something. Isn't that ok in 
 terms of privacy?

I'm a bit confused as to what you are proposing since you are
commenting on the user-flow in my counter proposal. Did you intend for
this comment to be in response to step 3 in the original flow?

If so, yes, it's true that we could do step 3 without sending
user-identifying information to BlueVia. But that wasn't the flow that
was described to me when I asked how the proposed API worked. If we make
sure not to send any cookies to BlueVia in step 3 of the original flow,
then that definitely limits the privacy leak. But it would also mean
adding platform support for loading iframes without sending cookies,
something we currently don't have.

The other problem is that it's relatively easy to fingerprint people
even if we don't send cookies, especially once you open an iframe
which lets scripts run. You can read about it here:
https://panopticlick.eff.org/. Hence it's generally better not to send
data to third parties than to rely on them not being able to identify
who is sending the data.

But I definitely agree that we should keep in mind the option of
keeping the original flow, but not sending user-identifying
information in step 3.

 Did we do a privacy review of this API? Did this come up during that review?

 I don't think that's completed yet.

 AFAIK, no, we didn't do a privacy review yet. We did a security review so far.

If it turns out that I'm the only person worried about this privacy
leak, then we should absolutely go with the current API. I mostly
started this thread to get input from the security and privacy teams
to see if this was something that worried them. If it doesn't, then we
shouldn't let this issue block us.

/ Jonas

Re: [b2g] Privacy concerns with navigator.pay

2012-08-16 Thread Jonas Sicking
On Thu, Aug 16, 2012 at 7:52 AM, Fernando Jiménez ferjmor...@gmail.com wrote:
 Yup. I'm aware of this. It's definitely somewhat worrying, but it
 seems to me like it should work as long as the JWT request is signed
 rather than encrypted with the developer key.

 It could result in a situation where the user is shown a UI
 saying "a payment of $10 is requested for this bowl of virtual
 chicken soup", but once the user clicks "Pay with BlueVia" and logs
 in, he/she is faced with a dialog saying that the payment request is
 invalid.

 Not ideal, but also likely not going to happen terribly often since
 there is no incentive for the website to do so.

 Indeed. The final payment provider screen would contain the real 
 information about the current digital good being sold, so the user can 
 compare it with the unverified information shown by Gecko in the previous 
 step.

Well, if the signature doesn't match, which is the only part that we
can't verify in Gaia, then the user will get a message saying that the
payment request was invalid. So there's no need to compare anything; the
user's money should never have been at risk.

 I'm a bit confused as to what you are proposing since you are
 commenting on the user-flow in my counter proposal. Did you intend for
 this comment to be in response to step 3 in the original flow?

 Sorry, I was referring to step 6 of your proposal. As I said, IMHO where to 
 ask for a user login should be up to the payment provider. It shouldn't be 
 Gaia loading the payment provider login screen, but the payment flow (that 
 would contain the login screen) in general.

Then I'm not really following what you were saying. The user
experience in my proposal is exactly the same as the user experience in
the original flow once the user has chosen BlueVia as the payment
provider.

 If so, yes, it's true that we could do step 3 without sending
 user-identifying information to BlueVia. But that wasn't the flow that
 was described to me when I asked how the proposed API worked. If we make
 sure not to send any cookies to BlueVia in step 3 of the original flow,
 then that definitely limits the privacy leak. But it would also mean
 adding platform support for loading iframes without sending cookies,
 something we currently don't have.

 The other problem is that it's relatively easy to fingerprint people
 even if we don't send cookies, especially once you open an iframe
 which lets scripts run. You can read about it here:
 https://panopticlick.eff.org/. Hence it's generally better not to send
 data to third parties than to rely on them not being able to identify
 who is sending the data.

 But I definitely agree that we should keep in mind the option of
 keeping the original flow, but not sending user-identifying
 information in step 3.

 Well, we would keep the original flow with the addition of a confirmation 
 screen shown by Gecko.

Note that I'm not proposing an additional confirmation by Gecko. I'm
proposing that we replace the confirmation UI displayed by BlueVia in
step 3 of the original flow, with a confirmation UI displayed by
Gecko. So from the user's point of view it's almost exactly the same
behavior.

 If it turns out that I'm the only person worried about this privacy
 leak, then we should absolutely go with the current API. I mostly
 started this thread to get input from the security and privacy teams
 to see if this was something that worried them. If it doesn't, then we
 shouldn't let this issue block us.

 Actually, you are not the only one, as I am also concerned about this matter, 
 which I must confess I didn't think about until you mentioned it. I still 
 think that this is mostly a payment provider identity protocol issue, but I 
 agree that Gecko should probably ask for the user's confirmation before 
 sending any user data, even if he authorized the payment provider to 
 automatically log him in. Anyway, even if we want to implement this, it may 
 not be a reason to block the basic parts of the implementation. But that's 
 not my decision at all :)

 I just need confirmation to start developing the fix for this, which, I 
 repeat, should not be hard to implement.

Sounds good! Let me know if you have any questions. I'll try to be
available late tonight again so that we can rip through any questions
quickly.

/ Jonas


New Security Details document

2012-08-08 Thread Jonas Sicking
Hi All,

bcc'ing broadly here since I've attempted to answer questions from a
lot of different people.

I've written a new document which describes the OpenWebApps/B2G model
in more detail: https://wiki.mozilla.org/Apps/SecurityDetails

Please post any replies to dev-webapps in order to keep the discussion
coherent. There are still some sections missing towards the end, but
they should mostly cover implementation details. My hope is to fill
those out too very soon, but the document already seems useful so I
wanted to start pointing people to it.

I'm very aware that it's a big document, but there have been a lot of
questions and confusion, so there's a lot of information to get out.
I've tried to put useful headings on the various sections so that
people can skip to the sections that interest them.

/ Jonas


Re: [Security Reviews] Week of 21-Apr

2012-05-21 Thread Jonas Sicking
Should all of these dates say May instead of Apr?

/ Jonas

On Mon, May 21, 2012 at 5:38 AM, Curtis Koenig curt...@mozilla.com wrote:

  Security Reviews

  Date / Time                Item
  Mon Apr 21 / 13:00 PST     Network Monitor
  Wed Apr 23 / 13:00 PST     Expose a client TCP socket/UDP datagram API to web applications
  Thu Apr 24 / 10:00 PST     Script Debugger
  Fri Apr 25 / 10:00 AM PST  Land module loader to Firefox

  *Calendar and Meeting details*

 --
 /Curtis






Re: [b2g] WebAPI Security Discussion: Vibration API

2012-05-03 Thread Jonas Sicking
On Wed, Apr 18, 2012 at 9:32 PM, Adrienne Porter Felt a...@berkeley.edu wrote:
 Could it be limited to foreground content that is also the top-level
 window?  That way ads in iframes won't be able to annoy the user as much
 (and websites can ensure that ads won't be annoying by putting them in
 frames).

I feel like vibration is very similar to audio. I'm fairly sure there
are websites that have a policy that none of the ads they show
are allowed to play audio. Unfortunately this can't be
enforced through technical means right now, with the result being that
ad agencies sometimes break the policies.

I see the desire to disable vibration for cross-origin iframes, but I
think it would also disable useful use cases if we did it as a blanket
policy. For example, many games on Facebook run inside an iframe. What
would be really cool is if a site had the ability to create an
iframe for an ad and say that the contents of the iframe aren't
allowed to play audio or enable the vibrator.

Alternatively we could do it the other way around and say that
cross-origin iframes disable vibration by default, but can then have
the vibrator explicitly re-enabled.

We could implement an "allow by default, but let the parent website
disable vibration" policy by extending the sandbox attribute. We could
probably do audio that way too, since sandboxes already disable plugins.
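
As a sketch of what that could look like from the parent page; note that
"allow-vibration" is NOT an existing sandbox token, it's the hypothetical
extension being proposed here:

// Sandbox an ad iframe, re-enabling only what the ad legitimately
// needs. Under the proposed extension, omitting the hypothetical
// "allow-vibration" token would keep the vibrator disabled.
var ad = document.createElement("iframe");
ad.src = "https://ads.example.com/banner.html";
ad.sandbox = "allow-scripts allow-same-origin";
document.body.appendChild(ad);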

/ Jonas


Re: [b2g] WebAPI Security Discussion: Idle API

2012-05-01 Thread Jonas Sicking
Sorry for not responding until now. Was away on vacation.

 Inherent threats:  Privacy implication - signalling multiple windows at 
 exactly the same time could correlate user identities and compromise privacy

I think there's another threat: simply monitoring whether the user
is active at the computer, which is a bit of a privacy invasion. For
example, a user might not expect that a corporate website the
user is logged in to monitors how active he/she is at the computer,
to see if he/she puts in a full day of work.

There's also another threat, which is easier to solve. The API allows
specifying how long the user has to be idle before the page is
notified. If we allow *very* short idle times, say 0.1 seconds, then
the page can basically sense each time the user presses a key. This is
easily fixed by enforcing a minimal idle time of X seconds. Given that
the main use case is to do things like notify IM apps when the user
is away from the computer, X can be cranked up fairly high (30 seconds
perhaps) without losing any important use cases.
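
A sketch of the clamping logic (written here as a wrapper, though in
practice it would live inside the implementation; the observer shape
with a "time" property in seconds mirrors the draft API and is an
assumption):

var MIN_IDLE_SECONDS = 30;  // the enforced floor suggested above

function addIdleObserverClamped(observer) {
  // Never honor thresholds below the floor, so a page can't use tiny
  // values (e.g. 0.1s) to sense individual keystrokes.
  observer.time = Math.max(observer.time || 0, MIN_IDLE_SECONDS);
  navigator.addIdleObserver(observer);
}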

 == Regular web content (unauthenticated) ==
 Use cases for unauthenticated code: Event is fired when the user is idle
 Authorization model for normal content: Implicit

I think that for normal content we might not want to allow this API at
all without a prompt. The value-to-privacy-risk ratio is pretty low
given that most apps can do just fine without access to the API.

Alternatively, we could make the Idle API simply monitor activity *on
that page* for uninstalled pages, unless there has been a prompt. That
way we're not exposing *any* new information which couldn't be gotten
through simply monitoring all UI events.

 Authorization model for installed content: Implicit

I'm less sure where this one falls. Maybe the same as normal content?

 Potential mitigations: Exact time user goes idle can be fuzzed so as to 
 reduce correlation

Yes, I definitely think we should do this. But it only addresses the
correlation issue, not the privacy leak.

 == Trusted (authenticated by publisher) ==
 Use cases for authenticated code: As per unauthenticated
 Authorization model:
 Potential mitigations:

 == Certified (vouched for by trusted 3rd party) ==
 Use cases for certified code: As per unauthenticated
 Authorization model:
 Potential mitigations:

I'm similarly unsure what to do here. I could see prompting here too,
mostly because most apps would do just fine without the ability to know
when the user is interacting with the device. At the same time, these
types of apps could potentially figure out when the screen is being
turned off anyway, which is essentially the same thing as the user
being idle (we don't have such an API right now, but I suspect we'll
end up with one).

/ Jonas


Re: WebAPI Security Discussion: Screen Orientation

2012-04-10 Thread Jonas Sicking
On Tue, Apr 10, 2012 at 4:59 PM, Lucas Adamski ladam...@mozilla.com wrote:
 Here's the first API up for discussion.  This should be pretty 
 straightforward so I hope to close out this discussion by
 end of day Thursday (PDT).

 I'd like to keep this discussion on mozilla.dev.webapps, but I'll take 
 responses on other lists over silence. :)

 Name of API: Screen Orientation
 Reference: bug 720794 bug 673922

 Brief purpose of API: Get notification when screen orientation changes as 
 well as lock the screen orientation

 Inherent threats: minor information leakage (device orientation), minor user 
 inconvenience (lock device orientation)

 Threat severity: low per https://wiki.mozilla.org/Security_Severity_Ratings

 == Regular web content (unauthenticated) ==
 Use cases for unauthenticated code: Prevent screen orientation from changing 
 when playing a game utilizing device motion

I'd also add: switching screen orientation when switching between
different parts of an app, for example when switching from UI for
browsing lists of videos to UI which plays a specific video, or
switching orientation between people facing one another playing a game
on the device.

 Authorization model for normal content: implicit for detecting orientation, 
 explicit runtime for locking orientation

I'm not sure what "explicit runtime" for locking entails here. I don't
think we want an explicit prompt when the page requests that the
orientation be locked in a particular direction. Basically it seems to
me that prompting the user for something as trivial as screen
orientation (which you can basically achieve by simply rendering your
text sideways anyway) would just lead to prompt fatigue with users.

 Authorization model for installed content: implicit for both

Agreed. For an installed application you can always exit the
application by closing it (this should be true on both mobile and
desktop). So the page can't really DoS the user by continuously
switching the screen orientation.

 Potential mitigations: Orientation should remain locked only while focused.

I think we might need more than that, especially since I'd prefer not
to have an explicit ask-for-permission prompt for screen locks, even
for uninstalled apps. What I propose is this:

1. Orientation locks only affect the locked app/page. If the user
switches tab or app, the other pages aren't affected.
2. If the page is installed content, always allow the orientation lock.
3. If the page is in fullscreen mode, always allow the orientation
lock. There is no browser UI being displayed anyway, so the page
could cause the same behavior by simply rendering text sideways. Need
to verify that this will work on mobile.
4. If the page isn't in fullscreen mode and is contained in an
iframe (or frame), never allow the lock request.
5. If the call to lock the screen orientation happens in response to a
user action, always allow the orientation lock.
6. If the call to lock the screen orientation doesn't actually require
the screen to be immediately re-oriented, always allow the
orientation lock.
7. If no other calls to change the lock orientation have happened in
the last X (say 5) seconds, allow the orientation lock. This is mostly
to cover the case of the page setting its lock during initial page
setup.

We could potentially skip rule 6 since it doesn't actually solve any
real problems, and might just be confusing to developers. The other
rules should be enough to allow all sanely written code to "just
work".
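
A sketch of the resulting allow/deny decision; the predicate names are
assumed hooks into browser internals, not real Gecko functions:

// Decide whether a lock-orientation request should be honored,
// following rules 2-7 above (rule 1 is about scoping the lock, not
// about allowing it).
function mayLockOrientation(page) {
  if (page.isInstalledApp) return true;                 // rule 2
  if (page.isFullscreen) return true;                   // rule 3
  if (page.isFramed) return false;                      // rule 4
  if (page.insideUserAction) return true;               // rule 5
  if (page.lockMatchesCurrentOrientation) return true;  // rule 6
  // rule 7: no other lock calls in the last 5 seconds
  return Date.now() - page.lastLockRequest > 5000;
}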

 == Trusted (authenticated by publisher) ==
 == Certified (vouched for by trusted 3rd party) ==

For the sake of simplicity for page authors, I think I would prefer to
keep these two the same as the unauthenticated. I agree that we could
relax constraints here, but I think that would cause more confusion
than it would add value. And for now these two groups consist of
installed content only, which basically means full rights to the API.

/ Jonas


Re: [b2g] Types of applications - Proposal and next steps

2012-04-06 Thread Jonas Sicking
On Mon, Apr 2, 2012 at 10:58 AM, Lucas Adamski ladam...@mozilla.com wrote:
 It's been a very productive discussion, though I do think we have perhaps 
 focused too much on the question of installed vs not and thereby created a 
 bit of a false dilemma.  For example, if we agree that random HTTP web 
 content should be permitted to request access to a certain set of webAPIs, 
 whether that content is installed via a manifest or not does not 
 significantly change the risk inherent in granting it that set of privileges. 
  This permits the use case of just turning a normal website into an app, 
 without having to go through significant packaging.  This is ok because that 
 app can have no more privilege than a regular web page can request.

 Now if there is a set of APIs that an app might have implicit access to, or 
 APIs that present too much risk to have completely random web content request 
 access to, then we need to have a category of application that the user can 
 make a trust decision regarding (the app or publisher) before granting such 
 privileges.  This ensures that when the user decides to trust code from X, 
 that can be a meaningful decision.

 I'd like to propose a different way of framing these categories as:

 a) unauthenticated content (typical web content): contains privileges that 
 any content can request.  Risk is generally limited to privacy, i.e. to what 
 the user chooses to share via that API at that point in time.  Safe enough 
 for any content to request at any time. (risk: low severity)

 b) authenticated content (trusted content): privileges that require the user 
 to make some explicit trust decision based upon the identity of the 
 requesting party BEFORE these privileges can be explicitly prompted for or 
 implicitly granted.  This requires code integrity as well as authenticity.  
 These are privileges that we would not want random pages prompting for 
 without the user first accepting the identity of the requesting party, and if 
 abused the risk is limited to disclosure of confidential information / 
 privacy / persistent annoyance / phishing, but not persistent system 
 compromise (risk: moderate to high severity)

 c) certified content (trusted content vouched for by 3rd party): privileges 
 that the user has to make an explicit trust decision about based upon strong 
 authentication AND vouched for by a trusted 3rd party.  One use case for 
 example are B2G APIs for implicit access to dialer and SMS.  For an app to 
 have direct access to them, it would need to be certified by the carrier or 
 manufacturer in question.  These are APIs that the average user cannot 
 realistically make a risk judgement about and/or the misuse of which can 
 result in local system compromise or direct financial impact (risk: critical 
 severity).

I had a discussion with Lucas about these groups and we came to the
conclusion that it might be good to split up the a) group into two
groups:

a1) unauthenticated content (typical web content): contains privileges
that any content can request.  Risk is generally limited to privacy,
i.e. to what the user chooses to share via that API at that point in
time.  Safe enough for any content to request at any time. (risk: low
severity)
a2) installed unauthenticated content: same as a1, but some
privileges are automatically granted. No privileges that expose the
user to additional privacy or security risk are granted automatically;
however, privileges which use device resources (hard drive space,
network bandwidth, CPU power) and privileges which could merely annoy
the user (replacing context menus, setting screen orientation) are.

There are clearly APIs which should have different UX between a1 and
a2, so it makes sense to split them into two separate groups. Also,
considering that most applications will hopefully fall into one of
these two categories, it makes sense to be more detailed in how we
treat them.

/ Jonas


Re: [b2g] OpenWebApps/B2G Security model

2012-03-16 Thread Jonas Sicking
On Sun, Mar 11, 2012 at 6:51 PM, Jim Straus jstr...@mozilla.com wrote:
 Hello Jonas -
  The problem I'm trying to solve is knowing that the app I think I'm getting 
 is the app that I'm getting.  SSL doesn't do anything to solve this, it just 
 protects the privacy on the wire, not what is actually going through it.   If 
 a store wants to assign elevated permissions of any sort, I want assurance 
 that the app they are providing the elevated permissions to is the app that 
 I'm getting.  It doesn't matter if the store is validating the app by hand 
 inspecting code, doing some automated inspection, contractual obligations, 
 the word of the developer, whatever.  If they are asserting that the 
 application is to be trusted with any elevated permissions, I don't want to 
 get something else.  Code signing doesn't tell me what an app does, just that 
 the app hasn't been modified, whether through a developer changing 
 their application, a hacker breaking into a site, or anything else.  If the 
 developer does want to update their app, I want the store to re-assert 
 that the new app should be allowed to have whatever permissions it is 
 granting, not just the developer doing it unilaterally.  I suspect that 
 stores will compete partially on price and breadth of offerings, but also on 
 their assurances that the apps they are providing are safe.
  Actually, in thinking about it, I think that stores that sell apps that come 
 from a third party server are more secure, not less, as a hacker would have to 
 obtain the ability to sign an app and also break into the third party server 
 to effect a change.  And they would have to hack into another server to 
 affect a second app.  If a store hosts everything themselves, hacking that 
 single server and getting the ability to sign apps would expose lots of apps 
 to being hacked.
  Black listing based on scheme/host/port is probably not sufficient for an 
 organization that distributes more than one application.  This was raised in 
 a different discussion related to web apps in general.  But even if it was, 
 we may want to blacklist a particular version of an application, not the 
 application in general.  The signature provides a mechanism for this.
  I agree that removing permissions for some application infractions might be 
 a good idea.  The actual semantics of what the black list can do to to a 
 particular app can be discussed and enumerated.  But there will definitely be 
 apps that we want to completely disable (eg. if we find an app is hijacking 
 information, I don't want it running at all.)

I have to admit I've lost track of what it is that you're actually
asking for. "Knowing that the app I think I'm getting is the app that
I'm getting" depends highly on the definition of "the app".

As I've stated, I don't want to force app developers to have their
code inspected by stores, nor do I want to force stores to review
developers' code. And if a code review hasn't happened I don't see what
signing the code buys anyone.

Instead I want stores to verify that they can trust a developer
through things like contractual means, and to restrict which set of
privileges they give an app. It has also been suggested that stores
should be able to require certain technical security measures from the
app, like EV certs and/or certain CSP policies. These sound like great
ideas to me. Likewise, it would likely be a good idea to have minimum
requirements on stores, such as that they use things like EV certs and
CSP policies.

If we do this, then we can use SSL to guarantee both that the code
that is delivered to the user is the code that the developer
authored, and that the security policy the store intended to entrust
the code with is the policy that is delivered to the user.

/ Jonas


Re: [b2g] OpenWebApps/B2G Security model

2012-03-16 Thread Jonas Sicking
On Wed, Mar 14, 2012 at 2:35 PM, Lucas Adamski ladam...@mozilla.com wrote:
 My understanding is that there will be multiple app stores.  But code signing 
 has another benefit: reducing systemic risk.

 This assumes code signing and sane key management, but let's say there's a very 
 popular app with significant privileges.
 To compromise a large number of people, you'd need to:
 a) compromise the site hosting the app
 b) compromise the key signing the app (assuming you require app updates to be 
 signed with the same key)
 c) compromise or trigger the update mechanism for the app
 d) wait for updates to trickle out

 This is a tedious process that slows down exploitation, and that's no fun.

 If app authentication relies only on SSL, then you just need to pop a web 
 server (which isn't hard, really).  Everyone
 using the app gets owned simultaneously.

If we rely on only SSL we still get a), c) and d) AFAICT. Signing only
adds b). The other question is, how do you deliver the keys? It would
have to be through some mechanism other than through the web server to
add any level of security.

/ Jonas


Re: [b2g] OpenWebApps/B2G Security model

2012-03-16 Thread Jonas Sicking
On Thu, Mar 15, 2012 at 10:52 AM, Adrienne Porter Felt a...@berkeley.edu 
wrote:
 https://wiki.mozilla.org/Apps/Security#Management_.2F_granting_of_API_permissions_to_WebApps

 Under Management / granting of API permissions to WebApps, I think two
 important points are missing:

 4. User should be able to audit usage of permissions (this is different from
 viewing what permissions an app has, since that does not tell you how or
 when it is used)
 5. Apps cannot request permission to do something that is not listed in the
 manifest

Agreed on 4. For 5 I would rather say "Apps cannot request permission
to do something that is not listed in the manifest *and* that the store
hasn't granted them access to."

 I'd also like to raise the issue of what happens to permissions when
 principals interact.  Do webapps have iframes like websites?  Can they embed
 advertisements?

Yes.

  Do the advertisers then get all of the permissions?

No.

 There are two ways iframes/permissions don't mix well:

 * Child frame requests permission to do something. User thinks that the
 dialog belongs to the parent frame, accidentally grants the child frame
 access to something.

 * Parent frame belongs to an untrusted app with no privileges. It opens a
 child frame with a trusted app in it.  Let's say the child frame performs a
 privileged action as soon as it is opened, using a permanently-granted
 permission.  The untrusted parent frame has now caused some action to occur
 without the user realizing it.

I don't think we should allow trusted apps to be framed. I.e. if an
app opens a URL which belongs to a trusted app in an iframe, that URL
should run with no special permissions at all. Prompt or no prompt.

/ Jonas


Re: [b2g] OpenWebApps/B2G Security model

2012-03-12 Thread Jonas Sicking
On Sun, Mar 11, 2012 at 7:31 PM, Adrienne Porter Felt a...@berkeley.edu wrote:
  Each API which requires some sort of elevated privileges will require
  one of these capabilities. There can be multiple APIs which
  semantically have the same security implications and thus might map to
  the same capabilities. However it should never need to be the case
  that an API requires to separate capabilities. This will keep the
  model simpler.

 This constraint might prove rather difficult to stick to.  Android has a
 significant number of API calls with multiple permission requirements.  This
 seems to be a natural side effect of transitivity + a large API: given
 methods A and B that are protected by different permissions, you will
 eventually create a method C that invokes both A and B.  A natural way that
 this occurs is that you have separate read and write permissions, and
 then you create a new action that involves both read and write actions.

So far I don't think we've run into this need. I'd be curious to know
where Android did.

One example where we might be pushing the boundaries might be the
Device Storage API [1] where we'll have different levels of security
for:

1. Adding new files
2. Reading existing files
3. Full read/write access

[1] https://wiki.mozilla.org/WebAPI/DeviceStorageAPI
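
As an illustration of keeping one capability per security level (the
capability names below are hypothetical, not a final vocabulary):

// Hypothetical capability names for the three Device Storage levels.
// Each API entry point would check exactly one of these; an operation
// that both reads and writes requires "device-storage:readwrite"
// rather than two separate grants.
var DEVICE_STORAGE_CAPABILITIES = {
  "device-storage:add":       "may create new files only",
  "device-storage:read":      "may read existing files",
  "device-storage:readwrite": "full read/write access"
};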

  Another thing which came up during a recent security review is that
  we'll likely want to have some technical restrictions on which sites
  can be granted some of these capabilities. For example something as
  sensitive as SMS access might require that the site uses STS (strict
  transport security) and/or EV-certs. This also applies to the
  stores which we trust to hand out these capabilities.


 I strongly second restricting certain capabilities to all-HTTPS websites.  I
 have no reason to believe that website developers are much better than
 Chrome extension developers, and Chrome extension developers use HTTP
 resources in insecure ways all the time regardless of how privileged the
 extensions are.

Sounds good.

 On another note:

 I think a permission test suite is crucial for the long-term success of the
 API.  Every time someone defines a new API, he/she should have to build an
 accompanying permission check test.  I don't know what Mozilla's code review
 model is but perhaps this process could be incorporated into it.  This way,
 the permission policy is always known and it is always possible to verify
 that it has been implemented correctly.  Otherwise, you end up with
 Android's complex-yet-undocumented permission model.

Agreed. Mozilla has a policy that everything we check in has
tests. Having checks for security aspects is especially important,
though I don't think that's spelled out explicitly in the policy.

The security policy will definitely be front-and-center in every API
that we design. I.e. we should design with security policy in mind
from the ground up. So far we haven't quite been able to do that given
that we haven't had the necessary vocabulary and infrastructure in
place. The aim of this thread is to fix that.

/ Jonas


Re: [b2g] OpenWebApps/B2G Security model

2012-03-11 Thread Jonas Sicking
On Sat, Mar 10, 2012 at 1:00 PM, Jim Straus jstr...@mozilla.com wrote:
 Jonas, Paul, etc. -
 For any app, but in particular third-party hosted apps, we could require 
 that the manifest contain a signed cryptographic hash of the core of the 
 application (javascript, html, css?), along with the signature of the trusted 
 store.  This hash would be validated as signed by a trusted source (like, or 
 the same as, SSL certs), and the application's core would be checked against 
 the hash.  This would require that the browser/device pre-load the given 
 content, but hopefully apps will be using the local-cache mechanism, so this 
 should not be burdensome.  Using this, once a trusted store has validated an 
 application, the application can't be changed, even if it is hosted by a 
 third party.  We would have to enforce that a signed application can't 
 download untrusted javascript (eval becomes a sensitive API?).  This would 
 allow a third party to host the apps approved by a given store.  It would 
 also prevent a hacked store site from distributing hacked apps (well, things 
 like images could still be hacked, but not functionally) as long as the 
 hacker doesn't have access to the signing system (which should clearly not 
 be on a public machine).  This doesn't prevent a hacker from gaining access 
 to information communicated back to a server, but at least makes sure that 
 it isn't re-directed somewhere else.

It's not entirely clear what problem it is that you are trying to
solve. Are you trying to avoid relying on SSL for safe delivery? Or
trying to provide the ability for stores to do code verification
before they grant apps access to sensitive APIs, while still letting
those apps be hosted outside of the store?

I don't see a problem with relying on SSL. It's a technology that
developers understand very well and which we should IMHO encourage
more (especially in combination with SPDY).

I'm not a big believer in code signing. It's much too easy to miss
something in a big codebase, and JS certainly doesn't lend itself well
to static analysis (which is what code review really is), by either
humans or computers. Additionally, one of the big benefits of the web
is the ease of deploying new code, which would be significantly
hampered if developers had to get every new version reviewed by stores.

So I'd prefer to push back against code reviews as much as we can. If
we end up needing it, then something like what you are describing
might be a possible solution.

  The signing mechanism can also be used to black list an app.  If Mozilla 
 maintains a site with a list of blacklisted signatures and devices query that 
 site, the apps could be disabled.  In whatever UI we have to view the list of 
 apps and control their permissions, a blacklisted app would show up as 
 blacklisted with all permissions denied.  A user who needs the app would then 
 explicitly re-enable it and re-add permissions (making it a pain to go 
 through the process of looking at the permissions and enabling them), along 
 with suitable warnings when they do so.  Probably the black list site should 
 contain both the signatures to deny and an explanation of why (consumes 
 excess resources, connects to high-cost SMS servers, leaks contacts, etc.), 
 so that the user can make an informed choice, such as to allow an app that 
 consumes excess resources but not an app that leaks personal 
 information or incurs excessive costs.

I think black-listing would be more effectively done by blacklisting
an origin (scheme+host+port) rather than by signature. Working around
a signature-based blacklist is easy enough that I'd be worried
people would even do it by mistake.

But I like your ideas about including a description when
blacklisting, as well as (probably optionally) disabling all of an
app's elevated privileges. In fact, I think one of our blacklist
options should be to let an app keep running but disable a specific
elevated privilege. So for example a game which works great but ends up
sharing high scores over SMS a bit too much should still be able to
run, but have the SMS capability disabled.

/ Jonas


Re: [b2g] OpenWebApps/B2G Security model

2012-03-11 Thread Jonas Sicking
On Sat, Mar 10, 2012 at 1:41 PM, lkcl luke luke.leigh...@gmail.com wrote:
 this is all really good stuff, jim.  but i have to reiterate: WHERE
 IS IT BEING FORMALLY DOCUMENTED?  please don't say "in the mailing 
 list".

Once we've had a bit more of a discussion here on the list, I think we
should document everything both as part of the OWA documentation, as
well as part of the general B2G documentation. But at this point I'm
not sure that there is enough consensus to start editing wikis.

/ Jonas


Re: OpenWebApps/B2G Security model

2012-03-09 Thread Jonas Sicking
On Fri, Mar 9, 2012 at 8:16 PM, Jonas Sicking jo...@sicking.cc wrote:
 User control:

 I think it's very important in all this that we put the user in
 ultimate control. I don't think we want to rely on the user to make
 security decisions for all APIs, however I think it's important that
 we enable users to do so if they so desire. And I think that users
 should be able to make security decisions in both directions, i.e.
 both enable more access as well as less access than the above system
 provides.

 So during installation I think users should be able to tune down
 access on a capability-by-capability basis. I.e. the user should be
 able to say, "I want to run this SMS app, but I want to completely
 disable the ability to send SMS messages".

 Additionally, we should have some way for a user to install an app
 from a completely untrusted source and grant it any privilege that
 he/she wants to. This needs to be a quite complicated UI so that users
 don't do this accidentally, but I think it's important to allow, so as
 not to create situations like on iOS where certain apps require users to
 hack the device to get them to install at all.

And of course in this part I forgot to mention that I think we should
have a place where users can see a list of all the apps they have
installed and which privileges they are granted, and give them the
ability to lower those privileges to either "prompt" or "deny" (where
"prompt" would default to not remembering the decision, since at this
point I think we can assume that the user is capable of checking the
box if desired).

/ Jonas


Re: measuring use of deprecated web features

2012-02-16 Thread Jonas Sicking
keygen (might be hard to remove even though rarely used since it's
used by banks)
netscape.security.PrivilegeManager.enablePrivilege

/ Jonas

On Tue, Feb 14, 2012 at 5:34 PM, Jesse Ruderman jruder...@gmail.com wrote:
 What rarely-used web features are hurting security? What might we
 remove if we had data on prevalence?

 https://etherpad.mozilla.org/MeasuringBadThings


Re: Content Security Policy - Relaxed Restrictions Mode(s)

2009-07-01 Thread Jonas Sicking

FunkyRes wrote:

On Jun 22, 4:15 pm, Brandon Sterne bste...@mozilla.com wrote:

Some sites have shared the desire to use some features of CSP, but not
all of them at once.  For example, a site may want to utilize the
content loading features of CSP to help prevent data exfiltration, but
they may not want to be subject to the JavaScript restrictions which are
enabled by default (no inline script, no eval, etc.).

We have made two additions to the spec that we think will address these
needs:

1. Sites can opt-out of "no inline scripts" by adding the "inline"
keyword to their script-src directive.
2. Sites can opt-out of "no code from strings" by adding the "eval"
keyword to their script-src directive.

These additions may enable some sites, who would otherwise be deterred
by the JS restrictions, to adopt CSP in a limited fashion early, and
later do a full implementation as resources permit.

Cheers,
Brandon


One thing I would find greatly beneficial is examples of how to do
things properly in a cross-browser compliant way.

For example, for form validation - <form onsubmit="return checkform()">
just works.
I've figured out (I think) how to properly attach most events
externally - like onchange, onclick, etc. - but whenever I try to
attach something to the submit event of a form, the script runs but
then the form data is posted to the action page regardless of whether
it returns true or false. It just works with the inline onsubmit
attribute.

Part of the problem is IE and Firefox have different ways to attach
events, but I think there must be some concept I just don't get about
how the submit event works that isn't a problem with inline.


If you do:

myForm.onsubmit = function() {
  // returning false from an onsubmit handler cancels the submission
  return checkform();
};

I think it should work. Otherwise

myForm.addEventListener("submit", function(event) {
  if (!checkform()) {
    // the DOM Events way to cancel the submission
    event.preventDefault();
  }
}, false);

should work in any browser that implements DOM Events. Unfortunately IE 
does not yet.
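
For completeness, a sketch of a cross-browser attach helper covering
legacy IE, which uses attachEvent and cancels the submission by setting
returnValue on the event object instead of calling preventDefault():

function onSubmit(form, handler) {
  if (form.addEventListener) {
    // DOM Events model
    form.addEventListener("submit", function(event) {
      if (!handler()) event.preventDefault();
    }, false);
  } else if (form.attachEvent) {
    // Legacy IE model
    form.attachEvent("onsubmit", function(event) {
      event = event || window.event;
      if (!handler()) event.returnValue = false;  // cancels the submit
    });
  }
}

// Usage: onSubmit(myForm, checkform);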


/ Jonas


Re: Work-around for Moxie Marlinspike's Blackhat attack

2009-03-05 Thread Jonas Sicking

Gervase Markham wrote:

On 28/02/09 00:32, Jonas Sicking wrote:

It'd be good to have a separate pref, network.IDN.blacklist_chars_extra,
where users can add additional characters without having to worry about
not receiving updates to the list we maintain.


If users have to add chars to this list manually, that's Really Bad - 
because most won't.


I agree we shouldn't rely on it. But it's IMHO always good if users can 
be proactive before we roll out patches, or if they want to be more 
restrictive than we dare to be.


What's easier - getting loads of users to modify 
this pref, or shipping an automatically-installed security update to all 
of them?


Is there anything that makes this an either-or situation?

/ Jonas


Re: Work-around for Moxie Marlinspike's Blackhat attack

2009-02-27 Thread Jonas Sicking

Daniel Veditz wrote:

Jean-Marc Desperrier wrote:

Until a better solution is deployed, here is the workaround to make
Moxie Marlinspike's attack ineffective.


Note that the better fix will be a default change for this very pref,
and any user-modified value will continue to take precedence. Please
remember to undo this change when we ship a fix or you will not get the
updates.


It'd be good to have a separate pref, network.IDN.blacklist_chars_extra, 
where users can add additional characters without having to worry about 
not receiving updates to the list we maintain.


/ Jonas


Re: HTTPOnly cookies specification

2008-12-16 Thread Jonas Sicking

Bil Corry wrote:
Jonas Sicking wrote on 12/16/2008 4:32 PM: 

Bil Corry wrote:

There's a group of us working on creating a spec for HTTPOnly
cookies.  We have a draft of the HTTPOnly scope available to review:

http://docs.google.com/View?docid=dxxqgkd_0cvcqhsdw

If you have an active interest in participating, our list is here:

http://groups.google.com/group/ietf-httponly-wg

My first reaction to all this is: can you really create a useful spec
for HTTPOnly cookies without first creating a spec for cookies? I.e. as
far as I know there is no usable spec out there for how to parse
cookies at all, so it'd seem hard to define what an HTTPOnly
cookie is.


That's what Dan Winship said (more or less):

http://lists.w3.org/Archives/Public/ietf-http-wg/2008OctDec/0235.html

I do agree that cookies could use a massive overhaul, taking the original 
Netscape cookie spec, RFCs 2109, 2964, and 2965, along with Yngve Pettersen's 
2965 replacement draft and merge them all together with the real-world 
implementations (HTTPOnly, etc) and from that, create one spec to rule them all.

But as I replied to Stefanos; Mozilla, WebKit and Microsoft have all recently 
updated their HTTPOnly features -- we want to piggyback on that momentum to get 
HTTPOnly implemented in a standard way without having to wait another year or 
two for a comprehensive cookie overhaul.


Out of curiosity, what do you want to specify beyond what XMLHttpRequest 
and HTML5 specify?


/ Jonas


Re: Firefox plugin: Basic Authentication

2008-10-06 Thread Jonas Sicking
Yogesh Joshi wrote:
 Hi,
 
 I am a student pursuing my post-graduation. As part of a project I am 
 developing a Firefox plugin.
 
 
 For the same I need to work on Basic & Digest Authentication in
 Firefox. I have been struggling with a problem for quite a few days.
 I need to capture the parameters entered in the Authentication
 Dialog box using JavaScript, or maybe some observer on the
 authentication dialog. 
 
 What is the event that is fired after the user enters his/her username & 
 password in the authentication dialog and then clicks OK? Is there any way 
 I can get the unencrypted username & password in the plugin script?

You might be able to register to *be* the auth prompt, by implementing 
the contract ID @mozilla.org/passwordmanager/authpromptfactory;1, and 
pass on any requests for dialogs you get. This way you can see the 
returned data and record it before passing it back to the code that is 
calling your component.

/ Jonas


Re: signed scripts and security changes in 2.0.0.15?

2008-08-13 Thread Jonas Sicking
Nelson Bolyard wrote:
 Jonas Sicking wrote, On 2008-08-11 20:33:
 
 I would strongly recommend against using signed files at all. It's 
 something that we want to get rid of since the security model is so poor.
 
 Jonas, please enlighten us with an explanation of that claim.

Signed files are a bad security model. They give the page way more access 
than it should have, thus potentially putting users at risk. Hence we 
want to get rid of them.

/ Jonas


Re: Signed Jar in JSP / Firefox 2.0.0.15

2008-07-24 Thread Jonas Sicking
Marine wrote:
 Boris Zbarsky wrote:
 Marine wrote:
   
 However, I don't see how to put all the code in a signed jar, as JSP 
 will generate HTML code dynamically for each client request.

 Is it possible to dynamically generate the signed jar?  Or move the 
 logic from server to client?
   
 I fear it won't be easy... and I don't want to waste a lot of time on 
 this, to finally see it's not possible :(
 Unless someone can tell me he has already done that, and how?
 
 I don't claim this is easy to do, basically.  The signed jar model is 
 not the easiest thing in the world to work with.  :(
   
 Yes, another way to certify code would be nice. For example, register 
 in Firefox the URL of a given website that may use advanced privileges.
 But maybe it wouldn't be safe; I'm a newbie in browser security!

The signed script feature is something that we really want to kill. As 
you have noticed, it is far from easy to work with. Additionally, it 
greatly increases our attack surface for people trying to hack Firefox 
and its users.

The recommended solution is instead to write a Firefox extension. This 
extension can download any dynamic resource you want without having to 
bother with signing.

/ Jonas


Re: Question about CAPS

2008-07-23 Thread Jonas Sicking
Boris Zbarsky wrote:
 Alex Yip wrote:
 Thanks Boris, is the CAPS system being removed?
 
 Not completely, but if DOM is switched off xpconnect, it will no longer 
 be making CAPS calls (which will be a big performance benefit).
 
 If so, is there a replacement for it?
 
 Not really, as far as I know.

We should have a separate preference for turning off window.postMessage 
though, as it's a feature that affects security.

If there isn't such a pref already, please file a bug on it and cc me.

/ Jonas


Re: Object tag element opened with Open File -- Local File Security

2008-06-19 Thread Jonas Sicking
Martijn wrote:
 On Wed, Jun 18, 2008 at 9:12 PM,  [EMAIL PROTECTED] wrote:
 Hi,

 I'm trying to open an html file with the object tag in it from a local
 file (file://) in Firefox 3.0

 The html file is the following:

 <html>
 <body>

 <object type="audio/mpeg" data="media/where_to_begin.mp3" width="200"
 height="20">
   <param name="src" value="media/where_to_begin.mp3" />
   <param name="autoplay" value="false" />
   alt : <a
 href="media/where_to_begin.mp3">media/where_to_begin.mp3</a>
 </object>
 </body>
 </html>

 The URL looks something like:

 file:///C:/Program%20Files/test.html

 I basically get the quicktime icon but the file does not load.

 If I launch this very same file through my localhost web server, the
 mp3 file loads just fine. However, I need to be able to launch this
 file on computers with no web server (so from the Open File option).

 Does this have something to do with the new security features in
 firefox 3.0? This worked just fine in firefox 2.
 Is there a way I can have this media file load correctly when opening
 it from the local file system?

 Many thanks for any help,
 
 I suspect you're encountering the result of
 https://bugzilla.mozilla.org/show_bug.cgi?id=230606
 CC-ing Dan, he knows more about this stuff.

That seems unrelated, as this is loading a file from a subdirectory, 
which should be allowed.

/ Jonas


Re: How to get a free certificate

2008-04-21 Thread Jonas Sicking
Eddy Nigg (StartCom Ltd.) wrote:
 Jose Luis:
 As mentioned in 
 http://www.mozilla.org/projects/security/components/signed-scripts.html 
 Javascript must be signed with certificates when trying to enable 
 privileges.

 How do I get a free certificate for this?

   
 Hi Jose,
 
 As far as I know there are none. It might be that GoDaddy still gives 
 out code signing certs for open source projects for free (I haven't 
 seen anything about it for a long time, so they might have discontinued it).
 
 Besides that, it's highly impractical to sign javascripts and html pages 
 (as all of them must be signed and placed into the jar) for most sites, 
 since today's requirements and sites are mostly not static, but 
 dynamically assembled on the server side. In my opinion, the security 
 concept of the Mozilla browser(s) is not really usable... :-(

Yes, script signing is not a very practical solution and has a lot of 
bad issues, ranging from the certificate issue you bring up to a bad UI 
on the user's end when you request privileges.

It's basically only there as a hold-over from the Netscape days, which 
inherited its design from Java many, many moons ago.

It's entirely possible that we will completely remove the code-signing 
feature from Firefox 4 or so.

If you need to run code with extended privileges I would suggest you 
create an extension that is specifically designed to work together with 
your site.

Hope that helps.

Best Regards,
Jonas Sicking