Re: Intent to implement: ScrollTimeline

2017-03-25 Thread zbraniecki
On Saturday, March 25, 2017 at 8:02:36 PM UTC+1, Botond Ballo wrote:

> What you describe sounds like other types of timelines, linked to user
> gestures. There is mention of that on the wiki [1] [2], but no
> concrete proposal that I'm aware of. I would imagine contributions to
> the development of such a proposal would be welcome!

Thanks!

Yeah, it seems that touch-based scrubbing is the closest to what I thought of.
It's not my idea, of course. I just really like the UX of Google's Material
Design, where the animation is linked to the progress of some touch event.

It's not only pleasant to look at and play with, but it also lowers the
cognitive confusion factor, since the visual stimulus is directly linked, in
both trigger and progress, to the user's action, instead of "happening on its
own". Lastly, it works really well as a tutorial, since the user can slow down
or pause in the middle of the action and the animation slows down or pauses in
response. The user can even reverse the motion and the animation will follow.
All of this, in my naive UX study on a few of my less technically gifted
friends, did wonders for their ability to gain a sense of comfort in
understanding the UX paradigms of the software.

I'd love to see the Web gain this capability, and of course, to see the
Firefox UI use it.

zb.


Re: Intent to implement: ScrollTimeline

2017-03-25 Thread Botond Ballo
> Is it possible to use this, or is there a similar proposal, for linking 
> animation timeline to other user-controlled means of interacting with the UI?
>
> I'm thinking primarily about things like:
>
>  - drag&drop - the percentage of the distance between the source and target 
> linked to the animation timeline
>  - touch events - unfolding or moving an element with a thumb on mobile
> triggers an animation linked to the percentage of the distance between the
> folded and unfolded states.

Web Animations was designed with the possibility of extending it with
other types of timelines in mind.

ScrollTimeline is one such extension, a timeline linked to scrolling.

What you describe sounds like other types of timelines, linked to user
gestures. There is mention of that on the wiki [1] [2], but no
concrete proposal that I'm aware of. I would imagine contributions to
the development of such a proposal would be welcome!

Cheers,
Botond

[1] 
https://wiki.mozilla.org/Platform/Layout/Extended_Timelines#Touch-based_scrubbing
[2] https://wiki.mozilla.org/Platform/Layout/Extended_Timelines#Other_timelines
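
To make this concrete, here is a rough sketch of what a scroll-linked
animation could look like in script under the ScrollTimeline proposal. It is
illustrative only: the constructor options ("source", "axis") and the
"timeline" entry in the animate() options follow the draft and could change
before anything ships, and the .scroller/.box selectors are invented for the
example.

  // A timeline whose "current time" is driven by how far an element has
  // been scrolled, rather than by wall-clock time.
  const scroller = document.querySelector(".scroller")!;
  const box = document.querySelector(".box")!;

  const timeline = new ScrollTimeline({ source: scroller, axis: "block" });

  // The animation's progress tracks scroll progress: scrolling forward
  // advances it, scrolling back reverses it, and stopping pauses it.
  box.animate(
    { transform: ["translateX(0px)", "translateX(200px)"] },
    { fill: "both", timeline }
  );

A gesture-linked timeline of the sort discussed above (drag distance, or the
progress of a touch-driven fold/unfold) would presumably plug into that same
"timeline" slot, which is where the extensibility of Web Animations comes in.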


Re: Future of out-of-tree spell checkers?

2017-03-25 Thread Ehsan Akhgari
On 2017-03-24 2:45 PM, Bill McCloskey wrote:
> If we do end up going with the dlopen plan, let's make sure that we
> enforce some kind of code signing. We're finally almost rid of all the
> untrusted binary code that we used to load (NPAPI, binary XPCOM,
> ctypes). It would be a shame to open up a new path.

Yeah, I agree.

Another option would be talking to the maintainers of libvoikko and
seeing if they would be open to maintaining the Mozilla bindings, in
which case I think we should even consider doing something like what we
do to download the OpenH264 binary at runtime when we need to.  We could
even build and sign it in the infrastructure ourselves if we imported it
into the tree; with TaskCluster this is possible today with a super
simple shell script (well, at least the building side of it!).  We
basically need someone to sign up for maintaining it, since I doubt that
MoCo will prioritize supporting a whole new spell checking backend for
two new languages, but I think we can do a lot to help.

> On Fri, Mar 24, 2017 at 6:20 AM, Ehsan Akhgari wrote:
> 
> On 2017-03-24 4:20 AM, Henri Sivonen wrote:
> > On Fri, Mar 24, 2017 at 2:38 AM, Ehsan Akhgari wrote:
> >> On Wed, Mar 22, 2017 at 11:50 AM, Jeff Muizelaar wrote:
> >>>
> >>> On Wed, Mar 22, 2017 at 11:08 AM, Henri Sivonen wrote:
> 
>  dlopening libvoikko, if installed, and having thin C++ glue code
>  in-tree seems much simpler, except maybe for sandboxing. What are the
>  sandboxing implications of dlopening a shared library that will want
>  to load its data files?
> >>>
> >>> My understanding is that the spell checker mostly lives in the Chrome
> >>> process so it seems sandboxing won't be a problem.
> >>
> >>
> >> That is mostly correct.  The spell checker *completely* lives in the
> >> parent process and is completely unaffected by sandboxing.
> >>
> >> But that's actually a problem.  My understanding is that WebExtensions
> >> won't be allowed to load code in the parent process.  Bill, Kris, is
> >> that correct?  If yes, we should work with the maintainers of the
> >> Finnish and Greenlandic dictionaries on adding custom support for
> >> loading their code...
> >
> > But when (according to doing a Google Web search excluding mozilla.org
> > and wading through all the results and by searching the JS for all
> > AMO-hosted extensions) the only out-of-tree spell checkers use
> > libvoikko, why involve Web Extensions at all? Why wouldn't we dlopen
> > libvoikko and put a thin C++ adapter between libvoikko's C API and our
> > internal C++ interface in-tree? That would be significantly simpler
> > than involving Web extensions.
> 
> Is that different than what I suggested above in some way that I'm
> missing?  I think it's better to engage the developers of those
> libraries first and ask them how they would like us to proceed.  At any
> rate, something has to change on their side, since after Firefox 57
> presumably Firefox would just ignore their XPI file or something.  The
> actual implementation mechanism would probably end up being the
> dlopening that you're suggesting, but if we're going to be signing up to
> doing that, we better have at least a communication channel with the
> authors of those libraries in case for example we need to change
> something on our interface some day.


Intent to unship: :-moz-bound-element pseudo-class.

2017-03-25 Thread Emilio Cobos Álvarez
I intend to remove the :-moz-bound-element CSS pseudo-class in bug
1350147.

It's an XBL-related pseudo-class that allows matching the XBL element
bound in the current subtree.

We could implement it in Stylo, I guess, but it is slightly annoying, and
it seems completely unused and untested in mozilla-central [1],
comm-central [2], and add-on code [3].

I don't think there's any benefit to keeping untested and unused stuff
in the tree, but if I got it wrong, or anyone has any concern, please
speak up.

Thanks for reading :)

 -- Emilio

[1]: https://searchfox.org/mozilla-central/search?q=moz-bound-element
[2]: https://searchfox.org/comm-central/search?q=moz-bound-element
[3]: https://dxr.mozilla.org/addons/search?q=moz-bound-element&redirect=false




Re: Better download security through browsers

2017-03-25 Thread Daniel Veditz
Most people working on Subresource Integrity have wanted to extend SRI to
downloads; it was even in the initial version of the spec, but it foundered in
the weeds of edge cases, IIRC. I don't see an open issue for it, though: it
looks like it got lost in the transition from our old repo to the new one.
It's definitely one of the top desires for SRI2.

I've created an issue for it:
https://github.com/w3c/webappsec-subresource-integrity/issues/68

-Dan Veditz
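
To make the mechanics concrete, here is a minimal sketch of the kind of check
a user agent could perform against author-published integrity metadata,
written as ordinary page script using Web Crypto. The verifyDownload helper
and an "integrity" attribute on a download link are hypothetical (they are
exactly what the quoted thread below is proposing); fetch() and
crypto.subtle.digest() are standard APIs.

  // Fetch a file and compare its SHA-256 digest against the value the
  // origin site published alongside the download link.
  async function verifyDownload(url: string, expectedSha256Hex: string): Promise<boolean> {
    const bytes = await (await fetch(url)).arrayBuffer();
    const digest = await crypto.subtle.digest("SHA-256", bytes);
    const actualHex = Array.from(new Uint8Array(digest))
      .map(b => b.toString(16).padStart(2, "0"))
      .join("");
    return actualHex === expectedSha256Hex;
  }

(SRI proper expresses digests as e.g. "sha256-<base64>" rather than hex; the
hex form above just keeps the sketch readable.)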

On Fri, Mar 24, 2017 at 10:33 AM, Mike Hoye wrote:

> Love it. How do we make it happen?
>
> - mhoye
>
>
> On 2017-03-24 1:30 PM, Tom Ritter wrote:
>
>> It seems like Subresource Integrity could be extended to do this...
>> It's specifically for the use case where you kinda trust your CDN
>> but want to be completely sure.
>>
>> -tom
>>
>>> On Fri, Mar 24, 2017 at 12:24 PM, Mike Hoye wrote:
>>
>>> My 2006 proposal didn't get any traction either.
>>>
>>> https://lists.w3.org/Archives/Public/public-whatwg-archive/2006Jan/0270.html
>>>
>>> FWIW I still think it'd be a good idea with the right UI.
>>>
>>> - mhoye
>>>
>>>
>>> On 2017-03-24 1:16 PM, Dave Townsend wrote:
>>>
 I remember that Gerv was interested in a similar idea many years ago;
 you might want to see if he went anywhere with it.

 https://blog.gerv.net/2005/03/link_fingerprin_1/


 On Fri, Mar 24, 2017 at 10:12 AM, Gregory Szorc wrote:

 I recently reinstalled Windows 10 on one of my machines. This involved
> visiting various web sites and downloading lots of software.
>
> It is pretty common for software publishers to publish hashes or
> cryptographic signatures of software so the downloaded software can be
> verified. (Oftentimes the download is performed through a CDN, mirroring
> network, etc., and you may not have full trust in the server operator.)
>
> Unless you know how to practice safe security, you probably don't bother
> verifying downloaded files match the signatures authors have provided.
> Furthermore, many sites redundantly write documentation for how to verify
> the integrity of downloads. This feels sub-optimal.
>
> This got me thinking: why doesn't the user agent get involved to help
> provide better download security? What my (not a web standard spec author)
> brain came up with is standardized metadata in the HTML for the download
> link (probably on an <a>) that defines file integrity information. When the
> user agent downloads that file, it automatically verifies file integrity
> and fails the download or pops up a big warning box, etc., if things don't
> check out. In other words, this mechanism would extend the trust anchor in
> the source web site (likely via a trusted X.509 cert) to file downloads.
> This would provide additional security over (optional) X.509 cert validation
> of the download server alone. Having the integrity metadata baked into the
> origin site is important: you can't trust the HTTP response from the
> download server because it may be from an untrusted server.
>
> Having such a feature would also improve the web experience. How many times
> have you downloaded a corrupted file? Advanced user agents (like browsers)
> could keep telemetry of how often downloads fail integrity. This could be
> used to identify buggy proxies, malicious ISPs rewriting content, etc.
>
> I was curious if this enhancement to the web platform has ever been
> considered and/or if it is something Mozilla would consider pushing.
>
> gps