Re: Switching Jetpack to use the runtests.py automation
Gregory Szorc wrote on 07/15/2014 09:04 PM: > On 7/15/14, 11:49 AM, Dave Townsend wrote: >> Since forever Jetpack tests in the Firefox trees have been run using our >> custom python CFX tool which is based on a fork of an ancient version of >> mozrunner. This causes us a number of problems. Keeping up with tree >> visibility rules is hard. Some features from newer versions of mozrunner >> like crash stack handling aren't available and our attempts to update to >> the newer mozbase have been blocked on trying to get some of our forked >> code accepted. It also makes it hard for Mozilla other developers to run >> our tests as CFX has a very different syntax to the other test suites. >> >> We've started investigating switching away from CFX and instead using the >> python automation that the mochitests use. This would work somewhat >> similarly to browser-chrome tests, runtests.py will startup Firefox and >> overlay some XUL and JS on the main window from where we can run the >> existing JS parts of the Jetpack test suites. >> >> There are many benefits here. The runtests.py code is well used and known >> to be resilient. It supports things like screenshots on failures and crash >> stacks that Jetpack tests don't currently handle. We'll use manifest files >> like the other test suites so disabling tests per platform will be easy. >> Excellent mach integration will make running individual tests simple. It >> also makes it possible to use commonjs style tests elsewhere in the tree. >> Release engineering should find managing the Jetpack tests a lot easier as >> they behave just like other mochitests. >> >> My initial experiment last week shows that this will work. The first part >> of our tests (package tests) is running and passing on my local machine and >> I expect to have the add-on tests working this week. >> >> I wanted to give everyone a heads up about this work to give you all a >> chance to ask questions or raise objections. 
The changes to runtests and >> the build system are minimal, just adding support for new manifest types >> really but I will be needing reviews for those. We'll also have to make the >> buildbot changes to switch over to use these new tests but I expect that to >> be pretty straightforward. > > Was Marionette considered? From what little I know (jgriffin and others > can correct me), Marionette seems like the logical base for this test suite. Adding the tools mailing list, so that members of the A-team are aware of this thread, and can answer appropriately. -- Henrik Skupin Senior Test Engineer Mozilla Corporation ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
How to use jquery in xul files of thunderbird
Dear all, When I tried to use jQuery in XUL files, events were not handled. For example, code like "$("selectorName").hide(500);" has no effect. If you could point me to any relevant documentation, I would be very grateful. Thanks very much. Best regards, Huan Wang ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: PSA: DebugOnly<> fields aren't zero-sized in non-DEBUG builds
On Tue, Jul 15, 2014 at 6:33 PM, Benoit Jacob wrote: > Having to guard them in #ifdef DEBUG takes away much of the point > of DebugOnly, doesn't it? Yes. For the fields I've converted, I removed the DebugOnly<> wrapper. Nick ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: PSA: DebugOnly<> fields aren't zero-sized in non-DEBUG builds
It may be worth reminding people that this is not specific to DebugOnly but general to all C++ classes: in C++, there is no such thing as a class with size 0. So expecting DebugOnly to be of size 0 is not misunderstanding DebugOnly, it is misunderstanding C++.

The only way to have empty classes behave as if they had size 0 is to inherit from them instead of having them as the types of members. That's called the Empty Base Class Optimization. http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Empty_Base_Optimization

Since DebugOnly incurs a size overhead in non-debug builds, maybe we should officially consider it bad practice to have any DebugOnly class members. Having to guard them in #ifdef DEBUG takes away much of the point of DebugOnly, doesn't it?

Benoit

2014-07-15 21:21 GMT-04:00 Nicholas Nethercote : > Hi, > > The comment at the top of mfbt/DebugOnly.h includes this text: > > * Note that DebugOnly instances still take up one byte of space, plus padding, > * when used as members of structs. > > I'm in the process of making js::HashTable (a very common class) > smaller by converting some DebugOnly fields to instead be guarded by > |#ifdef DEBUG| (bug 1038601). > > Below is a list of remaining DebugOnly members that I found using > grep. People who are familiar with them should inspect them to see if > they belong to classes that are commonly instantiated, and thus if > some space savings could be made. > > Thanks.
> > Nick > > > uriloader/exthandler/ExternalHelperAppParent.h: DebugOnly mDiverted; > layout/style/CSSVariableResolver.h: DebugOnly mResolved; > layout/base/DisplayListClipState.h: DebugOnly mClipUsed; > layout/base/DisplayListClipState.h: DebugOnly mRestored; > layout/base/DisplayListClipState.h: DebugOnly mExtraClipUsed; > gfx/layers/Layers.h: DebugOnly mDebugColorIndex; > ipc/glue/FileDescriptor.h: mutable DebugOnly > mHandleCreatedByOtherProcessWasUsed; > ipc/glue/MessageChannel.cpp:DebugOnly mMoved; > ipc/glue/BackgroundImpl.cpp: DebugOnly mActorDestroyed; > content/media/MediaDecoderStateMachine.h: DebugOnly > mInRunningStateMachine; > dom/indexedDB/ipc/IndexedDBParent.h: DebugOnly mRequestType; > dom/indexedDB/ipc/IndexedDBParent.h: DebugOnly mRequestType; > dom/indexedDB/ipc/IndexedDBParent.h: DebugOnly mRequestType; > dom/indexedDB/ipc/IndexedDBChild.h: DebugOnly mRequestType; > dom/indexedDB/ipc/IndexedDBChild.h: DebugOnly mRequestType; > dom/indexedDB/ipc/IndexedDBChild.h: DebugOnly mRequestType; > ___ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform > ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
PSA: DebugOnly<> fields aren't zero-sized in non-DEBUG builds
Hi,

The comment at the top of mfbt/DebugOnly.h includes this text:

* Note that DebugOnly instances still take up one byte of space, plus padding,
* when used as members of structs.

I'm in the process of making js::HashTable (a very common class) smaller by converting some DebugOnly fields to instead be guarded by |#ifdef DEBUG| (bug 1038601).

Below is a list of remaining DebugOnly members that I found using grep. People who are familiar with them should inspect them to see if they belong to classes that are commonly instantiated, and thus if some space savings could be made.

Thanks.
Nick

uriloader/exthandler/ExternalHelperAppParent.h: DebugOnly mDiverted;
layout/style/CSSVariableResolver.h: DebugOnly mResolved;
layout/base/DisplayListClipState.h: DebugOnly mClipUsed;
layout/base/DisplayListClipState.h: DebugOnly mRestored;
layout/base/DisplayListClipState.h: DebugOnly mExtraClipUsed;
gfx/layers/Layers.h: DebugOnly mDebugColorIndex;
ipc/glue/FileDescriptor.h: mutable DebugOnly mHandleCreatedByOtherProcessWasUsed;
ipc/glue/MessageChannel.cpp: DebugOnly mMoved;
ipc/glue/BackgroundImpl.cpp: DebugOnly mActorDestroyed;
content/media/MediaDecoderStateMachine.h: DebugOnly mInRunningStateMachine;
dom/indexedDB/ipc/IndexedDBParent.h: DebugOnly mRequestType;
dom/indexedDB/ipc/IndexedDBParent.h: DebugOnly mRequestType;
dom/indexedDB/ipc/IndexedDBParent.h: DebugOnly mRequestType;
dom/indexedDB/ipc/IndexedDBChild.h: DebugOnly mRequestType;
dom/indexedDB/ipc/IndexedDBChild.h: DebugOnly mRequestType;
dom/indexedDB/ipc/IndexedDBChild.h: DebugOnly mRequestType;

___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
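[Editor's note: the conversion described in this PSA — replacing a DebugOnly<T> member with an #ifdef DEBUG guarded field — looks roughly like the sketch below. The class and field names are illustrative, not the actual js::HashTable members:]

```cpp
#include <cassert>

// Before: mozilla::DebugOnly<bool> mEntered; — the member still
// occupies space in release builds, as the DebugOnly.h comment warns.
//
// After: the field exists only in DEBUG builds, so release builds pay
// nothing, at the cost of #ifdef guards at every use site.
class Table {
#ifdef DEBUG
  bool mEntered = false;  // re-entrancy guard, debug builds only
#endif
  int mCount = 0;

 public:
  void add() {
#ifdef DEBUG
    assert(!mEntered);
    mEntered = true;
#endif
    ++mCount;
#ifdef DEBUG
    mEntered = false;
#endif
  }
  int count() const { return mCount; }
};
```

In a non-DEBUG build the class shrinks back to the size of its real data members, which is the point of the bug 1038601 conversion.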
Re: Studying Lossy Image Compression Efficiency, July 2014
On 7/15/14 12:38 PM, stonecyp...@gmail.com wrote: > Similarly there's a reason that people are still hacking video into > JPEGs and using animated GIFs. People are using animated GIFs, but the animated GIFs people are using may not be animated GIFs [1]. (2014/07/16 5:43), Chris Peterson wrote: > Do Chrome and IE support JPEG2000? I can't find a clear answer online. > The WONTFIX'd Firefox bug [1] says IE and WebKit/Blink browsers support > JPEG2000 (but WebKit's support is only on OS X). No, IE does not support JPEG2000. But IE9+ supports JPEG XR. Chrome supports neither, but it does support WebP [2]. [1] http://techcrunch.com/2014/06/19/gasp-twitter-gifs-arent-actually-gifs/ [2] http://xkcd.com/927/ -- vyv03...@nifty.ne.jp ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Studying Lossy Image Compression Efficiency, July 2014
On 7/15/14 12:38 PM, stonecyp...@gmail.com wrote: On Tuesday, July 15, 2014 7:34:35 AM UTC-7, Josh Aas wrote: This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image Formats Study and the Mozilla Research blog post entitled "Mozilla Advances JPEG Encoding with mozjpeg 2.0". Would be nice if you guys just implemented JPEG2000. It's 2014. Not only would you get a lot more than a 5% encoding boost, but you'd get much higher quality images to boot. "But nobody supports JPEG2000 and we want to target something everyone can see!" If you had implemented it in 2014, everyone would support it today. If you don't implement it today, we'll wait another 15 years tuning a 25 year old image algorithm while better things are available. Similarly there's a reason that people are still hacking video into JPEGs and using animated GIFs. Do Chrome and IE support JPEG2000? I can't find a clear answer online. The WONTFIX'd Firefox bug [1] says IE and WebKit/Blink browsers support JPEG2000 (but WebKit's support is only on OS X). chris [1] https://bugzilla.mozilla.org/show_bug.cgi?id=36351 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Studying Lossy Image Compression Efficiency, July 2014
On Tuesday, July 15, 2014 10:34:35 AM UTC-4, Josh Aas wrote: > This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image > Formats Study and the Mozilla Research blog post entitled "Mozilla Advances > JPEG Encoding with mozjpeg 2.0". #1 Would it be possible for the same algorithm that is applied to WebP to be applied to JPEG? #2 There are some JPEG services that perceptually change the image, without any noticeable artifacts. Have you tried something like that? ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Switching Jetpack to use the runtests.py automation
On Tue, Jul 15, 2014 at 12:15 PM, Ted Mielczarek wrote: > On 7/15/2014 2:49 PM, Dave Townsend wrote: > > Since forever Jetpack tests in the Firefox trees have been run using our > > custom python CFX tool which is based on a fork of an ancient version of > > mozrunner. This causes us a number of problems. Keeping up with tree > > visibility rules is hard. Some features from newer versions of mozrunner > > like crash stack handling aren't available and our attempts to update to > > the newer mozbase have been blocked on trying to get some of our forked > > code accepted. It also makes it hard for Mozilla other developers to run > > our tests as CFX has a very different syntax to the other test suites. > > > > We've started investigating switching away from CFX and instead using the > > python automation that the mochitests use. This would work somewhat > > similarly to browser-chrome tests, runtests.py will startup Firefox and > > overlay some XUL and JS on the main window from where we can run the > > existing JS parts of the Jetpack test suites. > > > > There are many benefits here. The runtests.py code is well used and known > > to be resilient. It supports things like screenshots on failures and > crash > > stacks that Jetpack tests don't currently handle. We'll use manifest > files > > like the other test suites so disabling tests per platform will be easy. > > Excellent mach integration will make running individual tests simple. It > > also makes it possible to use commonjs style tests elsewhere in the tree. > > Release engineering should find managing the Jetpack tests a lot easier > as > > they behave just like other mochitests. > > > > My initial experiment last week shows that this will work. The first part > > of our tests (package tests) is running and passing on my local machine > and > > I expect to have the add-on tests working this week. 
> > > > I wanted to give everyone a heads up about this work to give you all a > > chance to ask questions or raise objections. The changes to runtests and > > the build system are minimal, just adding support for new manifest types > > really but I will be needing reviews for those. We'll also have to make > the > > buildbot changes to switch over to use these new tests but I expect that > to > > be pretty straightforward. > > ___ > I am totally into the sentiment here, but I'm not sure that I'm into > your exact plan of action. The Jetpack tests definitely have a lot of > issues due to not using the same infrastructure as other test harnesses, > so I'd be glad to see that get fixed. On the other hand, the Mochitest > harness is already unduly complicated by trying to do too many > things--plain Mochitests, chrome Mochitests, browser-chrome tests and > all the other variants are all shoehorned in there and it's a mess. I'd > rather not shoehorn yet another test framework into the Mochitest > umbrella. However! In the modern era Mochitest is mostly comprised of > some glue code that uses mozbase modules to do all the hard work. I > think you can cobble together a reasonable facsimile that runs the > Jetpack tests in a cleaner fashion by using the mozbase modules directly. > Creating a new test harness (even one based on top of mozbase) loses us some of the benefits of this. By re-using existing code we get improvements to that code for free. Right now making changes to CFX is hard in part because the people who wrote the code have moved on and almost no-one on the team really knows python. For example, when new test requirements come up and are fixed in whatever harness we switch to, we'll benefit from those fixes automatically. > > Alternately, as gps pointed out in his reply, you might want to look at > Marionette as a starting point. The Marionette test harness uses mozbase > as well, but gives you the power of the Marionette protocol to control > the browser.
If we were writing the Mochitest harness from scratch today > I would base it on top of Marionette. > I'm not set on the mochitest harness or any harness really. The mochitest harness seemed the simplest because I know it reasonably well and browser-chrome already does exactly what we need. Ignoring the work to support the manifests for these tests it took maybe 30 minutes to get the mochitest harness supporting jetpack package tests. I'll take a look at Marionette and see what I can learn. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Studying Lossy Image Compression Efficiency, July 2014
On Tuesday, July 15, 2014 7:34:35 AM UTC-7, Josh Aas wrote: > This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image > Formats Study and the Mozilla Research blog post entitled "Mozilla Advances > JPEG Encoding with mozjpeg 2.0". Would be nice if you guys just implemented JPEG2000. It's 2014. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Studying Lossy Image Compression Efficiency, July 2014
On Tuesday, July 15, 2014 7:34:35 AM UTC-7, Josh Aas wrote: > This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image > Formats Study and the Mozilla Research blog post entitled "Mozilla Advances > JPEG Encoding with mozjpeg 2.0". Would be nice if you guys just implemented JPEG2000. It's 2014. Not only would you get a lot more than a 5% encoding boost, but you'd get much higher quality images to boot. "But nobody supports JPEG2000 and we want to target something everyone can see!" If you had implemented it in 2014, everyone would support it today. If you don't implement it today, we'll wait another 15 years tuning a 25 year old image algorithm while better things are available. Similarly there's a reason that people are still hacking video into JPEGs and using animated GIFs. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Switching Jetpack to use the runtests.py automation
On 7/15/2014 2:49 PM, Dave Townsend wrote: > Since forever Jetpack tests in the Firefox trees have been run using our > custom python CFX tool which is based on a fork of an ancient version of > mozrunner. This causes us a number of problems. Keeping up with tree > visibility rules is hard. Some features from newer versions of mozrunner > like crash stack handling aren't available and our attempts to update to > the newer mozbase have been blocked on trying to get some of our forked > code accepted. It also makes it hard for Mozilla other developers to run > our tests as CFX has a very different syntax to the other test suites. > > We've started investigating switching away from CFX and instead using the > python automation that the mochitests use. This would work somewhat > similarly to browser-chrome tests, runtests.py will startup Firefox and > overlay some XUL and JS on the main window from where we can run the > existing JS parts of the Jetpack test suites. > > There are many benefits here. The runtests.py code is well used and known > to be resilient. It supports things like screenshots on failures and crash > stacks that Jetpack tests don't currently handle. We'll use manifest files > like the other test suites so disabling tests per platform will be easy. > Excellent mach integration will make running individual tests simple. It > also makes it possible to use commonjs style tests elsewhere in the tree. > Release engineering should find managing the Jetpack tests a lot easier as > they behave just like other mochitests. > > My initial experiment last week shows that this will work. The first part > of our tests (package tests) is running and passing on my local machine and > I expect to have the add-on tests working this week. > > I wanted to give everyone a heads up about this work to give you all a > chance to ask questions or raise objections. 
The changes to runtests and > the build system are minimal, just adding support for new manifest types > really but I will be needing reviews for those. We'll also have to make the > buildbot changes to switch over to use these new tests but I expect that to > be pretty straightforward. > ___ I am totally into the sentiment here, but I'm not sure that I'm into your exact plan of action. The Jetpack tests definitely have a lot of issues due to not using the same infrastructure as other test harnesses, so I'd be glad to see that get fixed. On the other hand, the Mochitest harness is already unduly complicated by trying to do too many things--plain Mochitests, chrome Mochitests, browser-chrome tests and all the other variants are all shoehorned in there and it's a mess. I'd rather not shoehorn yet another test framework into the Mochitest umbrella. However! In the modern era Mochitest is mostly comprised of some glue code that uses mozbase modules to do all the hard work. I think you can cobble together a reasonable facsimile that runs the Jetpack tests in a cleaner fashion by using the mozbase modules directly. Alternately, as gps pointed out in his reply, you might want to look at Marionette as a starting point. The Marionette test harness uses mozbase as well, but gives you the power of the Marionette protocol to control the browser. If we were writing the Mochitest harness from scratch today I would base it on top of Marionette. -Ted ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Switching Jetpack to use the runtests.py automation
On 7/15/14, 11:49 AM, Dave Townsend wrote: Since forever Jetpack tests in the Firefox trees have been run using our custom python CFX tool which is based on a fork of an ancient version of mozrunner. This causes us a number of problems. Keeping up with tree visibility rules is hard. Some features from newer versions of mozrunner like crash stack handling aren't available and our attempts to update to the newer mozbase have been blocked on trying to get some of our forked code accepted. It also makes it hard for Mozilla other developers to run our tests as CFX has a very different syntax to the other test suites. We've started investigating switching away from CFX and instead using the python automation that the mochitests use. This would work somewhat similarly to browser-chrome tests, runtests.py will startup Firefox and overlay some XUL and JS on the main window from where we can run the existing JS parts of the Jetpack test suites. There are many benefits here. The runtests.py code is well used and known to be resilient. It supports things like screenshots on failures and crash stacks that Jetpack tests don't currently handle. We'll use manifest files like the other test suites so disabling tests per platform will be easy. Excellent mach integration will make running individual tests simple. It also makes it possible to use commonjs style tests elsewhere in the tree. Release engineering should find managing the Jetpack tests a lot easier as they behave just like other mochitests. My initial experiment last week shows that this will work. The first part of our tests (package tests) is running and passing on my local machine and I expect to have the add-on tests working this week. I wanted to give everyone a heads up about this work to give you all a chance to ask questions or raise objections. The changes to runtests and the build system are minimal, just adding support for new manifest types really but I will be needing reviews for those. 
We'll also have to make the buildbot changes to switch over to use these new tests but I expect that to be pretty straightforward. Was Marionette considered? From what little I know (jgriffin and others can correct me), Marionette seems like the logical base for this test suite. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Intent to implement: webserial api
On 07/13/2014 11:55 AM, Jonas Sicking wrote: Sadly I don't think that is very safe. I bet a significant majority of our users have no idea what a serial port is or what will happen if they allow a website to connect to it. Agreed. It seems like the concept users are most likely to reliably understand is physical devices. https://github.com/whatwg/serial/issues/23 indicates that the expected supported underlying layers are USB, Bluetooth, and the random motherboard that still has RS232 ports. As noted in a comment on issue 20 at https://github.com/whatwg/serial/issues/20#issuecomment-28333090 it seems counterproductive to place so much importance on the legacy case. I think an important question for the spec to answer is why it needs to exist at all. Specifically, it seems like the WebUSB https://bugzil.la/674718 and WebBluetooth https://bugzil.la/674737 specs should both be equally capable of producing the standard stream abstractions supported by the protocols. And then the security and UX can both benefit from the appropriate models. This includes features that the WebSerial API currently can't really offer, like triggering a notification/wake-up/load of the app when the device is reconnected via USB or comes into range of the device, etc. This is arguably a net UX win. Additionally, if the security model involved enumerating vendor/product, not only would it simplify the wake-up notification, but the Firefox OS app marketplace could even suggest apps. (Ex: a system notification could notice you plugged in a specific vendor/product pair for the first time and offer to launch a search. Or tell you what it already found, etc.) Note that I'm not saying the spec/implementation doesn't need to exist. 
However I do think that from a security/user comprehension perspective WebUSB/WebBluetooth should handle the friendly/easy-to-use stuff and WebSerial needs to be something that needs to be vouched-for by a marketplace or requires the user performing a series of manual steps that would make most people think twice about why they're doing it. Andrew ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Switching Jetpack to use the runtests.py automation
I think I speak for everyone who's debugged JP failures when I say: Huzzah! Thanks for doing this Mossop :-) On Tue, Jul 15, 2014 at 11:49 AM, Dave Townsend wrote: > Since forever Jetpack tests in the Firefox trees have been run using our > custom python CFX tool which is based on a fork of an ancient version of > mozrunner. This causes us a number of problems. Keeping up with tree > visibility rules is hard. Some features from newer versions of mozrunner > like crash stack handling aren't available and our attempts to update to > the newer mozbase have been blocked on trying to get some of our forked > code accepted. It also makes it hard for Mozilla other developers to run > our tests as CFX has a very different syntax to the other test suites. > > We've started investigating switching away from CFX and instead using the > python automation that the mochitests use. This would work somewhat > similarly to browser-chrome tests, runtests.py will startup Firefox and > overlay some XUL and JS on the main window from where we can run the > existing JS parts of the Jetpack test suites. > > There are many benefits here. The runtests.py code is well used and known > to be resilient. It supports things like screenshots on failures and crash > stacks that Jetpack tests don't currently handle. We'll use manifest files > like the other test suites so disabling tests per platform will be easy. > Excellent mach integration will make running individual tests simple. It > also makes it possible to use commonjs style tests elsewhere in the tree. > Release engineering should find managing the Jetpack tests a lot easier as > they behave just like other mochitests. > > My initial experiment last week shows that this will work. The first part > of our tests (package tests) is running and passing on my local machine and > I expect to have the add-on tests working this week. 
> > I wanted to give everyone a heads up about this work to give you all a > chance to ask questions or raise objections. The changes to runtests and > the build system are minimal, just adding support for new manifest types > really but I will be needing reviews for those. We'll also have to make the > buildbot changes to switch over to use these new tests but I expect that to > be pretty straightforward. > ___ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform > ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Switching Jetpack to use the runtests.py automation
Since forever Jetpack tests in the Firefox trees have been run using our custom python CFX tool which is based on a fork of an ancient version of mozrunner. This causes us a number of problems. Keeping up with tree visibility rules is hard. Some features from newer versions of mozrunner like crash stack handling aren't available and our attempts to update to the newer mozbase have been blocked on trying to get some of our forked code accepted. It also makes it hard for other Mozilla developers to run our tests as CFX has a very different syntax to the other test suites. We've started investigating switching away from CFX and instead using the python automation that the mochitests use. This would work somewhat similarly to browser-chrome tests: runtests.py will start up Firefox and overlay some XUL and JS on the main window from where we can run the existing JS parts of the Jetpack test suites. There are many benefits here. The runtests.py code is well used and known to be resilient. It supports things like screenshots on failures and crash stacks that Jetpack tests don't currently handle. We'll use manifest files like the other test suites so disabling tests per platform will be easy. Excellent mach integration will make running individual tests simple. It also makes it possible to use commonjs style tests elsewhere in the tree. Release engineering should find managing the Jetpack tests a lot easier as they behave just like other mochitests. My initial experiment last week shows that this will work. The first part of our tests (package tests) is running and passing on my local machine and I expect to have the add-on tests working this week. I wanted to give everyone a heads up about this work to give you all a chance to ask questions or raise objections. The changes to runtests and the build system are minimal, just adding support for new manifest types really but I will be needing reviews for those. 
We'll also have to make the buildbot changes to switch over to use these new tests but I expect that to be pretty straightforward. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
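[Editor's note: as a sketch of what per-platform disabling could look like, here is a hypothetical manifest in the manifestparser (.ini) format that the other suites use. The file names and conditions are illustrative, not actual Jetpack tests:]

```ini
# jetpack-package.ini — hypothetical manifest for Jetpack package tests
[DEFAULT]
support-files = head.js

[test_tabs.js]

[test_page_mod.js]
# disable on a single platform, the same way other suites do
skip-if = os == "win"
```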
Re: fine-grained filtering of bugmail
On 2014-07-15, 1:04 AM, Byron Jones wrote:
Ehsan Akhgari wrote:
On 2014-07-14, 9:50 AM, Byron Jones wrote:
Ehsan Akhgari wrote:
1. Can we get a "Any direct relationship" field in the Relationship drop-down which means Assignee || Reporter || QA Contact || CC'ed || Mentoring (basically all cases except Watching)?
how is that different from "not watching", which already exists?
Doesn't "Not watching" also include "no relationship with the bug"?
good point - file a bug please :)

Bug 1038848.

3. Can we get a "All watched components" flag under Components?
no, watching is your relationship with the bug, not a specific component.
I'm talking about component watching here...
... i know :) component watching is the reason why you receive email, so it's covered by the 'relationship' filter.

Oh, I see what you mean now. Thanks! ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Intent to implement: webserial api
On 2014-07-14, 7:22 AM, tzi...@gmail.com wrote: On Monday, July 14, 2014 2:00:47 PM UTC+3, Gervase Markham wrote: On 13/07/14 18:35, Vasilis wrote: Jonas, I would be really interested in your thoughts. Try as we might (in the WebSerial API docs, at least), noone could actually think of a use case where providing access to a physical (RS232), or Virtual (VirtualUSB or VirtualBluetooth) serial port could be a privacy and/or security issue. It's a whole different beast when you provide access for cameras or any USB device, of course, but what could someone do with access to a serial port? The WebSerial interface doesn't cover the Universal Serial Bus, then? For USB, the OS has some underlying knowledge of what the device is, right? So we could do permissions for USB on a per-device rather than per-port basis, which is the right way to do it IMO. But AFAIK that's not possible for RS232. Gerv Which is the kind of exaggerated security for no real purpose that I mentioned. The three major OSes give you APIs to access any Serial-Port-like device (physical or virtual) in a straightforward manner, because, for all intents and purposes, those are Serial ports. Trying to go around this and map devices with ports ranges from hard (USB, Bluetooth) to impossible (RS232). I do agree with Kip, some Serial devices are important and/or dangerous, but do we really want to set the security of this based on the idea that someone from a government agency and/or industrial plant will use the power plant's controlling computer to: 1. Plug in a serial device, like an Arduino 2. Access the Internet 3. Go to a nefarious website 4. Give access to the PLC, and kaboom. Isn't that a little too much paranoia? Should we have restricted the Camera API because someone could have used it on a computer with a spycam, thus leaking government info and starting WW3? 
I'm going to ignore the caricature version of the threat model that you put forward here, but yes, this is a real threat and one which we should protect against.

The difference between native OSes and the Web here is noteworthy: web pages run their code in a sandbox which currently doesn't get any interesting permissions, and this is the property which enables you to go to any website without the fear of the website installing malware on your machine, etc. Native platforms provide no such guarantee, so granting access to hardware like this may be OK for native platforms, but not necessarily for the Web.

The other issue is that prompting the user for a security decision is really tricky, because we would be relying on the user to understand the details of the threat in order to make a good decision. In most cases we prefer to prompt for privacy decisions, not security decisions, because the former is usually much easier for the user to decide.

Cheers,
Ehsan
Re: Studying Lossy Image Compression Efficiency, July 2014
Hello Josh,

thank you and all involved for your efforts to make the web faster.

Are there any plans to integrate with other tools, specifically ImageMagick? Or would you leave that up to others? With all the options available for image processing, one can end up building quite a complex chain of tools and commands to produce the best output.

While you state that you now also accept JPEG for re-compression, this usually involves a loss of quality in the process. Does mozjpeg have a preferred input format (for best quality/performance)?

Best regards
Fabian
Re: Studying Lossy Image Compression Efficiency, July 2014
Study is here: http://people.mozilla.org/~josh/lossy_compressed_image_study_july_2014/

Blog post is here: https://blog.mozilla.org/research/2014/07/15/mozilla-advances-jpeg-encoding-with-mozjpeg-2-0/
Studying Lossy Image Compression Efficiency, July 2014
This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image Formats Study and the Mozilla Research blog post entitled "Mozilla Advances JPEG Encoding with mozjpeg 2.0".
Re: Intent to implement: webserial api
Thank you all for your input. I would like to sum up, in order to have a better overview of what we are looking for.

- Everybody agrees that we should apply some level of restriction to the API.
- The restriction should be on a per-web-page basis and not a per-port basis, which would be inefficient. This has the disadvantage that any web page granted permission will be able to enumerate all local serial ports without notice.
- The spec mentions that the user should be able to explicitly grant or revoke permissions for any website. One way to achieve that is to prompt the user when necessary, which provides an insufficient level of security for the average user. Another way is to configure this option somewhere in the settings, which lacks usability. This remains an open question, and more proposals are welcome.
- Another open question is whether privileged web apps should have unrestricted access to the API.

I will raise all of the above with the security team in order to get feedback. In any case, keep sending your ideas / proposals.

-Alex

On Mon, Jul 14, 2014 at 4:58 PM, wrote:
> Ah, sorry for not being too straightforward Erik.
>
> The answer is no (as far as the API design goes, but the implementation
> should follow that ofc)
>
> There is actually a very nice image explaining this on our messageboard,
> but I'm on my phone so I'll do my best to explain this with a similar
> example.
>
> When you use a Mouse, the OS provides APIs for a mouse, independent of
> the connection type (ps2, Bluetooth, serial, USB or other). The OS &
> drivers make it show up as a mouse.
>
> Similarly, when you have a serial port (the OS recognizes this device
> as a serial port), the OS provides a set of APIs to talk to that (open,
> close, read & write), regardless of the underlying physical connection.
>
> The WebSerial API proposes the exposure of those APIs (but not the
> underlying ones, so no way to talk to the USB stack) to the web.
>
> I hope this makes things more clear?
> Vasilis
>
> On 14 Jul 2014, at 16:46, Eric Rescorla wrote:
>
>> On Mon, Jul 14, 2014 at 4:22 AM, wrote:
>>> On Monday, July 14, 2014 2:00:47 PM UTC+3, Gervase Markham wrote:
>>>> On 13/07/14 18:35, Vasilis wrote:
>>>>> Jonas, I would be really interested in your thoughts. Try as we
>>>>> might (in the WebSerial API docs, at least), no one could actually
>>>>> think of a use case where providing access to a physical (RS232),
>>>>> or Virtual (VirtualUSB or VirtualBluetooth) serial port could be a
>>>>> privacy and/or security issue.
>>>>>
>>>>> It's a whole different beast when you provide access for cameras
>>>>> or any USB device, of course, but what could someone do with
>>>>> access to a serial port?
>>>>
>>>> The WebSerial interface doesn't cover the Universal Serial Bus,
>>>> then?
>>>>
>>>> For USB, the OS has some underlying knowledge of what the device
>>>> is, right? So we could do permissions for USB on a per-device
>>>> rather than per-port basis, which is the right way to do it IMO.
>>>> But AFAIK that's not possible for RS232.
>>>>
>>>> Gerv
>>>
>>> Which is the kind of exaggerated security for no real purpose that I
>>> mentioned.
>>>
>>> The three major OSes give you APIs to access any Serial-Port-like
>>> device (physical or virtual) in a straightforward manner, because,
>>> for all intents and purposes, those are Serial ports. Trying to go
>>> around this and map devices with ports ranges from hard (USB,
>>> Bluetooth) to impossible (RS232).
>>
>> I still don't think I understand your answer here. Will this API
>> allow me to directly address USB devices? To take a concrete case,
>> say that I have a USB printer, will I be able to use this API
>> (subject to user consent) to talk to it directly and print documents?
>>
>> -Ekr
>>
>>> I do agree with Kip, some Serial devices are important and/or
>>> dangerous, but do we really want to set the security of this based
>>> on the idea that someone from a government agency and/or industrial
>>> plant will use the power plant's controlling computer to:
>>> 1. Plug in a serial device, like an Arduino
>>> 2. Access the Internet
>>> 3. Go to a nefarious website
>>> 4. Give access to the PLC, and kaboom.
>>>
>>> Isn't that a little too much paranoia? Should we have restricted the
>>> Camera API because someone could have used it on a computer with a
>>> spycam, thus leaking government info and starting WW3?