My comments in summary:
* I'm in agreement with the approach you're suggesting in #1
* I'd suggest getting feedback from Julien on how Gaia integrates with
the gecko layers for the restart and cancel workflows, and then
simulating what Gaia does there. We should aim for sanity tests here,
as the restart case (especially with packaged apps) is actually
quite common (many problems can cause a download to fail)
* David Chan is already focusing on app permissions, so I'd keep that
out of scope for now
* The next step in analysis that I think would be helpful is to take
the end-to-end analysis you've done and apply a gecko perspective to
it, referencing the underlying APIs involved
* We need signed privileged packaged app test cases in this list (e.g.
install a privileged app; a rough sketch follows this list)
* I'm mostly in agreement with the theme of the priorities.
Specifically, I'd suggest these themes are important to consider
when comparing against your list:
o Install/launch/uninstall hosted app
o Install failure for hosted app with type privileged or certified
o Install/launch/uninstall packaged app
o Install failure for packaged app with type privileged that is not signed
o Install failure for packaged app with type certified
o Install/launch/uninstall hosted app with appcache
o Download failure for packaged app due to running out of space
o Download failure for hosted app with appcache due to running out of
space
o Cancel download of hosted app with appcache
o Cancel download of packaged app
o Restart download of hosted app with appcache
o Restart download of packaged app
o Download failure for packaged app due to a bad packaged app zip
o Download failure for hosted app with appcache due to a bad appcache
manifest
o Update hosted app, hosted app with appcache, packaged app
o Fail to update hosted app, hosted app with appcache, packaged app
o Restart update of hosted app, hosted app with appcache, packaged app
o Install/launch/uninstall/update signed privileged packaged app
o Other, higher-level areas of analysis could focus on:
+ Webapp manifest analysis
+ Drilling into the specifics of the download API
+ Mini-manifest analysis
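To make the signed privileged case concrete, here is a rough sketch of
what a gecko-level test for the unsigned privileged install failure
could look like. The mini-manifest URL is a placeholder and the exact
error name would need to be confirmed against the platform; the point
is asserting on downloadError after installPackage:

    // Inside a mochitest; ok() comes from the SimpleTest harness.
    // Placeholder URL for a package whose manifest declares type
    // "privileged" but whose zip is not signed.
    var miniManifestURL = "http://test/unsigned-privileged.webapp";
    var request = navigator.mozApps.installPackage(miniManifestURL);
    request.onsuccess = function() {
      var app = request.result;
      app.ondownloaderror = function() {
        // Exact error name to confirm (something like
        // INVALID_SIGNATURE or INVALID_SECURITY_LEVEL).
        ok(app.downloadError,
           "unsigned privileged package rejected: " +
           app.downloadError.name);
      };
    };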
Sincerely,
Jason Smith
Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com
On 5/23/2013 3:08 PM, Jason Smith wrote:
==> Moving to dev-b2g (don't think there's anything confidential here).
+Fernando
+Julien
On 5/23/2013 2:26 PM, David Clarke wrote:
Navigator.mozApps Developers:
I have been reviewing the automation suites that are currently in
place, and I wanted to propose a plan for organizing test cases going
forward and preparing for 1.1 / 1.2 features.
#1 Cleanup / Stabilization:
- http://mzl.la/Ywh6rL. A number of tests in the current mozApps
chrome test suite are failing intermittently.
The current mozApps test suite is not run on the B2G emulator because
it is a mochitest-chrome based set of tests.
My proposal is to move the current mozApps test suite to a
mochitest-plain based test suite and use
SpecialPowers.autoConfirmAppInstall, which was not available when the
chrome-based tests were written.
The double benefit is that we remove one source of intermittent
failure and also gain the ability to run the test suite on the B2G
emulator.
(IMHO this is the main cause of the test timeouts you are seeing in
the above link.)
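For illustration, a minimal sketch of what one of these tests could
look like as mochitest-plain. The manifest URL is a placeholder, and
I'm assuming autoConfirmAppInstall takes a completion callback:

    // Inside a mochitest-plain file; SimpleTest/ok() come from the
    // harness.
    SimpleTest.waitForExplicitFinish();

    // Placeholder manifest URL served by the mochitest web server.
    var manifestURL = "http://test/file_app.webapp";

    // Suppress the install confirmation UI so the test can run
    // unattended on desktop and on the B2G emulator.
    SpecialPowers.autoConfirmAppInstall(function() {
      var request = navigator.mozApps.install(manifestURL);
      request.onerror = function() {
        ok(false, "install failed: " + request.error.name);
        SimpleTest.finish();
      };
      request.onsuccess = function() {
        var app = request.result;
        ok(app, "app installed");
        // Uninstall so the test is re-runnable.
        navigator.mozApps.mgmt.uninstall(app).onsuccess = function() {
          SimpleTest.finish();
        };
      };
    });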
#2 Extend coverage:
- Identify areas of coverage that are needed and high priority, and
attempt to write test cases for them.
- http://bit.ly/16aGkPS
I have imported the manual test cases into the above Google
spreadsheet and attempted to rank them by both priority and
difficulty, to get some sense of which test cases will be both easy
and high priority to automate.
- The good news is that packaged app test cases are on the way, but
there are lots of test cases that are not on the automation roadmap.
Example: stopping/cancelling and restarting packaged app installs (a
rough sketch follows).
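As a rough sketch of what the cancel-and-restart flow could look like
at the API level (assuming the app object's cancelDownload() /
download() pair behaves as the App installation API describes, and
using a placeholder mini-manifest URL):

    // Inside a mochitest; ok() comes from the SimpleTest harness.
    var miniManifestURL = "http://test/packaged-app.webapp"; // placeholder
    var request = navigator.mozApps.installPackage(miniManifestURL);
    request.onsuccess = function() {
      var app = request.result;
      app.onprogress = function() {
        // Cancel mid-download the first time we see progress.
        app.onprogress = null;
        app.cancelDownload();
      };
      app.ondownloaderror = function() {
        // The cancelled download surfaces as a download error
        // (DOWNLOAD_CANCELED, if I have the name right); restart it.
        app.ondownloadsuccess = function() {
          ok(true, "restarted download completed");
        };
        app.download();
      };
    };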
It would be good to separate the test cases listed above, discuss
from a feasibility standpoint what can be supported, and then make
sure the platform has the hooks that will allow the automation to be
written.
I have organized the general priority listings as follows:
P1: primary user flows
P2: secondary user flows, a little further from the beaten path
P3: edge cases and error conditions that are not explicitly part of a
user story
I am only considering P1 issues for this round, but I would like to
start breaking the list down further and seeing what would be
beneficial for gecko automation.
If there are specific areas that we can agree on for automation, then
we can focus on those specific use cases / test cases. The main areas
for apps testing are listed below; please feel free to update with
any thoughts or specific test cases that you think are relevant. You
can also edit the Google spreadsheet linked above and add comments
wherever necessary.
Packaged Apps:
Updated Apps:
Preloaded Appcache:
Appcache:
Hosted Apps:
Notifications:
Permissions:
Thanks all,
--David