Re: New QE-Verify Flag in Bugzilla

2014-08-19 Thread Clint Talbert
Hello all,

I know that there is a bunch of confusion in the wake of the recent 
reorganization of b2g, so let me set things straight. There is no new QE 
department. The QA team is still the same folks, although part of it is in the 
B2G org, part of it is in the Services org, and the core QA team is still in 
the Platform org.

This little "QE" moniker that was used here is indicating the aspirational 
shift that we'd like to make to move Quality Analysis at Mozilla toward a more 
"Quality Engineering" orientation. This means we are trying to shift toward 
deep diving into the technology behind the features we're testing, making 
better use of automation, being more proactive, being more data & metrics 
driven, being more experimental in ways that we do the work we do (with 
community, trying new approaches), etc. 

This "engineering" orientation toward Quality will be a hallmark of the unified 
voice for Quality at Mozilla that we are striving to create. I know that many 
developers across the org are also interested in this, and we invite you to be 
a part of that too.

There will be much more to come on all of this, and you should start to see 
some small things changing in the Quality teams as we move toward that 
"Engineering" direction. We aren't going to change the names of all the "QA" 
things right now; we are going to focus on earning the "engineering" moniker 
first before we change anything else.

Hope that clears up the confusion.

Clint

- Original Message -
From: "Jared Wein" 
To: "Marc Schifer" 
Cc: firefox...@mozilla.org, "dev-platform" , 
"dev-quality" , firefox-...@mozilla.org
Sent: Tuesday, August 19, 2014 11:03:58 AM
Subject: Re: New QE-Verify Flag in Bugzilla

Hi Marc,

Can you give some background on the name change from QA to QE? Does QE stand 
for Quality Engineering, whereas QA is Quality Assurance? What are the 
differences between the two? Do we now have two different teams/departments?

Thanks,
Jared

- Original Message -
> From: "Marc Schifer" 
> To: "dev-quality" 
> Cc: firefox...@mozilla.org, "dev-platform" , 
> firefox-...@mozilla.org
> Sent: Thursday, August 14, 2014 5:45:52 PM
> Subject: New QE-Verify Flag in Bugzilla
> 
> A short while ago we had a discussion about how to streamline some QA
> processes (Desktop QA - Bugzilla keywords: Proposal on dev-quality) and
> reduce the bug spam generated by setting keywords and whiteboard tags
> (Bugspam generated by the new process on firefox-dev). The final resolution
> of this discussion was to have a new Bugzilla flag created, qe-verify, that
> will replace the current use of the QA+/- whiteboard tags and the verifyme
> keyword. This will give you the ability to easily filter out any activity on
> the flag instead of needing multiple filters for keyword and whiteboard
> activity, and it will also simplify our queries for tracking work to be
> done in QE. The use of other keywords (qawanted, qaurgent, steps-wanted,
> regression-window-wanted, testcase-wanted) will remain the same.
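> 
> For example, a minimal sketch of the kind of query this enables (this
> assumes the BMO REST endpoint at /rest/bug and the flagtypes.name
> custom-search field; adjust it to whatever query tooling you normally use):
> 
>   # Sketch only: find bugs with qe-verify+ set, assuming the f1/o1/v1
>   # advanced-search parameters are accepted by the REST search endpoint.
>   import requests
> 
>   params = {
>       "f1": "flagtypes.name",
>       "o1": "equals",
>       "v1": "qe-verify+",  # bugs queued for verification
>       "include_fields": "id,summary,product,component",
>   }
>   resp = requests.get("https://bugzilla.mozilla.org/rest/bug", params=params)
>   for bug in resp.json()["bugs"]:
>       print("%s  %s" % (bug["id"], bug["summary"]))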
> 
> Currently this is only implemented for Firefox QE; other teams are more than
> welcome to adopt it if so desired.
> 
> Details:
>  New Flag: qe-verify[?|+|-]
>   qe-verify[?] = triage request
>   qe-verify[+] = bug to be verified
>   qe-verify[-] = bug will not/can not be verified
> 
>  Deprecate use of:
>   Whiteboard tag: QA[?|+|-]
>   Keyword   : verifyme
> 
>  The products and components the flag can be set for are:
> 
>   Core   -- Any --
>   Firefox-- Any --
>   Firefox for Android-- Any --
>   Firefox for Metro  -- Any --
>   Firefox Health Report  -- Any --
>   Firefox OS -- Any --
>   Loop   Client
>   Loop   General
>   Mozilla Localizations  -- Any --
>   Mozilla QA Mozmill Tests
>   Mozilla Services   -- Any --
>   NSPR   -- Any --
>   NSS-- Any --
>   Plugins-- Any --
>   Snippets   -- Any --
>   Toolkit-- Any --
> 
> If there is a component that I missed, please let me or the Bugzilla team
> know so we can fix it.
> 
> This message was posted to dev-quality and has been crossposted to
> firefox-dev and dev-platform. Please follow up on dev-quality.
> 
> Thank You.
>  Marc S.
> Firefox Q.E. Manager
> 
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
> 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Javascript code coverage ?

2014-06-16 Thread Clint Talbert

Inline
On 6/16/2014 10:23, Sylvestre Ledru wrote:

Hello,

I am working on providing weekly code coverage of Firefox code.
For now, I am able to do that for C/C++ code.
Awesome. Where are you putting these reports? What is the workload used 
to generate those reports?

I would like to know if anyone tried to generate code coverage recently
on the Javascript code of Firefox (or Firefox OS)?
There was some old work done on this by Jcranmer and decoder a while ago 
[1]. I don't know what ever came of it.

I saw some b2g reports [1] talking about Coverage but I am not sure that
they mean the same thing.
No, that is a metric of the coverage of the test run. These are large, 
mostly manual test runs that we run against devices on B2G, and the 
coverage number reflects how much of the test run we have completed at 
that point. (The results come out every few days, so you can see the 
coverage grow as we complete more testing.)


Thanks,
Clint

[1] http://quetzalcoatal.blogspot.com/2012/06/js-code-coverage.html

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Monitoring software being installed on performance test machines

2013-10-17 Thread Clint Talbert
We have enough tooling in place to see whether these affect performance or 
not. What we will need is a clear demarcation of when the change is made 
to each OS. Ideally the change would go out across all OSes at the same 
(or nearly the same) time.  If we can at least get the change rolled out 
across all OSes within the same small number of hours (or at worst the 
same day), that will vastly help us determine if there are any impacts to 
performance testing due to this change.
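
To make that concrete, here is a minimal sketch (hypothetical numbers and a
hypothetical demarcation timestamp) of the kind of before/after comparison
we'd run against the performance results once we know when the monitoring
landed on each OS:

  # Sketch only: compare mean test times before and after the rollout point.
  ROLLOUT = "2013-10-21T00:00:00"  # hypothetical demarcation timestamp

  # (ISO timestamp, test time in ms) pairs pulled from our results store
  results = [("2013-10-19T04:00:00", 312.0), ("2013-10-20T09:00:00", 315.5),
             ("2013-10-22T03:00:00", 318.0), ("2013-10-23T11:00:00", 319.5)]

  before = [v for t, v in results if t < ROLLOUT]
  after = [v for t, v in results if t >= ROLLOUT]
  before_mean = sum(before) / len(before)
  after_mean = sum(after) / len(after)
  print("before %.2f, after %.2f, shift %.2f%%" %
        (before_mean, after_mean,
         100.0 * (after_mean - before_mean) / before_mean))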


Thanks for the heads-up.

Clint
On 10/17/2013 09:00 AM, John O'Duinn wrote:

tl;dr: We recently installed system monitoring software on our buildbot
masters, build (but not test) slaves, and various other RelEng machines. IT
wants to continue this rollout, deploying monitoring software onto RelEng
production test machines, which raises a concern about possible impact
to performance numbers. If you see any production impact, please let us
know.

==

We are being asked by IT to deploy monitoring tools onto all build,
unittest and performance testing machines. These are to help gather
system level statistics about CPU, memory, disk utilization, etc. This
is so IT can monitor efficiency of production jobs run on these systems.

This monitoring software has already been installed on buildbot masters,
linux+mac builders, and some misc other servers. As those changes were
zero-risk to production, we didn't need to forewarn these newsgroups.
However, installing this software on production win32/64 builders and
win/mac/linux performance testers has a small-but-non-zero risk that the
act of running these tools will change the timing results in performance
test jobs. Hence this advance notice.

Exact timing of this rollout is waiting on some unrelated win64
toolchain builder fixes to finish being deployed into production. We all
agreed that adding these monitoring tools *at the same time* as doing the
Windows toolchain upgrade would unnecessarily complicate problem detection.

Once everything is ready for final deploy, another post will be sent to
the newsgroups (and sheriffs), to help with any possible after-the-fact
regression range hunting. If there is any performance result wobble
because of these changes, I've been told we can tolerate minor
performance result disruption for a week or so without impacting
releases. Currently, this experiment is slated to run for 2 weeks, but
obviously, if this monitoring introduces larger disruption, we will
disable it ASAP. Sheriffs and RelEng buildduty will be monitoring
closely, but as always, if you see anything weird, please make sure they
know ASAP.

No downtime is required, as our systems will pick up these changes
between test runs as machines reboot.

The curious can follow along in bug#920626 (deploy collectd to RelEng
mac+linux test systems) and bug#920629 (deploy graphite client to RelEng
Windows build and test systems).

If you've any questions, or concerns, please let me know.
John.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Sheriffing Module Proposal

2013-08-27 Thread Clint Talbert

Hello all,

I recently proposed a Sheriffing module over in mozilla.governance: 
https://groups.google.com/forum/#!topic/mozilla.governance/Knjirdi3suE


The short story is that it's high time we had a module for the Tree 
Sheriffs. I propose starting with our current full-time sheriffs: Ed 
Morley as owner, with RyanVM, Philor, Tomcat, and Kwierso as peers. We can of 
course add more peers as time goes on.


Before we go ahead with creating it, I wanted to invite feedback here 
too, which I should have done when I first posted it (sorry). Feel free 
to followup on the governance thread to keep the conversation in one 
place, if you don't mind.


Thanks,

Clint

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Generic data update service?

2013-07-12 Thread Clint Talbert

On 7/12/2013 9:49 AM, Gervase Markham wrote:

We keep hitting cases where we would like Firefoxes in the field to have
some data updated using a process which is much lighter in expended
effort than shipping a security release. Here are some examples of the
data Firefox stores that I know of which might benefit from this:

- The Public Suffix List (more important with new TLDs coming)
- The Root CA store, or trust lists in general
- Addon and plugin blacklists
- The default prefs file
- The UA override list (needed for B2G, at least)


This is all good stuff, and I want to support us being nimble. We also 
need to balance that against security and quality in our builds. We go 
through the release process for a reason, and we exert the energy to QA 
these builds and ensure we can update them incrementally, reliably, and 
repeatably. I think that a service like this can be fine, but I'd 
want to be very certain we only change specific, safe items in the 
profile directory and stay away from items in the application directory.


My two cents,
Clint
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Code coverage take 2, and other code hygiene tools

2013-06-25 Thread Clint Talbert

On 6/24/2013 8:02 PM, Justin Lebar wrote:


Under what circumstances would you expect the code coverage build to 
break but all our other builds to remain green?


Sorry, I should have been clearer. For builds, I think it would be 
pretty unusual for them to break on code coverage and yet remain green 
on non-coverage builds.  But I've seen tests do wild things with code 
coverage enabled because timing changes so much.  The most worrisome 
issues are when tests cause crashes on coverage-enabled builds, and 
that's what I'm looking for help in tracking down and fixing. Oranges on 
coverage-enabled builds I can live with (because they don't change 
coverage numbers in large ways and can even point us at timing-dependent 
tests, which could be a good thing in the long run), but crashes 
effectively prevent us from measuring coverage for that test 
suite/chunk. Test crashes were one of the big issues with the old system 
-- we could never get eyes on the crashes to debug what had happened and 
get it fixed.


Thanks,
Clint
On Jun 24, 2013 6:51 PM, "Clint Talbert" <ctalb...@mozilla.com> wrote:


Decoder and Jcranmer got code coverage working on Try[1]. They'd
like to expand this into something that runs automatically,
generating results over time so that we can actually know what our
code coverage status is with our major run-on-checkin test
harnesses.  While both Joduinn and I are happy to turn this on, we
have been down this road before. We got code coverage stood up in
2008, ran it for a while, but when it became unusable and fell
apart, we were left with no options but to turn it off.

Now, Jcranmer and Decoder's work is of far higher quality than
that old run, but before we invest the work in automating it, I
want to know if this is going to be useful and whether or not I
can depend on the community of platform developers to address
inevitable issues where some checkin, somewhere breaks the code
coverage build.  Do we have your support?  Will you find the
generated data useful?  I know I certainly would, but I need more
buy-in than that (I can just use try if I'm the only one concerned
about it). Let me know your thoughts on measuring code coverage
and owning breakages to the code coverage builds.

Also, what do people think about standing up JSLint as well (in a
separate automation job)?  We should treat these as two entirely
separate things, but if that would be useful, we can look into
that as well.  We can configure the rules around JSLint to be
amenable to our practices and simply enforce against specific
errors we don't want in our JS code.  If the JS style flamewars
start-up, I'll split this question into its own thread because
they are irrelevant to my objective here.  I want to know if it
would be useful to have something like this for JS or not.  If we
do decide to use something like JSLint, then I will be happy to
facilitate JS-Style flamewars because they will then be relevant
to defining what we want Lint to do but until that decision is
made, let's hold them in check.

So, the key things I want to know:
* Will you support code coverage? Would it be useful to your work
to have a regularly scheduled code coverage build & test run?
* Would you want to additionally consider using something like
JS-Lint for our codebase?

Let me know,

Clint

[1]
https://developer.mozilla.org/en-US/docs/Measuring_Code_Coverage_on_Firefox
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Code coverage take 2, and other code hygiene tools

2013-06-24 Thread Clint Talbert
Decoder and Jcranmer got code coverage working on Try[1]. They'd like to 
expand this into something that runs automatically, generating results 
over time so that we can actually know what our code coverage status is 
with our major run-on-checkin test harnesses.  While both Joduinn and I 
are happy to turn this on, we have been down this road before. We got 
code coverage stood up in 2008, ran it for a while, but when it became 
unusable and fell apart, we were left with no options but to turn it off.


Now, Jcranmer and Decoder's work is of far higher quality than that old 
run, but before we invest the work in automating it, I want to know if 
this is going to be useful and whether or not I can depend on the 
community of platform developers to address inevitable issues where some 
checkin, somewhere breaks the code coverage build.  Do we have your 
support?  Will you find the generated data useful?  I know I certainly 
would, but I need more buy-in than that (I can just use try if I'm the 
only one concerned about it). Let me know your thoughts on measuring 
code coverage and owning breakages to the code coverage builds.


Also, what do people think about standing up JSLint as well (in a 
separate automation job)?  We should treat these as two entirely 
separate things, but if that would be useful, we can look into that as 
well.  We can configure the rules around JSLint to be amenable to our 
practices and simply enforce against specific errors we don't want in 
our JS code.  If the JS style flamewars start up, I'll split this 
question into its own thread because they are irrelevant to my objective 
here.  I want to know if it would be useful to have something like this 
for JS or not.  If we do decide to use something like JSLint, then I 
will be happy to facilitate JS-style flamewars because they will then be 
relevant to defining what we want Lint to do, but until that decision is 
made, let's hold them in check.


So, the key things I want to know:
* Will you support code coverage? Would it be useful to your work to 
have a regularly scheduled code coverage build & test run?
* Would you want to additionally consider using something like JS-Lint 
for our codebase?


Let me know,

Clint

[1] 
https://developer.mozilla.org/en-US/docs/Measuring_Code_Coverage_on_Firefox

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MozillaBuild 1.7 Release

2013-05-09 Thread Clint Talbert

On 5/9/2013 10:00 AM, Gregory Szorc wrote:


MozillaBuild was AFAIK the last thing holding us back from requiring
Python 2.7.3 to build the tree. So, I filed bug 870420 to up the build
requirements from 2.7.0+ to 2.7.3+. For the unaware, 2.7.3 fixes a whole
host of small bugs that we constantly have to work around. I'd rather we
just bump the version requirement than continue to toil with workarounds
to fixed Python bugs.

++ Get everything to python 2.7.3!

-- Clint
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal for using a multi-headed tree instead of inbound

2013-04-03 Thread Clint Talbert

On 4/3/2013 6:33 PM, jmaher wrote:

I looked at the data used to calculate the offenders, and I found:

type               total jobs   total duration (s)    total hours
try builders            3,525           12,239,477       3,399.85
try testers            71,821          121,294,315      33,692.87
inbound builders        7,862           30,877,533       8,577.09
inbound testers       121,641          182,883,638      50,801.01
other builders         14,690           26,990,702       7,497.42
other testers          75,170          111,729,324      31,035.92
totals                294,709          486,014,989     135,004.16



The sheriffs and releng and I have been talking about this problem for 
the last month or two, knowing that we were running way in the red. We 
have a bunch of solutions, but we hadn't yet crunched the numbers to see 
what our best path forward is.  Our best solution is certainly going to 
be some combination of process changes and technical optimizations.  But 
what we focus on, and when, is the million-dollar question.


Joel and I did some calculations:
* 200 pushes/day[1]
* 325 test jobs/push
* 25 builds/push
* .41 hours/test (on average, from above numbers)
* 1.1 hours/build (on average, based on try values from above)

Then you can approximate what the load of Kat's suggestion would look 
like: 200 pushes/day * ((325 tests/push * 0.41 hrs/test) + (25 builds/push * 
1.1 hrs/build)) = 32150 hrs/day


So we need 32150 compute hours per day to keep up.
If you look at our totals above for the week of data that gps provided us 
with, you can see that we are currently running at 135004 hours/week / 
7 days = 19286 compute hours/day.
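
In code form, the same back-of-the-napkin estimate (all numbers taken from
the thread above, nothing new here):

  pushes_per_day = 200
  tests_per_push = 325
  builds_per_push = 25
  hours_per_test = 0.41   # average, from the totals above
  hours_per_build = 1.1   # average, based on the try values above

  needed = pushes_per_day * (tests_per_push * hours_per_test +
                             builds_per_push * hours_per_build)
  print(needed)           # 32150.0 compute hours/day needed

  current = 135004 / 7.0  # week of data from gps
  print(current)          # ~19286 compute hours/day available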


So, while I really like Kat's proposal from a sheriffing standpoint, my 
back-of-the-napkin math here makes me worry that our current 
infrastructure can't support it.


The only way I see to do something like this approach would be to batch 
patches together in order to reduce the number of pushes, or to run 
tests intermittently like both dbaron and jmaher mentioned.


We need to figure out what we can do to make this better--it's a 
terrible problem. We're running way in the red, and we do need to see 
what we can do to be more efficient with the infrastructure we've got 
while at the same time finding ways to expand it. There are expansion 
efforts underway but expanding the physical infrastructure is not a 
short term project. This is on our goals (both releng and ateam) for Q2, 
and any help crunching numbers to understand our most effective path 
forward is appreciated.


Clint
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal for using a multi-headed tree instead of inbound

2013-04-03 Thread Clint Talbert

On 4/3/2013 4:28 PM, Joshua Cranmer 🐧 wrote:

On 4/3/2013 4:31 PM, Kartikaya Gupta wrote:



For what it's worth, I do recall there being release engineering talk
about some sort of "autoland" feature (which would automatically land
any patch that passed try or something), and I recall (my memory may
just be playing tricks on me) that it got to the prototyping phase but
was shelved because the "merge" part was too difficult/impossible. A
comment from anyone who worked on this would be helpful.

Actually, the issues I am aware of were a security issue and a 
person-resource issue in completing the work.  The security issue has been 
fixed, and this current quarter (Q2), Release Engineering and the 
Bugzilla team are planning to team up to finish the work here.


One of the really nice things we can do with autoland is schedule 
non-time-sensitive patches for landing when the tree is quieter, like 
1AM pacific.


But my point is that autoland is coming back and we hope to have it on a 
BMO instance near you soon.


Clint
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


The War on Orange Needs YOU

2013-03-01 Thread Clint Talbert
The rate of intermittent failures in our automation (measured by a metric 
called the Orange Factor; the effort to diminish it is called The War on 
Orange) has skyrocketed. Starting around February 17 on mozilla-inbound, 
we took off on an exponential curve that is making it extremely hard to 
sheriff the tree, land patches, and otherwise get work done at Mozilla [1]. 
If anyone knows what started happening on the 17th that caused this sudden 
change, please let us know; however, it looks like it was several changes 
over several days, so it might not be one single push.


We track this through a metric called the Orange Factor, which is simply 
the average number of intermittent failures encountered on each push. 
This means, right now when you push, on *average* you are getting 8 
failures.  On February 17, you were averaging 2.  Something has gone 
horribly, horribly wrong.
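
For clarity, the metric itself is nothing fancy; a sketch (with made-up
per-push failure counts, not real data):

  # Orange Factor = average number of intermittent failures per push.
  failures_per_push = [8, 5, 11, 7, 9, 6, 10]   # hypothetical counts
  orange_factor = sum(failures_per_push) / float(len(failures_per_push))
  print(orange_factor)   # 8.0 for these made-up numbers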


If we solve the current top 10 intermittent issues [2], we will be back 
down to 4.47, which, while almost double where we were on February 17, is 
far, far better than where we are now (8.32).


I'm begging for volunteers to step forward, do everyone a favor, and 
dig into one of the bugs below, and for some set of brave souls to look 
critically at what landed during our exponential uptick for possible 
culprits.


* Bug 761987 - http://bit.ly/WmorJg - The worst offender. If anyone can 
help out, please do.
* Bug 833769 - http://bit.ly/ZK3sNQ - Memory leak that has recently 
spiked, Andrew McCreight is on the case
* Bug 711725 - http://bit.ly/14b1BlO - Jmaher and dividehex are digging 
into this. It started because Tegras would reboot intermittently; we 
fixed those, and now the Pandas are rebooting. We suspect the Pandas are 
overheating.

* Bug 835658 - http://bit.ly/13unJvC - Needs an owner
* Bug 824069 - http://bit.ly/13uSlNU - Needs an owner
* Bug 807230 - http://bit.ly/YEY5wk - Jmaher looking into this next
* Bug 764369 - http://bit.ly/Z3vWzY - Needs an owner
* Bug 754860 - http://bit.ly/WmoVz7 - Needs an owner
* Bug 818103 - http://bit.ly/Z3w6am - Needs an owner
* Bug 663657 - http://bit.ly/VjLlzF - Needs an owner, probably someone 
from my team or releng


And when you're weighing whether or not you want to jump in, remember we 
do have a goal to clean up the technical debt we've left ourselves in 
the rush to ship two 1.0 products, and this work falls (in my mind) 
squarely in line with that goal.  Please help out where you can.


Many thanks,

Clint


[1] http://bit.ly/XIogWW
[2] http://bit.ly/14aZgra
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


UA Tests

2013-02-25 Thread Clint Talbert
A few weeks ago in the Tuesday platform call, I said I'd look and see if 
I could find any tests in the tree that would capture us changing our 
User Agent string.  I looked and reviewed many of the tests that do 
*something* with the UA string, but I didn't find any that would 
actually fail if we changed the UA string inadvertently.


So I filed this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=845100

I'll work on some tests next, but if you have better ideas of how to do 
this or want to jump on it, be my guest.
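
The kind of check I have in mind might look roughly like the sketch below
(this is illustrative only, with an assumed pattern for the desktop UA
format; the real tests will live in the tree):

  import re

  # Assumed desktop format:
  # "Mozilla/5.0 (<platform>; rv:<ver>) Gecko/20100101 Firefox/<ver>"
  UA_PATTERN = re.compile(
      r"^Mozilla/5\.0 \(.+; rv:(\d+\.\d+)\) Gecko/\d+ Firefox/(\d+\.\d+)")

  def check_ua(ua):
      m = UA_PATTERN.match(ua)
      assert m, "UA string no longer matches the expected format: %r" % ua
      assert m.group(1) == m.group(2), "rv: and Firefox/ versions disagree"

  check_ua("Mozilla/5.0 (Windows NT 6.1; rv:21.0) Gecko/20100101 Firefox/21.0")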


Thanks,

Clint
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Off-main-thread Painting

2013-02-12 Thread Clint Talbert
I agree in part with the assertion about testing: the existing 
reftests will catch most regressions stemming from this. But I think we 
also need some measurements around scrolling/responsiveness in order to 
verify that off-main-thread painting is giving us the wins we hope it 
will give us. Part of that is knowing where we are now, and then seeing 
how that measurement changes once this work lands.


We can use Eideticker for some of this measurement (particularly for 
"blackbox" end-to-end measurements), but it would be even better if 
there were some way to instrument the paint pipeline so that we could 
get measurements from inside Gecko itself. If we can instrument it, we 
can run those tests per-checkin on all our products.  So I'd encourage 
you to put a bit of thought into how you might do some lightweight 
instrumentation in this code so we can verify we make the gains we hope 
to make here.
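
As a rough illustration of the kind of metric I mean (made-up numbers; this
is not an existing harness), if the paint pipeline logged a timestamp for
each composited frame, a harness could reduce that to responsiveness numbers
like so:

  import math

  def frame_stats(paint_timestamps_ms):
      # Intervals between consecutive paints; long intervals show up as jank.
      intervals = sorted(b - a for a, b in
                         zip(paint_timestamps_ms, paint_timestamps_ms[1:]))
      def percentile(p):  # nearest-rank percentile
          rank = int(math.ceil(p * len(intervals)))
          return intervals[min(len(intervals), rank) - 1]
      return {"median_ms": percentile(0.5),
              "p95_ms": percentile(0.95),
              "janky_frames": sum(1 for i in intervals if i > 1000.0 / 30)}

  print(frame_stats([0, 16, 33, 50, 120, 136, 152]))  # hypothetical data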


Clint


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Minimum Required Python Version

2012-12-03 Thread Clint Talbert

On 12/1/2012 2:29 PM, Gregory Szorc wrote:

The bump to Python 2.6 seemed to go OK. So, I think it's time to finish
the transition and bump the minimum to Python 2.7.

Per the previous discussion on this list, I don't believe we have any
outstanding objections. So, I propose we move forward with this as soon
as we have confirmation that all builders are on Python 2.7.

https://bugzilla.mozilla.org/show_bug.cgi?id=804865 is tracking bumping
the /build/ requirement to Python 2.7. Some tests will still be running
on older Python. But, I believe everybody is on board with transitioning
everything to 2.7. So, we should pretend this transition effectively
means we can stop supporting 2.6 and below everywhere in the tree as
soon as 2.7 is deployed everywhere.

If there are any objections, please voice them now.


No objections at all. I just want to say that we should also review what 
the pending work is to bump the tests to 2.7 as well.  That will likely 
have to happen after B2G automation milestones in Q1 2013 though, as we 
have no bandwidth to address the necessary bugs before then.


Clint
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: automating gecko in-rprocess xpcom

2012-12-03 Thread Clint Talbert
I'm not sure what you mean, because as best I understand it, the Selenium 
Firefox driver is an extension and thus runs in the same process space 
as Firefox itself.  If you're looking for a closer binding between your 
automation code and Firefox, you can take a look at our new automation 
mechanism, Marionette, which uses the WebDriver protocol to drive 
Firefox. (Marionette is wired inside Gecko in order to make this kind of 
automation easier.)


https://developer.mozilla.org/en-US/docs/Marionette
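
A minimal sketch of what driving Firefox through Marionette looks like from
Python (assuming the marionette_driver client package and a Firefox started
with the Marionette server listening on the default port 2828):

  from marionette_driver.marionette import Marionette

  client = Marionette(host="localhost", port=2828)
  client.start_session()
  client.navigate("https://www.mozilla.org/")
  # The script runs inside Gecko, so DOM access happens in the browser itself.
  title = client.execute_script("return document.title;")
  print(title)
  client.delete_session()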

Hope that helps,

Clint

On 11/30/2012 10:04 PM, mozz wrote:

i've been using selenium firefox driver to automate firefox and it's too slow 
as communications between firefox and driver happens out-of-process.

possible to embed gecko in a process and drive it/interact with its DOM 
directly via xpcom?

thanks.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Adding Android Panda Boards to the Fennec Test Pool

2012-10-10 Thread Clint Talbert
We are in the process of adding PandaBoards [1] to the Fennec test pool. 
That means that a single "Android opt" row on TBPL won't really let us 
distinguish between the Tegra boards (Android 2.2) and the PandaBoards 
(Android 4.0).


So, I propose we add the versions to the row names for the Android rows, 
i.e. on TBPL you'd see:


Android 2.2 opt B M(1 2 3)
Android 4.0 opt M(1 2 3...)

Note that both the PandaBoards and the Tegra boards run the same build of 
Fennec, so we won't have another 'B' entry on the Android 4.0 row.


The details of this are being tracked in bug 800047 [2].

= B2G =
Yes, we are going to use PandaBoards to test B2G on device as well. 
We will have the discussion of what to call those rows when we get a little 
closer to having them ready.  For now, let's just focus on Android tests 
running on PandaBoards for the purpose of this discussion.


Thanks,

Clint

[1]http://www.pandaboard.org/
[2]https://bugzilla.mozilla.org/show_bug.cgi?id=800047
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "ERROR: Logon failure: unknown user name or bad password." on mochitests

2012-09-27 Thread Clint Talbert

On 9/27/2012 6:52 AM, Armen Zambrano G. wrote:

On 12-09-27 7:03 AM, Neil wrote:



I'd suggest read & execute rather than full access ;-) Also, you might
find you actually need to grant permission on a containing folder.

Yeah, you wouldn't want to leave it that way, but granting full access to 
it would quickly confirm that we're tracking down the right problem and 
not chasing some wild goose.



On the other hand, do you think you could point me at the mozbase code
so I can determine what you guys do instead of
subprocess.Popen(["taskkill", "/F", "/PID", pid]).wait() [1]

Well, it's not something you can do in one line.  We use I/O completion 
ports to be certain that we can kill all the child processes spawned 
from the parent.  Converting this over is going to be an undertaking, 
but it will be worth it in the end.


Here is the code: https://github.com/mozilla/mozbase/tree/master/mozprocess
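
For the curious, a rough sketch of the Win32 idea involved (a simplification,
not the actual mozprocess code): put the launched process into a job object
so the whole process tree can be torn down with one call; mozprocess pairs
the job object with an I/O completion port to learn when processes in the
job exit.

  import ctypes
  import subprocess
  from ctypes import wintypes

  kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
  kernel32.CreateJobObjectW.restype = wintypes.HANDLE
  kernel32.OpenProcess.restype = wintypes.HANDLE

  PROCESS_TERMINATE = 0x0001
  PROCESS_SET_QUOTA = 0x0100

  # An unnamed job object; processes assigned to it can be killed as a group.
  job = kernel32.CreateJobObjectW(None, None)

  proc = subprocess.Popen(["ping", "-n", "30", "127.0.0.1"])
  handle = kernel32.OpenProcess(PROCESS_SET_QUOTA | PROCESS_TERMINATE,
                                False, proc.pid)
  # May fail if the process already lives in another job; the real code
  # handles that (and more) -- hence "not something you can do in one line".
  kernel32.AssignProcessToJobObject(job, handle)

  # ...later: one call terminates the process and everything it spawned.
  kernel32.TerminateJobObject(job, 1)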

Cheers,
Clint



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "ERROR: Logon failure: unknown user name or bad password." on mochitests

2012-09-26 Thread Clint Talbert

On 9/26/2012 8:32 AM, Armen Zambrano G. wrote:

On 12-09-26 6:45 AM, Neil wrote:

Perhaps the file system permissions are incorrect and NT
AUTHORITY\NETWORK SERVICE doesn't have permission to access
C:\WINDOWS\system32\wbem\wmiprvse.exe ?



You might be right.
How can I check this? I'm not very familiar with this concept.
You can right-click on the file, select the Security tab, and you 
should see which users have which permissions on the file. For a test, 
simply grant everyone full access to the file and try to run it.


It could be that one of the OPSI scripts either deleted or changed 
permissions for one of the users.


Mochitest itself doesn't log into anything, so this isn't from the 
harness. We have removed this method of killing tasks on Windows in 
mozbase, and this should get much more stable once we port mochitest to 
use the mozbase toolchain, tentatively slated for this coming quarter.


Clint
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Minimum Required Python Version

2012-09-13 Thread Clint Talbert

On 9/9/2012 2:03 PM, Justin Lebar wrote:

So, 2.6 or 2.7?


I'm totally in favor of using the latest and greatest that's available.


Me too. I'm in favor of putting all the automation (build & test) on 2.7.3:
https://bugzilla.mozilla.org/show_bug.cgi?id=724191

Clint

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform