On 2014-04-07, 6:20 AM, Lucas Rocha wrote:
Hi all,

Hi Lucas,

Thanks for the write-up.  Sounds like an interesting day.

Facebook organized the second edition of their Mobile Forum here in London last 
week (the first one was in SF, I think). It's an invite-only event with about 
50 attendees (mostly engineers) from large tech companies (Amazon, Microsoft, 
Skype, Facebook, SAP, etc.) as well as some local startups.

The event is essentially a series of group discussions split into two sessions (morning and 
afternoon) with five parallel topics each. The topics ranged from technical bits of specific 
platforms to development tools and practices. Each session starts with a quick "problem 
statement" presentation for each topic, then the attendees split into smaller groups to 
discuss the topics they chose for about an hour, then everyone regroups for a discussion 
summary by each lead, followed by a Q&A session on each topic.

I've been invited to attend the event and lead one of the discussions. My topic was 
"Interaction between design and engineering teams". Below you'll find my 
personal highlights.

Congratulations!  Always nice to be invited.

A/B testing and data-driven development
---------------------------------------

Data-driven development is clearly standard practice in the great majority of the mobile 
teams that attended the event. Some companies (Yammer, Facebook, Twitter) never 
release a new feature without A/B testing it for a few weeks first. Engagement 
is the main factor in deciding which features should stick. The 
definition of 'engagement' varies from company to company, of course.

Yammer, for example, has a big stats/analytics team that is responsible for 
building their engagement indexes. Among other things, this team figures out 
correlations between specific user behaviours and engagement. It's not always 
obvious. For instance, they found out that if a user opens the app at a specific 
time of day, they are more likely to engage with certain app features.
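
For intuition, here's a toy sketch (made-up data and names, nothing like 
Yammer's actual pipeline) of the kind of signal such a team might compute: 
the "lift" in engagement probability given a behaviour like a morning app open:

    public final class EngagementLift {
        public static void main(String[] args) {
            // Each row is one user: { openedInMorning (1/0), engagedWithFeature (1/0) }.
            int[][] users = {
                {1, 1}, {1, 1}, {1, 0}, {0, 0}, {0, 1}, {0, 0}, {1, 1}, {0, 0}
            };
            int morning = 0, morningEngaged = 0, engaged = 0;
            for (int[] u : users) {
                if (u[0] == 1) {
                    morning++;
                    if (u[1] == 1) morningEngaged++;
                }
                if (u[1] == 1) engaged++;
            }
            double pEngaged = (double) engaged / users.length;
            double pEngagedGivenMorning = (double) morningEngaged / morning;
            // Lift noticeably above 1.0 suggests the behaviour correlates with
            // engagement; a real pipeline would also test significance.
            System.out.printf("P(engaged)=%.2f P(engaged|morning)=%.2f lift=%.2f%n",
                    pEngaged, pEngagedGivenMorning, pEngagedGivenMorning / pEngaged);
        }
    }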

There was discussion around how to lower the risk of shipping experiments and 
how to manage them in the builds. Hiding experiments behind build flags seems 
to be a common practice. The team works on the feature in the master branch 
throughout the whole process and flips the flag once the feature is ready for A/B 
testing. There was no clear answer as to when to remove the experiment's code 
from the repo. Some teams remove it while the app is still in beta, which sounds a bit 
too risky to me.
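
To make the pattern concrete, here's a minimal sketch of a build-flag-gated 
experiment (all names are hypothetical, and in a real setup the flag would be 
injected by the build system rather than hard-coded):

    public final class Experiments {
        // Stays false on master while the feature is developed; flipped once
        // the feature is ready for A/B testing.
        public static final boolean NEW_TOOLBAR = false;

        private Experiments() {}
    }

    class ToolbarFactory {
        // Both code paths live in master; the experimental one is simply
        // unreachable until the flag is flipped.
        static String createToolbar() {
            return Experiments.NEW_TOOLBAR ? "experimental-toolbar"
                                           : "classic-toolbar";
        }
    }

The nice property is that the experimental code ships dark: it's compiled and 
reviewed like everything else, but users never hit it until the flag flips.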

Facebook, Twitter, Yammer, and others have a lot of infrastructure to run the 
A/B tests only for specific subgroups/demographics in their user base.
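
I don't know the details of their infrastructure, but the basic targeting idea 
can be sketched like this (hypothetical names): a demographic filter picks the 
subgroup, and a stable hash of the user id assigns a fixed fraction of that 
subgroup to the test arm, so a given user always sees the same variant:

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    public final class ExperimentAssigner {
        // Fraction of *eligible* users who receive the experimental variant.
        private static final double TEST_FRACTION = 0.10;

        public static boolean inTestGroup(String experimentId, String userId,
                                          String country, int age) {
            // Demographic filter: run this experiment only for the subgroup.
            if (!"GB".equals(country) || age < 18) {
                return false;
            }
            // Stable hash: the same user lands in the same bucket every time.
            CRC32 crc = new CRC32();
            crc.update((experimentId + ":" + userId)
                    .getBytes(StandardCharsets.UTF_8));
            return (crc.getValue() % 10000) / 10000.0 < TEST_FRACTION;
        }

        public static void main(String[] args) {
            System.out.println(inTestGroup("new-toolbar", "user-123", "GB", 25));
        }
    }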

This whole A/B testing discussion got me curious about how we could do some of 
these things while still accounting for the stability and privacy issues that 
this kind of practice entails. Desktop Firefox has a bit more flexibility than 
we do, as it can change pretty much anything in the UI via add-ons. In Fennec, we 
might have to actually change UI code in the release in order to run 
experiments. We should probably do a brainstorming session around A/B tests in 
Fennec.

I think the Desktop team has been pushing "Telemetry Experiments" [1] [2] 
forward very quickly. It's not exactly A/B testing, but it's certainly in the 
area; and I believe that each experiment is a special add-on type, suggesting 
(on Desktop at least) that you could radically alter the UI, etc.

So there's precedent here, and perhaps even code to make some of this happen. 
Exciting times!

Best,
Nick

[1] https://mail.mozilla.org/pipermail/firefox-dev/2014-January/001359.html

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=973990
