9. (whoops!) Add some barriers to entry. Competitions like this often impose entry requirements to preserve their spirit and deter larger corporate applicants. For example, you could require that all corporate applicants have annual revenue below a certain amount. Although this might be impossible to verify in some cases, just having such a requirement would be enough of a deterrent, because most ineligible companies probably wouldn't want to take the risk.
On May 14, 10:51 am, Michael Johnston <[EMAIL PROTECTED]> wrote:
> I've been very self-critical over the past week to make sure I learned everything I could from our experience with the Android Developer's Challenge. But today I took some time to look outward rather than inward. Here are some ideas for how to improve the judging process for the next contest.
>
> 1. Use proportional gates rather than absolute ones. ADC 1 went from 1788 submissions -> 100 -> 50. That jump from 1788 to 100 was harsh! A better approach might be a series of proportional gates: starting from X submissions, repeatedly narrow the field to 25% until you arrive at the winners. For ADC 1 that would have translated to 1788 submissions -> 447 -> 111 -> 50 winners.
>
> 2. Require HTML documentation with annotated screenshots and blurbs. You could make it known to all applicants that these would be used to perform at least the first 25% cut. It's this first cut that requires a herculean judging effort. To lessen the burden, HTML documents could be judged entirely via the web, with each judge potentially looking at hundreds of applications over the course of several days. This would also make the "marketing" aspect of the judging process more explicit and would more closely reflect how end users will eventually select applications.
>
> 3. Simplify the scoring system for the first gate. Is it really that helpful to have 100+ different judges scoring 1788 applications on a 10-point scale in 4 different categories? How much better is this than, say, asking each judge to award 1 to 5 stars? A simpler scoring system would allow more judges to rate more applications in less time, and is probably just as fair and meaningful (if not more so).
>
> 4. Embrace the subjectivity of judges in the final gates. A scoring system is helpful for narrowing down the huge number of submissions in the first gates. But in the final gates, when there are only, say, 161 applications left, why not ask every judge to scan them all, try the ones that interest them, and present their top 10 favorites, or their top 3 favorites in different categories? Then, when it comes time to make the final cut, you can lock your 5 or 10 most qualified judges in a room (or conference them in) and choose the best applications subjectively, based on all the evidence gathered to date.
>
> 5. Give feedback. You could tell people what gate they reached and consider telling them their average rating, too. Otherwise it feels like all our efforts have disappeared into a black abyss.
>
> 6. In addition to offering rewards for the top X applications, you could offer several special rewards (say $20,000 each) for applications that didn't make it, like Best Community Contribution, Best Development Tool, Most Creative, etc.
>
> 7. Solve the networking dilemma. Using standard laptops was a good idea and solved the performance dilemma, but who knows what networks the judges used to run their applications? Proxies and overzealous firewalls could really screw things up. What if judges failed to connect at all? Wireless connections can be tough, and not everyone has two working network jacks at their desk. Unfortunately I don't have any ideas for this one.
>
> 8. Solve the social dilemma. Given the huge potential for social apps and their reliance on being, well, a bit social, I think they deserve a separate testing protocol, at least in the final gates. For the final cuts, social apps should be tested by multiple people simultaneously who coordinate with one another to test features together. I'm aware that we could simulate or otherwise fake a social experience, but in my opinion that's insufficient.
> Alternatively, you could give applicants a 3-hour window within which their app would be tested, during which time they could ensure there are people online to help.
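The proportional-gates arithmetic from point 1 of the quoted message can be sketched in a few lines of Python. This is just an illustrative sketch: the function name and the choice to stop cutting once a further 25% cut would fall below the winner count are my assumptions, not anything the contest specified.

```python
def gate_sizes(submissions, keep_fraction=0.25, winners=50):
    """Return the field size after each proportional gate.

    Keep `keep_fraction` of the entries at every cut, stopping
    once a further cut would drop below the final winner count;
    the last gate then picks the winners outright.
    """
    sizes = [submissions]
    while sizes[-1] * keep_fraction > winners:
        sizes.append(int(sizes[-1] * keep_fraction))
    sizes.append(winners)
    return sizes

# ADC 1 example from the post: 1788 -> 447 -> 111 -> 50
print(gate_sizes(1788))  # prints [1788, 447, 111, 50]
```

Running it on the ADC 1 numbers reproduces the 1788 -> 447 -> 111 -> 50 progression the post describes, which confirms the suggested scheme is "keep the top 25% at each gate".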
