No additional comment from me on the word choice. I do agree that "meritocracy" is problematic, given increasingly popular views on the model elsewhere - I've always been careful to warn that the difference between a meritocracy and an oligarchy is minimal. Which brings me to the recognition of merit.
I am never a fan of tools to solve social problems. Tools do not bring visibility, they bring an opportunity to game the system. People should take responsibility and actively demonstrate the way things should work. See a merit-deserving action - call it out with a thank you. If you have the patience I'll illustrate this with a story, but feel free to stop reading now, as I've made my point about tooling being potentially damaging.

Years ago I was brought in as a consultant to help a client understand why they were failing with respect to customer satisfaction. They were hitting all their support metrics, such as mean time to resolution, but their CSAT scores remained low. Their engineering teams were becoming worried that, despite a genuine focus on customer needs, they seemed to be building something people didn't like.

I ignored the data they handed me on day one. It came from the tools, and I already knew that data was wrong, since it told me they were meeting targets and their targets were good ones. Instead, I started talking to people about what they did to contribute to improving CSAT. I quickly found that engineering had a good process for prioritizing the work that support tooling highlighted as problem areas. Marketing were accurately representing current and future features. The sales teams were not overselling the product. Support were picking up tickets and triaging them according to their best practice. I was baffled.

So I went to the tools, despite my lack of trust in their data. I started with the customer support cases. It looked like every other data set I'd seen before: a few tickets took a long time to resolve but got good attention from product, engineering and support, while most tickets were resolved fairly quickly. Then I ran queries on the raw data, rather than relying on the roll-up queries the tool provided. That's when I saw it.
There was one particular team (they worked 3 x 8 shifts around the globe) that was killing it with respect to closing tickets in a timely way, while the team that followed them was struggling. From the roll-up data it looked like the second team struggled with hard tickets, though their performance on easier tickets was comparable. When I queried on the hard tickets alone, I realized the second team had a tendency to re-open previously resolved tickets - and those were almost always the hard ones.

I asked the support lead about these two teams. She told me she had never managed to get the poorer team to really step up: there was no strong leader there, and while its manager and his team did good work, he just couldn't seem to hold the team together - new hires quickly quit or moved out of the support role. The better team was full of rockstars. She wasn't worried about hiring there because the manager just seemed to have a talent for finding the best; her bigger worry was having the capacity to promote those people.

So I dug deeper. What I found was that the "better" team was led by a manager who gamed the system. He taught his team, for example, to close tickets as duplicates of newer tickets, even when they weren't related. The tool reported their resolution numbers as healthy, but they weren't really working with the customer. The team that followed picked up the newer tickets, recognized the "mistake", reopened the original and resolved both. The tool reported their numbers as poorer, but in reality they were doing far more work. This is why their manager couldn't hold on to his staff: they were fed up with correcting the bad work of others, and as soon as they realized the others were getting promotions and their team was not, they left.

The tools hid the truth. Only talking to people can truly recognize contribution. It is the responsibility of the people in the community to cultivate and recognize merit, and in my opinion the best way to do this is to have the lowest possible bar.
Having an artificially high bar is the real problem behind lack of recognition. With a low bar, one only needs to see a relatively small amount of contribution.

[If you are interested in the outcome of the above story... I asked the "struggling" manager why he didn't tell the lead what was happening. I never got a good answer. The outcome was that the lead fired the questionable manager, helped the other manager understand that she always needed to know when the process was broken, and promoted him. She also put him and his team in charge of training and process refinement across all three sites. Six months later they could see a very marked difference in their customer satisfaction scores.]

________________________________________
From: Griselda Cuevas <g...@google.com.INVALID>
Sent: Saturday, March 23, 2019 10:23 AM
To: dev@community.apache.org
Subject: Re: on "meritocracy"

Hi Everyone,

I read this thread and have a few comments, offered with the caveat that I wasn't always able to track all the statements and arguments. English is not my first language, and it was hard for me to keep track of the current state of this discussion; the complicated words and sentence structures used here almost made me give up on commenting. In any case, as Naomi stated, I don't want to muddy the waters - I just want to illustrate why we often hear from the same specific people on certain mailing lists/projects, making it hard for us to truly follow a meritocracy model.

I extracted 2 main conversations:

1) Choose a better term to use instead of meritocracy.
2) Improve the way Apache projects and the foundation recognize project collaboration.

For 1), I personally think the word meritocracy isn't the problem. If you translate it to Spanish, the word is fair and adequate to describe the governance model the foundation follows. I believe the problem is that the concept of merit awarding is broken and often tied to bias in the group assigning the merit.
This said, according to the article, I understand the problem to be that the core group of an organization awarding contributions will be biased and therefore will perpetuate unfair development. My conclusion is that the word isn't the problem; the problem is in the system. That being said, I have no preference on whether we should change it or not. Since Rich, Bertrand and others are putting effort into better defining the language, I support that with no opposition.

2) From my personal experience supporting projects under the Apache model, I can say that the difficulty in assigning fair and unbiased merit to people in the community comes from a lack of process and tools. For example, in Apache Beam we've been working on several advocacy efforts which have helped the project's brand grow and surfaced areas of improvement for the tech. However, the PMC doesn't have full visibility into how much effort comes from each individual, since only one representative sits with us during planning meetings. On the other hand, when I talk to other PMC members about possible new committers, I hear names of people I don't know, because I don't interact with their part of the project - making me think it's unfair that others get recognized over the people organizing Summits or community efforts.

We are in the middle of defining a better process to bring visibility of these efforts to everyone in the project and make the committership process more fair and transparent. This aligns with the efforts Sally recently posted on non-code contribution recognition, and I think solving this should support Naomi's original statement: meritocracy will begin to "work" and diversity will start to flourish.

Happy to help build better practices and processes to make fair recognition and diversity win.