The Fairness, Accountability, Transparency, and Ethics (FATE) research group at 
Microsoft Research New York City is looking for a Post-Doc Researcher to start 
in July 2024: 
https://jobs.careers.microsoft.com/global/en/job/1667778/Post-Doc-Researcher%E2%80%93-FATE-%E2%80%93-Microsoft-Research

We will begin reviewing applications for the position on January 3, 2024.

This two-year position is an ideal opportunity for an emerging scholar whose 
work focuses on the social implications of machine learning and AI.

As a Postdoctoral Researcher, you will define your own research agenda, driving 
forward an effective program of basic and applied research. You 
will also have the opportunity to collaborate with members of the research 
group, including Solon Barocas, Alexandra Chouldechova, Kate Crawford, Hal 
Daumé, Miro Dudík, Hanna Wallach, and Jennifer Wortman Vaughan, as well as 
others in the New York City lab and other Microsoft Research labs.

Microsoft Research offers an exhilarating and supportive environment for 
cutting-edge, multidisciplinary research, both theoretical and applied, with 
access to an extraordinary diversity of data sources, an open publications 
policy, and close links to top academic institutions around the world. 
Additionally, the position offers unique opportunities to engage with the 
broader responsible AI (RAI) ecosystem within Microsoft, including product 
teams, AI policy teams, and RAI practitioners.

We seek applicants with a demonstrated interest in FATE-related topics and a 
desire to work in a highly interdisciplinary environment that includes 
researchers from computer science, statistics, the social sciences, the 
humanities, and other fields. Successful candidates will also have an 
established research track record, evidenced by notable journal or conference 
publications and broader contributions to the research community.

We will consider candidates with a background in a technical field such as 
computer science (especially AI, machine learning, NLP, and computer vision), 
statistics, economics, or decision sciences, as well as candidates with a 
sociotechnical orientation, such as those in information science, sociology, 
anthropology, science and technology studies, media studies, law, and related 
fields.

We are especially interested in candidates who would like to pursue research 
aligned with one or more of the following themes:

• Computational, statistical, and sociotechnical approaches to fairness 
assessment: Data collection, experimental design, sample-efficient statistical 
methods, measurement, and visualization for fairness assessment; mixed-methods 
approaches, including participatory methods, for measuring fairness-related 
harms caused by AI and human-AI systems.

• Human-centered AI transparency: Explanation, evaluation, and uncertainty 
communication approaches to improve stakeholder understanding of AI models or 
systems; transparency approaches for improving human control, autonomy, 
oversight, and mitigation of AI harms; transparency in human-AI collaboration.

• Institutional, organizational, and economic challenges of AI development, 
deployment, and use: Challenges of translating real-world problems into machine 
learning tasks and integrating AI with existing institutional processes; 
incentives for and resistance to contributing training data; impacts of 
generative AI on the cultural industries; environmental impacts of generative 
AI systems.

• AI law and policy; AI for policymaking and regulation: How existing laws and 
policies apply to AI, and where new regulations might be necessary; AI as a 
tool for effective policymaking, regulation, and enforcement.

• Responsible AI in practice: Turning RAI principles into policies and 
practices; translating RAI research into practice; navigating organizational 
dynamics, competing incentives, and decision-making under uncertainty.

Candidates must have completed their PhD, including submission of their 
dissertation, prior to the start of the position (i.e., dissertation submitted 
and degree preferably conferred by July 2024). We encourage candidates with 
tenure-track job offers from other institutions to apply, provided they are 
able to defer the start date of that position by at least one year in order to 
accept our position.

To be assured of full consideration, all application materials, including 
reference letters, need to be received by January 3, 2024. Applications 
received after that date may be considered until the position is filled.

This role is not to exceed two years.