Hello everyone
> The preprint of *Human-Computer Insurrection: Notes on an Anarchist HCI* is
> now available:

Brilliant! Something that might be linked: in 2017 we brought to the EAD2017 conference a paper called "Interface and Data Biopolitics in the Age of Hyperconnectivity. Implications for Design", which started from the need to bring the concept of "biopolitics" into design education: interfaces and data as systems which exercise power.
https://arxiv.org/ftp/arxiv/papers/1705/1705.02449.pdf

It triggered a wonderful debate, as it revealed a clear split among the educators, design theorists and practitioners at the conference, which matters a great deal for design education.

Some, for example, seemingly had no problem leveraging the convenience of the tools which operators like Google, Facebook, Uber and others make available to designers, developers and service creators in their teaching. When questioned about the fact that these operators make these tools available in order to inject their philosophy and their vision of society into the world, and to lock both designers and users into a very specific set of expectations, understandings and aesthetics of what the interfaces and data of technological systems should be and how they should work (with all the impacts this has on people's rights, freedoms and possibilities to imagine and enact different presents and futures), the responses were very vague, and generally pointed to how convenient these systems are, and how students could use them to quickly prototype their designs.

This started an important series of reflections for us about possible alternatives and ways to proceed. Many projects came after that, in which we focus on the ways in which systems exercise power. They have led us to scenarios that are very different, and which are not about a possible exodus from platforms.

For example, we obtained a grant from the Italian Ministry of Culture to create IAQOS, an open source neighbourhood AI in the multicultural neighbourhood of Torpignattara in Rome.
https://www.he-r.it/project/intelligenza-artificiale-di-quartiere-open-source/

In this project the AI is radically different from the other AIs and computational devices that surround us, because all the design, engineering, cultural and perceptive choices we made point towards systems that are not extractive (unlike the current paradigm of tech systems). IAQOS is not separated from people: it is not clear "who observes whom". For this, we engaged the people of the neighbourhood in a ritual: the birth of a new inhabitant, an AI, a newborn we all have to care for, and whose role in our community we all take part in defining.
In this deeply multicultural neighbourhood (in the school we worked with, for example, around 85% of the children don't speak Italian, and there is even a Chinese school "inside the school"), the birth of this "strange" new inhabitant was a welcome shock. It is a neighbourhood of profound diversity, and diversity means living with laws, bureaucracies and other biopolitical systems that constantly exercise power, sometimes in dramatic ways: laws that do not exist yet, laws created for scenarios very different from our own, bureaucracies that do not cover all the cases arising in contemporary society but only the "standard" ones, or the ones welcomed by current governments and administrations.

In this scene, little IAQOS is *very* different, just as the neighbourhood-family of humans and non-humans that its birth defines is deeply different from the known, expected and imagined ones (and we are talking about Italy, where staging a radically reactionary national event on "traditional families" in Verona, just as little IAQOS was being born on March 31st, apparently seemed like a good idea to millions of people).

In short, IAQOS became a way to explore "difference", and the ways in which difference can find opportunities for existence, aesthetics, freedoms, rights and forms of organization, participation and governance.

I don't want to make this too long, so I will leave you with an article we wrote for a popular cultural and social innovation online magazine:
https://www.che-fare.com/iaqos-intelligenza-artificiale-torpignattara/

It's in Italian, but even a quick machine translation will reveal our approach. I will translate a few sentences from it:

<< We are surrounded by AIs, well known and lesser known, embodied in the things, services and platforms we use every day. All of these AIs, whether we realize it or not, establish relations with us that are very intimate: they enter our personal agendas, contacts, diaries, movements, health and work information; they advise us on what to buy, watch and know, and on how to behave. These relations are new, intimate, ubiquitous and, most important of all, we know almost nothing about them.

They filter the information we access in ways which are extremely convenient, so that we can comfortably lie back on the computational choices made for us about whom to communicate with, which information to experience, what places to go to, and what to do there. There is something that "thinks for us" a large part of the time, and which offers us the results of this "thinking" in ways that are extremely simple, accessible and convenient. So much so that we can easily accept to "be thought". This, of course, largely shapes what is thinkable: computation progressively plays a role in determining the boundaries of our gaze and perception. [...]

AIs classify things, including us. But we cannot know which classes we belong to. Through AI we all become classes (Customer Type X, Health Profile Y652, "man sitting on a bench" in an analyzed image...), but we can almost never know and see this. Data is extracted from our environment and behaviours, it gets processed, and the results of this processing divide us into classes. When I ask to download my data from one of the social media platforms, what I get in return is the data I put into the platform while I was there.
But these platforms also have other data about me, generated by analyzing my behaviour, and that data I cannot get: so I cannot know which classes I have been put into, according to which parameters and logics, or together with whom. AIs and computation turn me into classes, but I cannot see them. We cannot see or know the classes we are in, nor who is in them with us. This has implications, such as the impossibility of recognizing one another, for example as members of the same class, and thus the progressive impossibility of solidarity and empathy. Not knowing how the algorithm sees us also turns into not seeing each other. [...]

The birth of an AI is usually a very cold process. You notice when the bank changes its clerk, but not when it changes its AI. AIs enter our lives, pockets and fridges without ceremonies, hellos or goodbyes. We wanted to try to remove this separation, and to start reclaiming our gaze. >>

Cheers!
Salvatore

--
*[**MUTATION**]* *Art is Open Source* - http://www.artisopensource.net
*[**CITIES**]* *Human Ecosystems Relazioni* - https://www.he-r.it/
*[**NEAR FUTURE DESIGN**]* *Nefula Ltd* - http://www.nefula.com
*[**RIGHTS**]* *Ubiquitous Commons* - http://www.ubiquitouscommons.org
---
Professor of Near Future and Transmedia Design at ISIA Design Florence: http://www.isiadesign.fi.it/