Alix Rübsaam
As a researcher in philosophy of technology and cultural analysis, Alix Rübsaam investigates the societal and cultural impact of various technologies. Her focus is on changing ideas about humanity and human activity in technological contexts such as Artificial Intelligence, (autonomous) robotics, information technologies, and digital environments as well as on deploying emerging technologies thoughtfully and responsibly.
Alix’s current research centres on two large-scale projects. The first is a deep dive into responsible AI, algorithmic bias, and exclusion. Within this project, the aim is to raise awareness of the impact of automated decision-making algorithms, to explicate how decisions made in the design process can lead to unwanted outcomes, and to empower leaders to build AI that is equitable, fair, and just. The second project comprises an analysis of the digitalisation of information and its effects on decision making and ethics. The purpose of this project is to reposition decision makers vis-à-vis the way they inform themselves, to investigate and challenge computational paradigms, and to envision an approach to ethics that befits 21st-century dilemmas.
Alix is currently the Vice President of Research, Expertise and Knowledge at Singularity Education Group. In this role she oversees Singularity’s research efforts, body of knowledge, and community of experts. Prior to joining SEG, Alix was a PhD candidate at the University of Amsterdam and the Amsterdam School for Cultural Analysis (ASCA), where she researched collaboration between human and nonhuman (technological) agents at the intersection of humans and computational systems. She writes and speaks about cyberpunk and science fiction literature, autonomous weapon systems, embodied robotics, artificial intelligence, and AI bias.
Human vs algorithm: Why the AI balance is key, with researcher Alix Rübsaam
Some presentations and workshops that Alix gave earlier:
Keynote: AI, Unintended Outcomes & The Opportunities of Responsible Technologies
When it comes to automating decision making with AI, algorithmic bias can become hard-coded into data-driven technologies despite our best intentions. Applications abound: from hiring to manufacturing optimization, from supply chain management to recommendation algorithms. All AI applications are at risk of perpetuating and amplifying blind spots in their models. Biases like this do not just mean that these AIs are unfair or unjust; they can also affect your bottom line if you don’t understand where the blind spots are.
This talk unpacks the design and decision making that goes into automated systems, how to identify and analyze the mechanisms that produce bias in AI, and how to advocate for and build more socially responsible algorithms. Additionally, this session dives into the limitations and possibilities of leveraging data-driven technologies, and how to lead in this emerging space. Participants will learn to identify different kinds of algorithmic bias, to spot opportunities for responsible AI, and to assess an automated decision-making system for its risk of unintended consequences.
Workshop: Responsible AI
This immersive, hands-on simulation exercise walks through the steps of designing and training an AI. Participants learn how decisions made in the design phase influence the workings and outcomes of the algorithms built during the simulation. They learn how to identify and analyze the mechanisms that lead to unwanted and unintended outcomes in AI, as well as how to advocate for and build more socially responsible algorithms. No formal training is needed to participate.
During the workshop, participants make decisions about the design and training of algorithms in a simulation of real-world implementation of technologies across industries. Shortening the distance between design, implementation, and outcomes increases understanding of how our cultural background and assumptions become programmed into data-driven technologies.
Participants then learn to identify opportunities for responsible AI, pinpoint potential pitfalls for algorithmic bias, and assess the risks of unintended outcomes. During the simulation, they also become familiar with the design, implementation, and use of automated decision-making algorithms and machine learning systems.
Keynote: AI or Die? Redefining the Human in the Digital Age
AI as an existential risk to humanity has become a mainstay of news headlines. While some predict that the rise of Artificial Intelligence will mean the “end of humankind”, others see no future without algorithms and data-driven systems. What sense can we make of these predictions and warnings?
This talk unpacks the effects of thinking about our brain as a computer and the impact this thinking has on how we leverage a technology like AI. Our current ‘computational’ way of thinking has shaped our sense of self and our culture. Software has been a much-used metaphor to explain the way we think, or even to explain humanity as a whole. So if computers can think like we do, what does it still mean to be human? This talk places the perceived threat of AI and the brain-as-computer metaphor in a long tradition of ideas we have used to explain humanity. From this, we can learn how our technologies shape how we think about ourselves and our future, and challenge our understanding of Artificial Intelligence as a threat.
Alix Rübsaam | Being Human in the digital age | SingularityU Mexico Summit