
Who's Responsible for ChatGPT?

Thursday, Friday & Saturday, August 3–5, 2023

Thursday, August 3rd
1:00-6:00 PM EST

How does Sci-Fi shape how we think about ChatGPT?

1:00 EST
Presentation

Waleed Zuberi

Human-Computer Interaction and Design at Indiana University

A Survey of Beliefs and Attitudes toward Artificial Intelligence

Generative chatbots (e.g. ChatGPT or Bard), voice-cloning software (e.g. Murf or Listnr), and image generators (e.g. Midjourney or Stable Diffusion) are publicly available, leaving society to wrestle with the potential and perceived benefits and drawbacks of Artificial Intelligence (AI). As a result of the rapid rollout and swift adoption of this technology, critical questions about its social and ethical implications, as well as its potential threats, need to be explored. For this reason, we present the results of an exploratory survey of 122 respondents covering their understanding, beliefs, and perceptions about the impact of AI as i) experienced in the real world when engaging with some of the named technologies and ii) depicted and perceived in different Sci-Fi media, for example, streaming shows or Sci-Fi movies. Our results indicate that people believe they have a baseline understanding of what the term AI means, with a lesser understanding of related technical terms and concepts such as neural networks or deep learning. Respondents report being familiar with Sci-Fi, especially Sci-Fi movies and shows, and acknowledge a certain degree of influence from Sci-Fi on their views of AI. In addition, our results show that, when forced to decide between good and bad, a majority believe that AI will indeed have a beneficial impact on their future lives, although many participants noted in an open-ended follow-up question the potential for this emerging technology to cause harm. Waleed Zuberi is a graduate student in the Human-Computer Interaction and Design program at Indiana University. With a background in digital marketing and product management, he is passionate about leveraging design to create accessible, safe, and engaging experiences.

How will ChatGPT impact Higher Education?

1:40 EST
Lightning Talk

Jonathan Griffiths

Ancient Philosophy and Natural Language Generation at University of Tübingen

Can ChatGPT write my abstract for me?

When is my language no longer my own? When does my use of language cease to be my responsibility, and what are the limits or conditions of human authorship? Whilst these questions have long been asked by philosophers in the context of, e.g., politics, art and the law, the emergence of Large Language Models like ChatGPT poses new challenges for differentiating between ‘natural’ and ‘artificial’ language use. This is because ChatGPT has the capacity to produce natural-sounding and contentful human speech, yet that speech is also determined by the specific prompt of the human user. In my flash talk I want to consider some issues which arise from using ChatGPT as a language assistant in the context of academic writing in the humanities. Can ChatGPT be regarded as a co-author in cases where it has provided meaningfully relevant speech, or do these instances still qualify as human authorship, and if so, why? Jonathan Griffiths is a postdoctoral philosopher at the University of Tübingen in Germany. Jonathan got into philosophy, after studying ancient and modern languages, by reading Plato and the philosophers of Ancient Greece; in particular, he was swept away by Plato’s conception of philosophy as dialogue, and by such ideas as Socrates’ definition of thought as ‘the soul being in conversation with itself’. It is in that spirit, as a lover of discourse and philosophical communication, that he is now getting interested in the technology of natural language generation.

2:00 EST
Presentation

Charles Freiberg

Philosophy of Technology at Saint Louis University

ChatGPT and the Future of Liberal Arts Education

Since the introduction of ChatGPT, there has been considerable unease within educational institutions about what this AI means for the future of education, especially as it opens the possibility of undetectable forms of plagiarism. In this paper, I consider what programs like ChatGPT could mean for the future of liberal arts education. I will do this in two parts. First, I will situate ChatGPT within a larger history of the offloading of intellectual activity onto tools and the ways in which this offloading is a constitutive part of intellectual life that serves as both a possible remedy to human lack and a possible poison to human life. I suggest that while the offloading of writing itself onto technology may constitute new forms of intellectual life, it is a poison for the spirit of liberal arts education. Second, I will consider a possible remedy to this poison in terms of a new orality. The hope is to find a role for students beyond that of author, or of operator and editor of technology, by giving over writing in its current form to the technology that is poised to take it. My suggestion is that there is a final death of the (human) author, one that requires a new form of engagement with tradition that can be neither a simple return to an oral culture nor the continuation of an education based on writing; it is a reimagination of orality, and an education based on orality, that may be the required therapy. Charles Freiberg is a PhD Candidate in philosophy at Saint Louis University working on a dissertation in the philosophy of technology. Charles is interested in questions about technology, place, education, and what it means to be human.

2:40 EST
Presentation

Paweł Łupkowski
Tomáš Ondráček

Psychology and Cognitive Science at Adam Mickiewicz University; Economics at Masaryk University

Dear Professor, can I use ChatGPT to write my essay?
Official university statements concerning the use of ChatGPT

In his famous 1950 paper, Alan Turing considered the so-called Heads in the Sand Objection to the idea of thinking machines. The objection states, "The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so." With the rise of ChatGPT, we observe voices that, at first glance, resemble the aforementioned objection (see, e.g., "Pause Giant AI Experiments: An Open Letter"). In our talk, we present an analysis of official statements by universities that address issues related to the use of ChatGPT for academic purposes. We aim to identify which issues are raised regarding ChatGPT, how it is framed, what recommendations, arrangements, and provisions are made, and, where present, what kind of justification and argumentation is offered. The goal is to show how universities currently approach ChatGPT and what actions we can expect. Paweł Łupkowski is an associate professor at the Faculty of Psychology and Cognitive Science at Adam Mickiewicz University. His scientific interests are the formal analysis of cognitive processes, the conceptual foundations of AI, and human-robot interaction. Tomáš Ondráček is an assistant professor at the Faculty of Economics at Masaryk University. His teaching focuses on philosophy, ethics, argumentation, psychology, and human resources.

3:20 EST
Presentation

Rich Eva
Nick Hadsell
Kyle Huitt

Should Philosophy Journals Accept AI Submissions?

Formal Epistemology; Parental Rights; Ethics and Political Philosophy, at Baylor University

AI is coming for philosophy journals, and we argue that we should welcome it. While philosophy has traditionally been a human endeavor, we think that there is room within philosophy journals for beneficial contributions from AI. Our positive case is that (especially in some subfields) AI stands to make significant contributions to ongoing projects, and it benefits the world of philosophy for those contributions to be published in journals, the primary purpose of which is to disseminate significant contributions to philosophy. We distinguish between different kinds of contributions that AI might be able to generate, and caution against publishing some of them. Among other things, we think AI will be excellent at synthesizing literatures that are otherwise impractical to read in their entirety, making progress in formal areas of philosophy where there are clear logical and mathematical frameworks in place, and conducting reviews of new literature. We consider objections that AI is incapable of original work, that AI is incapable of doing philosophy the right way, and that AI publications will disincentivize humans from publishing. Ultimately, we conclude that once AI is sufficiently advanced, there should be journals entirely dedicated to work done by AI and journals that are mostly dedicated to work done by humans. Philosophy is good for humans, and we think AI can help humans do philosophy better. Kyle Huitt is a doctoral student in philosophy at Baylor University specializing in formal epistemology and philosophy of religion. Nick Hadsell is a doctoral student in philosophy at Baylor University working on a dissertation about parental rights. Rich Eva is a PhD Candidate at Baylor University specializing in ethics and political philosophy.

Can Heidegger help us think about ChatGPT?

4:00 EST
Lightning Talk

Deepa Shukla

Question Concerning Technology:
Can Art Save us from Upcoming Technological Danger?

Philosophy of AI and Natural Language Processing at Indian Institute of Technology, Jodhpur

Heidegger long ago argued that technology has the potential to enframe any natural thing. With time, it has also begun to reduce human beings: it has enframed humans in a technological mode of being, where human beings are objectified and subordinated to the logic of efficiency and effectiveness. There is no denying that new AI technologies are helping humans in many ways, such as increased efficiency, productivity, and innovation. Still, there are also growing concerns about their impact on human autonomy, freedom, and dignity. This paper aims to explore how AI is being used to enframe humans and to examine the broader implications of this trend for human well-being and social justice. From there, we turn to our major concern: whether we really have a sweet spot where we can settle AI and its development. Heidegger proposed the ‘art/poetic mode of being’ as a middle path between shutting down technological development and losing control over the unsafe instrumental use of technology. But the question is: ‘Can art save us from the upcoming danger?’ ‘Do we really have a safe sweet spot?’ Deepa Shukla is a PhD research scholar at the Indian Institute of Technology, Jodhpur (India). Deepa's research interests are the Philosophy of Artificial Intelligence, Natural Language Processing, Philosophy of Mind, and Consciousness. Deepa is pursuing research in the Philosophy of NLP, i.e., exploring the limitations of LLMs.

4:20 EST
Presentation

William Watkins

Phenomenology and Heidegger at Boston College

Heidegger & Existential Risk: A Conversation Towards AI

This paper investigates the nature of “existential threat,” using the current Artificial Intelligence debate as a touchstone while utilizing the work of Martin Heidegger to conceptualize existential threat writ large. Heidegger’s distinction between fear and anxiety as presented in Being and Time serves as an initial resource for a typology of varying threats, so as to then redefine what sort of existential threat AI may pose, if any. As a result of Heidegger’s conception of Dasein, an existential threat is separable from the simply physical threat of an “extinction event.” Using this framework, this paper briefly explicates Heidegger’s conception of language as a violence, and the nature of this violence as threat, as shown in Introduction to Metaphysics. Language is violence in that it allows humanity to falsely believe itself to have created the tool while simultaneously being intellectually conformed to language rather than acting as its “master.” Our relationship with AI functions similarly; we believe ourselves to be masters of this mode of information organization despite its ever-increasing likelihood of developing a “life” of its own, so to speak. In this way, AI is an existential threat in that it is becoming the very fabric of our communication, leaving us vulnerable to undetectable, artificially implemented changes in social discourse by way of mediation. The primary, though not exclusive, concern is that if AI monitors our digital communication, and digitally mediated communication is becoming the overwhelming norm, then it can prevent substantial action against it from being taken, among other things. By way of conclusion, this paper supposes that Heidegger’s conception of threat ought still to be scrutinized, as the relationship between his notions of fear, anxiety, and the resulting action of threat "identification" is consistent with his involvement in National Socialism as rector of Freiburg from 1933 to 1934. His philosophy of threat, along with his involvement in National Socialism, should inform us of the risk that is taken on when declaring something an existential threat. Such a declaration requires swift and decisive action while the declarer is simultaneously part of a nexus of narratives. In the case of AI, many of the narratives informing our declaration of threat are already being curated by the programs in question, such as ChatGPT, search-engine AI, and news-feed algorithms. The fact that our social decision-making apparatus is mediated by the very thing about which we are deciding puts our reliability as decision-makers into question, thus increasing our vulnerability to the threat itself. William Watkins is a Master’s student in Philosophy at Boston College and received his Bachelor’s in Philosophy from The College of William & Mary. William’s interests in the field include Epistemology, Metaphysics, Philosophy of Science, and Phenomenology, with particular interest in Martin Heidegger.

Invited Panel: The AI Arms Race

5:00 EST
Panel

Everyone might acknowledge that slowing down and putting guardrails on AI development are good ideas, but no one has the power to do that. So how do we slow down and implement greater foresight and anticipatory governance? How do we get the good stuff and avoid the bad?

Philosopher of Science, Technology, and Society


Environmental Strategist and Philosopher of Science

AI Ethics Writer and Researcher

Professor of Intelligent Systems Engineering

Friday, August 4th

1:00-6:00 PM EST

What happens when ChatGPT can see and hear?

1:00 EST
Presentation

Abouzar Moradian Tehrani

Advancing AI Language Models: Embracing Multimodal Perception and Selective Reinforcement Learning

Philosophy and Machine Learning Engineering at Texas A&M University

With the advent of advanced language models like ChatGPT, I propose that their reliability hinges on a shift towards multimodal perception and a judicious use of reinforcement learning. Currently, these models excel at participating in language games, yet they often fall into the trap of creating fictional responses. This unreliability primarily stems from their text-bound nature, which lacks any perceptual modality to verify their outputs. Incorporating other modalities such as image and audio processing, akin to recent developments in GPT-4, can enhance the models' veridicality. The ability to corroborate textual information with other sensory data, much as humans cross-check information, would curtail the generation of fictional responses. Furthermore, constant self-updating and cross-modality checking during the inference phase would augment the models' accuracy. Simultaneously, I caution against overreliance on reinforcement-learning fine-tuning. Because a large language model's goal is to predict the next token based on prior tokens, these models may default to generating agreeable and intelligible but possibly fallacious responses. Prioritizing user satisfaction over validity can aggravate this bias, so reinforcement learning should aim to balance validity and agreeability. Lastly, I argue for a more sophisticated weighting of data sources during training. A hierarchical approach should be considered, in which academically acclaimed and highly cited sources take precedence over less credible online narratives. Such selective processing would equip AI models with a more discerning foundation of knowledge, thereby improving the validity of their output. In conclusion, for AI language models to graduate from mere linguistic players to reliable knowledge sources, a shift towards multimodal learning, valid knowledge integration, and nuanced reinforcement strategies is imperative. Abouzar Moradian Tehrani is a Ph.D. candidate in Philosophy at Texas A&M University. Abouzar also recently earned a master's degree in Computer Engineering. Abouzar is a machine learning engineer with areas of interest in Computer Vision and NLP.
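
To make the source-weighting idea concrete, here is a minimal sketch (my own illustration, not the speaker's method) of how training examples could be sampled in proportion to the credibility of their source; the tier names and weights are hypothetical placeholders that a real pipeline would tune empirically.

```python
import random

# Toy sketch of hierarchical source weighting: documents from more credible
# tiers are sampled more often than unvetted web text. Tiers and weights here
# are illustrative assumptions, not values from the talk.
TIER_WEIGHTS = {
    "peer_reviewed": 4.0,   # academically acclaimed, highly cited sources
    "reference":     2.0,   # curated encyclopedias and textbooks
    "news":          1.0,
    "web_forum":     0.25,  # less credible online narratives
}

corpus = [
    {"text": "Survey of transformer architectures...", "tier": "peer_reviewed"},
    {"text": "Forum post claiming a miracle cure...",   "tier": "web_forum"},
    {"text": "Encyclopedia entry on photosynthesis...", "tier": "reference"},
]

def sample_batch(corpus, batch_size):
    """Draw a training batch, biased toward more credible source tiers."""
    weights = [TIER_WEIGHTS[doc["tier"]] for doc in corpus]
    return random.choices(corpus, weights=weights, k=batch_size)

batch = sample_batch(corpus, batch_size=8)
print([doc["tier"] for doc in batch])  # credible tiers should appear more often
```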

1:40 EST
Presentation

Nikolai Ilinykh

ChatGPT goes into the physical world: on the dangers and future of multi-modal language models

Computational Linguistics at the University of Gothenburg, Sweden

ChatGPT has been a force to be reckoned with in the field of Natural Language Processing. However, it has also raised concerns and attracted a lot of public attention to AI and Computational Linguistics. In fact, the revolution began in 2017 with the introduction of a new type of language model known as the transformer architecture, which is essentially the backbone of ChatGPT. These models have consistently demonstrated their ability to provide accurate solutions to various text-based problems, including solving math equations and generating coherent narratives. Nowadays, researchers are actively working on developing an improved version of ChatGPT: a multi-modal architecture that can integrate text with other modalities like vision, sound, and other sensory input. The ultimate goal of such research is to develop a safer and less biased version of ChatGPT-like models. In my talk, I will address the critical issues surrounding large language models like ChatGPT in the language-and-vision domain. A common task for such models is to describe images in natural, human-like language. I will present examples that highlight how these models tend to capture and amplify gender and racial biases when describing images. Additionally, I will explain the inner workings of statistical models like ChatGPT and emphasise what is important to keep in mind when playing around with such models. It is crucial to recognise that although these models exhibit biases, humans can mistakenly attribute human-like properties to them. I will shed light on the key components of ChatGPT and raise the question: is ChatGPT itself the problem, or is it the information provided by humans during model training? Perhaps it is a combination of both. The primary objective of my talk is to increase awareness and encourage a deeper understanding of ChatGPT's inner workings and its implications for society. Nikolai is a fourth-year doctoral student in Computational Linguistics at the University of Gothenburg, Sweden. His research focuses on building and analysing language agents that can perceive the real world and act in it accordingly. As the real world involves humans, studying the human mind and behaviour for inspiration to build better language agents is another central theme in Nikolai’s research. In his free time, Nikolai bakes or plays instruments (piano and accordion).

How human is ChatGPT? 

2:20 EST
Presentation

Kiki Schirr

Researcher, Freelance Marketer, and Writer

Does AI deserve the truth?

Given that algorithms and AI such as ChatGPT are trained on publicly available datasets, whether their authors or users are fully aware of the training or not, do members of society who frequent sites populated by user-generated content (UGC) have a moral obligation not to participate in activities or generate images or text that would dirty AI’s data pool? That is, do netizens have an obligation not to lie to AI? After all, a human being would know Abraham Lincoln wasn’t an inaugural member of the Flat Earth Society, but AI is only one viral meme away from saying Lincoln wore a Flat Earth hat. Many of Kant’s arguments for the “perfect duty” of truth are predicated on the observer’s “intrinsic worth” as a human being. When the “rational” decision maker of utilitarianism is a black-box algorithm incapable of providing a proof for its conclusion, does this “perfect duty” become a fool’s errand? Worse, is there an argument to be made that sabotaging AI entrusted with making decisions about the fate of human beings (e.g. creditworthiness, benefits eligibility) is then a virtuous action? Is it possible that screwing with AI data pools is a moral obligation until such time as AI transparency and human oversight are enforced? Using real-world examples of algorithmic decision making, AI/ML mistakes, and human data obfuscation, I hope to spend twenty minutes raising uncomfortable questions about the looming potential of humanist data-clarity threat actors. Kiki Schirr is a writer and researcher who specializes in explaining rising technologies. Her past projects have included startup and ICO consulting, peer-to-peer distributed video chat, product launches, and drawing Tech Doodles.

3:00 EST
Presentation

Sidney Shapiro

Beyond Human Connections: Love in the Age of AI and the Evolution of Intimacy

Business Analytics at the University of Lethbridge

In this presentation, we explore the evolving landscape of interpersonal communication in a world immersed in generative AI, focusing specifically on the realm of online dating and the impact of AI-mediated chatbots on relationships and the concept of companionship. While the current state of AI is limited, advancements in computing offer the potential for the emergence of general intelligence AI. This prospect places us in a situation reminiscent of the Star Trek series, where we can envision a future that may be within reach, albeit with current technological limitations. What was once considered science fiction has now become a tangible possibility, captivating the collective imagination. As AI technologies continue to develop and gain power, it raises intriguing questions about our individual relationships with AI entities and how we interact with one another in a world increasingly mediated by AI-powered systems. The anthropomorphization of AI, wherein we attribute human-like qualities to these entities, further blurs the boundaries between humans and machines. This shift in perspective challenges traditional notions of companionship and love, as we navigate the complexities of forming emotional connections with AI. We will discuss the potential implications of these changes, exploring the social, psychological, and ethical dimensions of love in a world where AI plays an increasingly prominent role. By examining the evolving dynamics of online dating and AI-mediated interactions, we aim to shed light on the multifaceted nature of human-AI relationships and their impact on our society. Sidney Shapiro is an Assistant Professor of Business Analytics at the Dhillon School of Business, University of Lethbridge.

3:40 EST
Presentation

M. Hadi Fazeli

Philosophy of Responsibility at the Lund Gothenburg Responsibility Project, Sweden

AI-generated Misinformation: Who is Responsible?

Many philosophers argue that moral responsibility for an action or omission entails being appropriately held accountable for it. For instance, when people react to an agent spreading misinformation by distrusting, criticizing, shadow-banning, or blocking them, it indicates holding that agent accountable and assigning responsibility to them. However, the rise of AI and its potential for disseminating misinformation raises an important philosophical question: Who or what is the responsible entity when it comes to AI? If responsibility is determined by being appropriately held accountable, how do our reactions to AI-generated misinformation imply AI’s responsibility? In this talk, I argue that appropriate reactions to AI’s faults differ from those in interpersonal relationships. What may be appropriate for human agents may not apply to AI, and vice versa. After discussing different types of appropriate reactions to AI-generated misinformation, I propose that providing feedback to AI systems, specifically to facilitate their “self-correction” of mistakes, implies holding AI accountable for the information it generates and thereby establishes AI as a responsible entity. This new perspective on responsible AI allows for a more nuanced assessment of responsibility within the complex relationship between human agents and AI in these unprecedented times. M. Hadi Fazeli is a third year doctoral student, associated with the Lund Gothenburg Responsibility Project (LGRP), Sweden. M. Hadi's research focuses on examining the factors that contribute to reduced responsibility for individuals regarding their past actions.

4:20 EST
Presentation

Xiaomeng Ye

Computer Science and Machine Learning at Berry College

Parenting an AI

Developing and using AI shares many similarities with parenting a kid. A child is made by humans (hence artificial). A child is intelligent. A child thus qualifies as an artificial intelligence system in the broadest terms. This talk draws inspiration from parenting and maps its thinking and concepts onto AI. It raises a handful of questions and invites more from the audience, for example: Who decides when to give birth to a child/AI? Who is responsible for raising and training a child/AI? What determines the training content, textbooks, and curriculum for a child/AI? When a child/AI drives a car and causes an accident, who is responsible? When a child/AI has a conflict with another child/AI, who is supposed to intervene? What determines the social norms for a child/AI, and what determines the expectations in different social contexts? Where do we draw the line between benign actions (drawing on the wall) and harmful behaviors (playing with fire)? When does a child/AI grow up and start to be responsible for their own actions? When a grown-up/AI works in collaboration with others in a group effort, how is their responsibility delineated? Before we talk about AI ethics, we need to talk about human ethics. Before we can create a socially accepted, morally justified, legally responsible AI, we need to think about how to raise a socially accepted, morally justified, legally responsible child. Xiaomeng Ye is a recent graduate of IUB's Computer Science PhD program and currently teaches at Berry College. His research interest is in building new AI/ML algorithms. He is new to AI ethics but still wants to "throw a brick to attract jade".

Invited Panel: The Social Effects of AI

5:00 EST
Panel

How might the rapid development and deployment of AI affect our social institutions and daily lives? We need a relatively unified political approach for distributing the benefits and burdens of AI. But how can academia and industry collaborate productively on ethics and safety research?

Philosophy of Moral Cognition, Science, and Society

Philosophy of Mind, Cognitive Science, and Artificial Intelligence


Data/Machine Learning Strategy, AI Ethics, and Philosophy of Technology

Saturday, August 5th
12:00-6:30 PM EST

Educating the Public about AI Ethics

12:00 EST
Lightning Talk

Cargile Williams
Ricky Mouser

Everyone is a Philosopher

Moral Responsibility; Philosophy of Well-Being, at Indiana University

You’re a philosopher: You have deep commitments about what really matters and how to make sense of the world. You’ve probably already spent some time critically examining these commitments. Philosophy is crucial for thinking about your values in a rapidly changing world. But almost all philosophical education narrowly targets undergraduates between the ages of 18 and 22, unless you continue on to grad school. We think education should be a lifelong journey, and philosophy should be available to everyone. This conference is our first step towards reaching the broader public. So now we want to ask you: How can we reach you with philosophy in your everyday life? What modes of outreach are helpful for career professionals, AI researchers, and the public at large? We share some of our early results and invite you into a dialogue about how to reach the public. Cargile Williams is a PhD candidate at Indiana University who thinks a lot about moral responsibility and the stories we tell ourselves about ourselves. Ricky Mouser is a PhD candidate at Indiana University who thinks a lot about the future roles of play and work in human flourishing.

12:20 EST
Presentation

Shubhi Sinha

Advancing Responsible AI Education: Design and Implementation of an AI Ethics Curriculum for High School Learners

Life Sciences, Psychology, and Human-Computer Interaction at Indiana University

Despite the transformative impact of artificial intelligence (AI) on our daily lives, the design, development, and deployment of AI technologies often result in unfair and unintended consequences. As we navigate the rapidly evolving AI landscape, we must incorporate AI ethics literacy within K-12 education systems. Unfortunately, K-12 educators often struggle to develop research-based AI ethics curricula suitable for learners with both technical and non-technical backgrounds. Here, we present the design and implementation of an AI ethics curriculum that aims to improve AI ethics literacy among high school learners. Our curriculum is designed to teach 11th- and 12th-grade students about the ethical framework that guides the development and outcomes of responsible AI. The curriculum consists of five modules that cover four AI ethics principles: fairness, privacy, trustworthiness/transparency, and accountability. The ethical framework is informed by a pedagogical commitment to case-based/user-centric issues, real-world implications for ICICLE's use cases, general principles, and critical theory from science and technology studies (STS) on infrastructure. We aim to empower students to engage responsibly with AI by (1) critically analyzing its societal and ethical implications and (2) making informed decisions regarding the use of AI technologies. In doing so, we hope to cultivate responsibility among future AI users and developers. The modules were delivered to 80 students during a 5-day Pre-College Summer Program hosted by IU's Luddy School of Informatics, Computing, and Engineering. Student learning outcomes were measured through pre-assessments, post-assessments, focus groups, and individual interviews. Analysis of the assessment results and findings is currently underway. This work is a product of the Privacy, Accountability, and Data Integrity Work Group and the Workforce Development and Broadening Participation in Computing Work Group, ICICLE Institute, funded by the National Science Foundation, Award # OAC-2112606. Shubhi Sinha is an undergraduate senior at IU Bloomington pursuing a B.S. in Molecular Life Sciences, a minor in Psychology, and a certificate in Human-Computer Interaction. She is also a MediaMakers Lead Intern at the IU Center of Excellence for Women and Technology and hopes to drive change in the tech-health product space.

ChatGPT, Meaningful Work, and Job Replacement

1:00 EST
Lightning Talk

Cole Makuch

Senior Strategy Consultant and Reinforcement Learning Researcher at IBM

We should consider elevator operators

Until the 1970s, most buildings with elevators had elevator operators. Automation has since replaced almost all of them with buttons. I'd like to approach this change from two angles. 1. Capitalism's role in fears of job replacement from AI. In the early 20th century, Bertrand Russell illustrated similar labor-replacement concerns with an example of increased efficiency in a pin factory: if a technology were invented that doubled the efficiency of pin manufacturing, factories would not let workers work half as hard for the same output; they would fire half the workforce. In reasonable cases of job replacement (automatic elevators), we should direct our fear not at the technology responsible for the replacement, but at the labor system that handles replaced workers. 2. The benefits of elevator operators beyond taking cars to the correct floor, including security, building management, and creating community, which are lost with automatic elevators. When we consider the prospect of AI job replacement, we should consider both angles. Personally, I think the economy should have proper civil services such that people don't need to be pushing buttons just for the sake of having something to do. We also need to carefully consider the holistic impact of human labor before we replace positions with AI. Cole is a Senior Strategy Consultant for IBM, where he works day-to-day on projects related to IT modernization for large enterprises. Cole also collaborates with IBM Research, where he has contributed to publications on Reinforcement Learning (RL) AI and has interviewed industry professionals for their perspectives on use cases for RL and AI.

1:20 EST
Presentation

Carter Hardy

CompassionGPT: The Need to Prepare for AEI in Emotional Labor

Bioethics and Moral Psychology at Worcester State University

A recent study found that communication with ChatGPT seemed more empathetic to patients than communication with human doctors. While many factors contribute to this, the study demonstrates the desire for AI to perform emotional labor, as well as the development of artificial emotional intelligence (AEI) to do so. It is important that we prepare for this development and the inevitable ethical issues that will follow. I briefly address three. Theoretically, we should be cautious about the authenticity of AEI developed purely on cognitive theories of emotion: does it really empathize, and does that matter? Practically, we should be sure that AEI will achieve comparable results when performing emotional labor: do patients actually share more openly, and why? Psychologically, we should be cautious about offloading a key aspect of our moral psychology, our moral emotions, to another (potential) moral agent: how will this ultimately affect us as moral agents, and how can we mitigate the risks? Carter Hardy is an assistant professor of philosophy at Worcester State University, specializing in bioethics and moral psychology. His work has focused on the role that moral emotions such as empathy, sympathy, and calm play in medical practice.

Disaster Capitalism, Disability, and ChatGPT

2:00 EST
Presentation

Mich Ciurria

LLMs and Crisis Epistemology: The Business of Making Old Crises Seem New

Ethics, Marxist Feminism, and Critical Disability Theory

Large Language Models (LLMs) like ChatGPT have set in motion a series of crises that are seen as imminent and unprecedented. These crises include disruptions to (i) the labor force, (ii) education, and (iii) democracy. Naomi Klein (2023) points out that we cannot trust tech CEOs to solve these crises because they have a vested interest in perpetuating them as beneficiaries of disaster capitalism, i.e., a political economy that exploits instability to entrench oppression. Who, then, can solve the AI crisis? I submit that the answer is: oppressed people with intergenerational knowledge of crises. To oppressed folks, tech-related crises are not new, but merely an extension of hundreds of years of uninterrupted subjugation. The popular misconception of the AI crisis as without precedent is an example of what Kyle Whyte calls “crisis epistemology,” a pretext of newness used to dismiss the accumulated wisdom of intergenerationally oppressed peoples. If AI-related crises are new, then what do Indigenous people know about them? Nothing. In this paper, I explain how mainstream philosophy is using crisis epistemology to dismiss the testimony of racialized and disabled peoples on AI. I then point to some solutions offered by oppressed peoples – solutions that take aim at neoliberalism. A common theme of anti-oppressive discourse is a rejection of neoliberal economics. This situated perspective is missing from dominant philosophical discourses on AI. Mich Ciurria (she/they) is a queer, gender-variant, disabled philosopher who works on ethics, Marxist feminism, and critical disability theory. She completed her PhD at York University in Toronto and subsequently held postdoctoral fellowships at Washington University in St. Louis and the University of New South Wales, Sydney. She is the author of An Intersectional Feminist Theory of Moral Responsibility (Routledge, 2019) and a regular contributor to BIOPOLITICAL PHILOSOPHY, the leading blog on critical disability theory.

2:40 EST
Presentation

Tekla Babyak

AI Stands for Ableist Intelligence: Bias Against Job Seekers with Disabilities

Independent Scholar, Musicologist, and Disability Activist

Across many industries and companies, interviews are profoundly biased against disabled job seekers. Interviewers often unfairly assess candidates on qualities such as eye contact, a firm handshake, and straight posture, most of which are irrelevant to the qualifications and duties of the job, as I have argued in “My Intersecting Quests as a Disabled Independent Scholar” (published in Current Musicology, Fall 2020). What happens to these ableist dynamics when AI is tasked with conducting video interviews and making hiring decisions? Regrettably, it seems that all of these human biases are replicated and perhaps even amplified. Hiring bots tend to give higher scores to candidates who resemble current employees in terms of body language, facial expressions, and speech patterns (see Haley Moss, “Screened Out Onscreen: Disability Discrimination, Hiring Bias, and Artificial Intelligence,” Denver Law Review, 2021). Such algorithms screen out disabled and neurodiverse candidates who might have atypical eye movements, unusual speech intonations, etc. In recent months, some laws have been passed to regulate these AI hiring platforms (https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence), but so far these laws seem to be inconsistently enforced and ADA compliance remains limited. In my presentation, speaking from my vantage point as an unemployed scholar with MS, I will propose that the algorithms should be trained with more data from disabled people, in order to mitigate the devastating consequences of the ableist intelligence tools that shape today’s workforce. Currently based in Davis, CA, Tekla Babyak has a PhD in Musicology from Cornell. She is an independent scholar and disability activist with multiple sclerosis.

How can we fix ChatGPT’s bias problem? 

3:20 EST
Lightning Talk

Jovy Chan

‘Garbage In, Quality Out’: How ChatGPT can counteract the spread of misinformation

Social and Political Philosophy at the University of Toronto

With the rapid rise of ChatGPT, many have warned about the accuracy of the information it produces. The common phrase ‘garbage in, garbage out’ highlights the difficulty faced by Large Language Models trained on vast amounts of textual data. Given the amount of misinformation and fake news spreading throughout the internet, we can only expect the program to be as good (or bad) as the data on which it is trained. I, however, remain optimistic. This is because the mistakes and biases we make are often systematic, not random, and systematic errors are easier to pre-empt and offset. Knowing that human beings often make the same mistakes in the same way, it might be possible for AI models to counteract their effects by adjusting their learning instructions to accommodate such systematic biases. Jovy is a PhD candidate at the University of Toronto and works in social and political philosophy. In particular, she looks at the natural human tendency to conform to the crowd and how it might affect people's belief-formation and decision-making processes.
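
As a toy illustration of how a systematic (rather than random) bias can be modelled and offset, consider the sketch below. It is my own example rather than anything from the talk, and it assumes the rate at which people simply echo a popular answer is known or can be estimated.

```python
import numpy as np

# Toy example: conformity bias is systematic, so if we can estimate how often
# annotators merely echo the popular answer, the distortion can be inverted
# rather than hoping it averages out like random noise.

def debias_label_distribution(observed, popular, mix_rate):
    """Estimate the true label distribution, assuming
    observed = (1 - mix_rate) * true + mix_rate * popular."""
    est = (observed - mix_rate * popular) / (1.0 - mix_rate)
    est = np.clip(est, 0.0, None)   # guard against small negative estimates
    return est / est.sum()          # renormalise to a probability distribution

observed = np.array([0.70, 0.20, 0.10])  # label shares seen in the scraped data
popular = np.array([1.00, 0.00, 0.00])   # the viral / majority answer people echo
print(debias_label_distribution(observed, popular, mix_rate=0.4))
# -> approximately [0.50, 0.33, 0.17]: estimated shares before conformity bias
```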

3:40 EST
Presentation

Anwar ul Haq

Ethics and Philosophy of Technology at the University of Pittsburgh

Is regulation of training data for LLMs possible and desirable?

The recent advances in LLMs are enabled by immense computing power and the availability of extremely large datasets. We also know that the quantity of training data and the ability of models to produce human-like responses scale together. But the sheer quantity of data is by itself blind to its ethical quality. It is common knowledge, for instance, that a lot of freely available data on the internet is biased against minority groups. We need a framework to moderate training data for LLMs through regulation, so that the resulting LLMs are well-trained, or ‘well-educated’. It is not easy to devise a framework for such data moderation. One wonders who will decide on the ethical quality of datasets. Many thinkers, such as Marietje Schaake, have suggested that this need not be too hard, since we have already achieved analogous regulation in other spheres, with bodies such as the FDA. But it is hard to see that the analogy is strong. I dispute this analogy in the paper and offer a better model for thinking about the issue. Anwar ul Haq is a graduate student in philosophy at the University of Pittsburgh. Anwar has a background in technology and is interested in questions at the intersection of ethics and technology.

4:20 EST
Presentation

Soribel Feliz

CEO and Co-founder of Responsible AI = Inclusive AI

Algorithms are Personal

Understanding how algorithms shape people's behaviors and lives is a prerequisite to developing sound AI policy and fair regulations. It is our duty to raise awareness about the ways algorithms affect our everyday lives, and especially how algorithms perpetuate, at scale, the biases that marginalized groups already experience. From extending institutional prejudices that deny opportunities to minorities (in schools, banks, courts, and workplaces), to categorizing certain demographics as at-risk or high-risk groups, to using surveillance tactics to keep people ‘in check’, it is imperative that we rein in a technology that threatens to increase wealth and power inequality to levels never before seen in American society while keeping certain groups ‘in their place’. In this presentation, I will discuss two or three case studies that show why it is important for us to be aware of and understand when and how algorithms are being used, what variables go into those algorithms, and who is responsible and accountable for creating algorithms that harm people. We should not have to wait for an investigative journalist to go digging for a story to know that an algorithm was used; it should be clear to us as citizens, users, and consumers when an automated system is making life-changing decisions for us. Some highlights include: • the importance of transparency and accountability when using algorithms and automated systems; • demanding more explainability in algorithmic systems and reducing the ‘black-box’ justification; • the importance of demanding respect for our privacy and our data. Soribel Feliz is a thought leader in the fields of Responsible AI and emerging tech. She is CEO and co-founder of Responsible AI = Inclusive AI, a responsible tech consulting company. Previously, she worked for the trust and safety team within the content regulations team at Meta. Before joining Meta, she was a Foreign Service Officer for the US Department of State for a decade, with both domestic and overseas postings.

Keynote: Quantification and the Limits of Scale

5:00 EST
Keynote

Data is a very specific kind of thing: it is information that has been prepared to travel between contexts. This creates the potential for massive big-data aggregations, but it also carries a very specific limitation: the data collection procedure acts as a sharp filter. It typically leaves out the kinds of understanding that are highly context-sensitive, nuanced, or dependent on non-standardized forms of expertise. Algorithmic procedures built on those feedback loops can obscure those limitations, especially in the case of quantified values, goals, and targets.

Philosophy of Trust, Art, Games, and Communities
