
Want to Present?  

Submissions are due June 19 at 5PM EST.

What kind of talk do you want to give?

Thanks for your submission! We'll be in touch soon.

Sign Up to Attend

online workshop series

Past Philosophy Hours
chats about our values

Check out some of our takeaway slides from past workshops, and come join us for future discussions!

 

We work hard to keep things kind, constructive, and fun, without shying away from real issues.

Spots are limited. Sign up now!

Takeaway Slides


From "ChatGPT and Bullshit"

how we got here

Our Story

“It’s war, the soul of humanity is at stake, and the discipline that has been in isolation training for 2000 years for this very moment is too busy pointing out tiny errors in each other’s technique to actually join the fight.”

—C. Thi Nguyen

Manifesto for Public Philosophy

We're two PhD candidates who have been in academic philosophy long enough to see just how isolated it is from a real world on the precipice of massive change.

In the age of AI, technology is already reshaping our world, challenging our grip on what's real, and concentrating power in the hands of Big Tech.

The occasional op-ed won't cut it anymore.

We need to bring everyone into the conversation.

We'll also have

Invited Panels

30 minute discussion
30 minute q+a

Presentations

15-20 minute talk
20-25 minute q+a

Lightning Talks

5-7 minute talk
13-15 minute q+a

the conference

Who's Responsible for ChatGPT?
Building a Public Vision for AI

August 3-5 over Zoom

What personal, corporate, and political policies should regulate the use and development of ChatGPT?

 

Who pays when ChatGPT causes harm?

The conference is free, held over Zoom, and open to everyone.

Philosophy aimed at improving the lives of humans,
made accessible to humans.

Our Mission

what's next

Here's what we're thinking...


Future Conferences

AI, Labor, and Knowledge


Nature Retreats

AI Ethics in the Wilderness


Socratic Small Groups

Discussions on Pressing Issues


Storytelling + Stand-up

Truly Public Philosophy


Conference Schedule


Thursday, Friday, & Saturday, Aug 3-5

Thursday, August 3rd
1:00-6:00 PM EST

What Preconceived Beliefs Do We Have About ChatGPT?

1:00 EST
Presentation

Waleed Zuberi

Human-Computer Interaction and Design

A Survey of Beliefs and Attitudes toward Artificial Intelligence

Generative chatbots (e.g., ChatGPT or Bard), voice-cloning software (e.g., Murf or Listnr), and image generators (e.g., Midjourney or Stable Diffusion) are publicly available, leaving society to wrestle with the potential and perceived benefits and drawbacks of Artificial Intelligence (AI). As a result of the rapid rollout and swift adoption of this technology, critical questions about its social and ethical implications, as well as its potential threats, need to be explored. For this reason, we present the results of an exploratory survey of 122 respondents, covering their understanding, beliefs, and perceptions about the impact of AI (i) as experienced in the real world when engaging with some of the named technologies and (ii) as depicted and perceived via different sci-fi media, for example, streaming shows or sci-fi movies. Our results indicate that people assume they have a baseline understanding of what the term AI means, with a lesser understanding of related technical terms and concepts such as neural networks or deep learning. Respondents report being familiar with sci-fi, especially sci-fi movies and shows, and acknowledge a certain degree of influence from sci-fi on their views of AI. In addition, our results show that people, when forced to decide between good and bad, mostly believe that AI will indeed have a beneficial impact on their future lives, although many participants noted in an open-ended follow-up question the potential for this emerging technology to cause harm. Waleed Zuberi is a graduate student in the Human-Computer Interaction and Design program at Indiana University. With a background in digital marketing and product management, he is passionate about leveraging design to create accessible, safe, and engaging experiences.

ChatGPT and the Future of the Humanities

1:40 EST
Lightning Talk

Jonathan Griffiths

Ancient Philosophy

Can ChatGPT write my abstract for me?

When is my language no longer my own? When does my use of language cease to be my responsibility, and what are the limits or conditions of human authorship? Whilst these questions have long been asked by philosophers in the context of, e.g., politics, art and the law, the emergence of Large Language Models like ChatGPT poses new challenges for differentiating between ‘natural’ and ‘artificial’ language use. This is because ChatGPT has the capacity to produce natural-sounding and contentful human speech, yet that speech is also determined by the specific prompt of the human user. In my flash-talk I want to consider some issues which arise from using ChatGPT as a language assistant in the context of academic writing in the humanities. Can ChatGPT be regarded as a co-author in cases where it has provided meaningfully relevant speech, or do these instances still qualify as human authorship? If so, why? Jonathan Griffiths is a postdoc philosopher at the University of Tübingen in Germany. Jonathan got into philosophy after studying ancient and modern languages by reading Plato and the philosophers of Ancient Greece; in particular, he was swept away by Plato’s conception of philosophy as dialogue, and such ideas as Socrates’ definition of thought as 'the soul being in conversation with itself’. It’s in that spirit of being a lover of discourse and philosophical communication that he’s now getting interested in the technology of natural language generation.

2:00 EST
Presentation

Charles Freiberg

Philosophy of Technology

ChatGPT and the Future of Liberal Arts Education

Since the introduction of ChatGPT, there has been considerable unease within educational institutions about what this AI means for the future of education, especially as it opens the possibility of undetectable forms of plagiarism. In this paper, I consider what programs like ChatGPT could mean for the future of liberal arts education. I will do this in two parts. First, I will situate ChatGPT within a larger history of the offloading of intellectual activity onto tools and the ways in which this offloading is a constitutive part of intellectual life that serves as both a possible remedy to human lack and a possible poison to human life. I suggest that while the offloading of writing itself onto technology may constitute new forms of intellectual life, it is a poison for the spirit of liberal arts education. Second, I will consider a possible remedy to this poison in terms of a new orality. The hope is to find a role for students beyond that of author or operator and editor of technology by giving over writing in its current form to the technology that is poised to take it. It is my suggestion that there is a final death of the (human) author that requires a new form of engagement with tradition that cannot be a simple return to an oral culture but cannot be the continuation of an education based on writing, and it’s a reimagination of orality and an education based on orality that may be the required therapy. Charles Freiberg is a PhD Candidate in philosophy at Saint Louis University working on a dissertation in the philosophy of technology. Charles is interested in questions about technology, place, education, and what it means to be human.

2:40 EST
Presentation

Paweł Łupkowski
Tomáš Ondráček

Psychology & Cognitive Science;

Economics

Dear Professor, can I use ChatGPT to write my essay?
Official university statements concerning the use of ChatGPT.

In his famous 1950 paper, Alan Turing considered the so-called Heads in the Sand Objection to the idea of thinking machines. The objection states, "The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so." With the rise of ChatGPT, we observe voices that, at first glance, resemble the aforementioned objection (see, e.g., "Pause Giant AI Experiments: An Open Letter"). In our talk, we would like to present an analysis of official statements issued by universities that address issues related to ChatGPT usage for academic purposes. We aim to grasp and identify which issues concerning ChatGPT are considered, how it is framed, what recommendations, arrangements, and provisions are proposed, and, where present, what kind of justification and argumentation is offered. The goal is to present how universities currently approach ChatGPT and what actions we can expect. Paweł Łupkowski is an associate professor at the Faculty of Psychology and Cognitive Science at Adam Mickiewicz University. His scientific interests are the formal analysis of cognitive processes, the conceptual foundations of AI, and human-robot interaction. Tomáš Ondráček is an assistant professor at the Faculty of Economics at Masaryk University. His teaching focuses on philosophy, ethics, argumentation, psychology, and human resources.

3:20 EST
Presentation

Rich Eva
Nick Hadsell
Kyle Huitt

Should Philosophy Journals Accept AI Submissions?

Ethics & Political Philosophy;

Philosophy of Parental Rights;

Formal Epistemology & Philosophy of Religion

AI is coming for philosophy journals, and we argue that we should welcome it. While philosophy has traditionally been a human endeavor, we think that there is room within philosophy journals for beneficial contributions from AI. Our positive case is that (especially in some subfields) AI stands to make significant contributions to ongoing projects, and it benefits the world of philosophy for those contributions to be published in journals, the primary purpose of which is to disseminate significant contributions to philosophy. We distinguish between different kinds of contributions that AI might be able to generate, and caution against publishing some of them. Among other things, we think AI will be excellent at synthesizing literatures that are otherwise impractical to read in their entirety, making progress in formal areas of philosophy where there are clear logical and mathematical frameworks in place, and conducting reviews of new literature. We consider objections that AI is incapable of original work, that AI is incapable of doing philosophy the right way, and that AI publications will disincentivize humans from publishing. Ultimately, we conclude that once AI is sufficiently advanced, there should be journals entirely dedicated to work done by AI and journals that are mostly dedicated to work done by humans. Philosophy is good for humans, and we think AI can help humans do philosophy better. Rich Eva is a PhD Candidate at Baylor University specializing in ethics and political philosophy. Nick Hadsell is a doctoral student in philosophy at Baylor University working on a dissertation about parental rights. Kyle Huitt is a doctoral student in philosophy at Baylor University specializing in formal epistemology and philosophy of religion.

Can Heidegger help us think about ChatGPT?

4:00 EST
Lightning Talk

Deepa Shukla

Question Concerning Technology:
Can Art Save us from Upcoming Technological Danger?

Philosophy of Natural Language Processing

Heidegger long ago argued that technology has the potential to enframe any natural thing. With time, it has also started reducing human beings; it has enframed humans in a technological mode of being, where human beings are objectified and subordinated to the logic of efficiency and effectiveness. There is no denying that new AI technologies are helping humans in many ways, such as increased efficiency, productivity, and innovation. Still, there are also growing concerns about their impact on human autonomy, freedom, and dignity. This paper aims to explore how AI is being used to enframe humans and examine the broader implications of this trend for human well-being and social justice. From there, we will move to our major concern: whether we really have a sweet spot where we can settle AI and its development. Heidegger proposed the 'Art/Poetic mode of being' as a middle path between shutting down technological development and losing control over unsafe instrumental use of technology. But the question is: 'Can Art save us from the upcoming danger?' 'Do we really have a sweet, safe spot?' Deepa Shukla is a PhD research scholar at the Indian Institute of Technology, Jodhpur (India). Deepa's research interests are the philosophy of artificial intelligence, natural language processing, philosophy of mind, and consciousness. Deepa is pursuing research in the philosophy of NLP, i.e., exploring the limitations of LLMs.

4:20 EST
Presentation

William Watkins

Philosophy, esp. Heidegger

Heidegger & Existential Risk; A Conversation Towards AI

This paper investigates the nature of "existential threat," using the current Artificial Intelligence debate as a touchstone while utilizing the work of Martin Heidegger to conceptualize existential threat writ large. Heidegger's distinction between fear and anxiety as presented in Being and Time serves as an initial resource for a typology of varying threats, so as to then redefine what sort of existential threat AI may pose, if any. As a result of Heidegger's conception of Dasein, an existential threat is separable from simply physical threats of an "extinction event." Using this framework, this paper briefly explicates Heidegger's conception of language as a violence, and the nature of this violence as threat, as shown in Introduction to Metaphysics. Language is violence in that it allows humanity to falsely believe itself to have created the tool, while, instead, simultaneously being intellectually conformed to language, rather than acting as its "master." Our relationship with AI functions similarly; we believe ourselves to be masters of this mode of information organization despite its ever-increasing likelihood of developing a "life" of its own, so to speak. In this way, AI is an existential threat in that it is becoming the very fabric of our communication, leaving us vulnerable to undetectable, artificially implemented changes in social discourse by way of mediation. The primary, though not exclusive, concern is that if AI monitors our digital communication, and digitally mediated communication is becoming the overwhelming norm, then it can prevent substantial action against it from being taken, among other things. By way of conclusion, this paper supposes that Heidegger's conception of threat ought still to be scrutinized, as the relationship between his notions of fear, anxiety, and the resulting action of threat "identification" is consistent with his involvement in National Socialism as rector of Freiburg from 1933-34. His philosophy of threat, along with his involvement in National Socialism, should inform us of the risk which is taken on when declaring an existential threat as such. Such a declaration requires swift and decisive action while the declarer is simultaneously part of a nexus of narratives. In the case of AI, many of the narratives informing our declaration of threat are already being curated by the programs in question, such as ChatGPT, search engine AI, and news feed algorithms. The fact that our social decision-making apparatus is mediated by the very thing about which we are deciding puts our reliability as decision-makers into question, thus increasing vulnerability to the threat itself. William Watkins is a Master's student in Philosophy at Boston College and received his Bachelor's in Philosophy at The College of William & Mary. William's interests in the field include Epistemology, Metaphysics, Philosophy of Science, and Phenomenology, with particular interest in Martin Heidegger.

Invited Panel: The AI Arms Race

5:00 EST
Panel

Everyone might acknowledge that slowing down and putting guardrails on AI development are good ideas, but no one has the power to do that. So how do we slow down and implement greater foresight and anticipatory governance?

Adam Briggle

University of North Texas


Suzanne Kawamleh

Cummins Inc.

Fiona J McEvoy

All Tech is Human

Prateek Sharma

Indiana University

Friday, August 4th

1:00-6:00 PM EST

The Benefits and Drawbacks of Multimodal Perception 

1:00 EST
Presentation

Abouzar Moradian Tehrani

Advancing AI Language Models: Embracing Multimodal Perception and Selective Reinforcement Learning

Philosophy & Machine Learning Engineering

With the advent of advanced language models like ChatGPT, I propose that their reliability hinges on a shift towards multimodal perception and a judicious use of reinforcement learning. Currently, these models excel at participating in Language Games, yet they often fall into the trap of creating fictional responses. This unreliability primarily stems from their text-bound nature, which lacks any perceptual modalities to verify their outputs. Incorporating other modalities such as image and audio processing, akin to recent developments in GPT-4, can enhance the models' veridicality. The ability to corroborate textual information with other sensory data, much as humans cross-check information, would curtail the generation of fictional responses. Furthermore, constant self-updating and cross-modality checking during the inference phase will augment the models' accuracy. Simultaneously, I caution against overreliance on reinforcement learning fine-tuning. As a large language model's goal is to predict the next token based on prior tokens, the models may default to generating agreeable and intelligible but possibly fallacious responses. Prioritizing user satisfaction over validity can aggravate this bias, so reinforcement learning should aim to balance validity and agreeability. Lastly, I argue for a more sophisticated weighting of data sources during training. A hierarchical approach should be considered, where academically acclaimed and highly cited sources take precedence over less credible online narratives. Such selective processing would equip AI models with a more discerning foundation of knowledge, thereby improving the validity of their output. In conclusion, for AI language models to graduate from mere linguistic players to reliable knowledge sources, a shift towards multimodal learning, valid knowledge integration, and nuanced reinforcement strategies is imperative. I am a PhD candidate in Philosophy at Texas A&M University, and I recently completed a master's degree in Computer Engineering. I am a machine learning engineer, and my areas of interest are Computer Vision and NLP.

1:40 EST
Presentation

Nikolai Ilinykh

ChatGPT goes into the physical world: on dangers and future of multi-modal language 

Computational Linguistics

ChatGPT has been a force to be reckoned with in the field of Natural Language Processing. However, it has also raised concerns and attracted a lot of public attention towards AI and Computational Linguistics. In fact, the revolution began in 2017 with the introduction of a new type of language model known as the transformer architecture, which is basically the backbone of ChatGPT. These models have consistently demonstrated their ability to provide accurate solutions to various text-based problems, including solving math equations and generating coherent narratives. Nowadays, researchers are actively working on developing an improved version of ChatGPT: a multi-modal architecture that can integrate text with other modalities like vision, sound, and other senses. The ultimate goal of such research is to develop a safer and less biased version of ChatGPT-like models. In my talk, I will address the critical issues surrounding large language models like ChatGPT in the language-and-vision domain. A common task for such models is to describe images in natural, human-like language. I will present examples that highlight how these models tend to capture and amplify gender and racial biases when describing images. Additionally, I will explain the inner workings of statistical models like ChatGPT and emphasise what is important to keep in mind when playing around with such models. It is crucial to recognise that although these models exhibit biases, humans can mistakenly attribute human-like properties to them. I will shed light on the key components of ChatGPT and raise the question: Is ChatGPT itself the problem, or is it the information provided by humans during model training? Perhaps it is a combination of both. The primary objective of my talk is to increase awareness and encourage a deeper understanding of ChatGPT's inner workings and its implications for society. Nikolai is a 4th-year doctoral student in Computational Linguistics at the University of Gothenburg, Sweden. His research focuses on building and analysing language agents that can perceive the real world and act in it accordingly. As the real world involves humans, studying the human mind and behaviour for inspiration to build better language agents is another central theme in Nikolai's research. In his free time, Nikolai bakes or plays instruments (piano and accordion).

How Human Is ChatGPT?

2:20 EST
Presentation

Xiaomeng Ye

Computer Science

Parenting an AI

Developing and using AI shares many similarities with parenting a kid. A child is made by humans (hence artificial). A child is intelligent. A child thus qualifies as an artificial intelligence system in broad terms. This talk draws inspiration from parenting and maps its thinking and concepts onto AI. It raises a bunch of questions and invites more from the audience, for example: Who decides when to give birth to a child/AI? Who is responsible for raising and training a child/AI? What decides the training content, textbook, and curriculum for a child/AI? When a child/AI drives a car and causes an accident, who is responsible? When a child/AI has a conflict with another child/AI, who is supposed to intervene? What decides the social norms for a child/AI, and what decides the expectations in different social contexts? Where do we draw the line between benign actions (drawing on the wall) and harmful behaviors (playing with fire)? When does a child/AI grow up and start to be responsible for their own actions? When a grown-up/AI works in collaboration with others in a group effort, how is their responsibility delineated? Before we talk about AI ethics, we need to talk about human ethics. Before we can create a socially accepted, morally justified, legally responsible AI, we need to think about how to raise a socially accepted, morally justified, legally responsible child. Xiaomeng Ye is a recent graduate of IUB's Computer Science PhD program and is currently teaching at Berry College. His research interest is in building new AI/ML algorithms. He is new to AI ethics but still wants to "throw a brick to attract jade."

3:00 EST
Presentation

Sidney Shapiro

Beyond Human Connections: Love in the Age of AI and the Evolution of Intimacy

Business Analytics

In this presentation, we explore the evolving landscape of interpersonal communication in a world immersed in generative AI, focusing specifically on the realm of online dating and the impact of AI-mediated chatbots on relationships and the concept of companionship. While the current state of AI is limited, advancements in computing offer the potential for the emergence of artificial general intelligence. This prospect places us in a situation reminiscent of the Star Trek series, where we can envision a future that may be within reach, albeit with current technological limitations. What was once considered science fiction has now become a tangible possibility, captivating the collective imagination. As AI technologies continue to develop and gain power, intriguing questions arise about our individual relationships with AI entities and how we interact with one another in a world increasingly mediated by AI-powered systems. The anthropomorphization of AI, wherein we attribute human-like qualities to these entities, further blurs the boundaries between humans and machines. This shift in perspective challenges traditional notions of companionship and love as we navigate the complexities of forming emotional connections with AI. We will discuss the potential implications of these changes, exploring the social, psychological, and ethical dimensions of love in a world where AI plays an increasingly prominent role. By examining the evolving dynamics of online dating and AI-mediated interactions, we aim to shed light on the multifaceted nature of human-AI relationships and their impact on our society. I am an Assistant Professor of Business Analytics at the Dhillon School of Business, University of Lethbridge.

3:40 EST
Presentation

M. Hadi Fazeli

AI-generated Misinformation: Who is Responsible?

Philosophy of Responsibility

Many philosophers argue that moral responsibility for an action or omission entails being appropriately held accountable for it. For instance, when people react to an agent spreading misinformation by distrusting, criticizing, shadow-banning, or blocking them, it indicates holding that agent accountable and assigning responsibility to them. However, the rise of AI and its potential for disseminating misinformation raises an important philosophical question: Who or what is the responsible entity when it comes to AI? If responsibility is determined by being appropriately held accountable, how do our reactions to AI-generated misinformation imply AI’s responsibility? In this talk, I argue that appropriate reactions to AI’s faults differ from those in interpersonal relationships. What may be appropriate for human agents may not apply to AI, and vice versa. After discussing different types of appropriate reactions to AI-generated misinformation, I propose that providing feedback to AI systems, specifically to facilitate their “self-correction” of mistakes, implies holding AI accountable for the information it generates and thereby establishes AI as a responsible entity. This new perspective on responsible AI allows for a more nuanced assessment of responsibility within the complex relationship between human agents and AI in these unprecedented times. I am M. Hadi Fazeli, a doctoral student in my third year, associated with the Lund Gothenburg Responsibility Project (LGRP), Sweden. My research focuses on examining the factors that contribute to reduced responsibility for individuals regarding their past actions.

4:20 EST
Presentation

Christine Schirr

Does AI deserve the truth?

Writer and Researcher of Rising Technologies

Given that algorithms and AI such as ChatGPT are trained on publicly available datasets, whether their authors or users are fully aware of the training or not, do members of society who frequent sites populated by user-generated content (UGC) have a moral obligation not to participate in activities or generate images or text that would dirty AI's data pool? That is, do netizens have an obligation not to lie to AI? After all, a human being would know Abraham Lincoln wasn't an inaugural member of the Flat Earth Society, but AI is only one viral meme away from saying Lincoln wore a Flat Earth hat. Many of Kant's arguments for the "perfect duty" of truth are predicated on the observer's "intrinsic worth" as a human being. When the "rational" decision maker of utilitarianism is a black-box algorithm incapable of providing a proof for its conclusion, does this "perfect duty" become a fool's errand? Worse, is there an argument to be made that sabotaging AI tasked with making decisions about the fate of human beings (e.g., creditworthiness, benefits eligibility) is then a virtuous action? Is it possible that screwing with AI data pools is a moral obligation until such time as AI transparency and human oversight are enforced? Using real-world examples of algorithmic decision making, AI/ML mistakes, and human data obfuscation, I hope to spend twenty minutes raising uncomfortable questions about the looming potential of humanist data clarity threat actors. Kiki Schirr is a writer and researcher who specializes in explaining rising technologies. Her past projects have included startup and ICO consulting, peer-to-peer distributed video chat, product launches, and drawing Tech Doodles.

Invited Panel: Bridging the Gap Between Industry and Academia

5:00 EST
Panel

How might the rapid development and deployment of AI affect our social institutions and daily lives? We need a relatively unified political approach for distributing the benefits and burdens of AI—how can academia and industry collaborate productively on ethics and safety research?

Regina Rini

York University

Cameron Buckner

University of Houston


Ken Archer

Twitch

Saturday, August 5th
1:00-6:30 PM EST

ChatGPT and the Future of Labor 

1:00 EST
Lightning Talk

Cole Makuch

Senior Strategy Consultant

We should consider elevator operators

Until the 1970s, most buildings with elevators had elevator operators. Automation has replaced almost all of them with buttons. I'd like to approach this change from two angles: 1. Capitalism's role in fears of job replacement from AI. In the early 20th century, Bertrand Russell illustrated similar labor-replacement concerns with an example of increased efficiency in a pin factory: if technology were invented that doubled the efficiency of pin manufacturing, factories would not let workers work half as hard for the same output; they would fire half the workforce. In reasonable cases of job replacement (automatic elevators), we should direct our fear not to the technology responsible for the replacement, but to the labor system that handles replaced workers. 2. Benefits of elevator operators beyond taking cars to the correct floor, including security, building management, and creating community, that are lost with automatic elevators. When we consider the prospect of AI job replacement, we should consider both angles. Personally, I think the economy should have proper civil services such that people don't need to be pushing buttons just for the sake of having something to do. We also need to carefully consider the holistic impact of human labor before we replace positions with AI. Cole is a Senior Strategy Consultant for IBM, where day-to-day he works on projects related to IT modernization for large enterprises. Cole also collaborates with IBM Research, where he has contributed to publications on Reinforcement Learning (RL) AI and has interviewed industry professionals for their perspectives on use cases for RL and AI.

1:20 EST
Presentation

Carter Hardy

CompassionGPT: The Need to Prepare for AEI in Emotional Labor

Philosophy of Bioethics & Moral Psychology

A recent study found that communication with ChatGPT seemed more empathetic to patients than communication with human doctors. While many factors contribute to this, the study demonstrates the desire for AI to perform emotional labor, as well as the development of artificial emotional intelligence (AEI) to do so. It is important that we prepare for this development and the inevitable ethical issues to follow. I briefly address three. Theoretically, we should be cautious about the authenticity of AEI developed purely on cognitive theories of emotion. Do they really empathize, and does it matter? Practically, we should be sure that AEI will achieve comparable results when performing emotional labor. Do patients actually share more openly, and why? Psychologically, we should be cautious about offloading a key aspect of our moral psychology, our moral emotions, to another (potential) moral agent. How will this ultimately affect us as moral agents, and how can we mitigate the effects? Carter Hardy is an assistant professor of philosophy at Worcester State University, specializing in bioethics and moral psychology. His work has focused on the role that moral emotions such as empathy, sympathy, and calm play in medical practice.

Indigenous and Disabled Perspectives on ChatGPT 

2:00 EST
Presentation

Mich Ciurria

LLMs and Crisis Epistemology: The Business of Making Old Crises Seem New

Philosophy of Ethics, Marxist Feminism, & Critical Disability Theory

Large Language Models (LLMs) like ChatGPT have set in motion a series of crises that are seen as imminent and unprecedented. These crises include disruptions to (i) the labor force, (ii) education, and (iii) democracy. Naomi Klein (2023) points out that we cannot trust tech CEOs to solve these crises because they have a vested interest in perpetuating them as beneficiaries of disaster capitalism, i.e., a political economy that exploits instability to entrench oppression. Who, then, can solve the AI crisis? I submit that the answer is: oppressed people with intergenerational knowledge of crises. To oppressed folks, tech-related crises are not new, but merely an extension of hundreds of years of uninterrupted subjugation. The popular misconception of the AI crisis as without precedent is an example of what Kyle Whyte calls “crisis epistemology,” a pretext of newness used to dismiss the accumulated wisdom of intergenerationally oppressed peoples. If AI-related crises are new, then what do Indigenous people know about them? Nothing. In this paper, I explain how mainstream philosophy is using crisis epistemology to dismiss the testimony of racialized and disabled peoples on AI. I then point to some solutions offered by oppressed peoples – solutions that take aim at neoliberalism. A common theme of anti-oppressive discourse is a rejection of neoliberal economics. This situated perspective is missing from dominant philosophical discourses on AI. Mich Ciurria (she/they) is a queer, gender-variant, disabled philosopher who works on ethics, Marxist feminism, and critical disability theory. She completed her PhD at York University in Toronto and subsequently held postdoctoral fellowships at Washington University in St. Louis and the University of New South Wales, Sydney. She is the author of An Intersectional Feminist Theory of Moral Responsibility (Routledge, 2019) and a regular contributor to BIOPOLITICAL PHILOSOPHY, the leading blog on critical disability theory.

2:40 EST
Presentation

Tekla Babyak

AI Stands for Ableist Intelligence: Bias Against Job Seekers with Disabilities

Independent Scholar & Disability Activist

Across many industries and companies, interviews are profoundly biased against disabled job seekers. Interviewers often unfairly assess candidates on qualities such as eye contact, a firm handshake, and straight posture, most of which are irrelevant to the qualifications and duties of the job, as I have argued in “My Intersecting Quests as a Disabled Independent Scholar” (published in Current Musicology, Fall 2020). What happens to these ableist dynamics when AI is tasked with conducting video interviews and making hiring decisions? Regrettably, it seems that all of these human biases are replicated and perhaps even amplified. Hiring bots tend to give higher scores to candidates who resemble current employees in terms of body language, facial expressions, and speech patterns (see Haley Moss, “Screened Out Onscreen: Disability Discrimination, Hiring Bias, and Artificial Intelligence,” Denver Law Review, 2021). Such algorithms screen out disabled and neurodiverse candidates who might have atypical eye movements, unusual speech intonations, etc. In recent months, some laws have been passed to regulate these AI hiring platforms (https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence), but so far these laws seem to be inconsistently enforced and ADA compliance remains limited. In my presentation, speaking from my vantage point as an unemployed scholar with MS, I will propose that the algorithms should be trained with more data from disabled people, in order to mitigate the devastating consequences of the ableist intelligence tools that shape today’s workforce. Currently based in Davis, CA, Tekla Babyak has a PhD in Musicology from Cornell. She is an independent scholar and disability activist with multiple sclerosis.

How Can We Fix ChatGPT’s Bias Problem? 

3:20 EST
Lightning Talk

Jovy Chan

‘Garbage In, Quality Out’: How ChatGPT can counteract the spread of misinformation

Social & Political Philosophy

With the rapid rise of ChatGPT, many have warned about the accuracy of the information it produces. The common phrase 'garbage in, garbage out' highlights the difficulty faced by such Large Language Models, which are trained on vast amounts of textual data. Given the amount of misinformation and fake news spreading throughout the internet, we can only expect the program to be as good (or bad) as the data on which it is trained. I, however, remain optimistic. This is because the mistakes and biases we make are often systematic, not random. And systematic errors are easier to pre-empt and offset. Knowing that human beings often make the same mistakes in the same ways, it might be possible for AI models to counteract their effects by simply adjusting their learning instructions to accommodate such systematic bias. Jovy is a PhD candidate at the University of Toronto and works in social and political philosophy. In particular, she looks at the natural human tendency to conform to the crowd and how it might affect people's belief-formation and decision-making processes.

3:40 EST
Presentation

Anwar ul Haq

Ethics & Philosophy of Technology

Is regulation of training data for LLMs possible and desirable?

The recent advances in LLMs are enabled by immense computing power and the availability of extremely large data sets. And we know that the quantity of training data and the models' ability to produce human-like responses scale in proportion to each other. But the sheer quantity of data is by itself blind to its ethical quality. It is common knowledge, for instance, that a lot of freely available data on the internet is biased against minority groups. We need a framework to moderate training data for LLMs through regulation so that the resulting LLMs are well-trained or 'well-educated'. It is not easy to devise a framework for such data moderation. One wonders who will decide on the ethical quality of data sets. Many thinkers, such as Marietje Schaake, have suggested that this need not be too hard, since we have already achieved analogous regulation in other spheres, with bodies such as the FDA. But it is hard to see that the analogy is strong. I dispute this analogy in the paper and offer a better model for thinking about the present issue. I'm a graduate student in philosophy at the University of Pittsburgh. I have a background in technology and I'm interested in questions at the intersection of ethics and technology.

4:20 EST
Presentation

Soribel Feliz


Keynote on the Costs of Information Gathering

5:00 EST
Keynote


Keynote Speaker
C. Thi Nguyen

C. Thi Nguyen thinks about how technology and social structures influence our values and agency.

 

His book Games: Agency as Art explores how games shape our agency by letting us temporarily live out what it would be like to pursue different goals and values.

Check out his interview on The Ezra Klein Show, or his manifesto for public philosophy.
