September 2, 2025
Recently, there has been heightened media attention around the potential misuse of generative AI tools like ChatGPT, ranging from stories about individuals using it in harmful ways to concerns about people becoming overly dependent on it for companionship or therapy. While these accounts may represent extreme cases, they raise important questions about the role of AI in everyday life and how society should think about its limitations.
The College of Media spoke with artificial intelligence expert Mike Yao to learn more.
About the Media Expert

Mike Yao is a professor of digital media in the Charles H. Sandage Department of Advertising and director of the Institute of Communications Research. He also oversees the Media Technology and Social Behavior Research Lab at Illinois. Yao’s research delves into the interplay between media, technology, society, and human communication. He examines how people engage with emerging technologies like AI and immersive multimedia across diverse social settings and how these interactions influence social behavior and human communication in digitally mediated spaces.
What have you learned about artificial intelligence so far through your research?
At the Media Technology and Social Behavior Research Lab, we study not only what AI can do technically, but also how people actually communicate with it. Our research looks at the dynamics and social processes embedded in human–AI dialogue. Instead of treating AI as just a tool that performs specific functions, we focus on the communicative side, examining biases, stereotypes, expectations, linguistic patterns, and conversational styles that emerge when people interact with AI in social or interpersonal contexts.
What we are increasingly finding is that much of the challenge stems from the interface itself. Generative AI systems like ChatGPT are conversational by design. They allow people to use natural language to give instructions or ask for support. This seems simple, but it represents a profound departure from earlier technologies. In the past, users had to click with a mouse or write a line of code, actions that were purely instrumental and utilitarian, to “communicate” with technological systems. Once the interface begins to mimic human conversation, it introduces an entirely new layer of social, cultural, and psychological meaning into the interaction.
This conversational affordance makes AI feel more like a partner than a tool, opening the door to emotional connection, dependency, and even relationship-like dynamics. People begin to apply the same relational scripts they use with other humans, seeking empathy, validation, or intimacy. That is why the risks around reliance, especially in friendship or therapeutic contexts, are so pronounced.
At the same time, this is still a very new and rapidly growing area of research. We are only beginning to gather empirical evidence about how these relationships form, what they mean, and how they might shape human-to-human communication. What is clear, however, is that AI’s conversational design fundamentally changes how people use and think about technology, creating opportunities but also raising complex questions about boundaries, responsibility, and human well-being.
Where should we draw the line between what AI tools can do (brainstorming, tutoring) and what should remain in the hands of trained professionals (mental health counseling, medical advice)?

The line between AI use and human expertise is not fixed; it has to be negotiated. Historically, the affordances of technology were clear: a hammer looks like it’s meant to hit nails, and that is what it is used for. Generative AI is different. It’s a general-purpose system whose applications vary enormously depending on how a person first encounters it; one user sees a math tutor, another a coding partner, another a conversational companion. Because we don’t yet have stable “mental prototypes” of what AI is for, people project many different expectations onto it.
This makes it risky to rely solely on instinct or isolated stories when deciding what AI should or shouldn’t do. Instead, these boundaries should be informed by empirical evidence and social science research studying how people actually use AI, what benefits and harms emerge, and in what contexts.
My own view is that AI should be framed as a complementary tool rather than a replacement. Professionals such as doctors, counselors, and teachers may use it to extend their capacity, but accountability for accuracy, safety, and judgment must remain with the human expert. The stakes are different for students and learners, who are still building the very skills AI might perform for them. Outsourcing too much to AI in those contexts risks undermining education and development.
And here’s where creativity offers an important example. Human creativity has multiple dimensions: idea generation, originality, and the craft of execution. AI can assist in implementation, but it cannot substitute for the human act of judgment, deciding whether something is good, meaningful, or valuable. A piece of art, for instance, isn’t only about technical execution; it’s about interpretation in a social and cultural context. That value judgment is inherently human. If we let AI replace that judgment instead of empowering it, we risk reducing creativity to production rather than meaning-making.
What responsibility should companies like OpenAI have in building safeguards against harmful usage, and what responsibility falls on society, parents, educators, or policymakers?
Responsibility here is shared. Companies like OpenAI clearly need to build technical safeguards, set limits, and be transparent about risks. But society as a whole also has a role: parents, educators, and policymakers must help set the norms and guardrails.
We鈥檝e faced similar challenges before. Think about movie ratings, Internet protocols, or even electricity and railroad standards. These were not set by companies alone. They emerged through collective debate, negotiation, and regulation. AI is no different.
So, while developers must act responsibly, it is also vital that we, as a society, engage in open dialogue about values: what kind of transparency, respect, and agency we want to guarantee. Only through democratic processes and evidence-based policymaking can we create the appropriate boundaries for AI use.
What strategies can educators, media professionals, and advertisers use to encourage positive, productive uses of chatbots, while discouraging harmful or unhealthy reliance?
I think the guiding principle should be this: AI should empower humans and enrich human experience, not replace it. Technology should extend human capacity, not lead us away from it.
For educators, this means being clear about boundaries. Young people and students are especially vulnerable because they may not yet have the knowledge to judge AI’s output. They might outsource their learning without realizing what they’re losing. Educators need to frame AI as a support tool for brainstorming, practice, or feedback, while keeping the responsibility for understanding and critical thinking with the learner.
For professionals such as journalists, marketers, or advertisers, the principle is similar. AI can make processes faster or more efficient, but the responsibility for quality, accuracy, and ethical judgment always lies with the human communicator. A writer might use AI to refine a draft, but the accountability for what’s published belongs to the writer, not the machine.
In terms of messaging, media and advertisers can help by highlighting these responsible uses and reinforcing that AI is a tool to support, not substitute, human creativity, empathy, and expertise.
What risks emerge when users turn to ChatGPT for friendship, therapy, or even romantic companionship? How might this affect human-to-human relationships in the long term?
The risks are real, but they’re also complicated. Humans are creative; we often use technologies in unintended ways. Some people are turning to AI chatbots for companionship, therapy, or even romantic connection. The conversational style of these systems makes them feel supportive, echoing patterns we know from parasocial relationships with media or even earlier technologies.
The danger is that overreliance on AI could weaken human-to-human relationships. If people begin substituting machine companionship for the effort and vulnerability required in real human connections, that could have long-term social costs.
But there’s also an opportunity here: AI functions as a mirror. Instead of only asking “what is AI doing wrong?”, we might ask “what is missing in people’s lives that makes AI companionship so appealing?” Loneliness, lack of empathy, strained family dynamics: these are human issues, and AI is simply revealing them in a new way.
So, the real risk is not just technological misuse, but a failure to reflect on ourselves. If we treat every problem as “AI’s fault,” we miss the deeper lesson: what human needs are being unmet, and how can we as a society address them?
If companies promote or integrate ChatGPT into their products, what ethical obligations do they have in framing its limits? How can advertising balance highlighting its benefits without overselling its role?
Ethics can be debated, but some principles are broadly shared: transparency, respect, and human agency. Companies have an obligation not to oversell AI as a replacement for human expertise or care. They should present it as a supportive tool while being clear about what it cannot do.
Advertising should highlight real benefits such as efficiency, accessibility, and assistance, while also being explicit about limits. The obligation is honesty: to frame AI as a powerful supplement, not a magical replacement.
“I on the Media” is a series from the University of Illinois College of Media in which faculty at the college address current issues and research.