AI Expert Answers Your Questions about Emerging Technologies
By: Andrea Azzo, Senior Communications Coordinator, Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University.
In December 2024, Northwestern University Computer Science Professor Kristian Hammond joined Alicia Malek, Center for Talent Development (CTD) teaching assistant, for a conversation about artificial intelligence. Hammond answered questions from CTD caregivers and students. The full conversation can be viewed on YouTube. A condensed transcript of the conversation has been developed into the following blog.
Artificial intelligence (AI) is quickly being integrated into our daily lives. Email and text messages suggest words for us. Social media feeds are algorithmically curated to show us content that is designed to keep us engaged. And millions of people are using generative AI tools like ChatGPT to assist them with tasks such as content creation, education, and brainstorming.
The Center for Talent Development (CTD) staff wanted to know what questions our families had about these emerging technologies. To answer them, CTD collaborated with the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI), a research hub that is dedicated to advancing AI safety. Its director, Kristian Hammond, recently joined us for a chat to educate the CTD community about AI’s risks and benefits.
Question: How does AI work?
Answer: AI isn’t one thing. The suite of technologies can recommend and predict things. They can also generate language, images, and video. AI tools are designed to learn from humans. They learn about what we do best.
Question: Some CTD students wanted to know how AI tools are able to learn. For example, they watched this video of AI learning to play tag. How does that work?
Answer: Usually, when you’re dealing with gaming environments, you’re building two agents. They know how to move, but they don’t know why. You establish what’s good and what’s bad. For example, it’s bad when you’re tagged; it’s good when you tag someone. The agents are rewarded when they win and punished when they lose. This is called reinforcement learning. The systems learn sequences of actions and the circumstances under which they take actions. That’s why they start out badly and then learn as they play. The actions or behaviors are reinforced.
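The reward-and-punishment loop Hammond describes can be sketched in a few lines of code. Below is a minimal, illustrative Q-learning example (one common form of reinforcement learning), not the system from the video: a single "tagger" agent on a one-dimensional line learns, through repeated play, to move toward a stationary target. All names and numbers here are assumptions chosen for clarity.

```python
import random

# A toy "tag" game: the tagger starts at position 0 and learns to
# reach the target at position 4 on a line of 5 positions.
N_POSITIONS = 5
TARGET = 4
ACTIONS = [-1, +1]  # move left or move right

# Q-table: the agent's current estimate of how good each action is
# in each position. It starts out knowing nothing (all zeros).
Q = {(s, a): 0.0 for s in range(N_POSITIONS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Apply an action; reward +1 for tagging, a small -0.1 cost otherwise."""
    nxt = max(0, min(N_POSITIONS - 1, state + action))
    return (nxt, 1.0) if nxt == TARGET else (nxt, -0.1)

random.seed(0)
for episode in range(200):
    state = 0
    while state != TARGET:
        # Occasionally explore a random move; otherwise exploit what it knows
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Reinforce: nudge the estimate toward the reward plus the
        # discounted value of the best follow-up action
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy moves right, toward the target
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_POSITIONS - 1)}
print(policy)  # every position maps to +1 (move toward the target)
```

Early episodes wander randomly (the agent "starts out badly"); as rewards accumulate in the Q-table, the successful actions are reinforced and the agent plays better, exactly the pattern described above.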
Question: How can AI benefit us? How is it important?
Answer: Think of AI as another intelligent person. What would it mean to get advice, guidance, or additional skills from a machine? There are ways it can make you better at what you do. At CASMI, we’re big proponents of partnership. We don’t want machines to replace the things we do. We want machines to partner with us and help us do things.
Question: What are the negative aspects of using AI?
Answer: The danger is in ceding responsibility. At the end of the day, I don’t care if machines are driving cars for us because nothing human depends upon us having car-driving skills. But our skills begin to go away in other areas in which humans thrive, such as communication. That’s dangerous.
We’re already seeing the negative effects of machine over-reliance. Websites like Amazon, Netflix, LinkedIn, and YouTube use engines that recommend things for you. They’re useful. But imagine a recommendation engine that is perfect. Every time you want something, it says, “You should buy this. You should look at this. You should read this.” What happens to you? You are no longer choosing. You’re letting it happen. Our ability to make decisions gets eroded by that kind of interaction.
It’s up to us to think about how we’ll use these tools. When you look at AI technologies, don’t just accept that it’s done. You should tell technologists how you want it to be.
Question: AI can make school better but can also lead to cheating. Where should we draw the line regarding its use in education?
Answer: We’re entering an interesting new world. How do you teach someone to write when they have an expert writer on their desktop? Their tendency is to use it. Educators need to think about what we want people to learn, know, and understand. We want them to be able to write and communicate. They should know how they want to frame and structure what they want to say. Educators should also aim these systems at teaching people the skills we want them to have.
Question: Does AI go rogue?
Answer: There’s the worry that AI systems will take over and rule the world. Perhaps we shouldn’t give them the ability to do that. We shouldn’t give them the keys to the castle.
AI goes a little rogue all the time. Systems can make false or confusing suggestions. But because they’re confident and persuasive, it can sound as though we should accept their responses.
What’s the difference between a system making mistakes and going rogue? In both cases, the system is not doing a good job of attending to the goals we give it.
Right now, we’re not giving systems goals before they go off and kill everybody. We’re not building systems that badly.
Question: Does AI have a personality?
Answer: Language models like ChatGPT do have a personality. But they’re trained to have one. They’re trained to be polite, apologetic, confident, and persuasive. This is designed to make us feel comfortable using the systems.
It’s also easy to shape them. ChatGPT can write in the style of authors such as Ernest Hemingway or James Joyce. It understands how to access all the text associated with those authors. This looks like it has a different personality. But the reality is that it’s just using the words in different ways, depending upon what you ask it to do.
If I train a system on text that is angry, then the system will be angry. Systems are trained on masses of text, and in the refinement process, specific pieces of text are given to the system to move it in a particular direction.
Interestingly, sometimes machines claim to have human experiences. Getting them to act like a machine is difficult because we don’t have a lot of text that is associated with being a machine.
Question: When will we see the benefits of AI?
Answer: We’ve already seen benefits. AI is in almost every aspect of our lives.
Medicine is an exemplar. Specific kinds of AI systems are incredibly helpful. Machines can see hundreds of thousands of cancer images, while doctors see fewer. This means we can do a better job of rapid diagnosis and treatment. But you want to make sure you’re working in tandem with the machine. The machine can suggest that you look at something, based upon what it has seen, as opposed to saying, “This person has cancer.”
Question: Can AI be used to teach?
Answer: Absolutely! We are beginning to see opportunities in personalized education. If a machine knows what you know and what you care about, then it can craft examples that are already interesting to you.
Machines can also identify gaps in your understanding of the world. For example, standardized tests often give you two numbers: a raw score and a percentile. Those numbers are meaningless. But AI systems can give you an analysis of every question that you answered incorrectly, which can diagnose your misconceptions and identify where your thinking has gone wrong.
This is something that humans could do, but nobody has time. A machine can do it in seconds. That means we can improve and focus education without leaving anybody behind.
Question: Can AI teach any subject?
Answer: You have to build systems that do things. Language models aren’t capable of doing math on their own. They can invoke other programs that do math. But teaching math would be a little odd for language models. We should leverage what they are good at to build teaching experiences that make sense. In the long run, they’ll be able to teach anything.
Question: Will AI replace teachers?
Answer: I’m not worried about this because AI tends to partner with humans.
However, I would love for AI to replace me! But understand what replacing means to me. There’s a finite amount of me. I can do all sorts of things, but I only have so much time. If I could take who I am and move it into a machine, that means it scales. It’s not replacement. It’s scaling me! If I could take my skills and transfer them into a machine, then I could teach 100,000 students a quarter. That’s incredibly exciting to me.
Education currently falls a little short. We’re not giving everybody all we could give them. We don’t have enough people and time. But with the machine, we might be able to fix that.
I love the idea that, no matter where you are in the world, there’s always something there that will help guide and teach you. It’s not about replacing you. Instead, it’s always moving you forward.
Question: Why do AI results need human oversight?
Answer: Right now, there’s enough error that you always need to have humans involved. The goal is not to have completely self-sufficient machines that do everything on their own without us paying attention to them. The goal is to partner with them.
Even if you have a machine that is smarter than us, you still want it to be partnered with a human being so that it elevates us.
Question: Will we stop relying on writing and math skills because of AI?
Answer: We don’t want to build a world in which human beings lack basic skills about how to deal with the world. One of the most wonderful things about us is our ability to communicate. Language is almost a miracle. We do not want to lose that. Having an AI system that is doing all the communication and writing for us is not something we want to achieve.
We want AI to be smart, but we don’t want it to replace us. The moment we have something that does it for us, then we’ll lose that skill.
A lot of people think language models can write for us, but they only write for us when we tell them exactly what to write. That’s the skill. It may articulate better than we can but only if we shape it, frame it, and tell it exactly what to do.
Question: What can I do to protect myself from AI’s mistakes, including those caused by third parties’ use of it?
Answer: We are living in a world with malicious actors and content that is designed to change our minds. We have to be aware of that and skeptical of it. We need a new approach to the way we see things. In fact, with the rise of deepfakes and disinformation, we often will not be able to believe our own eyes.
To prevent mistakes, be aware that AI systems will make mistakes if you aren’t engaging or interacting with them. Even if you use a tool to help you do something, it’s still yours. It’s your material. It’s your responsibility to do a good job with it. Be diligent. Have pride of ownership. Getting people there might be difficult.
Question: How can we use AI to enhance, not hinder, our students’ education?
Answer: To make education better, we have to figure out how we want it to be better. AI systems should not simply be imposed on us; we should be the ones dictating what we want the interactions to be.
Do we want AI systems to do all the writing for us? No. Do we want AI systems to critique our writing? Yes.
Educators should put a set of stakes in the ground about what we want and how we want it to be. Think in terms of how we can teach the things we believe need to be taught. Use the machine to help in that process. But don’t use the machine to override those skills in our students.
To learn more about CTD's AI courses and other Technology and Engineering course offerings, please view our Explore Courses page.