The rise of artificial intelligence has forced many industries — including education — to adapt.
“We’ve had a shift in our perspective regarding AI,” says C. Edward Watson, vice president for digital innovation at the American Association of Colleges and Universities. “The first sort of signal that the United States might have competition in the AI space was when China released a tool called DeepSeek. With lower development costs, it’s a tool that performs as well as American-produced AI, such as ChatGPT.”
China also announced plans earlier this year to introduce mandatory AI courses for schoolchildren. Following suit, U.S. President Donald Trump signed an executive order in April 2025 that aims to provide “comprehensive AI training for educators” and foster “early exposure” to AI technology and concepts. “We have an AI space race that’s taking place right now,” Watson says. “I do believe that we’ll see federal funding that will follow that executive order to support those in K-12 settings to engender AI literacy skills in students just like they’re now doing in China.”
Watson co-authored the 2025 Student Guide to Artificial Intelligence series — a collaboration between AAC&U and Elon University in North Carolina — to promote student success when engaging with AI in college. Here’s what he has to say about the opportunities and challenges facing education as more students begin to use AI. The following interview has been edited for length and clarity.
Q: What are the advantages and challenges of college students using AI as a learning tool and resource for coursework and research?
The key advantage I see in terms of students and their learning is that they now have access to a tutor on every subject imaginable. So if they’re in class and they didn’t understand a concept very well, they can go back to their apartment or dorm room and ask AI, “Hey, can you teach me this concept but use basketball metaphors or use stories from the town where I grew up,” or whatever it might be. We can customize instruction to further extend what they’re learning in the classroom or what they may have missed in the classroom. There are also tools that will listen to the lecture and take notes along with the student. After class is over, they will generate a quiz. These tools can help you study more efficiently, help you master what’s being taught in class each day and also help you learn things that you didn’t quite understand.
Students might misuse AI or use it in ways that would be against a professor’s or university’s academic integrity policy. I think that there was a very clear understanding of what cheating was prior to AI. If you turned in something someone else wrote with your name on it, that was cheating. But AI is so much more sophisticated in that I could ask AI to provide me with a detailed outline regarding how to respond to a writing assignment. So I get this detailed outline, write every word myself and then turn in that paper. I think for some students it’s not clear whether that’s cheating. And some faculty are still discerning: Would it diminish student learning if students did ask AI to do part of the work? The answer is probably yes. So I think there’s a lot of communication that we still need to do, from one faculty member to the next to students, to help them discern the gray areas regarding cheating.
Q: How is AI likely to affect students’ critical thinking skills?
That’s definitely a conversation that we’re hearing a lot today, that AI could diminish critical thinking skills. But I think ultimately, it’s like any new technology. It’s going to shift some things, but it really becomes a problem of teaching and learning. So if we recognize this tool might ultimately diminish students’ critical thinking, then we need to attack that problem. We see similar concerns raised every so often, whether it’s when television first became really popular or too much screen time for children as they grow up. But in the same way that there were concerns about too much screen time or too much exposure to social media, there are action steps to take. If we have a concern about AI possibly diminishing critical thinking, then this should be something that we place more emphasis on within the larger curriculum, or also think about how we teach students to use AI — not just in terms of whether it’s going to impact critical thinking, but is it ethical to use it for certain kinds of problems? This is part of the larger AI literacy program that I think the executive order signals. We as a nation are beginning to put an emphasis on preparing students to use AI well. Part of that is not using it in ways that would diminish the development of other types of learning.
Q: According to an AAC&U/Elon survey report, 54% of higher ed leaders don’t think their faculty can recognize AI-curated content. Do you think professors are adapting to the rise of AI use by college students, especially when it comes to cheating?
I think that AI is effective at writing text, and we humans, let alone professors, aren’t good at telling when text was produced by AI versus a real person. So the challenge then is how to respond to that. Some campuses are using technologies that are termed AI detectors. But the downside with those is that they tend to have a false positive problem, and accusing even one student falsely has huge ramifications. My opinion is that AI detection is something we should not be using in higher education. I think there’s an array of different strategies that collectively could be effective in helping students not get into trouble with AI. Number one is being very clear in terms of course policy — maybe even down to the assignment level. On each assignment, faculty should be saying whether AI can be used and how it can be used. Also answer the question, “Why?” — “This is why you can’t use AI on this one” or “Here’s why you can use AI on this one.” That discussion needs to happen so we provide clarity for students regarding what the rules are from one assignment to the next.
Q: How are professors being trained on using AI tools effectively in the classroom?
At AAC&U, we’re doing quite a bit of work in helping faculty use AI tools effectively in the classroom. Faculty are being taught that there are opportunities to leverage AI in their own course development. The thought here is that any of us working by ourselves can do really good work, but if we have a partner to bounce ideas off of, to get feedback from and to provide suggestions, that collaboration would only improve the quality of the work. And now really, everyone has access to an assistant, regardless of what kind of work they do, and that certainly includes professors. Faculty are learning that there are a lot of different ways AI can help with course development, whether it’s putting together a syllabus, a course outline, lecture notes or PowerPoint slides, as well as assignments and the like. So faculty are definitely moving in that direction. They’re seeing those opportunities. You’re really seeing that in K-12 now, quite significantly, as well as in higher ed.
Q: What role do professors play in teaching their students the ethical use of AI?
Thinking about higher education, it’s like any new learning outcome. In other words, any goal that we might have for our students — we must define what we want students to learn. In short, this becomes a curricular problem. We want students to develop ethical reasoning in terms of how to use AI or to develop broader AI skills. Where in the curriculum should we teach that? Would that be best taught at the freshman level as part of general education, or should that be the last semester course before students graduate? It’s not that every faculty member has to be teaching AI or AI literacy or AI ethics, but someone should. I think we all agree that the world of work has changed and there’s an expectation that students should leave college knowing how to use AI well. So how do we do that? I think many universities are indeed grappling with that challenge. We at AAC&U have an Institute on AI, Pedagogy, and the Curriculum. It’s specifically designed to help faculty, programs and universities figure out where in the curriculum these skills belong and then change teaching practice to be able to ensure students achieve those goals.
Q: How important is it for students to develop AI skills while in college to succeed in a post-college career?
If you’re going to hire an accountant, you still need someone with accounting skills. I think that degrees still matter, and the learning outcomes that mattered previous to AI still matter, such as critical thinking and being able to write well. But there are now additional expectations among many employers associated with AI skills. There was a survey from Microsoft that reported the following: 66% of leaders said that they wouldn’t hire someone if they didn’t have some level of AI competency. Of course, that’s leaders rather than hiring managers, and also that was a survey from Microsoft, so take it with a little bit of a grain of salt. But there’s just survey after survey from Amazon Web Services, LinkedIn, Forbes and others that continue to show a growing interest or growing expectation from employers that those they hire have some competency with AI. I think this is becoming permanent in the employability landscape, and as a result, it’s now becoming essential learning for college students. We should have it somewhere within the higher education curriculum so that students don’t just pick it up haphazardly on their own, but rather learn how to use these skills in ways that mirror best practice within their discipline or chosen career field.
Q: How do you expect to see AI embraced more in the future in college and the workplace?
I do believe it’s going to become a permanent fixture for multiple reasons. I think the national security imperative associated with AI, as a result of competing against other nations, is going to drive a lot of energy and support for AI education. We also see shifts across every field and discipline regarding the usage of AI beyond college. We see this in a broad array of fields, including health care and law. I think it’s here to stay, and I think that means we’re going to see AI literacy being taught at most colleges and universities, and more faculty leveraging AI to help improve the quality of their instruction. I feel like we’re just at the beginning of a transition. In fact, I often describe our current moment as the “Ask Jeeves” phase of the growth of AI. There’s a lot of change still ahead of us. AI, for better or worse, is here to stay.
Q&A: Artificial Intelligence in Education and What Lies Ahead originally appeared on usnews.com