AI-aware math teaching; also, the agents are coming


AI-Aware Math Teaching

A few years ago, it was pretty easy for math educators to ignore generative AI. The chatbots of 2022 and 2023 were notoriously bad at math. But that’s no longer true! Today’s frontier AI models are very good at math—to the point of proving mathematical conjectures that have been open for decades.

This week on the podcast, I have a roundtable discussion with some of my favorite math educators about the ways they're responding to AI's impact on the teaching of mathematics. How good is AI at doing math? Why do some students trust AI's math output while others don't? Should we change what we teach in light of AI's mathematical capabilities? We dive into these questions, and a lot more, thanks to our excellent panelists:

  • Lew Ludwig is professor of mathematics at Denison University and co-author with Todd Zakrajsek of the new book The Science of Learning Meets AI, out this very week! Lew shares how AI tools have enabled him to shift his course into standards-based grading.
  • Chloe Lewis is assistant professor of mathematics at the University of Wisconsin-Eau Claire. She has a fantastic "critique the chatbot" assignment where students evaluate the so-called proof generated by ChatGPT of a mathematical statement that the students disproved earlier in the course.
  • Amy Langville and Kathryn Pedings-Behling return to the podcast to talk about the role of embodied learning in their online math courses (courtesy of their Deconstruct Calculus journal-style textbooks) and to share the custom AI tutor bot they have designed for their courses.

We cover a lot of ground in the roundtable, and while some of the discussion is a little inside baseball (Lew mentions Wronskians and Abel's theorem briefly), I think the conversation will be both accessible and interesting to folks who don't teach math.

You can listen to the AI-aware math teaching episode here, or search for "Intentional Teaching" in your podcast app.

It's Time to Talk about Agentic AI

Back in February, there was that whole hullabaloo about Einstein AI, the agentic AI tool that could in theory complete entire online courses for students. Einstein quickly folded, but concerns remain about students using newer AI tools not just to take online assessments but to complete all activities in an online course. I don't have a solution to this, although there are others working on this problem in various ways.

The connection I want to make is to something that Lew Ludwig said in this week's podcast episode. He was speaking to the paradoxical experience of hearing from some faculty that AI chatbots are bad at math and from other faculty that AI tools are very good at math ("it will do all of undergraduate mathematics"). Lew points out that there are big differences in the capabilities of free AI tools compared with paid AI tools:

"You have to be careful when you say AI can't do things. Free model? When I give my talks, I usually put up a three-speed Schwinn bicycle from the 1960s. That's the free model. The paid-for model is usually Doc Brown's DeLorean from Back to the Future. Very fast, it'll take you, but dangerous, right?"

Lew's comment reminded me of a blog post last week from historian Mark Humphries titled "The Agents Are Waking Up." He made a similar point about the differences in perception of the potential of AI tools and how that's a function of access to the paid tools:

"All of this made it harder to get people using the same models and setup. The effect has been that most people I know are still forming their intuitions about LLMs based on far less capable versions of the technology than what is accessible at the frontier. But what they read on X, Substack, or in the media describes something that sounds like the same product when it's actually based on something fundamentally different."

(Humphries also points to the success that ChatGPT has had in solving Erdős problems!)

I've been reading reports from colleagues about the impressive things they can do with AI in their professional work. For instance, Robert Talbert didn't have a record of the 68 learning objectives he used with his first attempt at standards-based grading back in 2015. (As Robert notes, that was way too many standards.) So he took the LMS archive file for that course and asked Claude Cowork to reconstruct that list, either by finding explicit references to objectives in the course documents or by inferring objectives from course materials. Claude did the job, reconstructing his original list of 68 objectives with high fidelity.

Claude Cowork, like Einstein AI, is an example of the kind of AI agent that Humphries refers to in the title of his post ("The Agents Are Waking Up"). While Einstein is gone, I don't expect Claude Cowork (and its sibling, Claude Code) to vanish anytime soon. Humphries does a good job explaining what AI agents are, how they work, and what one can (currently) do with them. I won't try to summarize that here; instead, I encourage readers interested in the next wave of AI opportunities / challenges / nightmares to read Humphries' post. Or listen to statistician Teddy Svoronos explain AI agents on Bonni Stachowiak's Teaching in Higher Ed podcast earlier this month.

And once you're up to speed on agentic AI, I would love to hear your thoughts on how these technologies are going to affect teaching and learning in higher ed! I'm anticipating that just as 2025-2026 was the year of custom AI chatbots (at least in my work), 2026-2027 will be the year of agentic AI. Have you been experimenting with AI agents in your work? Hit reply and tell me about it.

Thanks for reading

If you found this newsletter useful, please forward it to a colleague who might like it! That's one of the best ways you can support the work I'm doing here at Intentional Teaching.

Or consider subscribing to the Intentional Teaching podcast. For just $3 US per month, you can help defray production costs for the podcast and get access to occasional subscriber-only bonus episodes.

Intentional Teaching with Derek Bruff

Welcome to the Intentional Teaching newsletter! I'm Derek Bruff, educator and author. The name of this newsletter is a reminder that we should be intentional in how we teach, but also in how we develop as teachers over time. I hope this newsletter will be a valuable part of your professional development as an educator.
