I recently gave a keynote on AI at the durable skills-themed D2L Ignite conference in Orlando. I took the following positions:
- Durable skills, unlike so many educational buzzwords, reflect a genuine civilizational shift that requires our urgent attention. AI did not cause that shift. It just made the change obvious.
- AI genuinely will cause profound and unforeseeable changes to the way we live. I gave a highly personal example to make this point vivid.
- Teaching skills are durable skills that translate quite well to the AI world.
- Other skills, such as those required to design and test solutions to complex human problems, are durable skills too.
As usual, I tried to cram an hour-long talk into 45 minutes, so I rushed some parts and left a few dots unconnected. In this post, I'll share the video and restate the elements of the third bullet point to make sure they're clear. I'm putting the video at the bottom of the post because I'm hoping you'll read the post before watching the talk, and I'm keeping the post short because expecting you to both read a blog post and watch a 45-minute talk is asking a lot.
To be clear, I’m not arguing that teaching skills are durable skills because generative AI works like the human mind. It doesn’t. I’ll briefly explain why each teaching skill I discuss transfers to AI. The reasons are different from point to point.
Here are the skills:
- Scaffolding: In education, scaffolding is rooted in Vygotsky’s notion of the Zone of Proximal Development (ZPD). We help students stretch beyond what they could learn on their own by providing them with temporary supports or building blocks, progressively removing the support as we go. With AI, we focus the model on the right pieces by providing context and examples. It knows a lot, but it needs context. So, to get good results, we remind it of basic concepts it already knows, much as we remind students of the basic concepts they need to solve complex problems. As with human students, we feed it progressively more complex pieces to put together until it is thinking the way we need it to. The AI has something akin to the ZPD in the sense that it doesn’t always need scaffolding. Some things it can figure out on its own. Other things it can’t figure out even with help. Even though the underlying reasons are entirely different, we get better results when we act as if the AI has a ZPD and apply scaffolding when we find ourselves working within that zone. (The first code sketch after this list shows what that can look like in practice.)
- Formative assessments: Much is made of the fact that the AI is a black box. Little is made of the fact that the human mind is also a black box. We don’t know what students understand. Good teachers probe continuously, in part because we are always trying to get a read on what the student understands and in part because students change. They learn. AIs don’t learn in the same way that students do, but they can change over time. ChatGPT is better at understanding some things than it was six months ago, and some of those improvements aren’t obvious. So, just as with students, we have to design probes to test what the model can and can’t do. (The second sketch after this list shows one simple way to do that.)
- Worked examples: This one is crucial, and it applies beyond using generative AI to actually building or fine-tuning models. With students, we show them how to solve a problem: here’s the question, here’s the answer, and here’s how we got from the question to the answer. If we’re making full use of this technique, we’ll show students a series of subtly but importantly different worked examples so that they can learn nuances. With AI, whether we are writing a prompt or constructing a training data set, the ideal input is a series of examples where we say to the machine, here is the input, here is the desired output, and here is an annotation explaining why this is good output. Particularly with model training, we want to provide a series of subtly but meaningfully different examples so that it can learn to differentiate. (The third sketch after this list shows both forms.)
- Writing: To do almost anything with generative AI, you must be a good, clear, precise writer. We stress out about ChatGPT causing the loss of writing skills, forgetting that most of the interactions people have with the technology happen, in fact, through writing. And better writing gets better answers or, if you’re training a model, better input data.
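Here’s a minimal sketch of what scaffolding can look like in a prompt, assuming the OpenAI Python SDK. The model name, the tutoring topic, and the two-step structure are all illustrative, not prescriptive; the point is only the shape of the technique, building from a basic concept up to the harder task while keeping the earlier exchange in context.

```python
# A minimal sketch of prompt scaffolding, assuming the OpenAI Python SDK.
# The model name and the example content are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Scaffold step 1: remind the model of the basic concept it already "knows".
messages = [
    {"role": "system", "content": "You are a statistics tutor."},
    {"role": "user", "content": "Briefly restate the definition of a confidence interval."},
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Scaffold step 2: build on that foundation with the more complex task,
# keeping the earlier exchange in context so the model stays focused.
messages.append({
    "role": "user",
    "content": "Using that definition, explain why a 95% interval can still miss the true mean.",
})
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```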
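And here’s a sketch of a tiny formative-assessment-style probe suite, in the same hedged spirit. The two probes and their pass criteria are made up for illustration; real probes would target the behaviors that matter in your own domain, and you’d re-run them over time to notice where the model has quietly changed.

```python
# A sketch of a probe suite for a model, assuming the OpenAI Python SDK.
# The probes and pass criteria are illustrative, not a real test battery.
from openai import OpenAI

client = OpenAI()

# Each probe pairs a question with a simple check on the answer.
PROBES = [
    ("What is 17 * 24? Answer with the number only.", lambda a: "408" in a),
    ("Name the capital of Australia in one word.", lambda a: "canberra" in a.lower()),
]

def run_probes(model: str = "gpt-4o") -> None:
    """Re-run the same probes periodically to see where the model has changed."""
    for question, passed in PROBES:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answer = reply.choices[0].message.content or ""
        print(f"{'PASS' if passed(answer) else 'FAIL'}: {question}")

run_probes()
```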
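Finally, a sketch of annotated worked examples used both ways the bullet describes: folded into a few-shot prompt and written out as training records. The task, the examples, and the JSONL schema are all my own illustrations; the exact record format a fine-tuning pipeline expects varies, so treat the file-writing step as a placeholder.

```python
# A sketch of worked examples following the pattern above: input, desired
# output, and an annotation explaining why the output is good. The task and
# examples are made up for illustration.
import json

WORKED_EXAMPLES = [
    {
        "input": "Rewrite for a parent: 'The student exhibits off-task behavior.'",
        "output": "Your child sometimes has trouble staying focused in class.",
        "why": "Plain language, no jargon, and the observation stays neutral.",
    },
    {
        "input": "Rewrite for a parent: 'The student exhibits off-task behavior during independent work.'",
        "output": "Your child sometimes loses focus when working on their own.",
        "why": "Subtly different input: the added context ('independent work') must survive the rewrite.",
    },
]

def as_few_shot_prompt(task: str) -> str:
    """Fold the annotated worked examples into a single few-shot prompt string."""
    parts = []
    for ex in WORKED_EXAMPLES:
        parts.append(f"Input: {ex['input']}\nOutput: {ex['output']}\nWhy this is good: {ex['why']}")
    parts.append(f"Input: {task}\nOutput:")
    return "\n\n".join(parts)

# For fine-tuning instead of prompting, the same examples can be written out
# as JSONL records (the exact schema depends on the training pipeline).
with open("worked_examples.jsonl", "w") as f:
    for ex in WORKED_EXAMPLES:
        f.write(json.dumps({"prompt": ex["input"], "completion": ex["output"]}) + "\n")

print(as_few_shot_prompt("Rewrite for a parent: 'The student struggles with time management.'"))
```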
That’s the short but (hopefully) clear version of the third part of the talk.
The example I use for the second part of my talk is how ChatGPT helped me cope with the stream of medical information I was receiving about my little sister, who recently suffered a life-threatening brain hemorrhage. I recorded this video on my iPhone with no intention of sharing it with anyone but my close family. My sister is a teacher. I wanted to show her how the story of her struggle is helping other educators (and to show her a little bit of what I do for a living, which I have trouble explaining). I told her story to the D2L conference audience with her husband’s permission and with no intention of taking it further. I have been urged by a few people who were there that day to share it more widely. And so, with the blessing of my brother-in-law, I am publishing it. (My sister, by the way, is making amazing progress in her recovery. I hope she will be able to watch the video herself soon.) If you watch it and find it valuable, please comment below. She will find it meaningful to know that her story is helping other educators.
This is for you, Sharon.