Previous posts in this series:

- Announcing a Design/Build Workshop Series for an AI Learning Design Assistant (ALDA)
- AI Learning Design Workshop: Solving for CBE
- AI Learning Design Workshop: See and Try the ALDA Rapid Prototype
- AI Learning Design Workshop: The Trickiness of AI Bootcamps and the Digital Divide

Lessons Learned from the AI Learning Designer Project
We recently wrapped up our AI Learning Design Assistant (ALDA) project. It was a marathon. Multiple universities and sponsors participated in an intensive seven-month workshop series to learn how AI can assist in learning design. The ALDA software, which we tested together as my team and I built it, was an experimental apparatus designed to help us learn various lessons about AI in education.
And learn we did. As I speak with project participants about how they want to see the work continue under ALDA’s new owner (and my new employer), 1EdTech, I’ll use this post to share some of the lessons we’ve learned so far and then reflect on possible futures for ALDA.
(If you want a deeper dive from a month before the last session, listen to Jeff Young’s podcast interview with me on EdSurge. I love talking with Jeff. Shame on me for not letting you know about this conversation sooner.)
AI is a solution that needs our problems
The most fundamental question I wanted to explore with the ALDA workshop participants was, “What would you use AI for?” The question was somewhat complicated by the state of AI when I started development work about nine months ago. Back then, ChatGPT and its competitors struggled to follow the complex directions required for serious learning design work. While I knew this shortcoming would resolve itself through AI progress—likely by the time the workshop series was completed—I had to invest some of the ALDA software development effort into scaffolding the AI to boost its instruction-following capabilities. I needed something vaguely like today’s AI capabilities back then to explore the questions we were trying to answer, such as what we could be using AI for a year later.
Once ALDA could provide that performance boost, we came to the hard part: the human part. When we got down to the nitty-gritty of the question—What would you use this for?—many participants had to wrestle with it for a while. Even the learning designers working at big, centralized, organized shops struggled to break down their processes into smaller steps with documents the AI could help them produce. Their processes relied heavily on humans interpreting organizational rules as they worked organically through large chunks of design work. Faculty designing their own courses had a similar struggle. How is their work segmented? What are the pieces? Which pieces would they hand to an assistant if they had one?
The answers weren’t obvious. Participants had to discover them by experimenting throughout the workshop series. ALDA was designed to make that discovery process easier.
A prompt engineering technique for educators: Chain of Inquiry
Along with the starting question, ALDA had a starting hypothesis: AI can function as a junior learning designer.
How does a junior learning designer function? It turns out that their primary tool is a basic approach that makes sense in an educator’s context and translates nicely into prompt engineering for AI.
Learning designers ask their teaching experts questions. They start with general ones. Who are your students? What is your course about? What are the learning goals? What’s your teaching style?
These questions get progressively more specific. What are the learning objectives for this lesson? How do you know when students have achieved those objectives? What are some common misconceptions they have?
Eventually, the learning designer has built a clear enough mental model that they can draft a useful design document of some form or other.
Notice the similarities and differences between this approach and scaffolding a student’s learning. Like scaffolding, Chain of Inquiry moves from the foundational to the complex. It’s not about helping the person being scaffolded with their learning, but it is intended to help them with their thinking. Specifically, the interview progression helps the educator being interviewed think more clearly about hard design problems by bringing relevant context into focus. This process of prompting the interviewee to recall salient facts relevant to thinking through challenging, detailed problems is very much like the AI prompt engineering strategy called Chain of Thought.
In the interview between the learning designer and the subject-matter expert, the chain of thought they spin together is helpful to both parties for different reasons: it helps the learning designer learn while helping the subject-matter expert recall relevant details. The same is true in ALDA. The AI learns from the interview while the process helps its human partner surface helpful context. I call this AI interview prompt style Chain of Inquiry. I hadn’t seen it used when I first thought of ALDA and haven’t seen it used much since then, either.
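To make the idea concrete, here is a minimal sketch of a Chain of Inquiry loop. It assumes the OpenAI Python SDK and an API key in the environment; the prompt wording and the question sequence are my illustration, not ALDA’s actual template text.

```python
# A minimal Chain of Inquiry sketch, assuming the OpenAI Python SDK.
# The prompt and step list are illustrative, not ALDA's actual template.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CHAIN_OF_INQUIRY = """You are a junior learning designer interviewing an educator.
Ask ONE question at a time, moving from general to specific:
1. Who are your students, and what is the course about?
2. What are the learning objectives for this lesson?
3. How will you know when students have achieved those objectives?
4. What misconceptions do students commonly bring to this topic?
Maintain an internal draft of the lesson design as you go. Only produce the
full draft when the interview is complete or the educator asks for it."""

messages = [{"role": "system", "content": CHAIN_OF_INQUIRY}]

def turn(user_text: str) -> str:
    """Send the educator's reply; return the AI's next question (or final draft)."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(turn("Let's design a lesson on photosynthesis for ninth graders."))
```

The key inversion relative to ordinary chatbot use is in the system prompt: the AI asks before it answers, so the human supplies the context that makes the eventual draft good.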
In any case, it worked. Participants seemed to grasp it immediately. Meanwhile, a well-crafted Chain of Inquiry prompt in ALDA produced much better documents once it had elicited good information through interviews with its human partners.
Improving mental models helps
AI is often presented, sold, and designed to be used as a magic talking machine. It’s hard to imagine what you would and wouldn’t use a tool for if you don’t know what it does. We went at this problem through a combination of teaching, user interface design, and guided experimentation.
On the teaching side, I emphasized that a generative AI model is a sophisticated pattern-matching and completion machine. If you say “Knock knock” to it, it will answer “Who’s there?” because it knows what usually comes after “Knock knock.” I spent some time building up this basic idea, showing the AI matching and completing more and more sophisticated patterns. Some participants initially dismissed this lesson as “not useful” or “irrelevant.” But it paid off over time, as participants found that this understanding helped them think more clearly about what to expect from the AI, with some additional help from ALDA’s design.
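The “Knock knock” demonstration fits in a few lines of code. This toy sketch again assumes the OpenAI Python SDK; any chat model would behave similarly.

```python
# Pattern completion in miniature: the model continues the most likely pattern.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Knock knock."}],
)
print(response.choices[0].message.content)  # almost always some form of "Who's there?"
```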
ALDA’s basic structure is simple (a rough code sketch of the pieces follows the list):
- Prompt Templates are reusable documents that define the Chain of Inquiry interview process (although they are generic enough to support traditional Chain of Thought as well).
- Chats are where those interviews take place. This part of ALDA is similar to a typical ChatGPT-like experience, except that the AI asks questions first and provides answers later based on the instructions it receives from the Prompt Template.
- Lesson Drafts are where users can save the last step of a chat, which hopefully will be the draft of some learning design artifact they want to use. These drafts can be downloaded as Word or PDF documents and worked on further by the human.
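As a rough illustration of how those three pieces relate, here is a hypothetical data model in Python. The class and field names are my invention for clarity; ALDA’s actual schema isn’t published.

```python
# A hypothetical sketch of ALDA's three core objects; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class PromptTemplate:
    """Reusable definition of a Chain of Inquiry interview."""
    general_instructions: str  # chatbot identity and behavioral rules
    output_template: str       # outline of the document to be produced
    steps: list[str]           # ordered interview steps

@dataclass
class Chat:
    """A running interview driven by a template; the AI asks questions first."""
    template: PromptTemplate
    transcript: list[dict] = field(default_factory=list)  # role/content turns

@dataclass
class LessonDraft:
    """The saved last step of a chat, exportable as a Word or PDF document."""
    source_chat: Chat
    content: str
```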
A lot of the magic of ALDA is in the prompt template page design. It breaks down the prompts into three user-editable parts:
- General Instructions provide the identity of the chatbot that guides its behavior, e.g., “I am ALDA, your AI Learning Design Assistant. My role is to work with you as a thoughtful, curious junior instructional designer with extensive training in effective learning practices. Together, we will create a comprehensive first draft of curricular materials for an online lesson. I’ll assist you in refining ideas and adapting to your unique context and style.
“Important: I will maintain an internal draft throughout our collaboration. I will not display the complete draft at the end of each step unless you request it. However, I will remind you periodically that you can ask to see the full draft if you wish.
“Important Instruction: If at any point additional steps or detailed outlines are needed, I will suggest them and seek your input before proceeding. I will not deviate from the outlined steps without your approval.”
- Output Template provides an outline of the document that the AI is instructed to produce at the end of the interview.
- Steps provide the step-by-step process for the Chain of Inquiry.
The UI reinforces the idea of pattern matching and completion. The Output Template gives the AI the structure of the document it is trying to complete by the end of the chat. The General Instructions and Steps work together to define the interview pattern the system should imitate as it tries to complete the document.
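To show how those three parts might combine into the pattern the AI is asked to complete, here is a hedged sketch. The assembly logic is my plausible guess, not ALDA’s actual internal wiring.

```python
def build_system_prompt(general_instructions: str,
                        steps: list[str],
                        output_template: str) -> str:
    """Combine the three user-editable template parts into one system prompt.

    A plausible assembly for illustration, not ALDA's published code.
    """
    numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    return (
        f"{general_instructions}\n\n"
        "Conduct the interview one step at a time, asking questions before answering:\n"
        f"{numbered}\n\n"
        "When all steps are complete, produce a document matching this outline:\n"
        f"{output_template}"
    )
```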
Armed with the lesson and scaffolded by the template, participants got better over time at understanding how to think about asking the AI to do what they wanted it to do.
Using AI to improve AI
One of the biggest breakthroughs came with a feature released near the very end of the workshop series: the “Improve” button at the bottom of the Template page.
When the user clicks that button, ALDA sends whatever is in the template to ChatGPT, along with any notes the user enters and some behind-the-scenes information about how ALDA templates are structured.
Template creators can start with a simple sentence or two in the General Instructions. Think of it as a seed prompt, e.g., “A learning design interview template for designing and drafting a project-based learning exercise.” The user can then click “Improve” to create a full template based on that prompt. Because ALDA tells ChatGPT what a complete template looks like, the AI returns a full draft of all the fields ALDA needs to create a template. The user can then test that template and return to the Improve window to ask the AI to refine the template’s behavior or extend its functionality.
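Here is a speculative sketch of what that “Improve” round trip might look like, again assuming the OpenAI Python SDK. The format description and function name are mine, since the real internals aren’t public.

```python
from openai import OpenAI

client = OpenAI()

# Behind-the-scenes description of the template structure (my paraphrase).
TEMPLATE_FORMAT = """An ALDA template has three user-editable parts:
- General Instructions: the chatbot's identity and behavioral rules
- Output Template: an outline of the final document
- Steps: the ordered Chain of Inquiry interview questions"""

def improve_template(current_template: str, user_notes: str) -> str:
    """Send the current template plus user notes; get back a complete draft."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": TEMPLATE_FORMAT},
            {"role": "user", "content": (
                f"Current template (may be only a sentence or two):\n"
                f"{current_template}\n\n"
                f"User notes:\n{user_notes}\n\n"
                "Return a complete template with all three parts filled in."
            )},
        ],
    )
    return response.choices[0].message.content

# Grow a one-line seed into a full template, then iterate on the result.
draft = improve_template(
    "A learning design interview template for a project-based learning exercise.",
    "Emphasize assessment rubrics during the interview.",
)
```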
Building this cycle into the process created a massive jump in usage and creativity among the participants who used it. I started seeing more and more varied templates pop up quickly. User satisfaction also improved significantly.
So…what is it good for?
The usage patterns turned out to be very interesting. Keep in mind that this is a highly unscientific review; while I would have liked to conduct a study or even a well-designed survey, the realities of building this on the fly as a solo operator managing outsourced developers limited me to anecdata for this round.
The observations from the learning designers on large, well-orchestrated teams seem to line up with my theory that the big task will be breaking our design processes down into chunks that are friendly to AI support. I don’t see a short-term scenario in which we can outsource all learning design to AI or replace learning designers with it. (By the way, “air gapping” the AI, by which I mean ensuring that nothing the AI produced would reach students without human review, substantially reduced anxieties about AI and improved educators’ willingness to experiment and explore the boundaries.)
For individual instructors, particularly at institutions with few or no learning designers, I was pleasantly surprised to discover how useful ALDA proved to be in the middle of the term and afterward. We tend to think about learning design as a pre-flight activity. In reality, educators constantly adjust their courses on the fly and spend time at the end tweaking aspects that didn’t work the way they liked. I also noticed that educators seemed more willing to try newer, more challenging pedagogical experiments, like project-based learning or AI-enabled teaching exercises, when they had ALDA as a thought partner that could both accelerate the planning and bring in additional expertise. I don’t know how much of this to attribute to the pure speed of the AI-enabled template improvement loop and how much to the holistic experience helping them feel they understood and had control over ALDA in a way that other tools may not offer.
Possible futures for ALDA under 1EdTech
As for what comes next, nothing has been decided yet. I haven’t been blogging much lately because I’ve been intensely focused on helping the 1EdTech team think more holistically about the many things the organization does and many more that we could do. ALDA is a piece of that puzzle. We’re still putting the pieces in place to determine where ALDA fits in.
I’ll make a general remark about 1EdTech before exploring specific possible futures for ALDA. Historically, 1EdTech has solved problems that many of you don’t (and shouldn’t) know you could have. When your students magically appear in your LMS and you don’t have to think about how your roster got there, that’s because of us. When you switch LMSs and your students still magically appear, that’s 1EdTech. When you add one of the million billion learning applications to your LMS, that’s us too. Most of those applications probably wouldn’t exist if we hadn’t made it easy for them to integrate with any LMS. In fact, the EdTech ecosystem as we know it wouldn’t exist. However much you may justifiably complain about EdTech apps that don’t work well with each other, without 1EdTech they mostly wouldn’t work with each other at all. Many EdTech apps simply wouldn’t exist for that reason.
Still. That’s not nearly enough. Getting tech out of your way is good, but it’s not good enough. We need to identify real, direct educational problems and help make them easier and more affordable to solve. We must make it possible for educators to keep up with changing technology in a changing world. ALDA could play several roles in that work.
First, it could continue to function as a literacy teaching tool for educators. The ALDA workshops covered important aspects of understanding AI that I’ve not seen other efforts cover. We can’t know how we want AI to work in education without educators who understand and are experimenting with AI. I will be exploring with ALDA participants, 1EdTech members, and others whether there is the interest and funding we need to continue this aspect of the work. We could wrap some more structured analysis around future workshops to find out what the educators are learning and what we can learn from them.
Speaking of which, ALDA can continue to function as an experimental apparatus. Learning design is a process that is largely dark to us. It happens in interviews and word processor documents on individual hard drives. If we don’t know where people need the help—and if they don’t know either—then we’re stuck. Product developers and innovators can’t design AI-enabled products to solve problems they don’t understand.
Finally, we can learn which aspects of learning design—and teaching—need to be taught to AI because the knowledge it needs isn’t written down in a form that’s accessible to it. As educators, we learn a lot of structure in the course of teaching that often isn’t written down and certainly isn’t formalized in most EdTech product data structures. How and when to probe for a misconception. What to do if we find one. How to give a hint or feedback that gets the student on track without giving away the answer. Whether you want your AI to help the educator or to work directly with the student—which is not really an either/or question—we need AI to better understand how we teach and learn if we want it to get better at helping us with those tasks. Some of the learning design structures we need are related to deep aspects of how human brains work. Others evolve much more quickly, such as the move to skills-based learning. Many of these structures should be wired deep into our EdTech so you don’t have to think or worry about them. EdTech products should support them automatically. Something like ALDA could be an ongoing laboratory in which we test how educators design learning interventions, how those processes co-evolve with AI over time, and where feeding the AI evidence-based learning design structure could make it more helpful.
The first incarnation of ALDA was meant to be an experiment in the entrepreneurial sense: I wanted to find out what people would find useful. It’s ready to become something else, and it’s now at a home where it can evolve. The most important question about ALDA hasn’t changed all that much:
What would you find ALDA at 1EdTech useful for?