In my last post, I introduced the idea of thinking about different generative AI models as coworkers with varying abilities as a way to develop a more intuitive grasp of how to interact with them. I described how I work with my colleagues Steve ChatGPT, Claude Anthropic, and Anna Bard. This analogy can hold (to a point) even in the face of change. For example, in the week since I wrote that post, it appears that Steve has finished his dissertation, which means that he’s catching up on current events to be more like Anna and has more time for long discussions like Claude. Nevertheless, both people and technologies have fundamental limits to their growth.
In this post, I will explain “hallucination” and other memory problems with generative AI. This is one of my longer ones; I will take a deep dive to help you sharpen your intuitions and tune your expectations. But if you’re not up for the whole ride, here’s the short version:
Hallucinations and imperfect memory problems are fundamental consequences of the architecture that makes current large language models possible. While these problems can be reduced, they will never go away. AI based on today’s transformer technology will never have the kind of photographic memory a relational database or file system can have. When vendors tout that you can now “talk to your data,” they really mean talk to Steve, who has looked at your data and mostly remembers it.
You should also know that the easiest way to mitigate this problem is to throw a lot of carbon-producing energy and microchip-cooling water at it. Microsoft is literally considering building nuclear reactors to power its AI. Its global water consumption has spiked 34% since its AI push began, reaching 1.7 billion gallons.
This brings us back to the coworker analogy. We know how to evaluate and work with our coworkers’ limitations. And sometimes, we decide not to work with someone or hire them for a particular job because the fit is not good.
While anthropomorphizing our technology too much can lead us astray, it can also provide us with a robust set of intuitions and tools we already have in our mental toolboxes. As my science geek friends say, “All models are wrong, but some are useful.” Combining those models or analogies with an understanding of where they diverge from reality can help you clear away the fear and the hype to make clear-eyed decisions about how to use the technology.
I’ll end with some education-specific examples to help you determine how much you trust your synthetic coworkers with various tasks.
Now we dive into the deep end of the pool. When working on various AI projects with my clients, I have found that this level of understanding is worth the investment for them because it provides a practical framework for designing and evaluating immediate AI applications.
Are you ready to go?
How computers “think”
About 50 years ago, scholars debated whether and in what sense machines could achieve “intelligence,” even in principle. Most thought they could eventually sound pretty clever and act rather human. But could they become sentient? Conscious? Do intelligence and competence live as “software” in the brain that could be duplicated in silicon? Or is there something about them that is fundamentally connected to the biological aspects of the brain? While this debate isn’t quite the same as the one we have today around AI, it does have relevance. Even in our case, where the questions we’re considering are less lofty, the discussions from back then are helpful.
Philosopher John Searle famously argued against strong AI in an argument called “The Chinese Room.” Here’s the essence of it:
Imagine sitting in a room with two slots: one for incoming messages and one for outgoing replies. You don’t understand Chinese, but you have an extensive rule book written in English. This book tells you exactly how to respond to Chinese characters that come through the incoming slot. You follow the instructions meticulously, finding the correct responses and sending them out through the outgoing slot. To an outside observer, it looks like you understand Chinese because the replies are accurate. But here’s the catch: you’re just following a set of rules without actually grasping the meaning of the symbols you’re manipulating.
This is a nicely compact and intuitive explanation of rule-following computation. Is the person outside the room speaking to something that understands Chinese? If so, what is it? Is it the man? No, we’ve already decided he doesn’t understand Chinese. Is it the book? We generally don’t say books understand anything. Is it the man/book combination? That seems weird, and it also doesn’t account for the response. We still have to put the message through the slot. Is it the man/book/room? Where is the “understanding” located? Remember, the person on the other side of the slot can converse perfectly in Chinese with the man/book/room. But where is the fluent Chinese speaker in this picture?
If we carry that idea forward to today, however much “Steve” may seem fluent and intelligent in your “conversations,” you should not forget that you’re talking to the man/book/room.
Well. Sort of. AI has changed since 1980.
How AI “thinks”
Searle’s Chinese room book evokes algorithms. Recipes. For every input, there is one recipe for the perfect output. All recipes are contained in a single bound book. Large language models (LLMs), the basis for both generative AI and semantic search engines like Google, work somewhat differently. They are still Chinese rooms. But they’re a lot more crowded.
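To make the “recipe book” idea concrete, here is a toy sketch of Searle’s room as pure lookup in Python. The phrases and the default reply are invented for illustration; the point is that there is exactly one canned response per input and no judgment anywhere.

```python
# Toy sketch of Searle's rule book as pure lookup: one fixed response per
# input, no understanding required. The phrases are invented for illustration.

rule_book = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's nice today."
}

def room(incoming_message):
    # Follow the book exactly; fall back to a canned reply if no rule matches.
    return rule_book.get(incoming_message, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # 我很好，谢谢。
```

As we’re about to see, LLMs don’t work like this at all.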
The first thing to understand is that, like the book in the Chinese room, a large language model is a large model of a language. LLMs don’t “understand” English (or any other language) at all. They convert words into their native language: math.
(Don’t worry if you don’t understand the next few sentences. I’ll unpack the jargon. Hang in there.)
Specifically, LLMs use vectors. Many vectors. And those vectors are managed by many different “tensors,” which are computational units you can think of as people in the room handling portions of the recipe. They do each get to exercise a little bit of judgment. But just a little bit.
Suppose the card that came in the slot of the room had the English word “cool” on it. The room has not just a single worker but billions, or tens of billions, or hundreds of billions of them. (These are the tensors.) One worker has to rate where “cool” falls on a scale from 10 to -10 between “hot” and “cold.” It doesn’t know what any of these words mean. It just knows that “cool” is a -7 on that scale. (This is the “vector.”) Maybe that worker, or maybe another one, also has to evaluate where it falls on the scale of “good” to “bad.” Maybe it’s a 5.
We don’t yet know whether the word “cool” on the card refers to temperature or sentiment. So another worker looks at the word that comes next. If the next word is “beans,” then it assigns a higher probability that “cool” is on the “good/bad” scale. If it’s “water,” on the other hand, it’s more likely to be temperature. If the next word is “your,” it could be either, but we can begin to guess the next word. That guess might be assigned to another tensor/worker.
Imagine this room filled with a bazillion workers, each responsible for scoring vectors and assigning probabilities. The worker who handles temperature might think there’s a 50/50 chance the word is temperature-related. But once we add “water,” all the other workers who touch the card know there’s a higher chance the word relates to temperature rather than goodness.
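If you like seeing ideas in code, here is a deliberately tiny Python sketch of what those workers are doing. The scales, numbers, and probabilities are all invented for illustration; a real model learns thousands of dimensions rather than two hand-picked scales and a couple of hard-coded rules.

```python
# Toy sketch: a word gets scores ("a vector") on a few made-up scales, and the
# word that follows shifts how we interpret an ambiguous one. Every number and
# rule here is invented for illustration.

cool_vector = {"hot_cold": -7, "good_bad": 5}   # one worker's scores for "cool"

def guess_sense(next_word):
    # Revise a 50/50 guess about which sense of "cool" is meant.
    sense = {"temperature": 0.5, "sentiment": 0.5}
    if next_word == "water":      # "cool water" -> probably temperature
        sense = {"temperature": 0.9, "sentiment": 0.1}
    elif next_word == "beans":    # "cool beans" -> probably sentiment
        sense = {"temperature": 0.1, "sentiment": 0.9}
    return sense

print(cool_vector)           # {'hot_cold': -7, 'good_bad': 5}
print(guess_sense("water"))  # {'temperature': 0.9, 'sentiment': 0.1}
print(guess_sense("beans"))  # {'temperature': 0.1, 'sentiment': 0.9}
```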
The large language models behind ChatGPT have hundreds of billions of these tensor/workers handing off cards to each other and building a response.
This is an oversimplification because both the tensors and the math are hard to get exactly right in the analogy. For example, it might be more accurate to think of the tensors working in groups to make these decisions. But the analogy is close enough for our purposes. (“All models are wrong, but some are useful.”)
It doesn’t seem like it should work, does it? But it does, partly because of brute force. As I said, the bigger LLMs have hundreds of billions of workers interacting with each other in complex, specialized ways. Even though they don’t represent words and sentences in any form that we might intuitively recognize as “understanding,” they are uncannily good at interpreting our input and generating output that looks like understanding and thought to us.
How LLMs “remember”
The LLMs can be “trained” on data, which means they store information like how “beans” vs. “water” modify the likely meaning of “cool,” what words are most likely to follow “Cool the pot off in the,” and so on. When you hear AI people talking about model “weights,” this is what they mean.
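As a loose illustration (and not how transformer weights are actually learned), you can think of a toy version of “training” as counting which words tend to follow which and keeping only the resulting probabilities:

```python
from collections import Counter

# Toy sketch of "training": count which word follows which, then turn the
# counts into probabilities. Real LLM weights are learned very differently,
# but the spirit is similar: store statistical relationships, not the text.

training_text = "cool beans . cool water . cool the pot off in the water".split()

next_word_counts = {}
for word, nxt in zip(training_text, training_text[1:]):
    next_word_counts.setdefault(word, Counter())[nxt] += 1

def next_word_probs(word):
    counts = next_word_counts.get(word, Counter())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("cool"))  # {'beans': 0.33..., 'water': 0.33..., 'the': 0.33...}
```

In this toy, only next_word_counts would be kept as the “model”; the training text itself would be thrown away.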
Notice, however, that none of the original sentences are stored anywhere in their original form. If the LLM is trained on Wikipedia, it doesn’t memorize Wikipedia. It models the relationships among the words using combinations of vectors (or “matrices”) and probabilities. If you dig into the LLM looking for the original Wikipedia article, you won’t find it. Not exactly. The AI may become very good at capturing the gist of the article given enough billions of those tensor/workers. But the word-for-word article has been broken down and digested. It’s gone.
Three main techniques are available to work around this problem. The first, which I’ve written about before, is called Retrieval Augmented Generation (RAG). RAG preprocesses content into the vectors and probabilities that the LLM understands. This gives the LLM a more specific focus on the content you care about. But that content has still been digested into vectors and probabilities. A second method is to “fine-tune” the model, which predigests the content like RAG but lets the model itself metabolize that content. The third is to increase what’s known as the “context window,” which you experience as the length of a single conversation. If the context window is long enough, you can paste the content right into it…and have the system digest it and turn it into vectors and probabilities.
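Here is a minimal, hand-wavy Python sketch of the RAG idea. Real systems embed text with a neural model and store the vectors in a database; in this sketch, crude word overlap stands in for vector similarity, and the ask_llm call mentioned in the final comment is a hypothetical placeholder, not a real API.

```python
# Minimal RAG sketch (toy). Word overlap stands in for vector similarity,
# and no actual model is called; we only build the prompt that would be sent.

documents = [
    "The course syllabus says the final project is due on May 5.",
    "Office hours are held on Tuesdays and Thursdays at 3 pm.",
    "Late submissions lose ten percent per day.",
]

def similarity(a, b):
    # Crude stand-in for vector similarity: proportion of shared words.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve(question, docs, k=1):
    # Pick the k documents that look most relevant to the question.
    return sorted(docs, key=lambda d: similarity(question, d), reverse=True)[:k]

def build_prompt(question, docs):
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When is the final project due?", documents))
# A real system would then call something like ask_llm(build_prompt(...)),
# and the model would still digest that prompt into vectors and probabilities.
```

The design point to notice is that retrieval narrows the model’s focus, but whatever gets retrieved is still digested on the way in.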
We’re used to software that uses file systems and databases with photographic memories. LLMs are (somewhat) more like humans in the sense that they can “learn” by indexing salient features and connecting them in complex ways. They might be able to “remember” a passage, but they can also forget or misremember.
The memory limitation cannot be fixed using current technology. It is baked into the structure of the tensor-based networks that make LLMs possible. If you want a photographic memory, you’d have to avoid passing the content through the LLM, since it only “understands” vectors and probabilities. To be fair, work is being done to reduce hallucinations. This paper provides a great survey. Don’t worry if it’s a bit technical. The informative part for a non-technical reader is all the different classifications of “hallucinations.” Generative AI has a variety of memory problems. Research is underway to mitigate them. But we don’t know how far those techniques will get us, given the fundamental architecture of large language models.
We can mitigate these problems by improving the three methods I described. But that improvement comes with two catches. The first is that it will never make the system perfect. The second is that reduced imperfection often requires more energy for the increased computing power and more water to cool the processors. The race for larger, more perfect LLMs is terrible for the environment. And we may not need that extra power and fidelity except for specialized applications. We haven’t even begun to capitalize on its current capabilities. We should consider our goals and whether the costliest improvements are the ones we need right now.
To do that, we need to reframe how we think of these tools. For example, the word “hallucination” is loaded. Can we more easily imagine working with a generative AI that “misremembers”? Can we accept that it “misremembers” differently than humans do? And can we build productive working relationships with our synthetic coworkers while accommodating and accounting for their differences?
Here too, the analogy is far from perfect. Generative AIs aren’t people. They don’t fit the intention of diversity, equity, and inclusion (DEI) guidelines. I am not campaigning for AI equity. That said, DEI is not only about social justice. It is also about how we throw away human potential when we focus on particular differences and frame them as “deficits” rather than recognizing the value of a diverse team with complementary strengths.
Here, the analogy holds. Bringing a generative AI into your team is a little bit like hiring a space alien. Sometimes it demonstrates surprisingly unhuman-like behaviors, but it’s human-like enough that we can draw on our experience working with different kinds of humans to help us integrate our alien coworker into the team.
That process starts with trying to understand their differences, though it doesn’t end there.
Emergence and the illusion of intelligence
To get the most out of our generative AI, we have to maintain a double vision: experiencing the interaction with the Chinese room from the outside while picturing what’s happening inside as best we can. It’s easy to forget that the uncannily good, even “thoughtful” and “creative” answers we get from generative AI are produced by a system of vectors and probabilities like the one I described. How does that work? What could possibly be going on inside the room to produce such results?
AI researchers talk about “emergence” and “emergent properties.” Emergent behavior has frequently been observed in biology. The best, most accessible exploration of it that I’m aware of (and a great read) is Steven Johnson’s book Emergence: The Connected Lives of Ants, Brains, Cities, and Software. The example you’re probably most familiar with is ant colonies (although slime molds are surprisingly interesting).
Imagine a single ant, an explorer venturing into the unknown for sustenance. As it scuttles across the terrain, it leaves a faint trace, a chemical scent known as a pheromone. This trail, barely noticeable at first, is the starting point of what will become colony-wide coordinated activity.
Soon, the ant stumbles upon a food source. It returns to the nest, and as it retraces its path, the pheromone trail becomes more robust and distinct. Back at the colony, this scented path now whispers a message to other ants: “Follow me; there’s food this way!” We might imagine this strengthened trail as an increased probability that the path is relevant for finding food. Each ant is acting independently. But it does so influenced by pheromone input left by other ants and leaves output for the ants that follow.
What happens next is a beautiful example of emergent behavior. Other ants, in their own random searches, encounter this scent path. They follow it, reinforcing the trail with their own pheromones if they find food. As more ants travel back and forth, a once-faint trail transforms into a bustling highway, a direct line from the nest to the food.
But the really amazing part lies in how this path evolves. Initially, several trails might have been formed, heading in various directions toward various food sources. Over time, a standout emerges – the shortest, most efficient route. It’s not the product of any single ant’s decision. Each one is just doing its job, minding its own business. The collective optimization is an emergent phenomenon. The shorter the path, the quicker the ants can travel, reinforcing the most efficient route more frequently.
This efficiency isn’t static; it’s adaptable. If an obstacle arises, disrupting the established path, the ants don’t falter. They begin exploring again, laying down fresh trails. Before long, a new optimal path emerges, skirting the obstacle as the colony dynamically adjusts to its changing environment.
This is a story of collective intelligence, emerging not from a central command but from the sum of many small, individual actions. It’s also a kind of Chinese room. When we say “collective intelligence,” where does the intelligence live? What is the collective thing? The hive? The hive-and-trails? And in what sense is it intelligent?
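If you want to see how little machinery this kind of emergence requires, here is a toy Python simulation of the trail story. The path lengths, evaporation rate, and deposit rule are all invented for illustration; no ant in this loop knows anything about shortest paths.

```python
import random

# Toy pheromone simulation: two routes from nest to food, one short, one long.
# Shorter routes are completed more often per unit time, so they get
# reinforced more heavily; unreinforced trails slowly evaporate.

path_length = {"short": 2, "long": 5}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.95

def choose_path():
    # Each ant follows a trail with probability proportional to its pheromone.
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.uniform(0, total) < pheromone["short"] else "long"

for _ in range(1000):
    path = choose_path()
    pheromone[path] += 1.0 / path_length[path]   # shorter trips deposit more often
    for p in pheromone:
        pheromone[p] *= EVAPORATION              # trails fade unless reinforced

print(pheromone)  # the short path ends up with far more pheromone
```

Run it a few times; the shortest route wins without any ant ever comparing routes.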
We can make a (very) loose analogy between LLMs being trained and hundreds of billions of ants laying down pheromone trails as they explore the content terrain they find themselves in. When the model is asked to generate content, it’s a little bit like being sent down a particular pheromone path. This process of leading you down paths that were created during the model’s training is called “inference.” The energy required to send you down an established path is much less than the energy needed to lay the paths down in the first place. Once the paths are established, the results of traversing them can seem like science fiction. The LLM acts as if there is a single adaptive intelligence at work even though, inside the Chinese room, there is no such thing. Capabilities emerge from the patterns that all those independent workers are creating together.
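Extending the earlier toy: once the “trails” (the probability table) exist, inference is just repeatedly following them, which is cheap compared with building the table in the first place. Again, the table and numbers below are invented for illustration and are not the actual mechanics of a transformer.

```python
import random

# Toy sketch of inference: generate text by repeatedly following a fixed
# next-word probability table (the "trails"). No learning happens here;
# the table itself is invented for illustration.

trails = {
    "cool":  {"beans": 0.4, "water": 0.4, "the": 0.2},
    "the":   {"water": 1.0},
    "beans": {".": 1.0},
    "water": {".": 1.0},
}

def follow_trail(start, steps=3):
    path = [start]
    for _ in range(steps):
        options = trails.get(path[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        path.append(random.choices(words, weights=weights)[0])
    return " ".join(path)

print(follow_trail("cool"))  # e.g. "cool water ." or "cool the water ."
```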
Again, all models are wrong, but some are useful. My analogy substantially oversimplifies how LLMs work and how surprising behaviors emerge from those many billions of workers, each doing its own thing. The truth is that even the people who build LLMs don’t fully understand their emergent behaviors.
That said, understanding the basic mechanism is helpful because it provides a reality check and some insight into why “Steve” just did something really weird. Just as transformer networks produce surprisingly good but imperfect “memories” of the content they’re given, we should expect to hit limits to gains from emergent behaviors. While our synthetic coworkers are getting smarter in somewhat unpredictable ways, emergence isn’t magic. It’s a mechanism driven by certain kinds of complexity. It is unpredictable. And not always in the way that we want it to be.
Also, all that complexity comes at a cost. A dollar cost, a carbon cost, a water cost, a manageability cost, and an understandability cost. The default path we’re on is to build ever-bigger models with diminishing returns at enormous societal costs. We shouldn’t let our fear of the technology’s limitations or fantasy about its future perfection dominate our thinking about the tech.
Instead, we should all try to understand it as it is, as best we can, and focus on using it safely and effectively. I’m not calling for a halt to research, as some have. I’m simply saying we may gain a lot more at this moment by better understanding the useful thing that we have created than by rushing to turn it into some other thing that we fantasize about but don’t know that we actually need or want in real life.
Generative AI is incredibly useful right now. And the pace at which we are learning to gain practical benefit from it is lagging further and further behind the features that the tech giants are building as they race for “dominance,” whatever that may mean in this case.
Learning to love your imperfect synthetic coworker
Imagine you’re running a tutoring program. Your tutors are students. They are not perfect. They might not know the content as well as the teacher. They might know it very well but be weak as educators. Maybe they’re good at both but forget or misremember essential details. That might cause them to give the students they are tutoring the wrong instructions.
When you hire your human tutors, you have to interview and test them to make sure they are good enough for the tasks you need them to perform. You may test them by pretending to be a challenging student. You’ll probably observe them and coach them. And you may choose to match particular tutors to particular subjects or students. You’d go through similar interviewing, evaluation, job matching, and ongoing supervision and coaching with any worker performing an important job.
It is not so different when evaluating a generative AI based on LLM transformer technology (which is all of them at the moment). You can learn most of what you need to know from an “outside-the-room” evaluation using familiar techniques. The “inside-the-room” knowledge helps you ground yourself when you hear the hype or see the technology do remarkable things. This inside/outside duality is a major component of my AI Learning Design Workshop (ALDA) design/build exercise, in which participating teams will explore these ideas and hone their intuitions through a practical, hands-on project. The best way to learn how to manage student tutors is by managing student tutors.
Make no mistake: Generative AI does remarkable things and is getting better. But ultimately, it’s a tool built by humans and has fundamental limitations. Be surprised. Be amazed. Be delighted. But don’t be fooled. The tools we make are as imperfect as their creators. And they are also different from us.