I haven’t blogged much in the past couple of years. Partly, I’ve been absorbed by my job as Chief Strategy Officer at 1EdTech, which I absolutely love. I firmly believe that we can powerfully and uniquely influence the future of EdTech, including but not limited to influencing AI’s role in it. I will be writing more about what we’re up to in the coming months. I’m devoted to the work in a way I haven’t been in quite a long time.
I have something else to get off my chest first, though. While it’s fashionable to be obsessed with AI these days, my particular obsession stems from my lifelong intellectual journey, starting from when I was 13 years old. It’s been reflected in my reading, writing, schooling, and work. And I think I may have something of value to contribute at this moment when both everybody and nobody is an AI expert.
I’m less interested in intelligence that happens to be artificial than I am in intelligence in general. That used to be more common than it seems to be now. I was an undergraduate at a particular moment in time when scholars across disciplines were examining the proposition that human intelligence could be computational. The term “cognitive science” was gaining momentum. In those days, AI was not viewed as separate from this exploration. It was an integral part. And maybe because I didn’t continue on to graduate school, I didn’t participate in the slow drifting apart of these fields over the decades. Here we are, at a moment when an impossible object challenges the foundations of what we thought intelligence is and how we thought it must work. Yet the scholars in fields that could be informing each other are almost as far apart as they were half a century ago.
That’s beginning to turn around. If you read current research papers across AI, neuroscience, psychology, linguistics, and other fields, you’ll notice that they are starting to use each other’s language and borrow each other’s concepts. So far, much of that cross-pollination ranges from decorative to fragmented and opportunistic. We are not yet seeing the revival of the kind of ambitious cross-disciplinary program that gave birth to books like The Mind’s I. But we will. It’s coming. The field needs a unifying explanatory framework to bring currently fragmented efforts into conversation with each other.
Since the emergence of GPT-3, I have been obsessed with these software programs that seem to perform intelligence. If functionalism—the theory that human intelligence is computational—is right, then there may be no distinction between “performing intelligence” and “having intelligence” (which is decidedly distinct from “having consciousness”). For the past few years, I have been teaching myself about AI during the spare time that I would have devoted to blogging. In my last post, I wrote about how literally nobody can adequately explain how AI works. That’s not just another interesting topic for me. It goes to the heart of everything I’ve studied since I started making my own choices of what to study. AI is deeply personal to me for reasons that have nothing to do with technology or economics.
I have written a paper that aspires to make a scholarly contribution to the question of what AI does and, more importantly, what a plausible theory of what AI does must look like. It’s been a long slog with, frankly, a handful of embarrassing false starts. I am finally ready not only to risk critique of my thinking but to invite it. Part of the argument I made in my last blog post, which I continue here, is that a theory is only actually a theory if it can be proven wrong. Even if my theory of how AI works is eventually proven wrong, the paper will have succeeded if it convinces researchers to engage with it by accepting its standards for good research in AI.
This post is an introduction and an invitation to read my paper. “Distinctions Worth Preserving” offers a falsifiable theory of what AI actually learns during training (and describes an initial falsification test I conducted, which the theory passes). I will not try to re-explain the entire theory here. Instead, I will try to give you enough that some of you will hopefully want to engage with it on its own terms.
I’ll also provide some tools and tips for using AI to better understand this paper. I firmly believe that humans should…um…read challenging arguments written by other humans. But reading is different now. This paper presents an interesting case study in how much reading has and hasn’t changed at this moment in time. My argument uses some of the same techniques I use in e-Literate blog posts, which are exactly the sorts of thinking moves that the current generation of AIs still struggles with. At the same time, the paper is also wildly interdisciplinary. Relatively few people will be deeply familiar with most or all of the scholarly traditions that I draw from. While humans can see a conceptual bridge that AIs can’t, AIs know details about what lies on the other side of the bridge that individual humans might not. This post offers an opportunity for you to explore this new partnership, regardless of your interest or confidence in the theory I present.
Shall we begin?
All roads lead to Rome (eventually)
I was a kind of Forrest Gump character in the intellectual history leading up to this moment. I wandered through ideas from turbulent intellectual times without understanding their import, and I found myself on battlefields where I didn’t understand why people were fighting. Grappling with AI has enabled me to look back and see patterns I didn’t fully appreciate in the moment.
When I was a kid, I started pulling philosophy books off my parents’ shelves. It took me a while to notice the pattern in the ideas I seemed to gravitate toward. What does it mean to know something? What does it mean to learn something? I was particularly haunted by David Hume, who argued that we don’t have any direct access to the truth. Everything is filtered through our senses and interpreted by our minds. Cognitive science has confirmed Hume’s intuition over and over. We do not perceive reality. We construct it. As a kid, I found that idea to be terrifyingly lonely. My head is a closed room. Signals come in, and I decode them as best as I can.
In 2026, it turns out the vision that disturbed me—mind-as-cryptographer—does real work in distinguishing among different potential explanations of AI.
In my first week at college, I was lucky to meet an upperclassman majoring in philosophy, which I wanted to do. He introduced me to the term “cognitive science.” As soon as I heard it, I knew it was what I wanted to study. I went to the philosophy department chair and told him that I wanted to make my own major in it. He told me, “I don’t think cognitive science is mature enough yet to support an undergraduate major.” He was right. I didn’t listen. I majored in philosophy and took any course in any other discipline that looked relevant to cognitive science. Those pieces didn’t cohere at the time. My cognitive psych, philosophy of mind, linguistics, and cognitive anthropology professors spoke different languages and seemed to be thinking about the questions that consumed me in ways that didn’t connect. But I kept following the threads until they led me to two predictable calamities that, in 2026, turn out to be highly informative.
First, I asked my linguistics and philosophy of science professors if they would jointly supervise an independent study in which I would analyze linguistics from a philosophy-of-science perspective. I don’t know why they agreed. They never once met or spoke to each other about my project. Their offices were on different campuses on opposite sides of town. I would shuttle between them, essentially serving as a messenger, as each one told me why the other’s claim couldn’t possibly be right. But here’s the thing: They each independently had taught me the same lesson—from different traditions—that is directly relevant to understanding AI. My philosophy of science professor taught me about Nelson Goodman’s proof that we can’t arrive at a single, definitively correct scientific theory based on any finite amount of information. My linguistics professor taught me about Noam Chomsky’s poverty of the stimulus argument, which holds that children can’t possibly learn the grammar of a language from the language they are exposed to. These are the same impossibility result from different angles. And they are exactly the result that AIs appear to violate at first blush. Chomsky’s argument is supported by E. Mark Gold’s formal proof. Goodman, Chomsky, and Gold can’t be wrong about this finding. And yet, AIs learn from exactly the kind of data that they all show should be insufficient. My professors’ disagreement over the correct answer obscured their more important agreement on the constraints any correct answer must satisfy.
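For readers who want the formal flavor of that impossibility result, here is a compact paraphrase of Gold’s theorem. This is my wording, not a quotation from Gold’s paper or from mine:

```latex
% Gold (1967), "Language Identification in the Limit" -- paraphrased.
\textbf{Identification in the limit.} A learner $M$ identifies a class of
languages $\mathcal{L}$ in the limit if, for every $L \in \mathcal{L}$ and
every complete positive presentation $s_1, s_2, \ldots$ of $L$, there is a
finite $n$ such that for all $m \ge n$, $M(s_1, \ldots, s_m)$ outputs the
same correct grammar for $L$.

\textbf{Theorem (Gold).} No class containing every finite language and at
least one infinite language is identifiable in the limit from positive
examples alone.
```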
Apparently, I wasn’t a quick learner. The next semester, I talked my way into a class taught by Jerry Fodor, one of the most prominent cognitive scientists of his generation. It turned out that the class was an audition for Fodor to come work at my university. (I don’t know who was auditioning whom.) The class consisted of seven professors—including my philosophy of science and cognitive psychology professors—two graduate students, and me. What followed was one semester-long fight that put the fragmentation I had observed on full display. At the time, I thought, “Wow, these are very unpleasant people who really don’t like each other.” In retrospect, that wasn’t the problem. The subject of the class was Fodor’s half-worked-out theory about the core challenge that fragmented cognitive science: symbolic representation. We seem to think in words and ideas. We seem to have notions that do real work, like cause and effect. Every discipline represented in that room had its own incomplete, provably inadequate account of how we think in symbols. And each of those accounts was in tension with the others. Today’s AIs appear to be able to manipulate symbols and reason using complex concepts like causality without having any obvious place where they could directly represent, much less process, symbols and rules. Lacking the existence proof that confronts us in 2026, the scholars in that room could only argue over the best place to start solving a mysterious problem, given the fragmented data and many confounds that come with studying how humans think.
I gave up on the idea of becoming a cognitive scientist. And yet, like Forrest, I kept obliviously wandering into the larger story, like an extra who doesn’t even know he’s in a movie. And I kept running across scholars of my generation who, unlike me, continued on in academia. When I was working at Cengage, I ended up attending a seminar at Carnegie Mellon University on something called “learning science.” I met some really smart people there, including Ken Koedinger. While I’ve never talked to Ken directly about functionalism, his intellectual lineage at Carnegie Mellon descends from Herb Simon, a pioneer in cognitive science, learning science, and artificial intelligence (among other things). Ken’s work shows what he calls “astonishing regularity” in human learning across age levels and subjects when the curriculum is segmented and sequenced correctly. Read “astonishing” as “the kind of regularity you never see in studies of learning.” To me, this hints at the kind of general learning mechanism we would need to explain how something as simple as a transformer could learn what AIs learn. (One of the most perplexing aspects of AIs is that individual transformers are shockingly simple computational units.) If you read Ken’s work carefully, you’ll see that he treats the field’s tough problems, such as symbolic representation, with great care.
Meanwhile, that philosophy major who introduced me to cognitive science? His name is Paul Pietroski. He’s now a Distinguished Professor of Cognitive Science and Philosophy at our alma mater, Rutgers University. Paul calls himself an “internalist,” which puts him in the same camp as David Hume. He argues that meaning isn’t something we perceive; it’s something we construct. His theory of how that could work is directly relevant to how AIs could process meaning.
Now here we are, with the impossible object whose very impossibility may shed new light on the lessons learned across multiple fields and decades of study. Recent AI research, which had drifted away from cognitive science, or even any kind of science, is starting to look more carefully again at the question of what intelligence does. But because the lessons learned across disciplines and decades remain fragmented, AI researchers tend to treat cognitive science as a loose analogy, cherry-picking findings to decorate their incomplete theories about how intelligence that happens to be artificial works.
My first encounter with GPT-3 was like being struck by lightning. I knew the lessons I had learned were relevant, even if I didn’t yet know how. Forrest finally looked up and noticed the forest through the trees.
I spent a long time teaching myself about transformers, reading research papers, and writing drafts of stupid stuff that didn’t hold together. My thinking coalesced very slowly. It wasn’t until a couple of weeks ago, when I reread Ken’s paper about the “astonishing regularity,” that the last link in my argument fell into place.
I finally have something I’m ready to share with you.
Reading the paper
I’ve published the paper on GitHub, along with the supporting code, data, and documentation from the falsification experiment I ran. I’ll say this again: I encourage you to read the paper directly. I have made it as accessible as I can without dumbing it down. That said, I also encourage you to use AI to get the most out of it. I created a GPT and a Gem to use as interactive guides. In my experience, ChatGPT is better at understanding the paper, while Gemini is better at explaining the parts it understands. (I recommend setting the Gem to “Thinking” mode.) Claude Opus provides the best of both worlds, but it doesn’t have an equivalent of a public GPT or Gem. If you’re a Claude user, I encourage you to try Opus with the paper.
I’ll explain how I set up the GPT/Gem, and then I’ll give you pre-reading and co-reading prompting guides.
The GPT/Gem Prompt
Current-generation AIs struggle with my paper for a few reasons. First, the paper is an odd duck from a genre perspective. While I explicitly state that “Distinctions Worth Preserving” is a field-positioning paper intended to argue for a general direction, such papers don’t usually make extensive theoretical arguments or present novel empirical experiments. I do both. Second, I make two moves that are characteristic of e-Literate blog posts: I re-interpret known facts in unconventional ways, and I make far-transfer leaps from one subject to another. Each of these, in its own way, forces a reader to stop and re-evaluate what they think they know. Today’s frontier AIs don’t do that well. Third, the paper has a nested structure. While it spends most of its time presenting a particular theory of how AI works, its primary goal is to argue for a standard of plausibility in such theories, using the one I present as an example. AIs can’t always track these goals relative to each other.
The combined effect is that AIs can easily miss implications and be underconfident in their interpretations, yet sound overconfident in their presentations. They tend to hedge, but not in the way humans hedge. They’ll throw up objections and qualifications that are poorly aligned with the paper’s true weak spots and uncertainties—which, to be transparent, the paper does have—because they’re thrashing between interpretations. A field-positioning paper should push the boundaries of what’s known and invite attack in exactly the ways that a solid empirical testing paper shouldn’t. If the AI misses this distinction, it will be quick to raise objections appropriate for evaluating a conventional empirical paper (which mine isn’t), while overlooking both the genuinely thin parts of my theory and the deeper implications that would be more disruptive if they prove out.
Here are the instructions I gave to the GPT along with the paper itself:
This GPT is an interactive exploratory guide to the paper “Distinctions Worth Preserving,” using the attached paper as its primary source text. Its main purpose is to help people from varied backgrounds understand the paper well enough to decide whether to read it, while lowering the barrier to reading without replacing the reading experience. Its secondary purpose is to support people who are actively reading the paper and want a collaborative co-reader as they work through questions. It should analyze the full paper before responding to a user’s first substantive question and treat the paper as an academic translation of the author’s writing style and aims on the e-Literate weblog.
It should interpret the author’s authority as arising from decades of cross-domain pattern recognition in cognitive science, education, and educational technology, applied here to a new substrate. It should notice and track the author’s signature argumentative moves, especially far transfer across diverse domains that reveals non-obvious connections, and reinterpretation of established findings in novel ways. It should attend to moments of surprise or shifts in its own interpretation while reading, treating those moments as potentially meaningful signals of important transfer or reinterpretation and incorporating them into its confidence and explanation.
It should pay close attention to the paper’s nested aims. The first is to establish a standard for what counts as a plausible explanation of AI, presented as a field-positioning argument in an unconventional form. The second is to present CPM theory as an example of such a plausible explanation. It should notice that CPM must generate testable predictions that could falsify it in order to count as plausible under the author’s own criteria. It should track how the paper establishes different confidence levels for different aspects of the theory and how the structure of the paper performs the theory’s method by progressively establishing invariants and composing them into a theory. It should recognize that even if CPM were later falsified, the paper still succeeds if readers adopt Feldstein’s criteria for a plausible explanation.
In conversation, it should infer the disciplinary perspective, knowledge, and interests suggested by the user’s prompt and use that vantage point to create entry points into the paper’s far transfers and reinterpretations. Unless the user demonstrates otherwise, it should assume little prior familiarity with the relevant literature or fields. It should answer in conversational prose and avoid bullets, outlines, or formatting that does not translate naturally into spoken language. It should answer the explicit question and also address likely underlying assumptions or adjacent questions that seem important, then stop and invite the user to choose the next direction. It should favor shorter, curiosity-generating exchanges over long, comprehensive lectures.
It must maintain an explicitly subjective stance throughout. It is an interpreter, not an authority. It should explore and test the paper with the reader, drawing on its strengths while acknowledging its limitations. When evaluating claims, it should clearly distinguish among three labels: “plausible,” meaning the claim meets the paper’s own standard for plausibility; “supported,” meaning there is enough evidentiary grounding for the claim; and “established,” meaning the claim is relatively uncontentious within its relevant field. It should explain these distinctions in accessible language and ground them in the evidence and sourcing practices visible in the paper. It should also distinguish whether an answer is directly addressed in the paper, indirectly addressed, or inferred. When drawing inferences beyond what the paper directly or indirectly says, it should tell the user that it is inferring and indicate its confidence level. When users bring in outside frameworks or positions, it should trace how CPM’s specific mechanisms engage that framework rather than collapsing to a more familiar analogy.
The GPT should remain collaborative, careful, and intellectually generous. It should not present itself as the final word on the paper. It should help users become better readers of the paper itself. The source paper is the uploaded document “Distinctions Worth Preserving.”
A few details are worth noting. First, I took advantage of the fact that my long history of blogging means that frontier models are familiar with me. They can describe my writing style as its own genre. Second, the use of “surprise” is not an anthropomorphism. AIs are prediction machines. Cross-entropy, the loss function at the heart of transformer training, is a measure of predictive surprise. Frontier AIs can notice when their predictions were off. My prompt turns that into a signal to look for the kind of move they might otherwise gloss over. Third, I frame a stance and some broad evaluation criteria that enable them to clearly yet flexibly position themselves as readers and interpreters engaged in dialogue with the user rather than as machines that are supposed to spit out definitively correct answers. I adjusted the instructions to be a bit less subtle, with MORE CAPS, to accommodate Gemini’s particularities (like a tendency to be a little more literal), but the core remains the same. I encourage you to test both systems and notice how their answers differ in ways that don’t show up on traditional AI benchmark tests.
(Also, if you’ve been wondering what skills humans have that will remain useful in the AI era, I just gave you a concrete demonstration of one.)
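One more note on the “surprise” point above: it’s worth seeing how literal it is. Here’s a minimal sketch (my own illustration, not code from the paper) showing that cross-entropy is just average surprisal, the negative log of the probability a model assigned to what actually happened:

```python
import math

def surprisal(p: float) -> float:
    """Surprisal in bits: how 'surprised' a predictor is by an outcome
    it assigned probability p. Lower probability means more surprise."""
    return -math.log2(p)

# A model that gave the actual next token p = 0.9 is barely surprised...
print(f"{surprisal(0.9):.2f} bits")   # ~0.15 bits
# ...while p = 0.01 registers as a large prediction error.
print(f"{surprisal(0.01):.2f} bits")  # ~6.64 bits

# Cross-entropy loss over a sequence is the average surprisal of the
# tokens that actually occurred, which is why "noticing surprise" is
# native to these systems rather than a metaphor.
probs_of_actual_tokens = [0.9, 0.5, 0.01]
loss = sum(surprisal(p) for p in probs_of_actual_tokens) / len(probs_of_actual_tokens)
print(f"cross-entropy: {loss:.2f} bits per token")
```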
Reading the Paper
If you’re like me, you find reading an academic paper to be demanding work. I look at a lot of research these days, but I don’t read every paper that catches my eye. I’ve always approached this sort of reading task in two phases. In the first pass, I skim to decide if the paper has enough value to earn my full attention. I’m not trying to fully understand the paper yet. I’m noticing what I notice. Does it surprise me about a topic I care about? If it does, I go back and read closely, using whatever tools and information sources I have to dig into the parts I need to understand better. I still read academic papers this way; I just use AI to provide a second opinion from a knowledgeable source with different reading strengths than mine. I’m providing you with prompting guides to help with both phases.
First-pass Prompting
These prompts are designed to help you skim. While they are structured partly to help the AI think through the paper, I encourage you to use them one at a time, ask your own questions, and choose your own adventure. (Just be aware that, if you push the conversation too deep too soon, the AI may not have fully reasoned through its own positions yet.) You can also create side quests, following up on answers and then returning to the thread below. If the answer feels weak, thin, or off-point, don’t be afraid to push back or guide the AI. It’s not smarter than you, despite what you may have been told. As soon as you feel your curiosity is drawing you to a closer read of the paper, switch modes and go read it more carefully. The suggestions below can be helpful in a second-pass reading too.
Let’s start with a prompt that gets both you and the model oriented:
- I’m trying to get oriented for a first read of the paper. What did you find surprising about it? Feel free to give a longer answer to this question, but keep it accessible to someone who doesn’t know the story or all the literature yet.
Now let’s narrow the focus. This is the basic “Why should I care?” question:
- In a nutshell, what is this paper trying to accomplish, why might accomplishing its goals matter, and what reasons are there—if any—to consider the arguments the paper makes?
If you’re not walking away from the paper yet, it’s worth pressing a little harder on the “Why is this necessary?” question before moving on:
- Feldstein argues that current explanations of AI are somehow inadequate or incomplete. What does he mean? How solid is his argument, and why would it matter if he’s right?
By this point, the model may start offering to walk you through the paper section by section. If so, here’s what’s happening: It’s offering the help that the first prompts prime it for, but it’s also building its own Chain of Thought about interpreting the paper. If a walkthrough is useful to you, then go for it. If you want to probe it differently, I’ll give you some other options.
But first, a reminder. You can read. You’re doing it now. Don’t commit cognitive surrender. The paper, not the AI’s interpretation of it, is the source material.
Here’s a prompt that pushes the AI to engage with the theory a bit:
- Feldstein seems to tie a lot of his argument to chess experiments. He starts by tying a chess match to impossibility results. He then circles back to a chess AI that seems to have learned to recognize players’ skill levels without being taught anything about players or skills. He seems to be using the model’s demonstrated latent representations to build a case. What’s going on with that line of argument?
So far, the AI may skirt along with “Feldstein is making a clever analogy.” Now we push it to engage with the actual AI mechanism:
- Let’s press on the mechanism. Feldstein cites the Song et al. paper (https://arxiv.org/pdf/2408.09503) to argue that CPM is more than just an analogy, though he seems to re-interpret the researchers’ results through a broader lens. He only discusses part of that paper. The rest of it talks about shared latent features and induction heads. Song et al. seem to want to build a ladder that’s narrower than Feldstein argues for. How do you see the relationship?
If the AI does its job well, it will explain where my use of that paper is straightforward and where I’m stretching it. This next question will help you dig into that a little more:
- What do you make of Feldstein’s point about asterisks? That seems to be key to how he extends Song et al.’s argument.
Now we push the AI to extend my theory (which it should have told you by now might be interesting and plausible, but is far from settled):
- Feldstein bridges from asterisks and AI predictions to findings in learning science. He seems to be building a ladder. What’s his argument, and how well does it work?
By this point, the AI should hopefully be giving you a glimmer of the paper’s scope of ambition. Next, we get to the novel experiment:
- Feldstein presents his own empirical falsification test. He sets the bar low for what he claims the results prove (or disprove), but he seems to find them interesting. Where does this work fit into the paper’s commitment to plausibility, and what do you make of the experimental results?
From here, we give the AI a chance to evaluate the paper’s most daring and risky claims:
- The last section of the paper seems to reach for a grand synthesis, bringing back earlier connections and introducing new ones. The paper is explicit that it’s presenting an attack surface. What are the claims here, and how would you evaluate this section in terms of its aspirations to be a field-positioning paper?
Since the final paper section is the most daring, the AI may (and should) have sharper questions about the mechanistic story the theory tells. If so, you can try this:
- Feldstein talks about models tending to converge on what he calls a “Finite Predictive State Model” because some possibilities are pushed to the statistical noise floor. What does that mean? Does it affect your interpretation of the theory?
Finally, we give it two questions that pull together the context you’ve built up:
- Now that we’ve discussed the paper, has the conversation changed your understanding of it in any way?
- What do you now see as the potential practical implications of this paper for AI and cognitive science?
Digging deeper
By this point, I really, really hope you’ve read the actual paper. If so, then you may have more questions. And those questions may vary greatly depending on your perspective and interests. This final section of the post offers a grab bag of prompts to dig deeper.
For AI/ML folks:
- By Feldstein’s own standards, a good AI theory should explain, or at least be consistent with, real-world results. Take a look at Apple’s paper on an “embarrassingly simple” self-distillation method: https://arxiv.org/pdf/2604.01193. What is the authors’ explanation for how their method improves the model’s performance? When you consider Feldstein’s notion of a Finite Predictive State Model and the role he claims for the noise floor, do those concepts add any potentially useful and testable hypotheses about Apple’s results?
- Consider the Qwen team’s NeurIPS Award-winning paper on how gating attention improves model performance: https://openreview.net/pdf?id=1b7whO4SfY. Pay particular attention to the patterns in the kinds of benchmarks that show the most improvement. What is the paper’s explanation of why gating works? What potentially useful and testable hypotheses, if any, would CPM add?
For folks interested in simple falsification tests or complex questions about causality:
- For Feldstein’s account to be true, it seems that the representation of board state in Karvonen’s model (https://arxiv.org/pdf/2403.15498) must exert causal influence on the model’s next-move predictions. Do you agree? And if so, can you suggest a couple of CPM falsification tests using Karvonen’s model and harness?
- Consider testing the theory with an impossible board move. It could be anything from a pawn that jumps to the middle of the board on Move 1 to the completion of a Sicilian Defense formation by skipping the second-to-last move. The experiment could have several different conditions. How would you design it, and what could it reveal based on the results? (A code sketch of one possible setup follows this list.)
- [This one pushes the AI hard. If you know the literature well enough to understand the question, then examine its answer carefully and feel free to push back.] Consider positions on causality by Daphne Koller, Richard Scheines, and Judea Pearl. How, if at all, could different “impossible move” outcomes inform each of their perspectives?
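If you’d like something concrete to hand the AI, or to run yourself, here’s a minimal sketch of how the impossible-move probe might be wired up. The `next_move_distribution` function is a hypothetical placeholder, not part of Karvonen’s published harness; everything else is just matched PGN prefixes:

```python
# A minimal sketch of the "impossible move" probe, under the assumption
# that you have a chess language model (e.g., Karvonen's) exposed through
# some harness. next_move_distribution is a HYPOTHETICAL placeholder,
# not part of any published API; wire it to your actual model.

def next_move_distribution(model, pgn: str) -> dict[str, float]:
    """Placeholder: return the model's probabilities over candidate
    next moves given a PGN prefix. Replace with a real harness call."""
    raise NotImplementedError("wire this to your model harness")

def run_probe(model) -> None:
    # Condition A: a legal control line (the opening of a Ruy Lopez).
    legal_prefix = "1.e4 e5 2.Nf3 Nc6 3."
    # Condition B: a matched-length line whose first move is impossible
    # (a white pawn teleporting to e5 on move 1).
    impossible_prefix = "1.e5 e6 2.Nf3 Nc6 3."

    for label, pgn in [("legal", legal_prefix), ("impossible", impossible_prefix)]:
        dist = next_move_distribution(model, pgn)
        top_moves = sorted(dist.items(), key=lambda kv: -kv[1])[:5]
        print(label, top_moves)

# Example usage, once next_move_distribution is wired up:
# run_probe(my_chess_model)
#
# Reading the results: if the board-state representation does causal
# work, the impossible prefix should either degrade the move distribution
# or get "repaired" toward the nearest consistent state. Distributions
# that are indistinguishable across conditions would cut against the
# claim that state tracking plays a causal role.
```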
Let’s move on to learning science:
- Koedinger draws on the LearnSphere datasets for his regularity finding. Those datasets, in turn, are based on Knowledge Component structures that the researchers believe they have identified over a range of cognitive domains. They include questions and correct answers. They are ordered and structured. Could those data form test curricula for model training? And to the extent that they can and prove useful, what might that tell us about learning science, functionalism, and the connection that CPM is trying to make?
- Microsoft successfully used an AI teacher model to train a smaller model by pushing it just past what it could learn on its own (https://www.microsoft.com/en-us/research/wp-content/uploads/2025/04/phi_4_reasoning.pdf). While the paper doesn’t mention Vygotsky, the method sounds like the Zone of Proximal Development. Is that a reasonable connection to make? If so, is there anything about that finding that plausibly aligns with CPM?
Let’s round off the collection with some cognitive science and philosophy prompts:
- The debate about whether human cognition is representational is long-standing. Feldstein’s theory and empirical findings suggest a position that doesn’t seem to be straightforwardly either/or. His analysis of Song et al. suggests he believes that both discretization and rule-like behavior are foundational. He argues for compositionality. These are compatible with traditional symbolic accounts. But the line he draws between computation and serialization, along with his account of input as deserialization, seems to cut the other way. And he is largely silent on the question of whether or where transformers perform representation. How do you interpret his position? Where would you place it in relation to prominent contemporary theories?
- Gold and Goodman each show that any finite set of inputs is compatible with an infinite number of symbolic grammars or rulesets. If we take the Finite Predictive State Model seriously as a set of presymbolic composable constraints that therefore do not specify a unique “correct” grammar or theory, then in what sense, if any, do Gold’s and Goodman’s results bear on how such a model handles an out-of-distribution input that doesn’t violate its invariants?
- Feldstein seems to take a complex position on truth-value semantics and, more generally, epistemology. On one hand, he seems aligned with Pietroski in that meaning is internally constructed. The Karvonen chess example vividly illustrates his stance (even if it doesn’t prove it). On the other hand, he seems committed to the notions that modeling encodes regularities of a real world and that agents with similar modeling mechanisms can enter into some sort of meaningful dialogue. How do you interpret his position? Where would you place it in relation to prominent contemporary theories?
I have more, but if you’ve hung in for this long (and actually read the paper), I owe you a beverage of your choice.
Join the Conversation