
Now that I’m a year into my job as Chief Strategy Officer at 1EdTech, I’m finally at the point where I can start articulating my sense-making in writing again. These will be my typical long-form thought pieces. If you want short, there are plenty of good outlets to read (such as 1EdTech’s blog, where you’ll find a short, well-written piece on digital credentials by my colleague Rob Coyle). Also, a reminder: my posts on e-Literate are not official 1EdTech communications or positions. I’m writing my personal reflections about what I’m learning.
e-Literate is at least as much about how I think as it is about what I think. Let’s get the “what” part out of the way. Here’s what I think about digital credentials, the workforce, and AI:
- Different but poorly delineated mindsets about digital credentials have made them sound more complicated than they are.
- From a standards perspective, most of the specifications needed for supporting digital credentials, including in the workplace, already exist.
- Demand for digital credentials in the workplace exists, but we often look for it in the wrong places.
- I’m still confused about what problem a Learning and Employment Record specification is intended to solve (although, oddly, I’m clear about the value of the supposedly downstream LER-RS standard).
- While I’m not in the “AI will magically solve every problem” club, I do believe AI will bring the economics of digital credentials to a tipping point.
- AI is also going to shift the emphasis from “Who says you know this?” to “How can you prove you know this?”, though the shift is not likely to be as radical as some believe.
You may or may not find these beliefs to be novel or in line with your own views. Personally, I didn’t hold any of them as recently as six months ago. I’ve been a decade-long skeptic of digital credentials, not because I think they’re a bad idea, but because I haven’t seen evidence that they were going anywhere. My views are changing, partly because of new developments and partly because I’m learning more. This post is a point-in-time explanation of how I’m thinking about the topic.
I’ll walk through four layers: (1) Verifiable Credentials and wallets, (2) Open Badges adoption, (3) CLRs and the LER debate, and (4) how AI changes the physics of the digital credentials ecosystem.
Digital credentials start with verifiable credentials
Actually, they start with digital wallets. In the digital credentials world, digital wallets are all the rage. There’s a lot of (often duplicative) work, discussion, and hand-wringing over them.
The thing is, you almost certainly already have a digital wallet. It’s called either Apple Wallet, Google Wallet, or Samsung Wallet. It holds credentials that are verifiable, like plane boarding passes, credit cards, and so on. The items in your wallet are cryptographically protected and only reveal the information that the recipient needs to have. For example, when I pay with my credit card using my Apple Wallet, the vendor never gets my actual credit card number. They get confirmation that I have a certain card that can be used to charge the item in question. I can share the information I want to share and only that information.
Unfortunately, Apple, Google, and Samsung each use their own proprietary format for these cards. Some states, but not all, issue driver’s licenses in ISO’s mobile driver’s license (mDL) format. These can be put into one of the proprietary phone wallets and used at some airports. If you think about the driver’s license, the general utility of these credentials becomes clear. At the airport, the TSA might want to know a lot about who you are. The liquor store only needs to see if you’re old enough to buy beer. But the fragmentation problem also becomes clearer. We now have three different general formats for various phone vendors, plus a standard format solely for driver’s licenses, and who knows what else for other purposes.
The W3C, the group that manages global standards you use every day, such as HTML, has created a general standard called Verifiable Credentials (VCs). There are two essential parts. The first is the cryptographic envelope. It’s the thing that holds the credential. It’s not tamper-proof—no cryptography can promise that—but it is tamper-evident, like a new bottle of Tylenol. You can tell if the seal has been broken. The other part of the VC—or, to be more accurate, its complement—is something called a Decentralized Identifier (DID). It is a globally unique identifier that can be created by anyone and used to reference any subject. DIDs are both human- and machine-readable, but more importantly, they provide public cryptographic keys and service endpoints. These enable applications and digital credentials to verify authenticity, establish trust, and securely exchange information. DIDs enable anyone to become a source of truth for the VCs they issue. They also enable learners themselves to be verifiable. (I realize this may sound complicated; in practice, DIDs can be pretty simple to issue and use with well-established technologies.) Together, the VC envelope and the DID make credentials verifiable both through the cryptography and through the link to the source.
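To make the envelope idea concrete, here’s a minimal sketch in Python of what “tamper-evident” means in practice. This is not how a real VC proof is constructed—real implementations use W3C Data Integrity proofs or JWTs with careful canonicalization, and the issuer’s public key would be discovered through its DID—but it shows why any change to a sealed credential is detectable. The DIDs and field names are purely illustrative.

```python
# Minimal sketch of the "tamper-evident envelope" idea, not a real VC proof.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical issuer key; in practice the public key is discoverable via the issuer's DID.
issuer_key = Ed25519PrivateKey.generate()

credential = {
    "issuer": "did:web:registrar.example.edu",  # illustrative DID, not a real one
    "credentialSubject": {"id": "did:example:learner-123", "achievement": "Basic Accounting"},
}

# The issuer signs a canonical serialization of the claims.
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Anyone holding the issuer's public key can check the seal.
public_key = issuer_key.public_key()
public_key.verify(signature, payload)  # passes: the envelope is intact

# Change a single field and verification fails—tamper-evident, not tamper-proof.
credential["credentialSubject"]["achievement"] = "Advanced Accounting"
tampered = json.dumps(credential, sort_keys=True).encode()
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Tampering detected")
```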
By the way, a lot of the genuine value hidden behind the hype of blockchain can be realized with VCs and DIDs alone. Blockchain provides an immutable ledger. So, for example, if you want to know every time a Bitcoin changed hands, you could trace it through the blockchain ledger. That could be useful for some use cases. But a state issuing a driver’s license or a university issuing an open badge probably doesn’t need it.
Open Badges are VCs
The Mozilla Foundation recognized the value of certifying learning and developed the original Open Badges specification. They transferred stewardship of the specification to 1EdTech, which has advanced it with community support to the current Open Badges 3 (OB3), re-implementing the original idea on top of W3C’s VC standard along the way. OB3s are VCs that support, but don’t require, DIDs. That’s the heart of it. An Open Badge is a cryptographic envelope that contains verification that you learned something, preferably with accompanying evidence that you learned it. OB3s can use DIDs to link back to an issuer. But if, for example, that issuer goes bankrupt, the credential is still verifiable through cryptography. It’s pretty straightforward to understand.
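For the curious, here’s roughly what an OB3 credential looks like under the hood, sketched as a Python dict. I’m hedging on the exact field names and context URLs—the 1EdTech OB3 specification is the normative source—but the shape is the point: a standard VC envelope wrapping an achievement, optionally with evidence and a DID pointing back to the issuer.

```python
# Rough sketch of an Open Badges 3 credential. Field names and URLs are
# approximate; treat the 1EdTech spec as the normative schema.
open_badge = {
    "@context": [
        "https://www.w3.org/ns/credentials/v2",
        "https://purl.imsglobal.org/spec/ob/v3p0/context.json",  # illustrative context URL
    ],
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "issuer": {
        "id": "did:web:registrar.example.edu",  # optional DID link back to the issuer
        "type": "Profile",
        "name": "Example University",
    },
    "validFrom": "2025-05-15T00:00:00Z",
    "credentialSubject": {
        "type": "AchievementSubject",
        "id": "did:example:learner-123",
        "achievement": {
            "type": "Achievement",
            "name": "Basic Accounting",
            "description": "Completed the Basic Accounting course with a passing final exam.",
            "criteria": {"narrative": "Score 80% or higher on the proctored final."},
        },
    },
    # Evidence is optional but is what makes the badge more than a participation trophy.
    "evidence": [{"type": "Evidence", "name": "Final exam", "description": "Graded exam artifact"}],
    # The signature ("proof") is attached by the issuer's signing software.
}
```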
The human part is more complicated. I remember hanging out in somebody’s hotel room at an OpenEd conference a decade ago and being asked, “Do you think badges will become useful?” I said, “I’m certain they will. I have no idea when or what for. A badge is a container. It’s a box that you put stuff in. Humans haven’t agreed on what kind of stuff should go in the box yet.” By 2022, 75 million Open Badges had been issued, according to a joint survey conducted at the time by 1EdTech and Credential Engine. Tracking is difficult because most badges are issued outside 1EdTech certification, so there is no easy way to count them, but the volume continues to grow. (There are proprietary market reports on the financial growth of the digital badging market sector; I’m not including them here because I don’t know anything about their quality.)
As a side note, all 1EdTech specifications are 100% openly licensed. They are public goods. The organization typically charges membership fees for access to certification suites and participation in the specification development because that work requires paying human staff members to develop and maintain it. That said, OB3 badges can be validated for free without requiring a login.
I’ve seen at least three different badge usage patterns, which is where the confusion starts to creep in. The first is what might be called a participation badge. Some conferences, webinars, and the like issue badges with no evidence of achievement, just for showing up. I don’t personally add these to my LinkedIn profile, but my reputation from e-Literate makes participation badges less useful for me than they might be for others. The second type is for a course completion that includes evidence of mastery, like a final test. “I received certification in Basic Accounting from Coursera.” Anecdotally, these seem to strike the best balance between value and ease of issuance—at the moment. They tend to be issued by online course providers like Coursera and, increasingly, career-oriented programs in higher education. A 2022 study encouraging students to share their badges on LinkedIn found the following:
[L]earners in the treatment group were 6% more likely to report new employment within a year, with an 8% increase in jobs related to their certificates. This effect was more pronounced among LinkedIn users with lower baseline employability. Across the entire sample, the treated group received a higher number of certificate views, indicating an increased interest in their profiles.
So. Seventy-five million badges (as of three years ago), and sharing them on LinkedIn produces significant increases in employment. A study by AAC&U found that between 66 and 68% of employers say microcredentials make applicants either somewhat stronger or much stronger job candidates. Employers see similar value in microcredentials for technical skills (68%) and in those for broad, durable skills like critical thinking and oral communication (66%). (My colleague Mark Leuba co-authored an article with more detail on The EvoLLLution.)
Meanwhile, providers like CredLens, Accredible, Instructure, Credly, and CanCred are growing Open Badges-based microcredential adoption throughout the world. (Canada has also built strong learner mobility infrastructure through provincial credit transfer councils, laying the groundwork for digital credential adoption.) Digital microcredentials are in the workplace at meaningful scale today.
Then there’s Europe. Workforce mobility is a big deal there. Europe is proving how digital credentials can scale across higher education and vocational training, leading with policy-driven alignment between education and national skills needs. The European success is heavily under-discussed in US-based digital credential conversations. They often use their own standards (ELM, EQF, Europass) in higher education and Open Badges in vocational training. Their success shows the workforce value of digital credentials at scale.
I’m giving you a workforce-focused sampling, not a comprehensive data view. The point is, despite the narratives you may hear, digital credentials have already gained traction globally in the workforce. As William Gibson put it, “The future is already here—it’s just not evenly distributed.”
The third use of digital credentials is for specific competencies. Not “I took this course” or “I passed this course” but “I learned this skill.” This is where a lot of higher-education-to-workforce conversation is focused in the United States. It’s also the toughest nut to crack. Many US colleges and universities do not uniformly require course or program competencies. The combination of weak federal regulation and strong faculty autonomy makes this kind of mapping extremely hard. The regulations and accreditation requirements we do have make it nearly impossible. It’s easy to blame registrars and SIS makers here, but they’re just trying to follow the rules. A welter of shifting regulations and accreditation requirements puts colleges and universities in jeopardy of losing financial aid eligibility for their students if they fail to follow the rules. In a way, an SIS is like TurboTax for awarding credits. Credits, with an “s,” are legally regulated units. Credit for learning, which is what microcredentials sometimes track (particularly in Competency-Based Education (CBE) programs), is not. Mixing the two is often viewed as dangerous or even reckless by the guardians of the credits-awarding process. Giving credit and awarding credits are functions that can co-exist, but they must be parallel and loosely joined in the US legal system.
There is a way to do this, but it will take some unpacking that I’ll save for another post.
CLR, LER, Wallet, and LER-RS (Oh, my!)
The situation gets really messy at the transcript level, though not for technical reasons. 1EdTech has a standard called Comprehensive Learner Record (CLR), which enables an organization to issue a transcript-like collection of OB3 badges and other learning-related VCs. I say “transcript-like” for two reasons. First, historically, transcript specifications have been handled by PESC, a different standards body. While a CLR could express a transcript, 1EdTech doesn’t position it as a transcript standard. (Individual institutions like Temple University, University of Central Oklahoma, and University of Georgia use CLRs to support or supplement transcripts in various ways.) Second, there’s that whole cultural debate about the granularity of Open Badges that rolls up to CLRs. A CLR is a different thing depending on whether it’s a collection of verified competencies or verified course completions, and on whether the CLR assertions contain evidence of achievement. (By the way, 1EdTech also provides a free validator for CLRs.)
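Sketching the same way as before, a CLR is itself a VC whose subject bundles individual OB3 credentials (the open_badge dict from the earlier sketch, for instance). Again, treat the field names and context URL as approximate and the 1EdTech CLR spec as authoritative.

```python
# Sketch of the CLR idea: a verifiable credential whose subject bundles
# individual OB3 credentials. Field names are approximate, not normative.
clr = {
    "@context": [
        "https://www.w3.org/ns/credentials/v2",
        "https://purl.imsglobal.org/spec/clr/v2p0/context.json",  # illustrative context URL
    ],
    "type": ["VerifiableCredential", "ClrCredential"],
    "issuer": {"id": "did:web:registrar.example.edu", "type": "Profile", "name": "Example University"},
    "validFrom": "2025-05-15T00:00:00Z",
    "credentialSubject": {
        "type": "ClrSubject",
        "id": "did:example:learner-123",
        # Each entry is a complete, independently verifiable OB3 credential, so the
        # collection can mix course completions, competencies, and evidence-bearing badges.
        "verifiableCredential": [open_badge],  # the OB3 dict sketched earlier, plus any others
    },
}
```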
Now, suppose you’re a learner. You get a CLR from your university. Maybe you get a couple of CLRs from a couple of institutions. You have some free-floating badges, too. What are you supposed to do with all of that? It’s going to be a mess that you have to organize.
Remember those wallets we were talking about earlier? That’s the concept the sector has been running with. Verifiable Credentials like Open Badges and CLRs go into portable digital wallets. It’s not a bad first pass for a model. But if you think about that mess of credentials to be organized, a wallet very quickly starts to feel cramped. Here’s something relevant I wrote about ePortfolios back in 2006:
I heard four basic variations on the definitions of ePortfolios at the conference. The first one was the box of papers in the basement. You know, the one with all your notebooks, your tests, your essays…maybe your thesis…? This analogy was introduced by the very first speaker and repeated throughout the day. But the thing is, does anybody ever really think of that box as a portfolio? Personally, I think of it as my “stuff.” If I want to put together a portfolio, I’ll go through my stuff and pull out the best stuff. A portfolio is, roughly, a portable folio. Emphasis on portable. My box of stuff isn’t terribly portable, nor would I have any reason to port it around with me except on those rare and exceptionally distasteful times when I’m moving all of my stuff. I need my box of stuff to put together my portfolio, but the box of stuff is not a portfolio in itself.
The other three definitions of ePortfolios are closer to the mark:
- A periodic browse through the box of stuff: Every once in a while I go down to the basement, pull out my box of stuff, and look through it to remind myself of just how dumb I used to be and how I’ve grown to be slightly less dumb. During those times, I pull out maybe 10% of the stuff in my box. I might pull out slightly different items depending on what I’m thinking about at the time, but it’s always the same process. I pick a few things to read closely and shove the rest back in the box. Reflective ePortfolios should work roughly the same way.
- Pulling stuff out to impress somebody: This is the classic portfolio application. When a graphic artist or an architect brings a portfolio to a prospective client or employer, she usually picks a few items from her box of stuff that she thinks will resonate with her audience. The collection will be tailored to the particular prospect, just as a cover letter and CV might be customized for each job application. An ePortfolio for potential employers should work the same way.
- Pulling stuff out to prove you did the work: Professional eportfolios for certification do this. They collect specific items so that evaluators can easily review the work.
So to support ePortfolio applications of all types, we need two things: A big box for stuff and some smaller…um…folios that are easy to fill with carefully selected subsets of the stuff. In other words, we need to give students a personal file storage system that’s linked to a personal publishing system. In the former case, the box should automatically store the stuff that students produce or submit online for their coursework. Why let student contributions be “owned” by a course instance which gets archived at the end of the semester, never to be seen again? Why not have it be “owned” by the student and published to the course? Why not have the instructor comments/grades get attached to the document and put in the student’s box, the way comments and grades get attached to physical papers that we return to our students? This isn’t an issue of building an ePortfolio; it’s an issue of correcting a fundamental design flaw in the LMS’s themselves.
Once every student has a box of stuff, then we can talk about making it easy for them to create portfolios that happen to be “e”. We need a simple publishing system that allows flexible templating and guest access control. Add to the mix a handful of pre-created templates to start the students off, and you’re basically done. You can add bells and whistles–maybe a commenting capability for guests, maybe a simple workflow for reviewers (including the students themselves, in a reflective portfolio application), etc.–but these are all nice-to-have add-ons. They are also, by the way, standard fare for even basic content management systems (like blogs, for example). Let’s keep it simple. An ePortfolio is a lightweight personal publishing system that should sit on top of an LMS’s personal file management system.
Badges and CLRs should dump into a box of stuff. Learners can add to the box throughout their lives. The technical implementation might be a wallet. However, the user experience must be a box. A wallet isn’t great for organizing lots of disorganized stuff. In any case, this wouldn’t be hard to build. Digital credential wallets exist. In fact, the box-of-credential-stuff product I’m describing probably already exists. I just don’t happen to have seen it yet. I’m not aware of any technical barriers.
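To illustrate why I say there are no technical barriers, here’s a deliberately naive sketch of the box-and-folio idea: ingest whole credentials (so they stay verifiable), tag them, and pull curated subsets out as folios. Everything here—the class, the tags, the data shapes, the open_badge dict it reuses from the earlier sketch—is hypothetical, not a description of any existing product.

```python
# Minimal sketch of a "box of stuff" with folio curation, assuming each item
# is a verifiable credential stored as a dict.
from dataclasses import dataclass, field

@dataclass
class BoxOfStuff:
    items: list = field(default_factory=list)

    def add(self, credential: dict, tags: set[str]):
        # Keep the whole credential; verification stays possible later because
        # the cryptographic envelope travels with it.
        self.items.append({"credential": credential, "tags": set(tags)})

    def folio(self, wanted: set[str]) -> list[dict]:
        # A "folio" is just a curated subset—the portable thing you actually share.
        return [i["credential"] for i in self.items if i["tags"] & wanted]

box = BoxOfStuff()
box.add(open_badge, tags={"accounting", "course-completion"})
resume_folio = box.folio({"accounting"})
```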
There’s been a lot of talk—and many, many meetings—around the concept of a Learning and Employment Record (LER). 1EdTech is involved in some of those conversations, and some of my colleagues are closer to it than I am. I do understand this much: You can’t license or download an LER today. You can’t build one according to a specification. LER is not a thing yet. It’s an idea. I’m less clear on exactly what that idea is. I’ve seen multiple declarations, white papers, and diagrams of LERs from different groups, groups of groups, groups insisting they’re not groups, and groups of groups insisting they’re not groups. Some of my colleagues participate in some of those groups. I sit in when I can. It’s not gelling for me yet; it’s not clear to me that there is a consensus understanding.
Standards groups, at least in EdTech, are vulnerable to what I call “death by a thousand convenings syndrome”. 1EdTech is far from immune from it, which is one reason I walked away from the meetings during some of the years between when I was contributing to the standards as an Oracle employee and when I accepted my current job under the new leadership of Curtiss Barnes, a person I trust to make things happen.
I’m a passionate believer in interoperability standards. When done right, they make it economical to deliver real value to users of the tools, make it easier to solve hard and important educational problems in a scalable, financially viable way, and make it harder for companies to profit off of what should be table-stakes functionality (like the ability to ensure student data is handled with appropriate sensitivity or to easily add the right educational tools to a particular virtual course environment). But it’s hard to build effective standards coalitions. It’s a Conway’s Law problem. Until you can get a group capable of taking action that’s sufficiently aligned around clearly defined, mutually beneficial standards-making, you’ll see many meetings of disparate stakeholders over multiple years. It’s both a symptom and a cause. This is the litmus test: If you miss six or twelve months’ worth of meetings and you’re not feeling a little lost when you return because of the things that happened while you were away, you probably don’t have the ingredients you need for standards-making in that room. Increasing the ability to recognize and correct that problem is one of the personal contributions I aspire to make at 1EdTech. My sense is that the organization is improving a lot and still has plenty of room to improve further. I apply the same lens to work inside 1EdTech that I apply to work with our coalition partners and across the ecosystem.
Regarding LER, when I ask folks I respect across the digital credentials world what it is and get different answers, that’s a symptom. Maybe an LER is a box of stuff that includes learning-related VCs (e.g., OBs) and employment-related VCs (e.g., a driver’s license). If so, then I’m not sure why it’s complicated. Just create a VC box of stuff and be done with it. I admit I’m neither a standards geek nor a digital credentials geek, so maybe I’m missing some complexity. It’s been known to happen.
If an LER is something different from an expanded box of stuff, then someone needs to explain clearly exactly what it does and how that functionality creates value. Not how it does…whatever the thing is that it does. Unless you’re way down in the technology stack—I’m talking about the level of “make web pages on the internet render properly”—nobody is going to rally to the call for an ontology or a transport. They want to know about value. I’m starting to see the coalition-rallying goals crisp up a bit in efforts like AACRAO’s Project Infuse. While I don’t know yet whether Infuse will succeed, I do feel like I have a fairly clear idea of what it’s trying to accomplish. And I do feel like I’m in danger of falling behind if I miss a meeting. I’m participating in the governance strand, so I don’t hear the same things that my colleague Rob Coyle hears in the technical strand. But the folks I talk with in the meetings I attend seem to be going somewhere together.
Likewise, LER-RS, a digital résumé standard being shepherded by HR-Open, makes perfect sense to me. It’s the folio you curate from your box of stuff for a prospective employer. The box of stuff it pulls from aligns well with existing standards. 1EdTech has been supporting HR-Open on this project and is collaborating on a certification suite for it.
My 1EdTech colleagues who have been working on digital credentials far longer than I have tell me the term LER originally came from a 2020 white paper issued by the US Department of Labor’s American Workforce Policy Advisory Board Digital Infrastructure Working Group. The term was invented to point to a set of functional needs, and the paper cited LER technologies that were already in production at the time. I know some of the folks who worked on that paper, and they’re all people I respect. The paper focuses on the “what.” Reading it now, I’m still not seeing any big gaps in the standards needed to make it a reality, at least at my level of understanding. The problem seems to be one of coalition-building. Holding lots of convenings and creating a coalition for action are not the same.
To my mind, a lot of the LER noise is a side show, not because LER isn’t important as a concept but because many of these conversations do not seem to advance the goal. Meanwhile, digital credentials are advancing.
AI and the shift
Regular e-Literate readers know that I try to understand what technologies are good for rather than deciding if they’re “good” or “bad”. AI is a good fit for advancing digital credentials for four reasons. First, it helps on the supply side. A university that doesn’t have defined competencies or the resources to define them can plausibly extract competency descriptions from course catalogs and transcripts. Will the resulting badges and CLRs be great? No. There usually isn’t the right kind of data (like evidence of achievement) in the transcript. Could it be significantly better than nothing? Absolutely. (Again, there is a different potential path, which I’ll unpack in another post.)
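As a purely illustrative sketch of that supply-side help, here’s how a competency-extraction pass might look using an LLM API. The model name, prompt, and output handling are my own assumptions, and any real pipeline would put human reviewers between this output and an issued badge.

```python
# Illustrative sketch only: drafting competency statements from a course-catalog
# description with an LLM. Prompt, model, and schema are assumptions, not a spec.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

catalog_entry = """ACC 101: Basic Accounting. Introduces double-entry bookkeeping,
financial statements, and internal controls. Includes a proctored final exam."""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract 3-5 competency statements from the course "
         "description. Return a JSON array of short strings."},
        {"role": "user", "content": catalog_entry},
    ],
)
competencies = json.loads(response.choices[0].message.content)
# e.g., ["Record transactions using double-entry bookkeeping", ...]
# These drafts could then be reviewed and mapped into Achievement entries in OB3 badges.
```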
Second, it helps on the demand side. Employers are already having AIs read résumés. Forget about transcripts. A rich, machine-readable, AI-queryable skills record could lower the amount of effort required enough that employers would extract net value from the LER-RS. As a prospective employer, I could ask fairly detailed and sophisticated questions about a candidate pool and have AI surface interesting answers.
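Here’s a toy version of that demand-side point. The record structure below is made up—it is not the LER-RS schema—but it shows why machine-readable skills records make the first screening pass nearly free, leaving the AI to handle the more sophisticated questions.

```python
# Demand-side sketch: once skills records are machine-readable, simple queries
# get cheap, and an AI can layer richer questions on top. Hypothetical records.
candidates = [
    {"name": "A. Rivera", "skills": {"double-entry bookkeeping", "financial statements"},
     "evidence_urls": ["https://example.org/vc/123"]},
    {"name": "B. Chen", "skills": {"financial statements", "audit sampling"},
     "evidence_urls": ["https://example.org/vc/456"]},
]

required = {"financial statements", "audit sampling"}

# A screening pass that would otherwise mean a human reading free-text résumés.
matches = [c for c in candidates if required <= c["skills"]]
for c in matches:
    print(c["name"], "->", c["evidence_urls"])
# An LLM could then be pointed at the underlying verifiable credentials to answer
# questions like "Which of these candidates has proctored evidence for audit sampling?"
```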
Third, as AI facilitates the evaluation of skill verification assertions, the locus of value in a credential will shift from the issuer to the proof of achievement. A university’s reputation is a proxy for the educational achievement of the student. And it isn’t a great one. While I doubt AIs will be terrific at evaluating a wide range of skill assertions in the near future, they could be good enough to give great students from less prestigious institutions a better chance at getting noticed.
Finally, AI may help to capture emerging skills that have not yet been codified. For example, recently I’ve been vibe coding as a non-programmer. I’ve figured out how to vibe code Model Context Protocol (MCP) servers in TypeScript and Python, use progressive disclosure patterns to reduce AI token usage while increasing accuracy and security, and build a compositor that enables me to orchestrate these workflows using microservices. Some of these skills didn’t exist six months ago. And even if “my” code is good, it wouldn’t tell the story. How did I engineer Claude Code’s context to get it to think like a developer? Did I get it to follow practices that would check its code quality in ways that I can’t, like test-driven development? How did the idea of a “compositor” come about, and how did I make sure it wasn’t over-engineered AI slop? If I did? An AI that understands digital credentials standards could identify, express, and capture evidence for emerging competencies as part of the exhaust stream of my work. And another AI could read that evidence. To be clear, nobody would have any reason to believe that I have any of these skills based on my formal work experience. To bastardize a saying, the proof of the pudding is in the reading.
When I was hiring Agile Product Owners at Cengage, we used to take the top candidates and run them through simulated product situations to see how they would handle them. We deliberately created complications. Yes, we were looking for craft. But we were also looking for patterns of behavior that are related to how an individual applies a given competency. It’s about how they think as much as what they think (e.g., how they think about the purpose and applications of user stories or retrospectives). What do they bring to the table that’s unusual or unique? It was a time-consuming process involving multiple staff members, but it helped me identify the best performers in a way that no documentation I could have requested at the time would have revealed. If I had access to examples of their real work product, structured in a way that I could interrogate using an AI, I’m not sure if I’d need to run those simulations.
Still learning
I don’t pretend to be an expert in digital credentials. Far from it. It’s caught my attention in a way I didn’t expect, though. And I think it’s at just the right level of messiness and foment to be a space where we can make some new and significant progress as a sector.