Blog posts

2697 bookmarks
Is AI closing the door on entry-level job opportunities?
AI will result in both job losses and opportunities, changing the career ladder that used to provide entry-level opportunities and a mostly linear path upward. But eliminating entry-level roles will dramatically affect the long-term talent pipeline.
·weforum.org·
The Perils of Using AI to Replace Entry-Level Jobs | Harvard Business Impact Education
One theme I see coming up repeatedly in L&D conversations is that using AI requires expertise to evaluate AI-generated results. That's fine right now, while we still have experienced people who built their skills pre-AI. But what about entry-level workers now and in the future? If those entry-level jobs are eliminated, how will the next generation learn those skills and judgment? This article has some ideas about restructuring work. It also points to some areas where L&D could help support people whose jobs are redefined and restructured.
Imagine recruiting managers who have never worked at the front lines, never handled customer complaints, never written up notes from consequential meetings, never grappled with the minutiae of operational work. Leadership would become abstract, detached, and dangerously naive.
Junior roles must no longer be defined by the repetitive, automatable tasks that AI can do better and faster. Instead, they should be designed to expose people to the why behind the work.
AI is only useful when paired with critical thinking. Productivity gains are meaningless if they come at the expense of professional judgment.
The default use of AI is substitution: Let the machine do the work and cut headcount. A smarter approach is to redesign workflows, so AI handles rote execution while humans focus on framing the problems, asking better questions, and building relationships.
Consider the analogy of education: If a student outsources every essay to generative AI, they bypass the intellectual struggle that produces deep learning. It is like microwaving ideas: fast, convenient, and unsatisfying. The effort, even the pain, of thinking for yourself is what builds a student’s capacity.
·hbsp.harvard.edu·
Three Steps for L&D Professionals to Legally Use AI Images and Video
Debbie Richards shares tips for using AI image and video legally. Personally, I would add some other image tools to that list of professional options (like Flora and Freepik), but she's correct that Firefly is the safest for images based on training data. Free tools are fine to experiment with, but don't use them for commercial projects.
·linkedin.com·
I Rebuilt a 10-Year-Old Simulation with AI in Half a Day. Here’s What I Learned.
Trina Rimmer shares her experiences rebuilding an old Rise scenario activity as a new experience with Claude. You can see the projects side-by-side to compare. Trina's reflections add a lot of value here; you can see how the AI tools enabled her to do something new, but that was only possible because of her existing instructional design skills. If she hadn't had that expertise, she couldn't have gotten as successful a result with AI.
And then there was Claude’s desire to help. It was relentless. I pushed back constantly on anything that broke the “fourth wall,” anything that handed the learner an insight before they’d reached for it themselves, and anything that sounded like a training exercise instead of a real situation. Productive struggle is where learning happens.
AI is a design collaborator, not a design replacement. Every meaningful improvement in the rebuilt project came from my instructional design judgment, not from Claude’s defaults. The dialogue, the model interaction structure—me insisting that learners identify what worked before being told—and the debrief—me pushing back on Claude’s first version until it asked more than it told—that was all me. Claude made those things possible faster, but without the ID judgment driving the prompts, the end result would have been slicker but shallower.
·trinarimmer.substack.com·
Simple Ways to Create Consistent Characters in ChatGPT
Tom Kuhlmann shares his workflow for generating consistent character images using ChatGPT using a GPT or a project with saved instructions. Even if you don't use ChatGPT for image generation, the descriptions of image styles in the download are useful for working with any tool.
·share.articulate.com·
The AI Image Generation System for Learning Designers
Despite the article title, this system doesn't actually work with all AI image tools. It won't work with Midjourney, Recraft, Brushless, etc. But it will work with any of the LLM-based image tools like Nano Banana, ChatGPT, and Copilot, since those all work in similar ways. That means it covers what most people have access to.
The fix is a 3-step process which gives you superpowers in AI image generation:
1. Write a visual brief — answer six questions that close the creative and pedagogical gaps before you generate a single image.
2. Build a mood board — gather images that capture the lighting, energy, and environment of your learner’s world. Select the 3 that look like they were shot by the same photographer on the same day and upload them individually as style references.
3. Create character anchors — your style references fix the visual world; your character references fix the people inside it. For each named character, generate a head-and-shoulders image on a neutral background, facing forward. This is your master reference. Attach it alongside your style references every time you generate a scene featuring that character — and the tool stops making casting decisions on your behalf.
·drphilippahardman.substack.com·
Character-Driven Learning Experiences
Great insights here from Teresa Moreno about using characters effectively in elearning. I especially appreciate the learning science angle here focusing on improving self-efficacy through "coping models."
A mastery model demonstrates perfect performance from the start. They know exactly what to do, execute flawlessly, and model ideal behavior. This is what most learning designers and SMEs default to: show the “right” way immediately.
A coping model struggles, verbalizes their thinking process, makes mistakes, and demonstrates the process of overcoming challenges. They eventually succeed, but only after working through realistic difficulties.
Characters in learning design don’t need movie-like backstories. They need a recognizable problem intrinsically connected to what people need to learn. The swap question is the real test: could this character or story be replaced without changing what’s learned? If yes, it’s probably decoration.
·learningdesignerin.com·
Mascots vs. Authentic Characters in eLearning | Teresa Moreno
I love Teresa Moreno's reflection questions here to help you differentiate between characters that support learning and those that are just decoration or distraction.

"• Is the character's context inseparable from what's being learned (intrinsic integration)?
• Do people see realistic struggle instead of perfect performance (coping vs. mastery models)?
• Is the character enabling active decision-making or just narrating steps?"
·linkedin.com·
AI: You Still Have to Know Stuff – Usable Learning
Julie Dirksen articulates what many of us have experienced. Yes, AI can be useful...but we still have to have enough expertise to know what's good about the AI output. You have to know what to discard or revise. Even as AI gets more accurate, you need to know what quality results look like. Plus, what happens when people come into jobs and don't have that prior experience that helps them evaluate AI output?
I need to use my expertise to craft a prompt that will get the most accurate result, while still recognizing the parts that need revision.
·usablelearning.com·
Meet Thaura | Your Ethical AI Companion
Thaura AI is designed as an ethical LLM. It doesn't train models on your private data, is transparent about its business model, and advertises that it uses 94% less energy than ChatGPT.
·thaura.ai·
Something Big Is Happening
I'm not entirely convinced by this article, but I try to read opinions from a variety of sources and perspectives on AI. This author explains that the pace of AI is moving much faster than most people realize, and then argues that the amount of disruption AI will cause will be more significant and quicker than most people expect.
·shumer.dev·
A Deep Dive into Desirable Difficulties
I've seen some misunderstandings on desirable difficulties on social media recently. This article has an understandable explanation of what desirable difficulties are (techniques that may initially cause errors and short-term performance issues but in the long run improve learning and task performance). The techniques include varied practice, spacing, reduced feedback and guidance, retrieval, and interleaving. If you're new to the idea of desirable difficulties, this will give you a solid foundation.
Difficulties are desirable when they boost learning, not performance.
For example, when learning to drive, it would be easier to practice by driving round the same block multiple times, with an instructor sitting beside you and telling you exactly what to do. As a learner under such conditions, you’d make very few errors, if any.
However, once our lessons are over, we have to drive without an instructor telling us what to do, on complex and sometimes unfamiliar roads. The desirable difficulties framework would suggest, therefore, that practice should resemble that realistic situation, with a variety of road conditions to deal with, and reduced guidance or feedback.
·firth.substack.com·
Event Segmentation Theory: Why Some Training Feels Clear and Some Feels Like One Continuous Mistake
This article includes some great research translation by Tom McDowall about Event Segmentation Theory. We talk about "chunking" content to support learning, but we often rely on time or intuition to determine where to break up content. Event Segmentation Theory provides an evidence-informed approach to more meaningful divisions so you can improve the effectiveness of your training based just on where you add breaks. Tom includes lots of citations for further reading.
Your brain doesn’t process the world as one unbroken stream. It automatically divides ongoing experience into discrete chunks, which researchers call “events,” and does so continuously, without you deciding to do so or being aware that it’s happening.
Information present at a boundary, the moment when one event ends and another begins, gets encoded more strongly than information in the middle of an event. The boundary acts like an attentional gate: it opens briefly to let new information in, and that information gets a better foothold in long-term memory as a result (Kurby and Zacks, 2008).
There’s a trade-off, though. While boundaries improve memory for what happens at the transition point, they impair memory for temporal order across the boundary. Items that span a boundary are harder to sequence correctly and are remembered as being further apart in time than they were (Ezzyat and Davachi, 2014).
Six features of a situation reliably trigger event boundaries: spatial or location changes, character entrances or exits, new object interactions, goal shifts, changes in causal structure, and temporal discontinuities (Speer, Zacks, and Reynolds, 2007). In practical terms, the most reliable triggers for workplace training are changes in what you’re trying to achieve (the goal), changes in where you are or what you’re looking at (the environment), and changes in why the current action matters (the causal structure).
The most direct way to apply EST is to structure process training and standard operating procedures around the natural event structure of the task. Rather than organising steps by convenience or by how they appear in a system, map them to the hierarchical structure of the activity: major phases first (the coarse events), then detailed steps within each phase (the fine events).
·idtips.substack.com·
"Steal My Wins" | Kimberly Scott
Kim Scott has been sharing lots of details on her job search and the strategies that are working for her. As a consultant, I've been out of the job market for a long time, so it's helpful to have folks like Kim to point people to when they're looking for work in this lousy job market.
·linkedin.com·
Don’t Panic: When Should AI Coaches and Assistants Request Human Intervention? – Parrotbox
Thinking about a "human in the loop" is a good start, but what does that really look like if you're using chatbots or AI coaches at scale? I really like the example of levels of risk for safety in mental health and when chats should be reported or escalated to a human for intervention.
·parrotbox.ai·
Grasp
AI tool to create a personalized collection of resources and a sequence to help you learn a skill based on a goal you identify. h/t Nejc
·paths.grasp.study·
Agentic Image Generation
This is an interesting example of using agentic AI to generate an infographic based on a blog post. This uses Claude Code to connect Nano Banana Pro for image generation with an additional tool for providing AI feedback on the image. The workflow iterates and improves the initial image based on the AI-generated feedback, without human intervention.
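The generate-critique-revise loop described here can be sketched in a few lines. This is only an illustration of the shape of the workflow, not the actual course code: `generate_image` and `critique` are hypothetical stand-ins for calls to an image model and a vision model that scores the result.

```python
# Sketch of an agentic image-generation loop: generate, critique, revise.
# generate_image() and critique() are hypothetical placeholders for real
# model calls (e.g. an image generator and a vision model giving feedback).

def generate_image(prompt: str) -> str:
    # A real implementation would call an image model and return the image.
    return f"image rendered from: {prompt}"

def critique(image: str) -> tuple[int, str]:
    # A real critic would score the image and suggest concrete fixes; this
    # stub always asks for the same revision to show the loop's shape.
    return 6, "increase contrast and simplify the layout"

def refine_image(prompt: str, max_rounds: int = 3, target_score: int = 8) -> str:
    image = generate_image(prompt)
    for _ in range(max_rounds):
        score, feedback = critique(image)
        if score >= target_score:
            break                                    # good enough, stop iterating
        prompt = f"{prompt}; revision: {feedback}"   # fold feedback into the prompt
        image = generate_image(prompt)
    return image

result = refine_image("infographic summarizing a blog post")
```

The key design point is that the feedback loop runs without a human: the critic's suggestions are folded back into the prompt until the score clears a threshold or the round budget runs out.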
·academy.dair.ai·
AI companies will fail. We can salvage something from the wreckage | AI (artificial intelligence) | The Guardian
Cory Doctorow has written a long article on the risks and problems of AI, particularly in the way that AI companies promote and hype the benefits of AI.
In automation theory, a “centaur” is a person who is assisted by a machine. Driving a car makes you a centaur, and so does using autocomplete. A reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop’. It’s their signature on the diagnosis.” This is a reverse centaur, and it is a specific kind of reverse centaur: it is what Dan Davies calls an “accountability sink”. The radiologist’s job is not really to oversee the AI’s work, it is to take the blame for the AI’s mistakes.
This is another key to understanding – and thus deflating – the AI bubble. The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job.
For AI to be valuable, it has to replace high-wage workers, and those are precisely the workers who might spot some of those statistically camouflaged AI errors.
After more than 20 years of being consistently wrong and terrible for artists’ rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans.
The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them or give them away for nothing. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission.
·theguardian.com·
Memory Rewritten: Study Finds No Clear Line Between Episodic and Semantic Retrieval - Neuroscience News
Previous research had pointed to the idea that episodic and semantic memory were different types of memory that used different parts of the brain. This study contradicts that earlier theory, finding that the whole brain is involved in memory.
there is no difference in neural activity between successful semantic and episodic retrieval.
Episodic memory refers to the ability to remember a past event that occurred in a particular spatial and temporal context. This type of memory supports the human capacity to re-experience events from our past, as a form of “mental time travel”. Semantic memory, on the other hand, refers to the ability to remember facts and general knowledge about the world that are retrieved independently from their original spatial or temporal context.
·neurosciencenews.com·
The Hidden Mirror: Why Your AI is Only as Good as Your Thinking
Debbie Richards writes about critical issues related to working with AI: our own human cognitive biases. AI can reflect and amplify our own mental shortcuts. Being aware of our cognitive biases can make us more effective at working with AI.

"The researchers identify three critical stages where our own thinking can steer AI off course:

Before Prompting: Our past experiences create a "halo" or "horns" effect. If you’ve had great results, you might over-trust the tool for tasks it isn't ready for. Conversely, if you've been spooked by headlines about hallucinations, you might avoid it even when it could be genuinely helpful.
During Prompting: How we frame a question matters. "Leading question bias" happens when we bake the answer into the prompt, like asking "Why is product X the best?" This encourages the AI to ignore weaknesses. There is also "expediency bias," where we settle for the first "good enough" answer because we’re under time pressure.
After Prompting: Once we have an output, the "endowment effect" can make us overvalue it simply because of the effort we put into the prompt. We also have to watch the "framing effect." How we present that AI-driven data can completely change how our audience feels about it."
·linkedin.com·
How to use JSON to build better AI image prompts
JSON is a way of structuring your image prompts, clearly labeling what each detail is. It can help you think through the specifics of your prompt and generate more systematic and repeatable prompts. This prompt structure doesn't work with all tools (including Midjourney), but it's something to consider for more advanced prompting in other tools.
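As a sketch of the idea (these field names are my own invention, not taken from the article), a JSON-structured image prompt might look like:

```json
{
  "subject": "a nurse reviewing a patient chart",
  "setting": "busy hospital ward, daytime",
  "style": "natural documentary photography",
  "lighting": "soft window light from the left",
  "camera": "35mm, eye level, shallow depth of field",
  "mood": "focused and calm"
}
```

Labeling each detail this way makes it easy to change one field (say, the setting) while holding everything else constant across a series of images.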
·teabot.ai·
Fixing Plastic AI Skin: The Complete Guide to Realistic Prompts - Rezience | Andy H. Tu
While I don't think that prompting alone will fix all problems with waxy skin texture in AI images, better prompting can improve your results. The specific phrases and tips here should work in any image generation tool (but watch out for the negative prompts; sometimes those confuse the models).
·andyhtu.com·
Do AI Voices and Avatars Improve Learning? Here’s What the Data Says
TechSmith conducted a global study to determine how AI voices and avatars affect learning. I was surprised at how well the high-quality AI voices performed. We seem to have crossed the threshold where high-quality AI voices perform comparably to human voice actors. I was also surprised at how well the AI avatars did, although their recommendations for specific use cases do make some sense. I wish they'd also done a separate control with no narrator visible on screen (AI or human). The fact that AI avatars can be comparable to humans in some instances isn't that shocking, I guess, but I really want to see how it compares to just having the slide content and no face on screen.
What really makes learners pay attention? A voice that sounds clear, warm, and polished — not whether it’s human or AI. As voice quality improved in the study, so did professionalism ratings. In fact, 92% of viewers said the high-quality AI voice made the video feel professionally produced.
Results from the “pop quiz” portion of our study make the pattern clear: correct answers increased as voice quality improved. In fact, the high-quality AI voice produced the strongest retention numbers, aside from one low-quality human outlier.
But are AI voices distracting overall? It depends. Low-quality, synthetic voices are unmistakable and draw attention away from the content. When the AI voice sounds natural, many viewers can’t distinguish it from a human voice. The difference is less jarring, and information retention holds steady or even improves.
AI avatars aren’t distracting by default, but size matters. When an avatar fills the screen, viewers are more likely to notice robotic traits like lip sync issues, eye contact, limited facial movement, awkward blinking, or unnatural breathing.
The right format depends on your video’s purpose. Use this quick decision guide:
Screen-heavy, procedural, and frequently updated content: High-quality AI voice with screen recording, plus an optional AI avatar in PiP.
Emotionally sensitive, culture-setting, or leadership-driven content: Human presenter with a human voice.
Long-form, concept-heavy learning: A mix — human-led modules for core ideas, supported by AI-voiced micro-lessons and refreshers.
·techsmith.com·
How to fix your LinkedIn feed in one hour
If you find scrolling on LinkedIn terribly annoying, you may not have trained its algorithm well. Follow these tips to improve the quality of your LinkedIn feed.

"You manage your feed by giving AI the signal.

Signal for what you want. Signal for what you do not want. Then you reinforce it until the algorithm adjusts to your taste. That is it. Not complicated. But most people never do it."

·linkedin.com·