Blog posts

2703 bookmarks
7 Ways to Automate Repetitive Design Tasks with Affinity and Claude
I use Affinity as my primary tool for editing images. Affinity can now connect with Claude to automate repetitive tasks like renaming layers and prepping files. It looks like a great way to speed up boring tasks so you have more time for the fun work. The integration is currently in beta and free, but will probably become a paid feature later. Still, if it saves time, it may be worth a paid upgrade.
·affinity.studio·
How Much Water Does AI Use? An Expert Analysis of the Real Footprint.
The water use of AI data centers isn't as big a problem as it's often made out to be. Energy use is a separate question, but genuinely, don't let the water use keep you up at night.
For 11 weeks, I tracked all of my AI use. One hundred sessions. I counted the tokens processed and applied publicly available numbers on per-token energy and water intensity from Epoch AI and operator-reported data from Microsoft and Google. Anyone can run this math. In those 11 weeks, I built an iOS app from scratch and wrote policy briefs on extreme heat for nonprofits I work with. I produced documentary pitch decks and drafted a 15,000-word climate fiction piece about the Colorado River collapse. I used AI every single day, often for hours at a time. Total lifecycle water footprint of all that work: about five gallons. That accounts for everything: the water used to cool the data centers, the water consumed at power plants to generate the electricity, and the water embedded in manufacturing the hardware.

When an Outside editor reached out to ask me to write this story, I was on a trip to Marble Canyon, Arizona, to train raft guide companies on what is happening with the river. I drove my diesel Sprinter van from Tucson to the site, which tallied 383 miles at 20 miles per gallon of diesel. When I ran the numbers later, the lifecycle water footprint of my fuel was around 110 gallons.

One drive to the work I do on the Colorado River used more than 20 times the water of everything I did with AI in 11 weeks. That comparison stopped me cold—and I study this for a living.
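The comparison in that excerpt is easy to check yourself. This sketch reproduces the arithmetic from the quoted totals; the implied water-per-gallon-of-fuel intensity is derived from the article's own numbers, not an independently sourced figure:

```python
# Back-of-the-envelope check of the quoted water-footprint comparison.
miles_driven = 383
miles_per_gallon = 20
fuel_used = miles_driven / miles_per_gallon          # gallons of diesel for the trip

trip_water_footprint = 110   # gallons (lifecycle footprint of the fuel, per the article)
ai_water_footprint = 5       # gallons (11 weeks of AI use, per the article)

# Water intensity implied by the article's totals (gal water per gal fuel)
implied_water_intensity = trip_water_footprint / fuel_used
# How many times more water the drive used than 11 weeks of AI
ratio = trip_water_footprint / ai_water_footprint

print(f"Fuel used: {fuel_used:.2f} gal")
print(f"Implied lifecycle water intensity: {implied_water_intensity:.1f} gal/gal")
print(f"One drive used {ratio:.0f}x the water of 11 weeks of AI use")
```

The ratio comes out to 22x, consistent with the article's "more than 20 times" claim.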
·archive.ph·
Style of language Formal Versus Conversational
This guide provides a summary of the personalization principle with a focus on writing or speaking style. A polite, conversational style is generally more effective for learning (with some exceptions noted). I appreciate that the guide includes examples so you can compare the difference between formal and conversational styles.
·olmm2.trubox.ca·
Vois - Professional AI Voice Studio
While Vois doesn't have as many voices as some other platforms, it has several other advantages. It runs locally on your machine, so there's no risk of content being used to train AI. You can tag your script for multiple speakers, making it easier to manage dialogue. You can also buy just the credits you need rather than paying a monthly or annual fee, and you only use credits when you publish (not for each iteration and typo fix).
·vois.so·
Seven Prompts No AI Image Generator Can Get Right
Really interesting research on the limits of AI image generation. Hands and text are both much better than a year ago, but multi-line text (especially with numbers) fails because text isn't generated sequentially. AI image models approximate rather than count, and all models fail with prime numbers. Reflections are also approximate; there's no geometry behind them.
·linkedin.com·
Is AI closing the door on entry-level job opportunities?
AI will result in both job losses and opportunities, changing the career ladder that used to provide entry-level opportunities and a mostly linear path upward. But eliminating entry-level roles will dramatically affect the long-term talent pipeline.
·weforum.org·
The Perils of Using AI to Replace Entry-Level Jobs | Harvard Business Impact Education
One theme I see coming up repeatedly in L&D conversations is that using AI requires expertise to evaluate AI-generated results. That works right now, while we have experienced people who built their skills pre-AI. But what about entry-level workers now and in the future? If those entry-level jobs are eliminated, how will the next generation learn those skills and that judgment? This article has some ideas about restructuring work. It also points to some areas where L&D could help support people whose jobs are redefined and restructured.
Imagine recruiting managers who have never worked at the front lines, never handled customer complaints, never written up notes from consequential meetings, never grappled with the minutiae of operational work. Leadership would become abstract, detached, and dangerously naive.
Junior roles must no longer be defined by the repetitive, automatable tasks that AI can do better and faster. Instead, they should be designed to expose people to the why behind the work.
AI is only useful when paired with critical thinking. Productivity gains are meaningless if they come at the expense of professional judgment.
The default use of AI is substitution: Let the machine do the work and cut headcount. A smarter approach is to redesign workflows, so AI handles rote execution while humans focus on framing the problems, asking better questions, and building relationships.
Consider the analogy of education: If a student outsources every essay to generative AI, they bypass the intellectual struggle that produces deep learning. It is like microwaving ideas: fast, convenient, and unsatisfying. The effort, even the pain, of thinking for yourself is what builds a student’s capacity.
·hbsp.harvard.edu·
Three Steps for L&D Professionals to Legally Use AI Images and Video
Debbie Richards shares tips for using AI images and video legally. Personally, I would add some other image tools to her list of professional options (like Flora and Freepik), but she's correct that Firefly is the safest for images based on its training data. Free tools are fine to experiment with, but don't use them for commercial projects.
·linkedin.com·
I Rebuilt a 10-Year-Old Simulation with AI in Half a Day. Here’s What I Learned.
Trina Rimmer shares her experience rebuilding an old Rise scenario activity as a new experience with Claude. You can see the projects side by side to compare. Trina's reflections add a lot of value here; you can see how the AI tools enabled her to do something new, but that was only possible because of her existing instructional design skills. If she hadn't had that expertise, she couldn't have gotten as successful a result with AI.
And then there was Claude’s desire to help. It was relentless. I pushed back constantly on anything that broke the “fourth wall,” anything that handed the learner an insight before they’d reached for it themselves, and anything that sounded like a training exercise instead of a real situation. Productive struggle is where learning happens.
AI is a design collaborator, not a design replacement. Every meaningful improvement in the rebuilt project came from my instructional design judgment, not from Claude’s defaults. The dialogue, the model interaction structure—me insisting that learners identify what worked before being told—and the debrief—me pushing back on Claude’s first version until it asked more than it told—that was all me. Claude made those things possible faster, but without the ID judgment driving the prompts, the end result would have been slicker but shallower.
·trinarimmer.substack.com·
Simple Ways to Create Consistent Characters in ChatGPT
Tom Kuhlmann shares his workflow for generating consistent character images in ChatGPT, using a GPT or a project with saved instructions. Even if you don't use ChatGPT for image generation, the descriptions of image styles in the download are useful for working with any tool.
·share.articulate.com·
The AI Image Generation System for Learning Designers
Despite the article title, this system doesn't actually work with all AI image tools. It won't work with Midjourney, Recraft, Brushless, etc. But it will work with any of the LLM-based image tools like Nano Banana, ChatGPT, and Copilot, since those all work in similar ways. That means it covers what most people have access to.
The fix is a 3-step process which gives you superpowers in AI image generation:

1. Write a visual brief — answer six questions that close the creative and pedagogical gaps before you generate a single image.
2. Build a mood board — gather images that capture the lighting, energy, and environment of your learner’s world. Select the 3 that look like they were shot by the same photographer on the same day and upload them individually as style references.
3. Create character anchors — your style references fix the visual world; your character references fix the people inside it. For each named character, generate a head-and-shoulders image on a neutral background, facing forward. This is your master reference. Attach it alongside your style references every time you generate a scene featuring that character — and the tool stops making casting decisions on your behalf.
·drphilippahardman.substack.com·
Character-Driven Learning Experiences
Great insights here from Teresa Moreno about using characters effectively in elearning. I especially appreciate the learning science angle here focusing on improving self-efficacy through "coping models."
A mastery model demonstrates perfect performance from the start. They know exactly what to do, execute flawlessly, and model ideal behavior. This is what most learning designers and SMEs default to: show the “right” way immediately.

A coping model struggles, verbalizes their thinking process, makes mistakes, and demonstrates the process of overcoming challenges. They eventually succeed, but only after working through realistic difficulties.
Characters in learning design don’t need movie-like backstories. They need a recognizable problem intrinsically connected to what people need to learn. The swap question is the real test: could this character or story be replaced without changing what’s learned? If yes, it’s probably decoration.
·learningdesignerin.com·
Mascots vs. Authentic Characters in eLearning | Teresa Moreno

I love Teresa Moreno's reflection questions here to help you differentiate between characters that support learning and those that are just decoration or distraction.

"- Is the character's context inseparable from what's being learned (intrinsic integration)?
- Do people see realistic struggle instead of perfect performance (coping vs. mastery models)?
- Is the character enabling active decision-making or just narrating steps?"
·linkedin.com·
AI: You Still Have to Know Stuff – Usable Learning
Julie Dirksen articulates what many of us have experienced. Yes, AI can be useful...but we still have to have enough expertise to know what's good about the AI output. You have to know what to discard or revise. Even as AI gets more accurate, you need to know what quality results look like. Plus, what happens when people come into jobs and don't have that prior experience that helps them evaluate AI output?
I need to use my expertise to craft a prompt that will get the most accurate result, while still recognizing the parts that need revision.
·usablelearning.com·
Meet Thaura | Your Ethical AI Companion
Thaura AI is designed as an ethical LLM. It doesn't train models on your private data, is transparent about its business model, and advertises that it uses 94% less energy than ChatGPT.
·thaura.ai·
Something Big Is Happening
I'm not entirely convinced by this article, but I try to read opinions on AI from a variety of sources and perspectives. The author explains that AI is moving much faster than most people realize, and then argues that the disruption AI causes will be more significant and arrive more quickly than most people expect.
·shumer.dev·
A Deep Dive into Desirable Difficulties
I've seen some misunderstandings about desirable difficulties on social media recently. This article has an understandable explanation of what desirable difficulties are: techniques that may initially cause errors and hurt short-term performance but improve learning and task performance in the long run. The techniques include varied practice, spacing, reduced feedback and guidance, retrieval, and interleaving. If you're new to the idea of desirable difficulties, this will give you a solid foundation.
Difficulties are desirable when they boost learning, not performance.
For example, when learning to drive, it would be easier to practice by driving round the same block multiple times, with an instructor sitting beside you and telling you exactly what to do. As a learner under such conditions, you’d make very few errors, if any.
However, once our lessons are over, we have to drive without an instructor telling us what to do, on complex and sometimes unfamiliar roads. The desirable difficulties framework would suggest, therefore, that practice should resemble that realistic situation, with a variety of road conditions to deal with, and reduced guidance or feedback.
·firth.substack.com·
Event Segmentation Theory: Why Some Training Feels Clear and Some Feels Like One Continuous Mistake
This article includes some great research translation by Tom McDowall about Event Segmentation Theory. We talk about "chunking" content to support learning, but we often rely on time or intuition to decide where to break up content. Event Segmentation Theory provides an evidence-informed approach to more meaningful divisions, so you can improve the effectiveness of your training based just on where you add breaks. Tom includes lots of citations for further reading.
Your brain doesn’t process the world as one unbroken stream. It automatically divides ongoing experience into discrete chunks, which researchers call “events,” and does so continuously, without you deciding to do so or being aware that it’s happening.
Information present at a boundary, the moment when one event ends and another begins, gets encoded more strongly than information in the middle of an event. The boundary acts like an attentional gate: it opens briefly to let new information in, and that information gets a better foothold in long-term memory as a result (Kurby and Zacks, 2008).
There’s a trade-off, though. While boundaries improve memory for what happens at the transition point, they impair memory for temporal order across the boundary. Items that span a boundary are harder to sequence correctly and are remembered as being further apart in time than they were (Ezzyat and Davachi, 2014).
Six features of a situation reliably trigger event boundaries: spatial or location changes, character entrances or exits, new object interactions, goal shifts, changes in causal structure, and temporal discontinuities (Speer, Zacks, and Reynolds, 2007). In practical terms, the most reliable triggers for workplace training are changes in what you’re trying to achieve (the goal), changes in where you are or what you’re looking at (the environment), and changes in why the current action matters (the causal structure).
The most direct way to apply EST is to structure process training and standard operating procedures around the natural event structure of the task. Rather than organising steps by convenience or by how they appear in a system, map them to the hierarchical structure of the activity: major phases first (the coarse events), then detailed steps within each phase (the fine events).
·idtips.substack.com·
"Steal My Wins" | Kimberly Scott
Kim Scott has been sharing lots of details on her job search and the strategies that are working for her. As a consultant, I've been out of the job market for a long time, so it's helpful to have folks like Kim to point others to when they're looking for work in this lousy job market.
·linkedin.com·
Don’t Panic: When Should AI Coaches and Assistants Request Human Intervention? – Parrotbox
Thinking about a "human in the loop" is a good start, but what does that really look like if you're using chatbots or AI coaches at scale? I really like the example of risk levels for safety in mental health and when chats should be reported or escalated to a human for intervention.
·parrotbox.ai·
Grasp
AI tool to create a personalized collection of resources and a sequence to help you learn a skill based on a goal you identify. h/t Nejc
·paths.grasp.study·
Agentic Image Generation
This is an interesting example of using agentic AI to generate an infographic based on a blog post. This uses Claude Code to connect Nano Banana Pro for image generation with an additional tool for providing AI feedback on the image. The workflow iterates and improves the initial image based on the AI-generated feedback, without human intervention.
·academy.dair.ai·
AI companies will fail. We can salvage something from the wreckage | AI (artificial intelligence) | The Guardian
Cory Doctorow has written a long article on the risks and problems of AI, particularly in the way that AI companies promote and hype the benefits of AI.
In automation theory, a “centaur” is a person who is assisted by a machine. Driving a car makes you a centaur, and so does using autocomplete. A reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop’. It’s their signature on the diagnosis.” This is a reverse centaur, and it is a specific kind of reverse centaur: it is what Dan Davies calls an “accountability sink”. The radiologist’s job is not really to oversee the AI’s work, it is to take the blame for the AI’s mistakes.
This is another key to understanding – and thus deflating – the AI bubble. The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job.
For AI to be valuable, it has to replace high-wage workers, and those are precisely the workers who might spot some of those statistically camouflaged AI errors.
After more than 20 years of being consistently wrong and terrible for artists’ rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans.
The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them or give them away for nothing. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission.
·theguardian.com·
Memory Rewritten: Study Finds No Clear Line Between Episodic and Semantic Retrieval - Neuroscience News
Previous research had pointed to the idea that episodic and semantic memory were different types of memory that used different parts of the brain. This study contradicts that earlier theory, finding that the whole brain is involved in memory.
there is no difference in neural activity between successful semantic and episodic retrieval.
Episodic memory refers to the ability to remember a past event that occurred in a particular spatial and temporal context. This type of memory supports the human capacity to re-experience events from our past, as a form of “mental time travel”. Semantic memory, on the other hand, refers to the ability to remember facts and general knowledge about the world that are retrieved independently from their original spatial or temporal context.
·neurosciencenews.com·
The Hidden Mirror: Why Your AI is Only as Good as Your Thinking

Debbie Richards writes about critical issues related to working with AI: our own human cognitive biases. AI can reflect and amplify our own mental shortcuts. Being aware of our cognitive biases can make us more effective at working with AI.

"The researchers identify three critical stages where our own thinking can steer AI off course:

Before Prompting: Our past experiences create a "halo" or "horns" effect. If you’ve had great results, you might over-trust the tool for tasks it isn't ready for. Conversely, if you've been spooked by headlines about hallucinations, you might avoid it even when it could be genuinely helpful.

During Prompting: How we frame a question matters. "Leading question bias" happens when we bake the answer into the prompt, like asking "Why is product X the best?" This encourages the AI to ignore weaknesses. There is also "expediency bias," where we settle for the first "good enough" answer because we’re under time pressure.

After Prompting: Once we have an output, the "endowment effect" can make us overvalue it simply because of the effort we put into the prompt. We also have to watch the "framing effect." How we present that AI-driven data can completely change how our audience feels about it."
·linkedin.com·