When people hear “AI in education,” they often picture either a miracle cure for all learning challenges or a dystopian teacher-chatbot that drowns students in walls of incorrect text. After building AI tools for anatomy education at Enatom, I’ve learned that neither picture is right, and that it is possible to build AI tools that genuinely help students and teachers.
The actual problem being solved
Anatomy education has a cognitive load problem. There’s an enormous amount of detail to learn, much of it spatial and relational. Students get overwhelmed trying to navigate textbooks, 3D atlases, clinical resources, and study materials.
One thing is for sure: you don’t learn the brachial plexus by reading about it. You learn it by looking at it from different angles, tracing nerve paths through 3D space, and building a spatial mental model. For this reason, if AI tools are to have any chance of being useful in anatomy education, they have to break out of the text-only chatbot format and actually interact with the 3D content students need to see.
What helps isn’t more content or better explanations. It’s intelligent systems that reduce the friction of finding what you need. If you’re studying the carpal tunnel and need to see how the median nerve passes through, you shouldn’t have to dig through menus or search multiple resources. The system should understand what you’re asking for and take you there.
AI that interacts with your content
The interesting development to look out for in this fast-moving landscape isn’t better language models but AI systems that can actually interact with your content. We’ve built an agent at Enatom that orchestrates multiple retrieval systems and tools. It can pull up specific 3D models, navigate through the anatomy viewer, highlight structures, and assemble custom lessons from existing verified content.
When a student is confused about the rotator cuff, the system doesn’t just generate an explanation: it can pull up the relevant 3D shoulder anatomy, highlight each muscle, help the student navigate between related structures, and even generate a focused lesson or quiz on those muscles. This way manipulating, memorizing, and testing all come together in one learning experience.
The language model is just the interface layer. The actual work happens through tool calls that manipulate the 3D content and learning materials. The agent needs to maintain conversation memory, decide which tools to use when, and coordinate between different data sources.
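The tool-call pattern described above can be sketched in a few lines. This is a minimal illustration, not Enatom’s actual code: the tool names, their signatures, and the dispatch loop are all invented for the example.

```python
# Minimal sketch of a tool-calling agent loop. The language model decides
# WHICH tools to call; the real work happens in the tool functions.
# All names here are illustrative, not Enatom's actual API.

def load_model(region: str) -> str:
    """Load the 3D model for an anatomical region into the viewer."""
    return f"loaded 3D model for {region}"

def highlight_structure(name: str) -> str:
    """Highlight a specific structure in the currently loaded model."""
    return f"highlighted {name}"

# Registry mapping tool names (as the model emits them) to functions.
TOOLS = {"load_model": load_model, "highlight_structure": highlight_structure}

def run_tool_calls(tool_calls):
    """Execute the tool calls emitted by the model and collect the
    results, which would be fed back into the next model turn."""
    results = []
    for name, args in tool_calls:
        results.append(TOOLS[name](**args))
    return results

# Example: the model has decided to show the shoulder and highlight a muscle.
print(run_tool_calls([
    ("load_model", {"region": "shoulder"}),
    ("highlight_structure", {"name": "supraspinatus"}),
]))
```

The important design point is the registry: the model only ever selects from a fixed menu of verified operations, so it can navigate content but never invent it.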
Searching through 3D anatomy
Making this work requires searching through content that is much more complex than a schematic drawing of the human body would have us believe. When a student asks “where does the median nerve pass through the forearm,” the system needs to work out which structure the student is referring to, find it in a 3D pointcloud model of real human anatomy, and show it to them.
The challenge is mapping what students say to spatial features of real human bodies in 3D data. Anatomical structures vary a lot between individuals, and students and teachers refer to the same structure in many ways: the Terminologia Anatomica (bless international standards), whatever terminology their teacher prefers, their textbook suggests, their lab partner misremembers, or their brain panics into. Making sense of the relationship between messy textual data and even messier 3D data is exactly what AI systems can excel at.
A crucial part of this is that we’re not generating 3D content or hallucinating locations. We’re doing retrieval over real anatomical data. The AI handles the search and identification, but the underlying anatomy is verified and consistent.
Keep it simple
Agents aren’t always the best solution. They’re more expensive to run and less reliable than simpler approaches. If you know exactly what steps need to happen and in what order, you’re probably better off building that process explicitly rather than having an AI figure it out each time.
We use agents when students and teachers ask open-ended questions or need help navigating complex material where we can’t predict all the paths they might take. But for things like generating a quiz on a specific topic or loading a particular 3D model, we just build those as direct functions.
Humans in the loop
Agents will make mistakes. They might pull up the wrong structure, misunderstand what a student is asking for, or create a lesson that doesn’t quite flow right. This is why having humans in the loop is important.
For user-facing features, this means the AI suggests and helps, but students and teachers stay in control. If the system highlights the wrong muscle, they can correct it. If a generated quiz doesn’t make sense, they can modify it or ask for a different one.
For internal processes and content decisions, humans stay central. AI can help organize files, process 3D models, and manage data. But deciding what content to include, which anatomical regions to teach together, what order makes sense for different learning paths, and why students struggle with specific concepts requires human judgment and an understanding of education.
The division of labor we’ve found works: AI handles execution and retrieval, humans handle curation and design. When we design a new lesson, someone with teaching experience decides the structure and flow. Then AI can help assemble the materials, find relevant 3D content, check for completeness, and eventually help students navigate through it. But the underlying pedagogical thinking stays human.
The point of it all
At the end of the day, the goal isn’t to build the smartest, shiniest AI system and parade it around like a party trick. It’s to build tools that make teaching and learning anatomy less overwhelming and more effective. Sometimes that means sophisticated agents that can navigate complex 3D content. Sometimes it just means a good search function. The technology matters less than whether it actually helps students understand what they’re looking at.

