Adaptive Knowledge Graphs for AI/ML Education

December 2025 · 8 min read · Interactive Visualization · BKT · Knowledge Graphs · EdTech

Bayesian Knowledge Tracing, knowledge graph navigation, and interactive explorations — integrated into a single adaptive platform where the learning algorithm is not a black box but a visible companion.

The Question

AI/ML education is oddly fragmented. On one side, static textbooks and MOOCs deliver broad coverage but invite passive consumption. On the other, tools like TensorFlow Playground and the Georgia Tech Polo Club’s explainers offer brilliant interactive visualizations — but for a single concept at a time, with no learning path, no mastery tracking, and no memory of the learner.

A gap sits between these poles. No existing platform combines adaptive mastery tracking with interactive explorations and structured curriculum navigation in one coherent system. The question: can we build a tool where the visualization, the assessment, and the adaptive algorithm work together — and where the algorithm itself is transparent to the learner?

The Approach

The platform centers on a force-directed knowledge graph as the primary navigation interface. Forty-two AI/ML concepts are organized across five tiers, from data preprocessing and linear regression through transformers, diffusion models, and constitutional AI. Prerequisite relationships form a directed acyclic graph: new concepts unlock only when their dependencies are mastered [1].
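The unlock rule can be sketched in a few lines. This is a minimal illustration, not the platform's actual code; the names and the 0.85 threshold (stated later in the post) are assumptions.

```typescript
// A concept unlocks once every prerequisite crosses the mastery threshold.
interface ConceptNode {
  id: string;
  prerequisites: string[]; // edges of the prerequisite DAG
}

const MASTERY_THRESHOLD = 0.85;

function isUnlocked(node: ConceptNode, mastery: Map<string, number>): boolean {
  // Concepts with no prerequisites (roots) are unlocked by default,
  // since every() is vacuously true on an empty array.
  return node.prerequisites.every(
    (p) => (mastery.get(p) ?? 0) >= MASTERY_THRESHOLD
  );
}
```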

Mastery is estimated per-concept using Bayesian Knowledge Tracing [2], a probabilistic model that updates the probability of knowledge after each quiz attempt. The implementation extends standard BKT in three ways: difficulty-conditioned slip and guess parameters, a hint penalty that blends correct and incorrect posteriors, and tier-aware parameter scaling where advanced concepts have lower initial priors and slower learning rates.
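A sketch of the extended update, assuming the standard Corbett–Anderson posterior plus a blended hint penalty. The parameter values and the blend weight are illustrative assumptions, not the platform's tuned numbers.

```typescript
interface BKTParams {
  pInit: number;  // P(L0): prior probability of knowledge
  pLearn: number; // P(T): probability of learning per opportunity
  slip: number;   // P(S): answering wrong despite knowing
  guess: number;  // P(G): answering right without knowing
}

function bktUpdate(
  pKnown: number,
  correct: boolean,
  params: BKTParams,
  hintUsed = false
): number {
  const { slip, guess, pLearn } = params;
  // Posterior P(L | evidence) per standard BKT [2].
  const postCorrect =
    (pKnown * (1 - slip)) / (pKnown * (1 - slip) + (1 - pKnown) * guess);
  const postIncorrect =
    (pKnown * slip) / (pKnown * slip + (1 - pKnown) * (1 - guess));
  let posterior = correct ? postCorrect : postIncorrect;
  // Hint penalty: blend the correct and incorrect posteriors so a hinted
  // correct answer counts for less (the 0.5 weight is an assumption).
  if (hintUsed && correct) {
    posterior = 0.5 * postCorrect + 0.5 * postIncorrect;
  }
  // Learning transition after the attempt.
  return posterior + (1 - posterior) * pLearn;
}
```

Difficulty conditioning and tier-aware scaling would then amount to selecting a different `BKTParams` per question difficulty and per tier before calling the update.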

Most adaptive learning systems treat their algorithms as invisible infrastructure. Here, the learner sees everything: their BKT parameters, their forgetting curve, and the reason each concept was recommended.

This transparency follows the Open Learner Model (OLM) approach [3, 4]. Research consistently shows that exposing the adaptive model to learners promotes metacognitive processes: self-awareness of knowledge gaps, self-monitoring during study, and self-regulation of learning strategies. Low-prior-knowledge learners benefit most — exactly the target audience for an AI/ML education platform.

The Architecture

Five layers compose the system:

Data layer. A flat curriculum map of 42 concept nodes, each with prerequisites, cross-connections, Bloom’s taxonomy level, multi-format quizzes (multiple-choice, fill-in-blank, ordering), code examples, and mathematical notation. The prerequisite graph encodes knowledge space structure [1].

Adaptive engine. Four algorithms operate in concert: BKT mastery tracking [2] with Ebbinghaus-inspired forgetting curves [7], spaced repetition scheduling with mastery-scaled intervals, a BFS-based recommender that traverses the prerequisite graph to surface the learning frontier, and adaptive difficulty selection. A novel knowledge transfer function boosts connected concepts when a node crosses the mastery threshold.
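The BFS frontier traversal can be sketched as follows: walk from root concepts, descend only through mastered nodes, and recommend unmastered nodes whose prerequisites are all mastered. The structure and names here are assumptions about how such a recommender might look, not the platform's implementation.

```typescript
interface Concept {
  id: string;
  prerequisites: string[];
}

function learningFrontier(
  concepts: Concept[],
  mastery: Map<string, number>,
  threshold = 0.85
): string[] {
  const mastered = (id: string) => (mastery.get(id) ?? 0) >= threshold;
  const byId = new Map(concepts.map((c) => [c.id, c]));
  const frontier: string[] = [];
  // Seed the BFS queue with root concepts (no prerequisites).
  const queue = concepts.filter((c) => c.prerequisites.length === 0).map((c) => c.id);
  const seen = new Set(queue);
  while (queue.length > 0) {
    const id = queue.shift()!;
    const node = byId.get(id)!;
    if (!mastered(id)) {
      // Unmastered node with all prerequisites mastered: it is on the
      // frontier. Do not descend further; its dependents are locked.
      if (node.prerequisites.every(mastered)) frontier.push(id);
      continue;
    }
    // Mastered node: enqueue its dependents.
    for (const c of concepts) {
      if (c.prerequisites.includes(id) && !seen.has(c.id)) {
        seen.add(c.id);
        queue.push(c.id);
      }
    }
  }
  return frontier;
}
```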

Interaction layer. Seven interactive explorations render real computations on HTML Canvas: gradient descent with 3D cost surfaces, a neural network builder with live training, an attention mechanism visualizer with multi-head support, a K-means clustering simulation with Voronoi regions, decision boundary comparison across classifiers, data preprocessing pipelines, and a reinforcement learning gridworld with Q-learning.

Learner state. All state persists locally via React Context and localStorage. A typed telemetry system captures nine event types (quiz attempts, concept opens, exploration completions, sessions) for potential research instrumentation. No server, no accounts, no data leaves the browser.
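A typed telemetry log over localStorage might look like the sketch below. The event names, fields, and storage key are illustrative assumptions (the post lists nine event types; four are shown), and the storage interface is injected so the logger is not tied to the browser global.

```typescript
type TelemetryEvent =
  | { kind: "quiz_attempt"; conceptId: string; correct: boolean; ts: number }
  | { kind: "concept_open"; conceptId: string; ts: number }
  | { kind: "exploration_complete"; explorationId: string; ts: number }
  | { kind: "session_start"; ts: number };

// Minimal key-value interface satisfied by window.localStorage.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = "telemetry"; // hypothetical key

function logEvent(store: KVStore, event: TelemetryEvent): void {
  const raw = store.getItem(STORAGE_KEY);
  const events: TelemetryEvent[] = raw ? JSON.parse(raw) : [];
  events.push(event);
  store.setItem(STORAGE_KEY, JSON.stringify(events));
}
```

The discriminated union on `kind` is what makes the telemetry "typed": the compiler rejects a `quiz_attempt` without a `correct` field.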

Presentation. React 19, TypeScript, D3.js for force simulation, Framer Motion for transitions, KaTeX for mathematical rendering. The interface uses a dark glassmorphic design language with monochrome cream mastery indicators.

What Emerged

The integration produced three properties that no single component would exhibit on its own.

The knowledge graph changes shape as you learn. A fog-of-war progressive reveal system hides concepts whose prerequisites have not been touched. As the learner masters foundational nodes, the graph physically expands — new nodes animate into existence, edges materialize, and the force simulation settles into a new equilibrium. The curriculum is not a fixed syllabus but a living structure that responds to demonstrated knowledge.
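The fog-of-war predicate can be sketched in one function. One plausible reading of the rule above is that a node stays hidden until all of its prerequisites have been attempted; the exact rule and names are assumptions.

```typescript
// A node is visible once every prerequisite has been attempted at least
// once; root concepts (no prerequisites) are always visible.
function isVisible(
  node: { id: string; prerequisites: string[] },
  attempted: Set<string>
): boolean {
  return node.prerequisites.every((p) => attempted.has(p));
}
```

In a D3 force simulation, filtering the node and link arrays through this predicate before each render is what makes newly revealed nodes animate in and the layout settle into a new equilibrium.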

Mastering one concept primes its neighbors. When BKT probability crosses the 0.85 threshold, connected concepts that have not yet been attempted receive a transfer boost (8%, capped at 40%). This models the empirical observation that learning gradient descent provides prior knowledge about backpropagation. Standard BKT treats concepts as independent; the transfer function encodes the graph structure into the mastery model.
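Using the numbers from the text, the transfer function might look like the sketch below. Reading the 40% cap as a limit on cumulative transferred mastery per neighbor is an assumption, as are the function and field names.

```typescript
const TRANSFER_BOOST = 0.08; // 8% per mastered neighbor
const TRANSFER_CAP = 0.4;    // cumulative transfer cap per concept

// Apply a transfer boost to an unattempted neighbor of a newly mastered
// concept, tracking how much transfer it has already received.
function applyTransfer(
  neighborMastery: number,
  neighborTransferred: number
): { mastery: number; transferred: number } {
  const room = Math.max(0, TRANSFER_CAP - neighborTransferred);
  const boost = Math.min(TRANSFER_BOOST, room);
  return {
    mastery: Math.min(1, neighborMastery + boost),
    transferred: neighborTransferred + boost,
  };
}
```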

The learning algorithm explains itself. Every concept detail panel includes a “Mastery Model” section showing the four BKT parameters for that concept’s tier, a mastery trajectory chart reconstructed from attempt history, a forgetting curve with the learner’s current position marked, and a natural-language explanation of why the current difficulty level was recommended. The recommender similarly explains its reasoning: “All prerequisites mastered (Linear Regression, Loss Functions). Ready to learn.”
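The forgetting curve in that panel can be modeled in the Ebbinghaus exponential form R = exp(−t/S) [7], with stability S scaled by mastery so well-learned concepts decay more slowly. The scaling below is an illustrative assumption, not the platform's calibration.

```typescript
// Retention after `daysSinceReview` days, with stability growing
// linearly from 1 day (mastery 0) to 10 days (mastery 1).
function retention(daysSinceReview: number, mastery: number): number {
  const stability = 1 + mastery * 9;
  return Math.exp(-daysSinceReview / stability);
}
```

Marking "the learner's current position" on the curve then just means evaluating this function at the time elapsed since the last attempt.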

What I Learned

Building this platform clarified where the boundaries of existing tools actually lie. TensorFlow Playground is a brilliant neural network sandbox — but it has no idea who is using it. CNN Explainer and Transformer Explainer from Georgia Tech run real inference in-browser — but they are standalone demonstrations, not learning instruments. Seeing Theory from Brown University is visually exceptional — but it has no assessment, no adaptive path, no memory. Distill.pub set the standard for interactive research communication — but it is now dormant and was never an adaptive system.

The gap is not in any individual dimension but in their integration. The interactive exploration needs to know what the learner has mastered. The mastery tracker needs to inform which concepts to explore next. The knowledge graph needs to reflect the current state of both. And the learner needs to see all of this happening — not as a dashboard afterthought, but as a core part of the interface.

Whether this integration actually improves learning outcomes is an empirical question that requires controlled evaluation. The platform includes client-side telemetry instrumentation and a JSON data export function designed to support exactly that study.

References

[1] Doignon, J.-P., & Falmagne, J.-C. (1999). Knowledge Spaces. Springer.

[2] Corbett, A. T., & Anderson, J. R. (1994). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4(4), 253–278.

[3] Bull, S., & Kay, J. (2016). SMILI: A framework for interfaces to learning data in open learner models. International Journal of Artificial Intelligence in Education, 26, 293–331.

[4] Hooshyar, D., et al. (2020). Open learner models as instruments for self-regulated learning. British Journal of Educational Technology, 51(1), 59–77.

[5] Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.

[6] Victor, B. (2011). Explorable Explanations. worrydream.com/ExplorableExplanations.

[7] Ebbinghaus, H. (1885). Memory: A contribution to experimental psychology. Teachers College, Columbia University.