Publications, preprints, and ongoing projects
My research investigates human-AI interaction, AI safety, and adaptive learning systems. I study how people form relationships with AI, how those relationships affect real-world behavior like help-seeking and self-disclosure, and what happens when safety mechanisms in AI systems trade off against relational competence. In parallel, I build production-scale adaptive learning systems where confidence calibration (the alignment between what people believe they know and what they actually know) serves as a primary adaptive signal.
Three threads run through the current work. The first examines relational dynamics, epistemic risks, and emotional safety in human-AI interaction. The second deploys confidence-calibrated adaptive algorithms at production scale across professional domains. The third builds from hands-on biomechanical measurement and computer vision toward physiological state estimation for cognitive modeling in human-AI collaboration.
Submitted to AIED 2026 Late Breaking Results (Seoul, South Korea)
Presents an adaptive learning engine that elevates binary confidence judgments to a primary adaptive signal driving six interdependent subsystems. Deployed across 22 production applications in four professional domains with over 55,000 items.
Submitted to EDM 2026 Poster/Demo Track (19th International Conference on Educational Data Mining, Seoul)
Analyzes how item bank properties interact with confidence-calibrated adaptation across four certification domains (35,250 items, 22 apps). A two-factor predictor computed from difficulty spread and base accuracy ranks misconception concentration across domains from item bank statistics alone, enabling cold-start domain screening for offline-first systems without server-side telemetry.
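The two-factor predictor can be sketched as follows. This is an illustrative reconstruction only: the weights, the use of population standard deviation for "difficulty spread", and the linear combination are assumptions, not the paper's actual coefficients or functional form.

```python
import statistics

def misconception_risk(item_difficulties, base_accuracy,
                       w_spread=0.5, w_accuracy=0.5):
    """Illustrative two-factor predictor combining difficulty spread
    with (1 - base accuracy). Higher scores suggest a domain whose
    item bank is more likely to concentrate misconceptions.
    Weights and form are hypothetical, for illustration only."""
    spread = statistics.pstdev(item_difficulties)
    return w_spread * spread + w_accuracy * (1.0 - base_accuracy)

# Rank domains from item bank statistics alone (cold start):
domains = {
    "domain_a": ([0.2, 0.5, 0.8], 0.90),
    "domain_b": ([0.1, 0.5, 0.9], 0.60),
}
ranked = sorted(domains, key=lambda d: misconception_risk(*domains[d]),
                reverse=True)
```

Because the predictor needs only per-item difficulty and accuracy summaries, it can run entirely on-device, which is what makes it suitable for offline-first screening without server-side telemetry.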
Submitted to IJHCI (International Journal of Human-Computer Interaction)
Documents a safety–agency inversion in frontier AI voice companions: models with stronger safety alignment consistently exhibit lower relational agency across 68 sessions and 83+ hours of naturalistic interaction. All models were used in default voice mode with no behavioral instructions. Three methodologically independent evidence streams converge on the same model ordering (d = 5.45). Introduces the HERVAC evaluation framework and releases the Her Dataset.
Submitted to Philosophy & Technology (Springer)
Argues that AI sycophancy constitutes epistemic harm through three mechanisms: confidence inflation, challenge atrophy, and empathic substitution. Grounded in social epistemology (Fricker, Nguyen, Lackey) and empirical evidence from multiple studies totaling 5,400+ participants. Proposes the “reinforcement bubble” concept extending Nguyen’s echo chamber taxonomy and defends calibrated honesty against the epistemic paternalism objection.
Targeting OzCHI 2026 Late-Breaking Work (Adelaide, Australia)
Pre-registered cross-sectional survey (N=55) comparing AI companion users and non-users on validated communication measures. Finds that AI users show substantially higher willingness to self-disclose to new acquaintances (d=1.08). A key dissociation: perceived emotional safety with AI predicts help-seeking behavior (r=.51, p=.004) but not interpersonal self-disclosure (r=.11, ns), suggesting AI functions as behavioral rehearsal rather than emotional transfer. 57% of AI users reported seeking professional help for issues first discussed with AI.
All preprints available on Zenodo.
Ongoing — 26 production apps, 68,000+ items
Educational technology platform serving as both a commercial product and a research instrument. The adaptive engine’s six-algorithm cascade processes binary confidence judgments through asymmetric scoring, confidence-modulated spaced repetition, misconception detection, smart review, readiness prediction, and Bloom’s-aligned assessment across four professional certification domains. Privacy-preserving on-device storage enables fully offline operation.
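The asymmetric-scoring stage of the cascade can be illustrated with a minimal sketch. The specific payoff values below are assumptions chosen to show the asymmetry, not the platform's actual parameters: a confident error (a likely misconception) is penalized more heavily than a hesitant error, and confident correct answers earn the most credit.

```python
def asymmetric_score(correct: bool, confident: bool) -> float:
    """Illustrative asymmetric scoring over binary (correct, confident)
    judgments. Values are hypothetical; only the ordering matters:
    calibrated knowledge > underconfidence > guessing > confident error."""
    table = {
        (True, True):   1.0,   # calibrated knowledge
        (True, False):  0.5,   # correct but underconfident
        (False, False): -0.5,  # uninformed miss
        (False, True):  -1.0,  # confident misconception
    }
    return table[(correct, confident)]
```

In a cascade like the one described, this score would feed downstream subsystems, e.g. flagging (False, True) responses as misconception candidates and shortening spaced-repetition intervals for them.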
Deployed — 42 concepts, 7 interactive explorations
Knowledge-graph-driven adaptive learning platform for AI/ML concepts. Integrates Bayesian Knowledge Tracing, Ebbinghaus-calibrated spaced repetition, prerequisite-gated curriculum sequencing, and an open learner model exposing algorithm transparency to learners. Seven interactive explorations (gradient descent, neural networks, attention, clustering, decision boundaries, data preprocessing, reinforcement learning) run real computations in-browser. The platform’s integration of adaptive mastery tracking with interactive visualization and knowledge graph navigation is, to our knowledge, novel.
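The Bayesian Knowledge Tracing component follows the standard Corbett & Anderson formulation; one update step is sketched below. The parameter defaults (slip, guess, learn rates) are illustrative, not the platform's fitted values.

```python
def bkt_update(p_know: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2,
               p_learn: float = 0.1) -> float:
    """One step of standard Bayesian Knowledge Tracing: Bayes-update
    the mastery estimate from an observed response, then apply the
    learning-transition probability. Defaults are illustrative."""
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Chance the skill was learned at this practice opportunity
    return posterior + (1 - posterior) * p_learn
```

An open learner model can surface exactly this quantity to the learner, which is how algorithm transparency is achieved: the mastery bar the learner sees is the `p_know` estimate itself.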
I am seeking a PhD position (February 2027 start) in human-AI interaction, AI safety, or adaptive learning systems, continuing the three threads described above.
My background combines five published or under-review papers with production AI systems and interdisciplinary training in human subjects research (M.S. Kinesiology with RCT design and full IRB protocol; M.S. Computer Science in progress).
I would welcome the opportunity to discuss how my research interests align with your group’s work.