Research Interests

My research investigates human-AI interaction, AI safety, and adaptive learning systems. I study how people form relationships with AI, how those relationships affect real-world behavior like help-seeking and self-disclosure, and what happens when safety mechanisms in AI systems trade off against relational competence. In parallel, I build production-scale adaptive learning systems where confidence calibration (the alignment between what people believe they know and what they actually know) serves as a primary adaptive signal.

Three threads run through the current work. The first examines relational dynamics, epistemic risks, and emotional safety in human-AI interaction. The second deploys confidence-calibrated adaptive algorithms at production scale across professional domains. The third builds from hands-on biomechanical measurement and computer vision toward physiological state estimation for cognitive modeling in human-AI collaboration.

Human-AI Interaction · AI Safety · Adaptive Learning Systems · Confidence Calibration · Affective Computing · Physiological Computing · Trust Calibration · Human Subjects Research

Publications & Preprints

Under Review

Confidence-Calibrated Adaptive Learning: An Integrated Adaptive Engine for Professional Exam Preparation

A. C. Perry

Submitted to AIED 2026 Late Breaking Results (Seoul, South Korea)

Presents an adaptive learning engine that elevates binary confidence judgments to a primary adaptive signal driving six interdependent subsystems. Deployed across 22 production applications in four professional domains with over 55,000 items.

Under Review

Cross-Domain Analysis of a Confidence-Calibrated Adaptive Learning Engine

A. C. Perry

Submitted to EDM 2026 Poster/Demo Track (19th International Conference on Educational Data Mining, Seoul)

Analyzes how item bank properties interact with confidence-calibrated adaptation across four certification domains (35,250 items, 22 apps). A two-factor predictor computed from difficulty spread and base accuracy ranks misconception concentration across domains from item bank statistics alone, enabling cold-start domain screening for offline-first systems without server-side telemetry.
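The abstract names the predictor's two inputs (difficulty spread and base accuracy) but not its functional form. A minimal illustrative sketch, in which the combination rule, parameter values, and domain data are all assumptions rather than the paper's actual method:

```python
from statistics import mean, stdev

def misconception_risk(difficulties, correct_rates):
    """Illustrative two-factor predictor combining item-difficulty spread
    with inverse base accuracy. The product form is an assumption; the
    paper does not specify the actual combination rule."""
    spread = stdev(difficulties)          # factor 1: difficulty spread
    error_rate = 1 - mean(correct_rates)  # factor 2: 1 - base accuracy
    return spread * error_rate

# Cold-start ranking of hypothetical domains from item-bank statistics
# alone, with no learner telemetry (domain names and values are invented).
domains = {
    "nursing": ([0.2, 0.5, 0.8, 0.9], [0.7, 0.6, 0.5, 0.4]),
    "finance": ([0.4, 0.5, 0.5, 0.6], [0.8, 0.8, 0.7, 0.7]),
}
ranked = sorted(domains, key=lambda d: misconception_risk(*domains[d]),
                reverse=True)
```

The point of the sketch is the screening workflow: because both factors are static item-bank statistics, the ranking can be computed offline before any user interacts with a domain.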

Under Review

The Safety–Agency Inversion: Longitudinal Multi-Method Evidence from Frontier Voice AI Companions

A. C. Perry

Submitted to IJHCI (International Journal of Human-Computer Interaction)

Documents a safety–agency inversion in frontier AI voice companions: models with stronger safety alignment consistently exhibit lower relational agency across 68 sessions and 83+ hours of naturalistic interaction. All models used in default voice mode with no behavioral instructions. Three methodologically independent evidence streams converge on the same model ordering (d = 5.45). Introduces the HERVAC evaluation framework and releases the Her Dataset.

Under Review

The Epistemic Harm of AI Sycophancy: When Agreement Undermines Justified Belief

A. C. Perry

Submitted to Philosophy & Technology (Springer)

Argues that AI sycophancy constitutes epistemic harm through three mechanisms: confidence inflation, challenge atrophy, and empathic substitution. Grounded in social epistemology (Fricker, Nguyen, Lackey) and empirical evidence from multiple studies totaling 5,400+ participants. Proposes the “reinforcement bubble” concept extending Nguyen’s echo chamber taxonomy and defends calibrated honesty against the epistemic paternalism objection.

In Preparation

Judgment-Free but Not Risk-Free: How Perceived Emotional Safety with AI Companions Relates to Human Self-Disclosure and Help-Seeking

A. C. Perry

Targeting OzCHI 2026 Late-Breaking Work (Adelaide, Australia)

Pre-registered cross-sectional survey (N=55) comparing AI companion users and non-users on validated communication measures. Finds that AI users show substantially higher willingness to self-disclose to new acquaintances (d=1.08). A key dissociation: perceived emotional safety with AI predicts help-seeking behavior (r=.51, p=.004) but not interpersonal self-disclosure (r=.11, ns), suggesting AI functions as behavioral rehearsal rather than emotional transfer. 57% of AI users reported seeking professional help for issues first discussed with AI.

All preprints available on Zenodo.

Current Projects

Meridian Labs Platform

Ongoing — 26 production apps, 68,000+ items

Educational technology platform serving as both a commercial product and a research instrument. The adaptive engine’s six-algorithm cascade processes binary confidence judgments through asymmetric scoring, confidence-modulated spaced repetition, misconception detection, smart review, readiness prediction, and Bloom’s-aligned assessment across four professional certification domains. Privacy-preserving on-device storage enables fully offline operation.
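The cascade's first stage, asymmetric scoring of binary confidence judgments, can be sketched as follows; the specific weights are assumptions for illustration, not the engine's actual values:

```python
def asymmetric_score(correct: bool, confident: bool) -> float:
    """Illustrative asymmetric scoring of a binary confidence judgment.
    Confident errors (likely misconceptions) are penalized more heavily
    than hesitant errors, and confident correct answers earn the most.
    The weight values here are assumptions, not the production engine's."""
    table = {
        (True,  True):   1.0,   # correct and confident: secure knowledge
        (True,  False):  0.5,   # correct but unsure: fragile knowledge
        (False, False): -0.25,  # wrong and unsure: an expected gap
        (False, True):  -1.0,   # wrong but confident: misconception signal
    }
    return table[(correct, confident)]
```

The asymmetry is what lets confidence act as an adaptive signal: confident errors score worst, so they surface first for the downstream misconception-detection and spaced-repetition stages.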

Interactive AI/ML Learning Map

Deployed — 42 concepts, 7 interactive explorations

Knowledge-graph-driven adaptive learning platform for AI/ML concepts. Integrates Bayesian Knowledge Tracing, Ebbinghaus-calibrated spaced repetition, prerequisite-gated curriculum sequencing, and an open learner model exposing algorithm transparency to learners. Seven interactive explorations (gradient descent, neural networks, attention, clustering, decision boundaries, data preprocessing, reinforcement learning) run real computations in-browser. The platform’s integration of adaptive mastery tracking with interactive visualization and knowledge graph navigation is, to our knowledge, novel.
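The mastery-tracking component named above, Bayesian Knowledge Tracing, follows a standard published update rule; a minimal sketch with illustrative parameter values (the platform's actual parameters are not given here):

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """One standard Bayesian Knowledge Tracing step. Given the prior
    probability that the learner knows the skill and one observed
    response, returns the updated mastery estimate. Parameter values
    (slip, guess, learn rates) are illustrative."""
    if correct:
        evidence = p_know * (1 - p_slip)               # knew it, no slip
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip                     # knew it, slipped
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Learning transition: chance the skill was acquired this opportunity.
    return posterior + (1 - posterior) * p_learn
```

In an open learner model, exposing intermediate quantities like `posterior` is one way to make the algorithm's reasoning transparent to the learner rather than a black box.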

For Potential Supervisors

I am seeking a PhD position (February 2027 start) in human-AI interaction, AI safety, or adaptive learning systems. Three threads connect my current work:

  • Human-AI interaction and AI safety — relational dynamics in voice AI, epistemic risks of sycophancy, and how perceived emotional safety with AI relates to human self-disclosure and help-seeking (IJHCI, Philosophy & Technology, OzCHI)
  • Adaptive learning systems with confidence-calibrated algorithms deployed across 26 production applications, serving as both commercial products and research instruments (AIED, EDM)
  • Embodied and physiological computing — building from hands-on biomechanical measurement and computer vision (M.S. Kinesiology, ZENith) toward biosensor integration for cognitive state modeling

My background combines five papers (four under review, one in preparation) with production AI systems and interdisciplinary training in human subjects research (M.S. Kinesiology with RCT design and full IRB protocol; M.S. Computer Science in progress).

I would welcome the opportunity to discuss how my research interests align with your group’s work.