Eighty percent of AI companion users in this study reported using an AI to discuss embarrassing personal problems they had previously avoided entirely. Over half subsequently sought professional help. But the mechanism is not what you might expect.
Millions of people now use ChatGPT, Claude, Gemini, Replika, and Character.ai not for task assistance but for personal conversation about feelings, health concerns, relationships, and life decisions. This represents a qualitatively new form of human–AI interaction: sustained, emotionally intimate, and self-directed. The consequences for interpersonal behavior remain largely unknown.
Two competing narratives dominate. The first frames AI companions as social training grounds—safe spaces where individuals practice honest expression, building skills that transfer to human relationships. The second frames them as human replacements—comfortable substitutes that erode motivation for effortful human connection.
I proposed perceived emotional safety as the theoretical bridge. Drawing on Edmondson’s psychological safety framework and social penetration theory, the study tested whether the judgment-free nature of AI conversation facilitates both interpersonal disclosure and help-seeking for previously avoided problems, or whether these pathways diverge.
Pre-registered and transparent. The study was pre-registered on the Open Science Framework before data collection, specifying hypotheses, sample size, measures, and analysis plan. This rules out presenting post-hoc hypotheses as confirmatory findings.
Validated instruments. Both groups (30 AI companion users, 25 non-users) completed the Self-Disclosure Index (SDI; Miller et al., 1983), the Personal Report of Communication Apprehension (PRCA-12; McCroskey, 1982), the WHO-5 Well-Being Index, the UCLA-3 Loneliness Scale, the Mini-SPIN social anxiety screener, and BFI-10 personality subscales. AI users additionally completed original emotional safety and help-seeking behavior scales with strong internal consistency (α = .86 and .81 respectively).
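For readers who want to see how the reliability figures are computed, Cronbach's alpha reduces to a short calculation over the item-response matrix. The sketch below is a minimal illustration on a hypothetical response matrix, not the study's data (random placeholder responses will not show high reliability):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-item scale answered by 30 respondents (placeholder data only)
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(30, 5)).astype(float)
print(round(cronbach_alpha(responses), 2))
```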
Rigorous quality control. Participants were recruited through Prolific with demographic balancing. Each survey form embedded three to four stealth attention checks, and participants who failed two or more were excluded. Significance thresholds were Bonferroni-corrected for multiple comparisons, and progressive covariate-adjustment models tested the robustness of the primary effects.
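To make the correction step concrete: a Bonferroni family of two pre-registered primary comparisons implies a per-test threshold of .05 / 2 = .025, which is consistent with the α = .025 mentioned in the limitations below, though the exact family size here is my inference. The p-values in this sketch are placeholders, not values from the paper:

```python
from statsmodels.stats.multitest import multipletests

# Placeholder p-values for a two-test family of primary comparisons
raw_p = [0.0006, 0.041]

# Bonferroni: multiply each p by the number of tests (equivalently, test each at alpha / 2)
reject, p_adjusted, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print(list(zip(raw_p, p_adjusted.round(4), reject)))
```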
Pre-registration, validated scales, attention checks, Bonferroni correction, progressive covariate adjustment: the methodology is designed so that every reported finding can be independently verified against the pre-registered plan.
Finding 1: AI users disclose more to humans. AI companion users showed substantially higher willingness to self-disclose to new acquaintances than non-users (d = 1.08, p < .001). This is a large effect. It survived Bonferroni correction, a pre-registered ANCOVA controlling for loneliness, and a kitchen-sink model adding well-being, technology comfort, country, and extraversion. Country-stratified analysis showed the effect slightly increasing (d = 1.14). The communication apprehension difference (d = −0.60) did not survive covariate adjustment and is reported as a confounded null.
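For readers who want the effect-size and adjustment logic spelled out, here is a minimal sketch of a pooled-SD Cohen's d and an ANCOVA-style covariate adjustment. It runs on synthetic stand-in data; the column names, group means, and values are illustrative, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Synthetic stand-in for 30 AI users vs. 25 non-users (values are illustrative only)
df = pd.DataFrame({
    "group": ["user"] * 30 + ["nonuser"] * 25,
    "sdi": np.concatenate([rng.normal(5.2, 1.0, 30), rng.normal(4.1, 1.0, 25)]),
    "loneliness": rng.normal(5.0, 1.5, 55),
})

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

d = cohens_d(df.loc[df.group == "user", "sdi"], df.loc[df.group == "nonuser", "sdi"])

# ANCOVA expressed as a linear model: group difference adjusted for loneliness
ancova = smf.ols("sdi ~ C(group) + loneliness", data=df).fit()
print(round(d, 2), ancova.params)
```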
Finding 2: AI as gateway, not replacement. Among AI users, 80% reported using AI to discuss embarrassing personal problems, 77% had researched health issues they had been putting off, and 57% subsequently sought professional help for something first discussed with AI. A Friedman test comparing help-seeking intentions across AI, professionals, and friends/family found no significant preference: AI was endorsed comparably, not as a dominant or exclusive resource.
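The Friedman test is the standard nonparametric comparison for repeated measures across three or more conditions. A minimal sketch with scipy, on placeholder ratings rather than the study's data:

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(7)

# Placeholder within-person help-seeking intention ratings (1-7) for three targets
ai = rng.integers(1, 8, 30)
professionals = rng.integers(1, 8, 30)
friends_family = rng.integers(1, 8, 30)

stat, p = friedmanchisquare(ai, professionals, friends_family)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```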
Finding 3: The dissociation. This is the central theoretical contribution. Perceived emotional safety with AI showed a strong positive correlation with help-seeking behavior (r = .51, p = .004) but a near-zero, non-significant correlation with interpersonal self-disclosure (r = .11, ns). A Steiger test confirmed these correlations are significantly different from each other (p = .022).
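One standard way to compare two dependent correlations that share a variable (here, both involve perceived emotional safety) is the z-test of Meng, Rosenthal, and Rubin (1992), which is closely related to Steiger's procedure. The sketch below plugs in the reported r = .51 and r = .11; the third correlation (help-seeking with disclosure) and the n are placeholders, since the paper's exact inputs are not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def dependent_corr_test(r12, r13, r23, n):
    """z-test for two dependent correlations sharing variable 1
    (Meng, Rosenthal & Rubin, 1992)."""
    z12, z13 = np.arctanh(r12), np.arctanh(r13)
    r_sq_bar = (r12 ** 2 + r13 ** 2) / 2
    f = min((1 - r23) / (2 * (1 - r_sq_bar)), 1.0)
    h = (1 - f * r_sq_bar) / (1 - r_sq_bar)
    z = (z12 - z13) * np.sqrt((n - 3) / (2 * (1 - r23) * h))
    return z, 2 * norm.sf(abs(z))

# r12 = safety with help-seeking, r13 = safety with disclosure (reported values);
# r23 and n = 30 are placeholders for illustration
z, p = dependent_corr_test(r12=0.51, r13=0.11, r23=0.20, n=30)
print(f"z = {z:.2f}, two-tailed p = {p:.3f}")
```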
Feeling safe with AI predicts that you will seek professional help. It does not predict that you will open up more to other humans. The pathways are distinct.
The dissociation challenges both competing narratives. AI companions are neither social training grounds (emotional safety does not transfer to human disclosure) nor simple replacements (users still seek human help at comparable rates). Instead, they appear to function as behavioral rehearsal spaces: the act of articulating difficult content to AI facilitates help-seeking by lowering the activation barrier for disclosure, but through practice with expression itself rather than through the transfer of emotional safety to human contexts.
This carries a concrete design implication: optimizing AI companions for in-session comfort may not produce the interpersonal benefits users seek. The feeling of safety stays in the conversation. What transfers is the practice of saying difficult things out loud.
This is a cross-sectional survey with retrospective self-report. It cannot establish causation—the direction of effect is unknown, and selection bias (people who choose AI companions may already differ) cannot be ruled out. N = 55 provides adequate power for large effects (d ≥ 0.83 at α = .025) but is underpowered for small-to-medium effects. Formal mediation analysis was deferred to a larger study following recommendations on cross-sectional mediation bias. The sample was recruited through a single platform (Prolific) and may not generalize broadly. All of this is stated in the paper.
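The minimum-detectable-effect figure can be reproduced under standard two-sample t-test assumptions. The sketch below assumes the conventional 80% power target, which is my assumption rather than a value stated above:

```python
from statsmodels.stats.power import TTestIndPower

# Smallest Cohen's d detectable with n1 = 30, n2 = 25, two-tailed alpha = .025, 80% power
mde = TTestIndPower().solve_power(
    effect_size=None, nobs1=30, ratio=25 / 30,
    alpha=0.025, power=0.80, alternative="two-sided",
)
print(f"Minimum detectable d = {mde:.2f}")  # close to the d >= 0.83 quoted above
```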
Perry, A. C. (2026). Judgment-Free but Not Risk-Free: How Perceived Emotional Safety with AI Companions Relates to Human Self-Disclosure and Help-Seeking. Targeting OzCHI 2026 Late-Breaking Work (Adelaide, Australia).
Pre-registered: osf.io/5xzjs