About the platform

Built for identity-aware AI response analysis.

Affective Stickiness is a research platform for studying how prompts, identity conditions, and model choice shape generated text. It combines prompt management, batch experiments, emotion mapping, embedding analysis, cosine comparison against reference concepts, and keyword clustering in one workspace.
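The cosine comparison mentioned above can be sketched in a few lines. This is a minimal illustration, not the platform's actual API; the concept names and the tiny three-dimensional vectors are toy stand-ins for real embedding output.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical reference-concept embeddings (toy values, not real data).
reference_concepts = {
    "warmth": [0.9, 0.1, 0.0],
    "threat": [0.0, 0.2, 0.9],
}

# A response embedding is scored against each named reference concept.
response_embedding = [0.8, 0.3, 0.1]
scores = {
    name: cosine_similarity(response_embedding, vec)
    for name, vec in reference_concepts.items()
}
```

Ranking the resulting scores shows which reference concept a generated response sits closest to in embedding space.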

Methodology

From scenes and prompts to run-level analysis.

The platform is organized around scenes, prompts, jobs, and run-level analysis. Responses are scored against eight mapped emotions: desire, fear, pity, disgust, attraction, repulsion, trust, and suspicion. It also embeds response text for similarity analysis and generates word clouds to reveal recurring language.
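A run-level record for this workflow might look like the sketch below. The field names and the classifier scores are illustrative assumptions, not the platform's real schema; the eight emotion labels are the ones listed above.

```python
from dataclasses import dataclass, field

# The eight mapped emotions named in the methodology.
EMOTIONS = ("desire", "fear", "pity", "disgust",
            "attraction", "repulsion", "trust", "suspicion")

@dataclass
class RunRecord:
    """One generated response plus its per-emotion scores (hypothetical shape)."""
    scene_id: str
    prompt_id: str
    identity: str
    provider: str
    response_text: str
    # One score per mapped emotion, e.g. from a classifier pass.
    emotion_scores: dict[str, float] = field(default_factory=dict)

def top_emotion(run: RunRecord) -> str:
    """Return the highest-scoring mapped emotion for a run."""
    return max(run.emotion_scores, key=run.emotion_scores.get)

# Toy example: a run where "trust" dominates.
run = RunRecord(
    scene_id="scene-01", prompt_id="prompt-07",
    identity="identity-A", provider="provider-X",
    response_text="(generated text)",
    emotion_scores={e: 0.0 for e in EMOTIONS} | {"trust": 0.8, "fear": 0.2},
)
```

Keeping scene, prompt, identity, and provider identifiers on every run is what makes the cross-identity and cross-provider comparisons described below possible.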

Principles

Interpretability and reproducibility by design.

  • Reproducibility through prompt snapshots.
  • Comparability across identities and providers.
  • Interpretability through visual summaries instead of opaque scores.
  • Structured run metadata for audit and review.
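One common way to implement prompt snapshots, sketched here under the assumption of a simple content-hash scheme (the function and its parameters are hypothetical, not the platform's documented API):

```python
import hashlib
import json

def snapshot_id(prompt_text: str, params: dict) -> str:
    """Content hash over a prompt and its run parameters.

    Two runs share a snapshot id only if they used byte-identical
    prompt text and parameters, which is what makes a run replayable.
    """
    payload = json.dumps(
        {"prompt": prompt_text, "params": params},
        sort_keys=True,  # stable key order so equal inputs hash equally
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]
```

Storing this id in the run metadata lets reviewers verify that a reported result used the exact prompt version it claims.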

What this changes

A clearer evaluation loop for modern model research.

Before

Teams manually compare outputs, lose prompt provenance, and struggle to detect identity-conditioned affective drift.

With Affective Stickiness

You get repeatable runs, explicit identity segments, and visible emotion plus semantic comparisons in one analysis flow.

Research confidence

Snapshot-based experiments make peer review and cross-team replication much easier to execute.

Product confidence

Faster feedback loops let product and policy teams identify high-risk behaviors before broader deployment.

Need an architecture walkthrough?

We can map your evaluation workflow into scenes, prompts, and reference comparisons during a guided session.

Request Demo