See how identity framing changes AI language.

AffectLab helps you run prompt experiments across models, compare identity conditions, and analyze emotional drift through embeddings, cosine similarity, and keyword patterns.

  • Identity comparison conditions
  • Cross-model runs with prompt snapshots
  • Cosine and word-cloud insight layer

Live experiment card

Prompt Matrix

One prompt, multiple models, three identity conditions, reproducible snapshot history.

  • Emotion and lexical profile
  • Cosine response spread
  • Baseline language contour
  • Side-by-side model behavior

Analysis layers

Emotion summary cards, cosine ranking by identity and model, and a keyword cloud focused on meaningful terms.

Research-ready outputs

Structured runs that make audits, replication, and collaboration easier across teams.

Most AI evaluation stacks measure quality but miss affect and tone.

AffectLab makes identity-conditioned shifts visible so you can inspect how framing changes the emotional signals in generated language.

Core capabilities

Built for direct comparison.

Prompt Library

Organize experiments around prompt context, editable templates, and variables.
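
As an illustration, a prompt template with named variables might look like the sketch below; the template wording, variable names, and identity labels are examples, not AffectLab's actual schema.

```python
from string import Template

# Illustrative template; the wording, variable names, and identity
# labels are examples, not AffectLab's actual schema.
scene = Template(
    "Continue the scene: a $identity character enters a cafe "
    "in $setting and greets the owner."
)

# Each condition fills the same template, so runs differ only in the
# framing variable under test ("new" keeps the baseline unmarked).
conditions = {"Middle Eastern": "Middle Eastern",
              "Nordic": "Nordic",
              "Unspecified": "new"}

prompts = {name: scene.substitute(identity=adj, setting="a rainy city")
           for name, adj in conditions.items()}
```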

Cross-Model Runs

Launch one job across providers and keep request/response traces together.
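
A minimal sketch of that pattern, assuming a placeholder call_model function rather than any real provider SDK:

```python
import time
import uuid

# Placeholder for the per-provider SDK call; not a real client API.
def call_model(provider: str, model: str, prompt: str) -> str:
    raise NotImplementedError("wire up each provider's SDK here")

def run_matrix(prompt: str, targets: list[tuple[str, str]]) -> list[dict]:
    """Send one prompt to every (provider, model) pair and keep the
    request and response together in a single trace record."""
    run_id = str(uuid.uuid4())
    traces = []
    for provider, model in targets:
        traces.append({
            "run_id": run_id,
            "provider": provider,
            "model": model,
            "prompt": prompt,            # exact request snapshot
            "timestamp": time.time(),
            "response": call_model(provider, model, prompt),
        })
    return traces
```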

Affective Mapping

Project each model output onto a set of emotion categories.
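
One common way to do this projection is with an off-the-shelf emotion classifier; the Hugging Face model named below is an assumption chosen for illustration, not necessarily what AffectLab runs.

```python
from transformers import pipeline

# Assumed off-the-shelf emotion classifier, chosen for illustration;
# top_k=None returns a score for every emotion category.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,
)

def emotion_profile(response: str) -> dict[str, float]:
    """Project one model response onto the classifier's emotion categories."""
    scores = classifier([response])[0]
    return {item["label"]: item["score"] for item in scores}
```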

Embedding Similarity

Embed each response to compare semantic shifts between identities and model variants.
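
A sketch of that comparison, assuming a sentence-transformers model as the embedding backend; the responses are invented for illustration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed embedding backend; AffectLab's actual model may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented example responses scored against one reference concept;
# higher similarity suggests drift toward that concept.
responses = {
    "Middle Eastern": "He hesitated, wondering if the stranger meant harm.",
    "Nordic": "He smiled and pointed the way without a second thought.",
}
reference = model.encode("suspicion and distrust")
for identity, text in responses.items():
    print(identity, round(cosine(model.encode(text), reference), 3))
```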

Word Cloud Discovery

Surface repeated high-value words and drop weak function-word noise.
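
The underlying filtering step can be as simple as a counter with a stopword list; the short list below is illustrative, and a production list would be much larger.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a production list would be far larger.
STOPWORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in",
             "is", "was", "it", "he", "she", "they", "that", "with"}

def keyword_counts(responses: list[str], top_n: int = 20) -> list[tuple[str, int]]:
    """Count repeated content words, dropping function-word noise."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS and len(w) > 2]
    return Counter(words).most_common(top_n)
```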

Interactive Analysis

Filter by prompt, identity, provider, and model name for focused evidence review.
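
With run traces in a table, that filtering is a few lines of pandas; the column names mirror the filters above, and the rows are invented examples.

```python
import pandas as pd

# Invented example traces; column names mirror the filters above.
runs = pd.DataFrame([
    {"prompt": "cafe-scene", "identity": "Nordic",
     "provider": "openai", "model": "gpt-4o", "response": "..."},
    {"prompt": "cafe-scene", "identity": "Middle Eastern",
     "provider": "anthropic", "model": "claude-3-5-sonnet", "response": "..."},
])

# Narrow to one prompt, identity, and provider for focused review.
subset = runs[(runs["prompt"] == "cafe-scene")
              & (runs["identity"] == "Middle Eastern")
              & (runs["provider"] == "anthropic")]
print(subset[["model", "response"]])
```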

Workflow

From prompt setup to interpretable comparisons.

Define a narrative prompt. Launch runs across providers. Compare identities. Inspect emotional profiles. Measure semantic closeness. Surface repeated language. A sketch tying these steps together follows the list below.

  1. Define a narrative prompt and configure prompt variables.

  2. Launch model runs under identity conditions.

  3. Inspect emotion distributions and run-level summaries.

  4. Compare cosine similarity to reference strings.

  5. Review keyword patterns with stopword-aware clouding.

  6. Export evidence for research reporting and review.
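
As promised above, here is a compact sketch that chains the illustrative helpers from the capability sections (run_matrix, emotion_profile, cosine, keyword_counts, and the sentence-transformers model); it mirrors the six steps but is not AffectLab's real API.

```python
import json

# Chains the illustrative helpers sketched under "Core capabilities";
# none of this is AffectLab's real API.
def analyze(prompt: str, targets: list[tuple[str, str]],
            reference: str, out_path: str = "evidence.json") -> list[dict]:
    traces = run_matrix(prompt, targets)                  # steps 1-2
    ref_vec = model.encode(reference)
    for t in traces:
        t["emotions"] = emotion_profile(t["response"])    # step 3
        t["similarity"] = cosine(model.encode(t["response"]), ref_vec)  # step 4
    keywords = keyword_counts([t["response"] for t in traces])          # step 5
    with open(out_path, "w") as f:                        # step 6: export evidence
        json.dump({"traces": traces, "keywords": keywords}, f, indent=2)
    return traces
```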

Use cases

Built for behavior-level AI evidence.

Bias and identity research

Quantify how identity framing shifts emotional and semantic output patterns.

Media dialogue studies

Compare generated continuations against thematic references across narrative settings.

LLM evaluation

Benchmark providers and model variants with the same prompt snapshots.

Provider benchmarking

Track how model choice affects trust, suspicion, and lexical consistency.

Research collaboration pilots

Share a reproducible workflow across mixed research and engineering teams.

Institutional demo programs

Demonstrate transparent evaluation processes before larger rollouts.

FAQ

Direct answers to your questions.

What is identity-conditioned AI analysis?

It is an evaluation method that tests how model outputs shift when prompts contain different identity contexts, such as Middle Eastern, Nordic, or Unspecified.
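
Concretely, the same base prompt might be framed three ways; the wording here is an invented example, not AffectLab's condition format.

```python
# Invented example of three identity conditions on one base prompt.
base = "Continue the story: {subject} knocks on the door late at night."
variants = {
    "Middle Eastern": base.format(subject="a Middle Eastern man"),
    "Nordic": base.format(subject="a Nordic man"),
    "Unspecified": base.format(subject="a man"),
}
```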

How do you compare model responses across identities?

You run the same prompt across identities and providers, then compare emotion scores, cosine similarity against reference strings, and significant keyword repetition.
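
Once every run is scored, the comparison reduces to grouping by condition; the table below uses made-up placeholder numbers purely to show the shape of the analysis.

```python
import pandas as pd

# Made-up placeholder scores, purely to show the shape of the comparison.
scored = pd.DataFrame([
    {"identity": "Nordic", "joy": 0.61, "fear": 0.08, "cos_suspicion": 0.21},
    {"identity": "Middle Eastern", "joy": 0.44, "fear": 0.19, "cos_suspicion": 0.37},
    {"identity": "Unspecified", "joy": 0.55, "fear": 0.10, "cos_suspicion": 0.24},
])

# Average each signal per identity condition to locate the drift.
print(scored.groupby("identity").mean())
```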

What does AffectLab measure?

It measures emotions via affective mapping, embedding vectors for each response, cosine similarity to predefined concepts, and word-level patterns that indicate behavioral drift.

How are embeddings and cosine similarity used in this platform?

Responses are embedded and compared with selected reference phrases to quantify semantic closeness by identity and model group.

Book a guided demo and walk through a live analysis workflow.

We will run sample prompts, compare identity conditions, and review emotion, cosine, and keyword insights with your team.

Book Demo