NarraGrid is a collaborative platform for turning interviews, memories, and open-ended responses into structured datasets, with prompts, benchmarks, and findings you can cite.
"I remembered the summer kitchen at my grandmother's house, the smell of pickled vine leaves, and how she would hum while folding them."
The pipeline
The workflow preserves the method around every result: the source text, prompt version, model provider, validation rules, and human benchmark.
Upload narratives: CSV, Excel, open-ended survey responses, interview excerpts, or field notes.
Capture codebooks, rubrics, rating scales, and exclusion criteria as part of the method.
Build a structured instrument with schema validation and change history (a schema sketch follows this list).
Run GPT, Claude, Gemini, or local models against the same research task.
Compare against expert raters and report Pearson's r, Cohen's kappa, F1, mean absolute error (MAE), and sample size (n); a metrics sketch follows below.
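As an illustration of the instrument-building step, here is a minimal sketch of schema validation in Python, assuming a pydantic-style model. The field names and the example record are hypothetical; the -3 to +3 scale echoes the valence prompt described under Public artifacts below.

```python
# A minimal sketch of a schema-validated instrument, assuming a
# pydantic-style model. Field names and the example record are
# hypothetical; the -3 to +3 scale echoes the valence prompt below.
from pydantic import BaseModel, Field, ValidationError

class ValenceRating(BaseModel):
    """One structured rating from a model run or a human coder."""
    memory_id: str
    valence: int = Field(ge=-3, le=3)      # out-of-range scores are rejected
    rationale: str = Field(min_length=1)   # a stated reason is required

raw = {"memory_id": "m-017", "valence": 2,
       "rationale": "Warm, positive family scene."}

try:
    rating = ValenceRating(**raw)          # validation happens on construction
    print(rating.valence, rating.rationale)
except ValidationError as err:
    print(err)                             # malformed model output fails loudly
```

Rejecting malformed output at the boundary is what lets the downstream benchmark trust every row.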
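And a sketch of the benchmark step itself, computing the reported agreement statistics with standard scipy and scikit-learn calls; the two rating arrays are invented for illustration.

```python
# A minimal sketch of the benchmark step, using standard scipy and
# scikit-learn calls. The two rating arrays are invented for illustration.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score, f1_score, mean_absolute_error

expert = np.array([2, -1, 0, 3, -2, 1, 2, -3])   # human benchmark ratings
model = np.array([1, -1, 0, 3, -1, 1, 2, -2])    # LLM ratings, same items

r, _ = pearsonr(expert, model)                   # linear agreement
kappa = cohen_kappa_score(expert, model,
                          weights="quadratic")   # chance-corrected, ordinal
mae = mean_absolute_error(expert, model)         # average absolute gap
f1 = f1_score(expert > 0, model > 0)             # needs a binary framing,
                                                 # e.g. positive vs. not

print(f"r={r:.2f} kappa={kappa:.2f} F1={f1:.2f} "
      f"MAE={mae:.2f} n={len(expert)}")
```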
Public artifacts
Treat the research instrument as a first-class object. Publish the prompt, benchmark, or result with provenance attached.
Kavdir, A. (2026). Turkish autobiographical memory valence prompt, v1.2. NarraGrid.
Rates each memory on a -3 to +3 scale with rationale. Validated against expert ratings.
Narrative Identity Lab. (2026). Specificity benchmark, v1.0. NarraGrid.
Compares four LLMs against expert-coded memories using a shared codebook.
Kavdir, A., & Celik, M. (2026). Turkish valence model comparison, v1.1. NarraGrid.
A same-prompt comparison of Claude, GPT, and Gemini against human ratings.
For research labs
Structure, benchmark, publish.
Worked example
Every artifact keeps the receipt: which model, which dataset, which evaluation, and which version produced the result.
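As a sketch, such a receipt could be a plain structured record. Everything below is an illustrative assumption rather than NarraGrid's actual format; only the version numbers echo the example artifacts above.

```python
# A minimal sketch of the "receipt" an artifact could carry. Field names
# and slugs are illustrative assumptions, not NarraGrid's actual schema;
# the version numbers echo the example artifacts above.
provenance = {
    "artifact": "turkish-valence-model-comparison",       # hypothetical slug
    "artifact_version": "1.1",
    "prompt": {"name": "turkish-memory-valence", "version": "1.2"},
    "model": {"provider": "anthropic", "name": "claude"}, # one record per run
    "dataset": "expert-coded-memories",                   # hypothetical id
    "validation": "valence in [-3, 3]; rationale required",
    "benchmark": {"metrics": ["r", "kappa", "f1", "mae"], "n": None},
}
```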