LLM-based Chatbot
A research companion that explores how large language models can index my archive and answer with context. Built with OpenRouter for model diversity and cost control.
The Idea
This is not a generic chatbot. It is a lens on my research papers, creative work, and personal knowledge systems.
The goal is to see how AI can surface patterns without flattening nuance, and how it can become a thinking partner rather than a replacement.
The Core Question
Can a model hold context without erasing lived experience? Can it help me think without deciding for me?
The work is less about answers, more about how answers are formed.
How It Works
- OpenRouter gateway for access to multiple LLMs
- Free-first routing with quota guardrails
- Context memory and conversation state
- Latency tracking and model attribution
- Retries and graceful failover
OpenRouter lets me test many models without lock-in and keep costs predictable while exploring.
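The free-first routing with retries and failover described above can be sketched as follows. The model IDs and the `send_fn` callback are illustrative assumptions, not the project's actual code; OpenRouter does expose a single chat-completions endpoint, so the transport is injected here to keep the routing logic self-contained.

```python
# Hypothetical model IDs; free-tier variants on OpenRouter carry a ":free" suffix.
FREE_MODELS = ["meta-llama/llama-3.1-8b-instruct:free"]
PAID_MODELS = ["anthropic/claude-3.5-haiku"]

def chat_with_failover(messages, send_fn, retries=2):
    """Try each model in free-first order, retrying before failing over.

    send_fn(model, messages) performs the actual request (e.g. a POST to
    https://openrouter.ai/api/v1/chat/completions) and raises on error.
    """
    last_error = None
    for model in FREE_MODELS + PAID_MODELS:
        for _ in range(retries):
            try:
                reply = send_fn(model, messages)
                reply["model_used"] = model  # attribution: record which model answered
                return reply
            except Exception as exc:  # quota exhausted, timeout, etc.
                last_error = exc
    raise RuntimeError(f"all models failed: {last_error}")
```

Injecting the transport also makes the failover path easy to test with a fake `send_fn` that simulates quota errors.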
Current Capabilities
- Real-time conversational AI with short-term memory
- Transparent model identification in each response
- Quota management with rate limits
- Basic logging for latency and failures
- Stable baseline prompt for consistent outputs
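One way to implement the quota management mentioned above is a sliding-window limiter: allow at most N requests per window, dropping timestamps as they age out. This is a minimal sketch under my own naming, not the project's implementation.

```python
import time

class QuotaGuard:
    """Sliding-window rate limiter: at most `limit` requests per `window_s` seconds."""

    def __init__(self, limit, window_s):
        self.limit = limit
        self.window_s = window_s
        self.stamps = []  # monotonic timestamps of recent allowed requests

    def allow(self, now=None):
        """Return True and record the request if it fits within the quota."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.stamps = [t for t in self.stamps if now - t < self.window_s]
        if len(self.stamps) < self.limit:
            self.stamps.append(now)
            return True
        return False
```

Accepting an explicit `now` keeps the guard deterministic in tests while defaulting to `time.monotonic()` in production use.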
Next Build
- Retrieval over papers, notes, and project archives
- Source-aware responses and citations
- Personal prompt templates for research workflows
- Evaluation set for measuring drift and bias
- Long-term memory experiments and user controls
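As a toy stand-in for the planned retrieval layer, here is a bag-of-words cosine similarity ranker over a small corpus. A real build would use embeddings and a vector index; the function names and corpus shape are assumptions for illustration.

```python
import math
from collections import Counter

def score(query, doc):
    """Cosine similarity over bag-of-words term counts (toy retrieval)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in set(q) & set(d))
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    """Return the top-k (source, text) pairs most similar to the query,
    which is also the hook for source-aware citations: each hit keeps
    the identifier of the document it came from."""
    ranked = sorted(corpus.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]
```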
The goal is not automation; the goal is reflection.
