Getting Started
Whispr is an advanced cognitive extension designed to assist technical professionals during intense communication sessions. By leveraging real-time transcription and adaptive AI, Whispr provides contextual whispers to ensure you never lose your thread. The system is built on a "Terminal-First" philosophy, where every interaction is recorded and analyzed through a low-latency neural pipeline.
Quick Setup Protocol
1. Access the Workspace: Navigate to the main terminal interface and ensure your identity badge is synced.
2. Source Initialization: Click 'Initiate Capture' to grant screen and system audio permissions. Whispr supports both browser tabs and native applications.
3. Neural Profile Injection: Open the Neural Profile modal and inject your JSON-based persona. This tells the AI who you are and what your technical background is.
4. Engagement: Switch to Auto Mode for continuous whispers, or stay in Manual Mode for specific targeted queries.
User Operating Manual
To achieve maximum efficiency during an interview or technical session, follow this standardized operating procedure. Whispr is designed to be your silent partner, providing support without distraction.
System Calibration
Access the System Settings (Gear icon in the Terminal header) to configure your environment. Select your preferred Transcription Engine: Deepgram for low-latency live captions, or Groq Whisper for maximum linguistic precision.
Neural Identity Sync
Open the AI Help Panel (Brain icon) and click Neural Profile. Inject your technical persona in JSON format. This step ensures that the AI "Whispers" are contextually aligned with your actual professional expertise.
The Intelligence Pipeline
Choose your AI Brain (Gemini, Llama 3, etc.) and toggle Auto Mode. Whispr monitors the transcript and triggers an analysis after 2 seconds of silence. This "Debounce" logic prevents the AI from interrupting you while you're still speaking.
Session Governance
Maintain control over your resources. Click the Resource Monitor icon in the Terminal header to track real-time token usage, requests, and estimated session costs for both transcription and AI synthesis.
Protocol Overview
Whispr operates on the principle of "Active Standby." The system doesn't just listen; it maintains a persistent state of neural awareness, ready to synthesize information the moment it's detected.
The system maintains a low-overhead monitor, waiting for spikes in audio activity. It consumes minimal resources while staying "warm" for instant capture.
Raw audio is streamed via WebSocket to the transcription engine. Interim results are fed back instantly to the AI Panel for real-time visual feedback.
Finalized transcripts are cross-referenced with your Neural Profile to generate strategic hints and ready-to-read interview responses.
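Assuming Deepgram-style streaming results with an is-final flag (the field names here are illustrative and vary by engine), the interim/final handling in the pipeline above might look like:

```typescript
// Sketch: folding streaming transcription messages into UI state.
// Interim text is overwritten in place; final text is committed.
interface TranscriptMsg {
  text: string;
  isFinal: boolean; // Deepgram-style flag; exact field name varies by engine
}

interface TranscriptState {
  committed: string[]; // finalized utterances, passed on for AI synthesis
  interim: string;     // live caption shown in the AI Panel
}

function applyMessage(state: TranscriptState, msg: TranscriptMsg): TranscriptState {
  if (msg.isFinal) {
    // Finalized text joins the transcript; the interim caption is cleared.
    return { committed: [...state.committed, msg.text], interim: "" };
  }
  // Each interim result replaces the previous one for instant visual feedback.
  return { ...state, interim: msg.text };
}
```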
Core Systems
Real-time Audio Intelligence
Powered by Deepgram's Nova-2 model, Whispr converts spoken word to text with sub-second latency. The AI Panel features a "Live Audio Monitor" that shows you exactly what the system is hearing in real-time, even before the final transcript is generated. This ensures you can verify that Whispr is in sync with the conversation.
Dual-Engine Transcription
Whispr allows you to choose your "Ears" based on the environment.
Deepgram Nova-2
Native WebSocket streaming for sub-second visual feedback. Best for fast-paced Q&A sessions.
Groq Whisper Turbo
High-fidelity whisper-large-v3-turbo model. Perfect for complex technical jargon and foreign accents.
Dual-Mode Stealth AI Whisperer
Auto Mode
Continuously monitors the transcript. After 2 seconds of silence, it automatically synthesizes a whisper based on the most recent conversation context and your profile. The output is optimized for natural "Human-Pro" flow, allowing you to read it aloud directly without further adjustment.
Manual Mode
Allows you to explicitly ask the AI questions. This is perfect for deep dives into specific technical topics that weren't explicitly mentioned in the live audio. You can also switch models mid-conversation to get different perspectives.
Neural Infrastructure v2.0
Groq: High-speed execution via Llama 3.3 70B and Llama 3.1 8B. Features experimental support for Llama 4 Scout and Qwen 3 32B for cutting-edge response synthesis.
Gemini: Utilizing Gemini 2.0 Flash for high-fidelity context preservation. Best for deep technical discussions where continuity is critical.
OpenRouter: Access to GPT OSS 120B and multi-provider Gemini endpoints. Offers the broadest reasoning capabilities for complex architectural queries.
Neural Profile Architecture
The Neural Profile is Whispr's brain. Beyond basic identity, you can inject complex narratives, project histories, and research findings. The more detailed your profile, the more "human" and "experience-based" the AI's whispers become.
Baseline Neural Template
This is the standard personality.json structure. Use this as your primary reference when building a baseline identity.
Core Identity
Defines your professional persona and tech stack. This is the foundation of every AI whisper.
Academic Background
Lists your degrees and institutions. AI uses this for background validation and credential-based queries.
Skill Matrix
Categorizes your technical competencies. AI cross-references this to provide specific code logic or tools.
Experience Mapping
Detailed roles and key achievements used to validate your experience during technical discussions.
Project Highlights
Showcases specific work you've built, allowing the AI to cite real-world examples from your portfolio.
Interview Context
Pre-defined answers to common questions, ensuring the AI's suggestions align with your actual strategy.
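As a concrete reference, a minimal personality.json covering the six sections above might look like the following. The field names and values are illustrative placeholders, so adapt them to your own background:

```json
{
  "identity": {
    "role": "Senior Backend Engineer",
    "stack": ["Go", "PostgreSQL", "Kubernetes"]
  },
  "education": [
    { "degree": "BSc Computer Science", "institution": "Example University" }
  ],
  "skills": {
    "languages": ["Go", "Python"],
    "infrastructure": ["Docker", "Terraform"]
  },
  "experience": [
    {
      "role": "Backend Engineer",
      "company": "Example Corp",
      "achievements": ["Reduced API p99 latency by 40%"]
    }
  ],
  "projects": [
    { "title": "Eco-Stream Analytics", "summary": "Real-time sensor pipeline" }
  ],
  "interview_context": {
    "why_this_role": "A short, pre-written answer in your own voice"
  }
}
```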
Advanced Narrative Injection
To achieve maximum "Human Fidelity," you can expand your profile with specific project stories and research findings. This allows the AI to provide evidence-based responses.
"identity": { /* ... */ },
"projects": [
{
"title": "Eco-Stream Analytics",
"story": "Faced 50% data loss...",
"result": "99.9% integrity"
}
],
"research": [
{
"topic": "Audio Streaming",
"context": "PCM-to-Opus..."
}
]
}
Narrative Injection
By including specific "Stories," you enable the AI to use the STAR Method (Situation, Task, Action, Result) when suggesting answers. This is crucial for behavioral interview questions.
Project Case Studies
Detailing your technical triumphs allows the AI to provide specific code logic or architectural patterns you've actually implemented in the past.
Research & Theory
Adding research topics ensures that if an interviewer asks about deep theoretical concepts, Whispr can draw from your specific academic or self-taught background.
"A well-documented Neural Profile turns a generic AI into a high-fidelity digital twin that knows your career as well as you do."
System Configuration & Privacy
Cognitive Translation
When the Live Translator is enabled, Whispr activates its combined neural path. It doesn't just translate words; it synthesizes technical answers in your target language based on the translated context, providing a seamless "Foreign-to-Strategy" bridge.
Model Architecture
Choose your neural engine based on the situation. Groq is recommended for real-time speed, Gemini for reliable reasoning, and OpenRouter for access to specialized open-weights models like GPT-OSS and Qwen. You can swap models instantly via the sub-menu in the AI Panel.
Security & Privacy Protocols
Whispr is designed for high-stakes technical environments. Data privacy is integrated into our core architecture.
Zero-Server Persistence
1. Local Storage: All transcription history and neural profiles are stored locally in your browser's IndexedDB/LocalStorage. No data is sent to our servers for storage.
2. Encryption: Communication with AI providers (Google/OpenRouter) is secured via SSL. Your API keys are stored only in your local session.
3. Purge Command: At any time, you can clear the terminal to permanently erase all session logs from your device.
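The local-only persistence model above can be sketched against the standard Web Storage interface. The storage key names below are made up for illustration, and the store is abstracted behind a minimal interface so the logic is testable outside a browser:

```typescript
// Sketch of zero-server persistence: everything lives in browser storage.
// Key names are illustrative, not Whispr's actual schema.
const PROFILE_KEY = "whispr.neuralProfile";
const LOG_KEY = "whispr.sessionLog";

// Minimal subset of the Web Storage API (localStorage implements this).
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

function saveProfile(store: KVStore, profile: object): void {
  store.setItem(PROFILE_KEY, JSON.stringify(profile)); // never leaves the device
}

function loadProfile(store: KVStore): object | null {
  const raw = store.getItem(PROFILE_KEY);
  return raw === null ? null : JSON.parse(raw);
}

function purgeSession(store: KVStore): void {
  // The "clear terminal" purge: erase all session data from local storage.
  store.removeItem(PROFILE_KEY);
  store.removeItem(LOG_KEY);
}
```

In the browser, `window.localStorage` satisfies the `KVStore` interface directly, so the same functions back both the save path and the purge command.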