Streamline AI
Embeddable AI chat assistant that replaces static lead capture with intelligent, context-aware conversation.
The Problem
Most lead capture forms are a dead end. They ask for a name and email, fire off a generic confirmation, and leave the business with no context about what the lead actually needs. Sales teams waste time on discovery calls that a well-structured conversation could have handled upfront.
At the same time, businesses fielding inbound enquiries — particularly in professional services, agencies, and B2B — get the same qualifying questions answered differently every time, with no consistency, no audit trail, and no structured data to act on. The information exists after the conversation, but it’s buried in a thread rather than sitting cleanly in a CRM record.
How It Works
Streamline presents as an embeddable chat widget, rebrandable per deployment. When a user starts a conversation, the system works through a configured question flow — gathering requirements, accepting file and PDF uploads, and running lightweight company enrichment in the background. By the time the conversation completes, the lead record is fully populated and pushed directly to the CRM. No manual data entry, no follow-up for basic context.
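The configured question flow described above can be sketched as a small state machine: each step populates one field of the lead record, and the assistant keeps asking until nothing is missing. This is a minimal illustrative sketch, not the production implementation — the `QuestionStep` shape, field names, and `nextQuestion` helper are all hypothetical.

```typescript
// Hypothetical sketch of a configured question flow: each step gathers one
// lead-record field, and the flow surfaces the next unanswered question
// until the record is fully populated.

interface QuestionStep {
  field: string;  // lead-record field this step populates
  prompt: string; // what the assistant asks the user
}

type LeadRecord = Record<string, string>;

const flow: QuestionStep[] = [
  { field: "company", prompt: "What company are you with?" },
  { field: "need", prompt: "What are you looking to achieve?" },
  { field: "timeline", prompt: "When do you need this by?" },
];

// Return the next prompt, or null once every field is populated
function nextQuestion(steps: QuestionStep[], record: LeadRecord): string | null {
  const step = steps.find((s) => !(s.field in record));
  return step ? step.prompt : null;
}
```

Keeping the flow as data rather than code is what makes it configurable per deployment: a client's flow can change without touching the assistant logic.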
The backend maintains full conversation history with session-based auth, meaning returning users pick up where they left off. URL-based conversation starters allow specific entry points — a particular service page or campaign — to initialise the assistant with relevant context already loaded.
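A URL-based conversation starter reduces to a lookup: an entry-point slug (for example, a `start` query parameter on the widget URL) resolves to context that is loaded before the first user message. The slugs, fields, and `resolveStarter` helper below are illustrative assumptions, not the actual configuration.

```typescript
// Hypothetical sketch of URL-based conversation starters: a slug from the
// page or campaign URL maps to preloaded context, with a generic fallback.

interface StarterContext {
  service: string;        // service page or campaign the user arrived from
  openingMessage: string; // assistant's first message for this entry point
}

const starters: Record<string, StarterContext> = {
  "web-design": {
    service: "Web Design",
    openingMessage: "Happy to help with a web design project. What's the scope?",
  },
  "seo-audit": {
    service: "SEO Audit",
    openingMessage: "Let's talk about your SEO audit. What's your current setup?",
  },
};

// Resolve a starter slug to context, falling back to a generic opener
function resolveStarter(slug: string | null): StarterContext {
  return (
    (slug && starters[slug]) || {
      service: "General",
      openingMessage: "Hi! What can I help you with today?",
    }
  );
}
```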
At scale: 572 real conversations, 20,000 synthetic conversations generated for testing and guardrail validation, and millions of tokens processed across deployments.
Tech Stack
Frontend React 18, TypeScript, Vite, Tailwind CSS, shadcn/ui, Wouter, TanStack Query
Backend Express.js (TypeScript), REST API, Drizzle ORM, PostgreSQL, Neon (serverless)
AI OpenAI (chat, PDF vision analysis, document processing)
Infrastructure Session-based auth, modular file storage, structured error logging
Outcomes
- 257,000+ lines of code across 282 files
- 572 live test conversations
- 20,000 synthetic conversations generated for guardrail testing
- Millions of tokens processed
- Deployed as a rebrandable product across client accounts
- Measurable reduction in time-to-qualification for inbound leads
Lessons Learned
Synthetic conversation generation for testing was one of the better calls made during development — it surfaced edge cases that real users would have taken months to produce organically. The guardrail coverage it enabled was worth the investment early.
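One cheap way to get broad synthetic coverage is combinatorial: cross user personas with edge-case intents so every pairing becomes a test transcript seed. The sketch below is an assumed, simplified version of that idea — the persona and intent lists are invented for illustration.

```typescript
// Hypothetical sketch of combinatorial synthetic-case generation for
// guardrail testing: every persona is paired with every edge-case intent.

const personas = ["terse", "rambling", "off-topic"];
const intents = [
  "asks for pricing mid-flow",
  "uploads an unsupported file type",
  "tries to change an earlier answer",
];

interface SyntheticCase {
  persona: string;
  intent: string;
}

function generateCases(): SyntheticCase[] {
  const cases: SyntheticCase[] = [];
  for (const persona of personas) {
    for (const intent of intents) {
      cases.push({ persona, intent });
    }
  }
  return cases;
}
```

Each case then seeds a generated conversation, so coverage grows multiplicatively as new personas or intents are added.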
The modular system message configuration ended up being more valuable than anticipated. Clients want the assistant to sound like them, not like a generic bot — making that trivially configurable removed a recurring friction point in onboarding new deployments.
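Modular system-message configuration can be as simple as composing a per-client brand profile into the prompt at conversation start. This is a minimal sketch under assumed field names (`BrandProfile`, `tone`, `doNotSay`); the real configuration surface is richer.

```typescript
// Hypothetical sketch of modular system-message configuration: a per-client
// brand profile is composed into the assistant's system prompt so each
// deployment sounds like the client rather than a generic bot.

interface BrandProfile {
  name: string;       // client-facing assistant name
  tone: string;       // e.g. "friendly and concise"
  doNotSay: string[]; // phrases the client has asked the assistant to avoid
}

function buildSystemMessage(brand: BrandProfile): string {
  return [
    `You are ${brand.name}, an assistant handling inbound enquiries.`,
    `Tone: ${brand.tone}.`,
    brand.doNotSay.length > 0
      ? `Never use these phrases: ${brand.doNotSay.join(", ")}.`
      : "",
  ]
    .filter(Boolean)
    .join("\n");
}
```

Because the profile is plain data, onboarding a new deployment means editing configuration rather than prompt code.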