Kite

An AI-first ecommerce platform for small business owners

role Lead Product Designer
timeline 2025 – 2026
platform Web · SaaS
product Visit site ↗
AI · Conversational Interface · E-Commerce · No-Code · Website Builder
Conversation interface + live store preview

Some products start with a clear brief. Kite wasn't one of them.

In spring 2025, Appsmith made the decision to stop iterating on its existing low-code platform and build something entirely new. The brief was intentionally open: create an AI-first product that could define the company's next chapter. Nobody knew exactly what that product should be. My job, as Lead Product Designer, was to help figure that out.

Ten months later, Kite launched as a prompt-driven site and commerce builder for small business owners — people who need to get online fast, without technical expertise. Getting there required four distinct product bets, each grounded in an honest read of what wasn't working, and each leaving us better positioned for the next.

// my role

I was in the core decision-making loop from the beginning, working directly with leadership, marketing, and product on direction and research. On the design side, I worked closely with one senior designer throughout.

We had a clear division: he owned interaction-level UI, engineering specs, and ran user research sessions. I drove information architecture, visual design direction, research framing, and functional prototyping. I led the Claude Code prototyping work specifically — building interactive prototypes that kept pace with engineering and let us test real interaction patterns rather than static frames. The broader research — competitive analysis, interviews, benchmarking, leadership workshops — I shaped and directed, with execution genuinely shared.

// finding the right problem

Rather than a linear process, this was four consecutive bets. Each one had a hypothesis, a way to test it, and a clear reason we moved on.

// 01 The LCAP-Adjacent Tool
// 02 The Standalone Developer Tool
// 03 The Voice Interface
// 04 Small Business E-Commerce

Bet 01 — The LCAP-Adjacent Tool

We started where Appsmith had credibility: enterprise low-code. The hypothesis was that an AI-first tool could serve as a natural successor, bringing the existing user base along.

The tension was almost immediate. LLMs are open-ended by design — they generate freely, without inherent constraints. LCAP is the opposite: a tightly structured system of widgets, templates, and schemas. Getting a model to operate reliably within that framework produced outputs that were unpredictable and hard to trust. Layered on top of that, the LCAP market was showing no meaningful growth, and larger competitors were almost certainly pursuing the same idea. We moved on.

In six prototype sessions, users abandoned the AI-generated outputs and rebuilt them manually every time.

Bet 02 — The Standalone Developer Tool

The second direction decoupled us from LCAP entirely: a developer-focused tool for building internal apps through prompting alone. Lighter, faster, no platform baggage.

This felt tractable until we looked carefully at the actual buyer. Enterprise teams — the ones most in need of internal tooling — had requirements a lightweight LLM-powered tool couldn't credibly meet: security compliance, backend reliability, deep systems integration. Meanwhile, the vibe-coding space was proliferating fast and we had no clear point of differentiation. No defensible position.

Bet 03 — The Voice Interface

If we couldn't win on product category, could we win on interaction model? The idea was a voice-driven builder — users could literally speak their way through creating a tool or site in real time.

We tested it. Users were genuinely delighted. The first experience was strong, and it felt like nothing else on the market.

But the cracks appeared at scale. Voice introduced dependency on a third-party platform, adding latency and failure points outside our control. More fundamentally, we had to be honest about what we'd built: a novel interaction layer over a product that still lacked a focused use case. First-impression delight isn't a retention strategy.

"I can't believe I just built that by talking."

Bet 04 — Small Business E-Commerce

By this point, something useful had happened almost as a side effect: we had a capable, well-tested LLM-driven site builder. The question was no longer what to build — it was who to build it for.

We needed a market with a genuine gap, users whose needs matched what we already had, and enough space to move fast. We found it in small business e-commerce: solo operators and side-hustle owners trying to get an online store and a basic digital marketing presence off the ground, without technical help. These users had urgent, practical needs and were being underserved. Existing platforms were bolting AI on as an afterthought, not building around it. We had the product. We just needed to focus it.

[X]% of SMB owners had no existing e-commerce presence · Secondary research
[X]× higher purchase intent vs. developer-focused alternatives · User interviews · n=[X]
[X]% of SMBs using AI tools were dissatisfied with AI integration quality · Survey · n=[X]

// execution

Early process — whiteboard / Figma exploration

Over the following six months, I led the information architecture, visual design direction, prototyping, and user testing that took this market insight to a shippable product. Research continued throughout — validating assumptions with target users, testing flows, and refining based on what we learned.

Prototyping with Claude Code remained central. The builds weren't polished, but they were real enough that users could respond to them honestly. The gap between a design file and a working prototype is, in some ways, the gap between a question and an answer.

Conversation interface — the main interaction surface: a prompt input alongside a live store preview.

Product catalogue — bulk import with AI-assisted categorisation, reviewed before publishing.

Store output — a generated storefront, produced from conversation and editable in-place.

Publish flow — domain setup and go-live confirmation; the moment that closes the loop.

// key decisions

Conversation as the primary input

We explored several interaction models before committing — form-based setup, template galleries, drag-and-drop. The deciding factor was discoverability. New users didn't know what was possible, and a blank canvas gave them nothing to react to. Conversation let the product guide users through decisions they didn't yet know they needed to make.

Rejected model (left) vs. conversation interface (right)

Live preview over static output

Rather than showing a generated result at the end of a flow, we built a live preview that updated with each AI response. This kept users anchored in their actual product rather than a hypothetical version of it, and reduced the anxiety common to AI generation tools where the reveal moment carries too much weight.

Input on the left, live output updating on the right.

// outcomes

[X]% reduction in time to first published store
[X]× increase in completed onboarding flows
+[X] NPS points for the builder experience

Measured across the first cohort of users post-launch.

Note: final design files and prototypes are covered under NDA.

// reflection

The most valuable thing this project taught me wasn't a design skill — it was how to operate with clarity inside genuine uncertainty. When the brief is open-ended and the direction keeps changing, the risk is that the team either freezes or starts thrashing. The antidote to both is treating each phase as a real experiment with a real hypothesis. When you frame a direction as "here's what we believe, here's how we'll test it," a pivot stops feeling like failure and starts feeling like learning. That reframe changed how I worked.

It also reinforced something about prototyping. The Claude Code builds we created were rough, but real enough to earn honest reactions from users. That's the thing about fidelity — it only needs to be high enough that the right questions get asked.