Developers often judge an SDK by how quickly they can get up and running. We wanted our SDK to be drop‑dead simple to integrate, yet robust under the hood. In this post, I’ll walk through how we built our embeddable SDK, the reasons behind its architecture, and the little tricks that make it both beginner‑friendly and powerful for seasoned devs. We’ll cover everything from our one‑script initialization to lazy‑loading components, background data with TanStack Query Core, and how our internal folder keeps the public API obvious.
For Context
Onboarding to an API is a journey: get credentials, make the first call, wire auth, confirm webhooks, then expand into real use. Most products leave developers guessing whether they’re on track. Docs are static, dashboards lag the moment you switch tabs, and progress is invisible until “it finally works.” We created this SDK to make that journey visible in real time — right where developers are working.
We didn’t take the usual “just ship a React package” route. Framework‑specific SDKs often require providers, hooks, and build setup, which adds friction and ties releases to each ecosystem. By leaning on web standards, we keep the integration copy‑paste simple and portable across stacks while still playing nicely with React/Vue via standard props and DOM events. That choice lets API teams surface onboarding progress anywhere their users are, without forking SDKs per framework.
Embeddable by Design – One Script, One Tag
The first goal was simplicity. We didn’t want users to wrestle with frameworks or complex build pipelines. Instead, we opted for a small initialization script and a custom HTML element. To connect to Tailbits, you import the `init` function from a CDN (unpkg) and drop a `<tb-journey>` tag into your page. That’s it. For example:
```html
<!-- Include the SDK via unpkg -->
<script type="module">
  import { init } from 'https://unpkg.com/tailbits-js';

  init({
    projectId: 'your-project-id',
    environmentId: 'your-unique-identifier',
  });
</script>

<!-- Place the journey element in your HTML -->
<tb-journey journey-id="your-journey-id" step="login">
  <h3>Login to our dashboard</h3>
</tb-journey>
```
This snippet is essentially all a developer needs to start using Tailbits Journeys. The call to `init(...)` globally configures the SDK, sets up a resilient WebSocket (via reconnecting-websocket), and registers a lazy element loader. The `<tb-journey>` element renders immediately and receives live journey progress updates.
Why a custom element? Custom Elements (Web Components) let us encapsulate our widget’s HTML and logic, so it can live on any page without conflicts. They’re also a perfect fit for lazy loading, meaning we can load their implementation on demand for better performance. By choosing a web component (built with Lit), we ensure that integrating our SDK is as easy as adding a `<div>`. No React/Angular/Vue or build config required on your end.
Feature Modules and Lazy Loading
Under the hood, our SDK is organized into features that are loaded only when needed, so we can add new features without bloating your bundle. Our solution is a small registry that lazy‑loads each feature implementation based on the tag name it observes on the page:
```typescript
const features = import.meta.glob('./features/*-*.ts');

for (const [path, loader] of Object.entries(features)) {
  const tag = featureName(path); // e.g., './features/tb-journey.ts' -> 'tb-journey'
  if (tag) registerTagLoader(tag, loader);
}

// The registry observes the DOM and loads implementations
// when it sees a matching tag (e.g., <tb-journey ...>)
initRegistry();
```
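The `featureName` call is the interesting bit: it turns a module path into a custom-element tag. A minimal version might look like this (a sketch; our actual implementation may differ):

```typescript
// Sketch of featureName (illustrative, not the SDK's exact code):
// map './features/tb-journey.ts' to the tag 'tb-journey'. The glob
// pattern '*-*.ts' already guarantees a hyphen, which custom-element
// names require.
function featureName(path: string): string | undefined {
  const file = path.split('/').pop() ?? '';
  const match = /^([a-z0-9]+(?:-[a-z0-9]+)+)\.ts$/.exec(file);
  return match?.[1];
}
```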
When a `<tb-journey>` element appears in the DOM, the registry `import()`s its implementation on the fly and defines the element. This keeps initial page loads light and fast. We only ship a small loader and whatever the page actually uses. Everything else stays on the shelf until needed. If you know which elements you’ll render, you can also eagerly load them via `preload: ['tb-journey']` during `init(...)`.
Lazy loading like this improves performance and keeps memory usage in check. If a page only uses `tb-journey`, it never pays the cost (in bandwidth or execution time) of other elements. Add another element later, and the SDK fetches it on demand. Integration stays “one‑line simple,” while the codebase remains modular.
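The registry’s bookkeeping boils down to “load each tag’s implementation at most once.” Stripped of the MutationObserver wiring, the core idea is roughly this (names and structure are illustrative, not the real code):

```typescript
// Sketch of the lazy registry's core. The real registry also watches
// the DOM with a MutationObserver; this shows only the dedup logic.
type Loader = () => Promise<unknown>;

const loaders = new Map<string, Loader>();
const inFlight = new Map<string, Promise<unknown>>();

function registerTagLoader(tag: string, loader: Loader): void {
  loaders.set(tag, loader);
}

// Called whenever a matching tag is seen (or preloaded): the dynamic
// import() runs once per tag, no matter how many elements appear.
function ensureLoaded(tag: string): Promise<unknown> | undefined {
  const loader = loaders.get(tag);
  if (!loader) return undefined;
  let pending = inFlight.get(tag);
  if (!pending) {
    pending = loader();
    inFlight.set(tag, pending);
  }
  return pending;
}
```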
Shared State with TanStack Query Core
One challenge with embedding a complex widget is data management. We use @tanstack/query-core under the hood to handle data fetching, caching, and background freshness, with real‑time invalidation from a resilient WebSocket connection. Even though the SDK is framework‑agnostic, components share a single QueryClient instance, so data stays consistent across elements without manual wiring.
For example, consider multiple `tb-journey` elements on the same page showing different steps of the same journey. They all subscribe to the same underlying query. When the server pushes a `step_update` over the WebSocket, we invalidate the relevant query key (e.g., `['journey', id]`), and all elements see the latest status immediately. No custom plumbing required.
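To make that fan-out concrete, here is a deliberately tiny sketch of key-based invalidation (this is what @tanstack/query-core handles for us, vastly simplified; the names are illustrative):

```typescript
// Tiny sketch of key-based invalidation fan-out. @tanstack/query-core
// does this (plus caching, retries, and staleness) for real.
type Refetch = () => void;

const subscribersByKey = new Map<string, Set<Refetch>>();

function subscribeQuery(queryKey: unknown[], refetch: Refetch): () => void {
  const key = JSON.stringify(queryKey);
  const set = subscribersByKey.get(key) ?? new Set<Refetch>();
  set.add(refetch);
  subscribersByKey.set(key, set);
  return () => set.delete(refetch); // unsubscribe when the element unmounts
}

// Called when a step_update arrives over the WebSocket.
function invalidateQuery(queryKey: unknown[]): void {
  subscribersByKey.get(JSON.stringify(queryKey))?.forEach((fn) => fn());
}
```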
Inside the `tb-journey` element, this looks roughly like:
```typescript
// Subscribe to journey data and react to changes
this.result = this.query(getJourneyQueryOptions(this.journeyId), (next) => {
  this.result = next;
  this.status = this.getCurrentStepStatus();
  this.emit('tb-step-update', {
    journeyId: this.journeyId,
    stepId: this.step,
    status: this.status,
  });
});

// Wire up real-time invalidation
this.use(subscribeStepUpdate(this.journeyId));
```
Queries are configured for background freshness and polling where appropriate, so you get a solid baseline without writing any fetch or caching glue.
Another benefit is lifecycle management. We built a tiny helper into our base element so that when a feature mounts or unmounts, it doesn’t leave orphaned subscriptions or ongoing requests. If a feature is removed, its queries are cancelled or simply no longer refetched, avoiding memory leaks or console warnings. TanStack Query naturally cancels fetches on unmount, and our helper nudges it along where needed. The result is that each feature can focus on what data it needs, and the Query library handles how to fetch and update that data over time. This keeps our feature code simple and declarative – we declare “give me the journey progress” and don’t worry about the rest.
For developers using our SDK, all this complexity is hidden. They don’t need to know we’re using TanStack Query at all. But they will appreciate that the widget feels snappy and in sync (e.g., no stale status, and no manual refresh needed) because under the hood a lot of smart caching is happening.
Typed API Client with Orval
To keep our data layer type‑safe and low-maintenance, we generate a tiny HTTP client from our OpenAPI spec using Orval. The generated client lives under `src/internal/api/client.ts` and plugs into our `ky`‑based fetcher. By filtering on operation IDs, we only generate the code we actually need, which keeps the output focused and the bundle small.
In practice, queries call the generated methods directly. For example, the journey query wires to the `getProgress` operation from the spec:
```typescript
// src/internal/api/query.ts
import { getProgress } from './client';

export function getJourneyQueryOptions(id: string) {
  const cfg = getConfig();
  return defineQueryOptions({
    queryKey: ['journey', id],
    refetchInterval: 30_000,
    queryFn: () => getProgress(cfg.projectId, id, cfg.environmentId),
  });
}
```
This gives us end‑to‑end typing (params, responses, and errors) without hand‑writing API calls, and it aligns neatly with our Query setup. Because the client is internal, we can regenerate or reorganize it freely without affecting the public SDK surface.
Public API vs Internal Implementation
When maintaining an SDK, it’s crucial to distinguish between the public API (what our users rely on) and the internal implementation details. We decided to make this distinction crystal clear in our repo by using a `src/internal/` folder. Any file inside `src/internal/` is not part of the public surface – it’s like a private utility or secret sauce that we can change as needed without worrying about breaking someone’s integration.
This idea is inspired by the convention used in other ecosystems, like Go’s internal packages. As the Go team puts it: code in an internal directory is effectively hidden from external consumers, so you’re “free to refactor its API and move things around without breaking external users”. In our TypeScript project, we don’t have a compiler enforcing the rule, but the folder name is a signal. If you see `src/internal/api/client.ts`, you know it’s not meant for you to import in your app – it’s something we might swap out or change in the next version.
Practically speaking, this approach has given us a lot of freedom. We can rewrite an internal module, improve an algorithm, or restructure how features communicate, all without impacting our users – as long as the public API (e.g. the init function, the custom element attributes, events, etc.) stays consistent. It also helps new contributors (and our future selves) navigate the codebase: the boundary between what’s public and what’s private is visible at a glance in the project structure.
By consciously not exposing everything, we avoid accidental usage of private helpers. It’s a form of self-discipline that pays off in maintainability. One of the worst feelings as a library author is discovering that users relied on some undocumented function you never meant to publicize. Marking internals keeps everyone on the same page about what’s officially supported.
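Tooling can back the convention up. One option (a sketch assuming a standard Node package layout; our real manifest may differ) is a package.json exports map that exposes only the public entry point, so Node and bundlers refuse deep imports into internals:

```json
{
  "name": "tailbits-js",
  "type": "module",
  "exports": {
    ".": "./dist/index.js"
  }
}
```

With only `"."` exported, an import like `tailbits-js/internal/api/client` fails to resolve, turning the folder-name signal into an enforced boundary.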
Conclusion
Building this SDK required balancing ease of use with solid engineering. On the surface, it’s just a snippet and a custom tag – any junior dev can copy-paste and get it working in minutes. But beneath that simplicity, we’ve layered an architecture that keeps things fast, modular, and reliable. We lazy-load pieces to trim bloat and boost performance. We use a shared data store (TanStack Query) to keep multiple pieces in sync effortlessly. And we keep our internals decoupled from the public interface so we can evolve and refactor with confidence.
In the end, these choices weren’t about using flashy tech for its own sake — we leaned on web standards that endure. By building with browser primitives like Custom Elements and MutationObserver, we avoid framework churn and reduce breakage while still delivering a snappy, lightweight integration. By drawing a clear line between internal code and the public API, and by architecting for lazy loading and solid state management, we’ve set ourselves up for long-term success. And perhaps most importantly, we didn’t sacrifice the newcomer experience to get there – a new user can still integrate our SDK without reading a novel of docs.
Building an SDK this way has been an exercise in hiding the complexity behind an API that feels simple. If we did our job right, developers integrating it won’t even think about all these considerations – it just feels straightforward. And that, honestly, is how it should be. They can plug it in and trust that under the hood, the toolkit is doing the smart things automatically.
From one developer to another, I find that pretty amazing.