
Why Your TypeScript Types Keep Lying to You — And How to Fix Them
Most developers think TypeScript's type system is there to catch bugs at compile time. That's only half the story, and the missing half is why your types silently betray you when your application hits production. The real power of TypeScript isn't just in preventing errors; it's in modeling your domain so accurately that impossible states become unrepresentable. When you treat types as mere annotations rather than executable contracts, you end up with code that compiles fine but fails spectacularly at runtime.
This post explores how to build type-safe TypeScript applications that don't just pass the compiler — they actually behave correctly when real data (the messy, malformed kind) hits your functions. We'll look at runtime validation, branded types, and the often-misunderstood intersection of static and dynamic type checking.
Why Does TypeScript Compile Code That Crashes at Runtime?
The gap between compile-time types and runtime reality is where most TypeScript pain lives. TypeScript performs structural typing — it checks shapes, not origins. This means an object that "looks like" a User will be accepted as a User, even if it came from an untrusted API response with missing fields or unexpected nulls.
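Structural acceptance is easy to see in a tiny sketch (the names here are illustrative, not from the examples below):

```typescript
interface User {
  id: number;
  email: string;
}

function greet(user: User): string {
  return `Hello ${user.email}`;
}

// This object was never declared as a User, and even carries an
// extra property, but its shape matches, so TypeScript accepts it.
const fromApi = { id: 1, email: 'a@b.co', extra: 'ignored' };
console.log(greet(fromApi)); // "Hello a@b.co"
```

Convenient for plain objects, dangerous for data you didn't construct yourself.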
Consider this deceptively simple function:
interface User {
  id: number;
  email: string;
  profile: {
    name: string;
    avatar?: string;
  };
}
function sendWelcomeEmail(user: User) {
  return fetch('/api/welcome', {
    method: 'POST',
    body: JSON.stringify({
      email: user.email,
      name: user.profile.name
    })
  });
}
This compiles perfectly. But what happens when user.profile is null because the API returned a partial record? Or when email is undefined due to a database migration mismatch? TypeScript assumes your types match reality — it doesn't verify that they do.
The fix isn't to add more any types (please, never do that) or to disable strict mode. The fix is to acknowledge that external boundaries — API responses, localStorage, user input — are untrusted until proven otherwise. You need runtime validation at these edges.
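Before reaching for a library, a boundary check can be a hand-rolled type guard. A minimal sketch, assuming a User shape like the one above but with profile allowed to be null, since that is exactly the case that bites:

```typescript
interface User {
  id: number;
  email: string;
  profile: { name: string; avatar?: string } | null;
}

// A type guard: if it returns true, TypeScript narrows `value` to User.
function isUser(value: unknown): value is User {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  const profileOk =
    v.profile === null ||
    (typeof v.profile === 'object' &&
      v.profile !== null &&
      typeof (v.profile as Record<string, unknown>).name === 'string');
  return typeof v.id === 'number' && typeof v.email === 'string' && profileOk;
}

const raw: unknown = JSON.parse('{"id":1,"email":"a@b.co","profile":null}');
if (isUser(raw)) {
  console.log(raw.email); // narrowed: raw is a User here
}
```

Guards like this get tedious fast, which is exactly the gap schema libraries fill.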
How Can You Validate Data Without Losing Type Safety?
The cleanest approach combines static types with runtime validators. Libraries like Zod, Valibot, or ArkType let you define schemas that generate TypeScript types — not the other way around. This ensures your runtime checks and compile-time types never drift out of sync.
Here's how you might rewrite that User interface using Zod:
import { z } from 'zod';

const UserSchema = z.object({
  id: z.number(),
  email: z.string().email(),
  profile: z.object({
    name: z.string().min(1),
    avatar: z.string().url().optional()
  }).nullable()
});
type User = z.infer<typeof UserSchema>;
async function fetchUser(id: number): Promise<User> {
  const response = await fetch(`/api/users/${id}`);
  const raw = await response.json();
  // This throws if the data doesn't match — at the boundary
  return UserSchema.parse(raw);
}
Now TypeScript and your running code agree on what a User is. If the API changes and starts returning profile: null instead of omitting the field, you'll know immediately — not when a customer reports a broken welcome email. The .parse() call acts as a contract enforcer: data that passes this checkpoint is guaranteed to match your types throughout the rest of your application.
Some teams worry about the performance cost of runtime validation. In practice, schema validation at API boundaries adds negligible overhead compared to the network request itself. And the cost of not validating — production crashes, corrupted data, security vulnerabilities — dwarfs any micro-optimization concerns.
What's the Deal With Branded Types — And Why Should You Care?
Even with validation, TypeScript's structural typing can bite you. Two interfaces with identical shapes are interchangeable, even when they represent semantically different concepts. This leads to subtle bugs where you pass a userId where a productId was expected — and TypeScript smiles and compiles it anyway.
Branded types solve this by adding a nominal "flavor" to your primitives:
type UserId = string & { __brand: 'UserId' };
type ProductId = string & { __brand: 'ProductId' };
function getUser(id: UserId) { /* ... */ }
function getProduct(id: ProductId) { /* ... */ }
const userId = '123' as UserId;
const productId = '123' as ProductId;
getUser(userId); // ✓ Compiles
getUser(productId); // ✗ Error: ProductId not assignable to UserId
The __brand property doesn't exist at runtime — it's a TypeScript phantom that exists solely to prevent accidental mixing of ID types. Combined with validation functions that return branded types, you get both safety and correctness:
function validateUserId(input: unknown): UserId {
  if (typeof input !== 'string' || !input.match(/^user_/)) {
    throw new Error('Invalid UserId format');
  }
  return input as UserId;
}
This pattern shines in large codebases where ID collisions between domains (users, orders, products, sessions) are a constant source of bugs. It's a small investment in type complexity that pays dividends in reduced debugging time.
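A generic helper keeps the pattern from proliferating as one-off intersections. Brand here is a local utility type, not a library import, and the prefixes are illustrative:

```typescript
// A reusable branding helper; `Brand` is defined locally.
type Brand<T, B extends string> = T & { readonly __brand: B };

type UserId = Brand<string, 'UserId'>;
type OrderId = Brand<string, 'OrderId'>;

// Validators are the only sanctioned way to mint branded values.
function asUserId(raw: string): UserId {
  if (!raw.startsWith('user_')) throw new Error(`Invalid UserId: ${raw}`);
  return raw as UserId;
}

function asOrderId(raw: string): OrderId {
  if (!raw.startsWith('order_')) throw new Error(`Invalid OrderId: ${raw}`);
  return raw as OrderId;
}

const uid: UserId = asUserId('user_42');
// const bad: UserId = asOrderId('order_7'); // ✗ would not compile
```

The runtime value is still a plain string; only the compiler sees the brand.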
How Do You Handle Type-Safe Error Handling?
TypeScript's type system has a blind spot: it doesn't track whether a function can throw. A function returning string might return a string — or it might explode. This makes error handling an exercise in tribal knowledge and defensive coding.
The Result pattern (popularized by Rust and Haskell) brings errors into the type system where they belong:
type Result<T, E> =
  | { success: true; value: T }
  | { success: false; error: E };

function parseConfig(json: string): Result<Config, ParseError> {
  try {
    const parsed = JSON.parse(json);
    const validated = ConfigSchema.parse(parsed);
    return { success: true, value: validated };
  } catch (e) {
    return { success: false, error: new ParseError(e) };
  }
}
Now callers must handle both cases:
const result = parseConfig(rawConfig);

if (!result.success) {
  // TypeScript knows result.error exists here
  logError(result.error);
  return;
}

// TypeScript knows result.value exists here
applyConfig(result.value);
Libraries like neverthrow provide production-ready Result types with helpful methods like .map(), .andThen(), and .unwrapOr(). The goal isn't to eliminate exceptions entirely — system errors like out-of-memory conditions should still throw — but to represent expected failures (validation errors, missing records, network timeouts) in your type signatures.
When failure modes are visible in your function signatures, code reviews become easier. You can see at a glance which paths handle errors and which don't. No more hunting through call stacks to understand what might go wrong.
When Should You Reach for Template Literal Types?
TypeScript 4.1 introduced template literal types, and they're far more powerful than simple string interpolation. You can build types that validate format strings, CSS properties, or routing patterns at compile time.
Want to ensure your translation keys actually exist in your dictionary?
type TranslationKey =
  | 'nav.home'
  | 'nav.about'
  | 'nav.contact'
  | `errors.${'auth' | 'payment' | 'network'}.${string}`;

function t(key: TranslationKey): string {
  // Implementation
}
Now t('errors.auth.invalid_credentials') compiles, but t('errors.unknown.broken') fails because "unknown" isn't in the allowed error categories. This catches typos in your translation keys before they reach users.
Template literal types also enable typed API clients. You can generate routes like /users/:id/posts/:postId where TypeScript enforces that both parameters are present and correctly typed. Tools like tRPC take this further, providing end-to-end type safety between your backend and frontend without code generation.
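As a sketch of what typed routes mean in practice, here is a type that pulls :param names out of a path pattern. The names ExtractParams and buildPath are illustrative, not taken from any particular router:

```typescript
// Recursively collect every `:param` segment into an object type.
type ExtractParams<Path extends string> =
  Path extends `${string}:${infer Param}/${infer Rest}`
    ? { [K in Param | keyof ExtractParams<Rest>]: string }
    : Path extends `${string}:${infer Param}`
      ? { [K in Param]: string }
      : {};

// The params argument is derived from the pattern, so a missing or
// misspelled parameter fails to compile.
function buildPath<P extends string>(
  pattern: P,
  params: ExtractParams<P>
): string {
  return pattern.replace(/:(\w+)/g, (_, name: string) =>
    (params as Record<string, string>)[name]
  );
}

const url = buildPath('/users/:id/posts/:postId', { id: '7', postId: '99' });
console.log(url); // "/users/7/posts/99"
```

Calling buildPath('/users/:id/posts/:postId', { id: '7' }) would be rejected at compile time because postId is missing.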
The Real Value Isn't Fewer Bugs — It's Confidence
All these techniques — runtime validation, branded types, Result patterns, literal types — share a common goal. They shrink the gap between what your code claims to do and what it actually does. This isn't about achieving perfect type safety (TypeScript can't prevent all runtime errors) or writing the most clever types possible.
It's about building systems where refactoring doesn't induce anxiety. Where you can rename a field and trust the compiler to find every reference. Where changing an API response format propagates errors to exactly the places that need updating — not to your error tracking service at 3 AM.
The best TypeScript code doesn't just compile. It communicates intent. It encodes assumptions explicitly. And when those assumptions are violated — because APIs change, databases fail, or users do unexpected things — it fails loudly and locally, not silently and far downstream.
"TypeScript doesn't make your code correct. It makes your code's assumptions visible — so you can decide if they're the right ones."
Start at your boundaries. Validate everything that crosses them. Brand your IDs. Handle errors explicitly. Your future self — debugging a production issue at midnight — will thank you.
