Structured logging cover

Structured Logging for AI Debugging

When an AI coding assistant tries to help you debug a production issue, it reads your logs. If your logs are scattered console.log calls with inconsistent formatting, the AI can’t help you: it doesn’t know which log lines belong to the same request, what the timing was, or what the error context means. Evlog is a structured logging library by Hugo Richard, designed around the “wide event” pattern: one structured event per request, with all context attached. I’ve been using it in my projects, and it’s particularly useful when debugging with AI tools, because the log output is machine-readable by design. ...
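The “wide event” idea can be sketched in a few lines: accumulate context fields over the life of a request, then emit a single structured JSON line at the end. This is an illustrative sketch only, not evlog’s actual API; the `RequestEvent` class and its field names are hypothetical.

```typescript
// Sketch of the "wide event" pattern (hypothetical API, not evlog's):
// one event object per request, context attached as it becomes known,
// one JSON log line emitted when the request finishes.
type WideEvent = Record<string, unknown>;

class RequestEvent {
  private fields: WideEvent = {};
  private start = Date.now();

  // Attach a piece of context to this request's event.
  set(key: string, value: unknown): void {
    this.fields[key] = value;
  }

  // Emit the whole request as a single machine-readable line.
  emit(): string {
    const line = JSON.stringify({
      ...this.fields,
      duration_ms: Date.now() - this.start,
    });
    console.log(line);
    return line;
  }
}

// Usage: everything about the request lands in one log line,
// so a tool (or an AI assistant) can correlate it trivially.
const event = new RequestEvent();
event.set("request_id", "req_123");
event.set("user_id", 42);
event.set("route", "/api/orders");
const line = event.emit();
```

The payoff is that a single line carries the request ID, user, route, and timing together, instead of forcing a reader to stitch those facts back together from scattered console.log output.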

March 5, 2026 · 4 min · Muhammad Hassan Raza
Model Context Protocol diagram

Model Context Protocol: Why This Matters More Than You Think

Every few months, something gets released that looks like infrastructure plumbing but turns out to matter more than the flashy launches. Model Context Protocol (MCP) is one of those things. If you’re a developer working with LLMs, MCP will change how you integrate AI into your workflows. Here’s an early-adopter perspective on what it is, why it matters, and how to actually use it.

What Problem Does MCP Solve?

Today’s AI tools are context-starved. You paste code into ChatGPT, upload files to Claude, manually copy database schemas into prompts. Every session starts from scratch. Every context window is a blank slate. ...

November 15, 2025 · 7 min · Muhammad Hassan Raza
Extended thinking interaction model diagram

Extended Thinking in LLMs: A Mental Model for Developers

Extended thinking isn’t just “model thinks longer”; it’s a fundamentally different interaction model. If you’re prompting extended thinking models (Claude Opus, o1) the same way you prompt standard models, you’re leaving most of the value on the table. This post is a developer’s mental model for working with these systems: when to use them, how to prompt them, and what trade-offs to expect.

How Extended Thinking Actually Works

Standard LLMs generate tokens one at a time, each token conditioned on everything before it. The model “thinks” only as fast as it speaks. Ask it to solve a complex problem, and it often commits to an approach in the first few tokens, then rationalizes that approach even if it’s wrong. ...

September 25, 2025 · 7 min · Muhammad Hassan Raza
Claude Opus 4.5 context hierarchy diagram

Claude Opus 4.5: When an AI Finally Gets It

I’ve been skeptical of every “game-changing AI release” for the past two years. Every few months, a new model drops and Twitter explodes with claims that AGI is here. Spoiler: it never is. But when Anthropic released Opus 4.5, something actually shifted in how I work. Not because it’s AGI—it’s decidedly not—but because it’s the first model that consistently delivers on complex, multi-step reasoning without falling apart halfway through. This isn’t a hype piece. This is a practitioner’s field notes from someone who uses these tools daily to ship product at Entropy Labs. ...

May 15, 2025 · 5 min · Muhammad Hassan Raza