WebDevPro #137: Why Blocking Code Breaks Node.js Performance
Crafting the Web: Tips, Tools, and Trends for Developers
Catch the latest HubSpot Developer Platform updates in Spring Spotlight
Spring Spotlight 2026 is live, and we’ve rounded up the top updates for developers. See what’s new for the HubSpot Developer Platform! Ship faster with AI coding tools like Cursor, Claude Code, and Codex.
Build MCP-powered AI connectors, run serverless functions with support for UI extensions, and use date-based versioning to streamline roadmap planning.
Welcome to this week’s issue of WebDevPro!
Node.js is often chosen for its ability to handle concurrent workloads efficiently. Its event-driven, non-blocking architecture allows applications to process multiple operations without waiting for each one to complete. This model is particularly effective in I/O-heavy systems where responsiveness matters more than sequential execution.
Because of this, there is a widespread assumption that Node.js applications will perform well under load by default. In practice, that assumption depends on one critical condition: the application must remain non-blocking. Once blocking behavior is introduced, the system begins to behave very differently.
Blocking code does not always cause immediate failures. In development environments, where concurrency is limited, the system may appear stable. Requests complete, responses are returned, and nothing seems obviously wrong. This creates a false sense of confidence in how the application will behave in production.
Under real-world conditions, however, multiple operations occur simultaneously, and the event loop becomes a shared dependency across all of them. At that point, the cost of blocking becomes visible. Delays accumulate, responsiveness drops, and the system begins to struggle under load. The issue is not simply that blocking code exists, but how it interacts with the event loop and how that interaction scales.
What blocking means in practice
In Node.js, all JavaScript execution happens on a single thread, coordinated by the event loop. The event loop continuously processes tasks by retrieving them from a queue, executing them, and delegating work when possible so that execution can continue without interruption. This model works efficiently because it assumes that tasks will be short-lived and non-blocking.
Blocking code violates this assumption. When a blocking operation is executed, the event loop cannot proceed to the next task until the current one completes. This does not simply delay a single operation. It prevents all other pending tasks from executing during that time, effectively pausing the system’s ability to make progress.
The impact is broader than it first appears. Callbacks that are ready to run remain in the queue. Timers do not fire at expected intervals. Incoming requests must wait before they can even begin execution. What appears to be a local delay becomes a system-wide bottleneck.
This shared delay is what makes blocking code particularly problematic in Node.js. In systems with multiple threads, delays can be absorbed or distributed. In Node.js, the delay is centralized. A single blocking operation affects everything else that depends on the event loop.
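This centralizing effect is easy to reproduce. In the sketch below, `busyWait` is a hypothetical stand-in for any synchronous, CPU-bound task; a timer due in 10 ms cannot fire until a 200 ms blocking loop releases the thread:

```javascript
// busyWait: a hypothetical stand-in for any synchronous, CPU-bound task.
function busyWait(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // holds the main thread the entire time
}

const scheduled = Date.now();
setTimeout(() => {
  // Due after 10 ms, but it can only fire once the loop is free again.
  console.log(`timer fired ~${Date.now() - scheduled} ms after scheduling`);
}, 10);

busyWait(200); // one blocking call delays every pending task, not just its own
```

Running this prints a delay of roughly 200 ms rather than 10 ms: the timer callback was ready, but the event loop had no opportunity to service it.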
Why synchronous APIs are risky in Node.js
Synchronous APIs are often attractive because they simplify reasoning about code. Execution flows in a straight line, making it easier to follow and debug. This simplicity can be beneficial in scripts or isolated tasks where concurrency is not a concern.
In an event-driven system like Node.js, however, synchronous APIs introduce a significant limitation. Because they execute on the main thread, they block the event loop until they complete. During this time, no other operations can be processed, regardless of how unrelated they may be.
This creates a disconnect between how the code appears and how it behaves under load. The code may look efficient and predictable, but its execution forces all operations into a sequential pattern. Instead of handling multiple tasks concurrently, the system processes them one at a time.
The result is reduced flexibility in how work is handled. As more operations depend on the event loop, the impact of synchronous APIs becomes more pronounced. What begins as a simple design choice can evolve into a system-wide constraint.
CPU-intensive work and hidden blocking
Blocking behavior is not limited to I/O operations. CPU-intensive tasks can have an even greater impact, particularly when executed synchronously. These operations consume the main thread for their duration, preventing the event loop from processing other tasks.
The book illustrates this with a cryptographic key generation example, where a synchronous function is executed repeatedly. Each iteration runs on the main thread, and the cumulative effect is a prolonged period during which the event loop is unavailable.
const { generateKeyPairSync } = require("node:crypto");
const { performance } = require("node:perf_hooks");

performance.mark("start-sync");
for (let i = 0; i < 10000; i++) {
  // Each iteration runs entirely on the main thread.
  generateKeyPairSync("rsa", { modulusLength: 1024 });
}
performance.mark("end-sync");
performance.measure("generateKeyPairSync", "start-sync", "end-sync");
In this scenario, the cost is not just the execution time of a single operation, but the accumulation of blocking across many iterations. The event loop remains occupied for the entire duration, preventing any other work from progressing.
const { generateKeyPair } = require("node:crypto");
const { performance } = require("node:perf_hooks");

performance.mark("start-async");
for (let i = 0; i < 10000; i++) {
  // Key generation is delegated to the thread pool; the callback runs
  // once each pair is ready, so the event loop stays available.
  generateKeyPair("rsa", { modulusLength: 1024 }, () => {});
}
performance.mark("end-async");
performance.measure("generateKeyPair", "start-async", "end-async");
The asynchronous version changes how the work is executed. Instead of occupying the main thread, the operations are delegated, allowing the event loop to remain available. This difference has a direct impact on system responsiveness, especially under load.
The performance gap is not theoretical
The difference between synchronous and asynchronous execution is not just conceptual. The book demonstrates that the impact can be measured and observed in practice. The synchronous version of the operation takes significantly longer and blocks execution entirely.
The asynchronous version, by contrast, allows the system to continue processing other tasks while the work is being performed. This leads to better utilization of system resources and improved responsiveness.
This distinction changes how performance should be evaluated. In Node.js, performance is not only about how quickly a single operation completes. It is about whether the system can continue to process other work during that time.
A fast operation that blocks the event loop can still degrade system performance if it prevents other tasks from progressing. Conversely, a slower operation that does not block may have less overall impact on system responsiveness.
Why this matters more under load
Blocking behavior becomes more problematic as concurrency increases. In low-load environments, tasks are processed with minimal overlap, and delays may not be noticeable. The system appears stable because there is little competition for the event loop.
As the number of concurrent operations grows, the situation changes. Multiple tasks begin to depend on the event loop at the same time. Each task expects to be processed in a timely manner, and blocking operations disrupt this expectation.
When a blocking operation runs, it prevents the event loop from servicing other tasks. These tasks begin to accumulate in the queue, increasing wait times and reducing overall throughput. The system becomes less responsive as more work is added.
This is why blocking issues often appear only under load. The system needs sufficient concurrency for the delays to become visible. Once that threshold is reached, the impact becomes more pronounced, and performance begins to degrade more rapidly.
Async wrappers do not remove blocking
A common misconception is that using asynchronous syntax automatically makes an operation non-blocking. Wrapping a function in async/await or a callback does not change how the underlying work is executed.
The book emphasizes that async syntax controls how results are handled, not how the work itself is performed. If the underlying operation is synchronous and CPU-intensive, it will still block the event loop.
This distinction highlights the difference between code structure and execution behavior. A function may appear asynchronous in form while still behaving in a blocking manner in practice.
Understanding this difference is essential for identifying performance issues. Changing syntax without addressing the underlying execution does not resolve the problem.
Reducing work instead of changing execution
In some cases, improving performance is not about changing how an operation executes, but reducing how often it runs. The book demonstrates this with an example where repeated computation is replaced with a caching approach.
Instead of recalculating an expensive result for every request, the system stores the result and updates it periodically. This reduces the number of times the operation is executed, lowering the load on the event loop.
const express = require("express");

const app = express();
let cachedSignature = null;

const signingMiddleware = (_req, res, next) => {
  if (!cachedSignature) {
    console.info("Signature is not cached");
    // signData is the example's signing helper: compute the signature
    // once, then refresh the cached value every 10 seconds.
    cachedSignature = signData(Date.now().toString());
    setInterval(
      () => (cachedSignature = signData(Date.now().toString())),
      10000
    );
  }
  res.setHeader(
    "X-Signature",
    `data=${cachedSignature.data.toString()};kid=${cachedSignature.keyId};sha512=${cachedSignature.signature}`
  );
  next();
};
This approach shifts the focus from execution style to execution frequency. By reducing repeated work, the system becomes more efficient without changing how the operation itself is implemented.
The real trade-off
Blocking code is often easier to write and reason about. Its linear execution model provides clarity and predictability, making it attractive in many situations.
However, this simplicity comes at a cost in systems that rely on concurrency. Blocking operations limit the ability of the event loop to process multiple tasks efficiently, reducing overall system responsiveness.
The trade-off is not simply between synchronous and asynchronous code. It is between local simplicity and system-wide performance. A piece of code that is easy to understand in isolation may introduce constraints that affect the entire application.
As systems scale and concurrency increases, this trade-off becomes more significant. Decisions that seem minor at the code level can have substantial impact at the system level.
Key Takeaways
Blocking code prevents the event loop from processing other tasks, creating system-wide delays.
Synchronous APIs introduce constraints that limit concurrency.
CPU-intensive operations can block execution even when used with async syntax.
Performance in Node.js depends on maintaining event loop availability.
Reducing execution frequency can improve efficiency without changing execution style.
Final Thoughts
Node.js relies on a responsive event loop to handle concurrent operations effectively. Blocking code disrupts this model by introducing delays that affect all tasks, not just the one being executed.
The impact of blocking behavior is not always immediate, but it becomes clear under load. As concurrency increases, the system’s ability to process work efficiently depends on keeping the event loop free.
If you’d like to read more on this, Node.js Design Patterns explores these ideas in depth, especially around asynchronous behavior, performance, and the architectural decisions behind scalable Node.js systems.
This Week in the News
🧠 Tokenmaxxing is the new productivity metric developers are gaming: There’s a new habit showing up in AI-heavy workflows. Teams are starting to track and even optimize for token usage, treating it as a signal of productivity. Gergely Orosz digs into why that framing breaks down quickly. More tokens don’t mean better outcomes, just more input. It’s the same pattern developers have seen before with lines of code and story points, now resurfacing in an AI-shaped form. What’s interesting here is not the trend itself, but how quickly it appeared. Even with new tools, the instinct to measure the wrong thing hasn’t changed.
⚡ TypeScript 7.0 goes 10x faster with a Go rewrite: The TypeScript team has released the 7.0 beta after spending the past year porting the entire compiler to Go. The result is a version that’s roughly 10x faster than TypeScript 6.0, while keeping the type-checking behavior structurally the same. For developers, that means no major migration or new errors to worry about. Just significantly faster builds out of the box.
🟢 Node.js moves toward Temporal API and stabilizes key features: Node.js is preparing to support the Temporal API by default, likely landing in the upcoming v26 release. This brings a modern, more reliable alternative to JavaScript’s existing Date handling into the runtime. At the same time, Node.js 24.15.0 (LTS) marks require(esm) and the module compile cache as stable, and introduces a new --max-heap-size flag. Together, these updates signal continued progress in both runtime capabilities and performance tooling.
📧 React Email 6 simplifies a fragmented ecosystem: React Email 6 introduces a major update focused on cleaning up versioning issues across its ecosystem. The release makes it easier to manage dependencies and ensures the CLI and components stay in sync. For teams building email templates with React, this should reduce friction and make the overall workflow more predictable.
Beyond the Headlines
⚡ A new Angular compiler built on Oxc: This post explores an experimental Angular compiler powered by Oxc, a Rust-based toolchain focused on performance. The goal is to significantly speed up builds and modernize the compilation pipeline. It’s another signal that JavaScript tooling is steadily moving toward Rust-based infrastructure to push performance boundaries.
🚨 Investigating fake stars in GitHub repos: This investigation looks into how some GitHub repositories inflate their popularity using fake stars. It highlights how easily perception can be manipulated and why star counts aren’t always a reliable signal of quality. For developers, it’s a reminder to evaluate projects based on code, activity, and community, not just metrics.
🎥 Rethinking modern web architecture: This talk dives into how modern web architecture is evolving, covering trade-offs in performance, complexity, and developer experience. It offers a broader perspective on how current patterns scale in real-world applications. A useful watch if you’re thinking beyond frameworks and into long-term system design.
🧩 Features worth borrowing from npmx: This article breaks down specific ideas from the npmx project that could inspire better developer tools. It highlights practical features that improve how developers explore and interact with packages. The takeaway is simple. Good tooling often comes from rethinking small UX details.
🛠️ Why app stability matters more than ever: Cursor shares insights into building stable applications, especially in environments where rapid iteration and AI-assisted development are becoming the norm. The focus is on reducing breakage and maintaining reliability as systems evolve. It’s a reminder that speed is valuable, but stability is what keeps users and teams productive.
Tool of the Week
🎬 Add unique animations to your React apps
Building engaging UIs often means going beyond basic transitions. Animata offers a collection of 100+ animation-focused React components, including effects like animated beams, spreading cards, and even a Slack-style intro screen.
It’s a handy resource if you want to add more personality to your UI without building complex animations from scratch.
That’s all for this week. Have any ideas you want to see in the next article? Hit Reply!
Cheers!
Editor-in-chief,
Kinnari Chohan
👋 Advertise with us
Interested in sponsoring this newsletter and reaching a highly engaged audience of tech professionals? Simply reply to this email, and our team will get in touch with the next steps.