
Project Loom (Java)

Virtual threads, structured concurrency, and scoped values -- Java's platform-level approach to lightweight concurrency, built on hidden continuations inside the JVM.

Field          Value
Language       Java 21+
License        GPL-2.0 with Classpath Exception (OpenJDK)
Repository     github.com/openjdk/loom
Documentation  OpenJDK Project Loom
Key Authors    Ron Pressler, Alan Bateman (Oracle)
Approach       JVM-managed virtual threads with hidden delimited continuations

Overview

What It Solves

Java's traditional concurrency model maps each Java thread one-to-one to an OS platform thread. Platform threads are expensive: each requires roughly 2 MB of stack memory and involves kernel-level scheduling. This makes the thread-per-request model -- the natural way to write server applications -- unable to scale beyond a few thousand concurrent connections without resorting to asynchronous frameworks (reactive streams, callbacks, CompletableFuture chains) that sacrifice readability and debuggability.

Project Loom solves this by introducing virtual threads: lightweight threads managed entirely by the JVM that can number in the millions, restoring the simplicity of thread-per-request programming at any scale.

Design Philosophy

Loom's philosophy is conservative integration: rather than exposing new programming models or algebraic effect abstractions, it makes the existing java.lang.Thread API work at scale. Virtual threads are Thread instances -- they work with synchronized, ThreadLocal, try/catch, debuggers, and profilers. The goal is that existing code benefits from virtual threads with minimal or no changes.


Core Abstractions and Types

Virtual Threads (JEP 444 -- Final in Java 21)

Virtual threads are lightweight threads scheduled by the JVM rather than the operating system. They are multiplexed onto a small pool of carrier threads (platform threads managed by a ForkJoinPool):

java
// Create and start a virtual thread
Thread.startVirtualThread(() -> {
    var result = fetchFromDatabase();  // blocks without wasting OS thread
    process(result);
});

// Using the builder API
Thread vt = Thread.ofVirtual()
    .name("worker-", 0)
    .start(() -> handleRequest(request));

// Using an executor (typical server pattern)
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (var request : incomingRequests) {
        executor.submit(() -> handleRequest(request));
    }
}

When a virtual thread blocks on I/O (socket read, file read, Thread.sleep, lock acquisition), the JVM unmounts it from the carrier thread, freeing the carrier to run other virtual threads. When the I/O completes, the virtual thread is remounted onto an available carrier and resumes execution. This is invisible to application code.
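
The effect is easy to demonstrate with final Java 21 APIs. In this minimal sketch (the Unmounting class and runAll method are invented names), 10,000 virtual threads all block in Thread.sleep at once; a platform-thread pool of that size would be prohibitively expensive, but here only a handful of carrier threads are ever occupied:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// 10,000 virtual threads "blocked" simultaneously: each sleeping thread is
// unmounted from its carrier, so the small carrier pool (default: roughly one
// per core) is never exhausted.
class Unmounting {
    static int runAll(int count) throws InterruptedException {
        var done = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            threads.add(Thread.startVirtualThread(() -> {
                try {
                    Thread.sleep(Duration.ofMillis(100)); // unmounts this virtual thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.incrementAndGet();
            }));
        }
        for (Thread t : threads) t.join();
        return done.get();
    }
}
```

No special flags are needed: virtual threads are a final feature in Java 21, and the whole run completes in roughly the duration of a single sleep.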

Structured Concurrency (JEP 453 -- Preview)

StructuredTaskScope treats a group of concurrent subtasks as a single unit of work with well-defined lifecycle guarantees:

java
record UserProfile(User user, List<Order> orders) {}

UserProfile fetchProfile(String userId) throws Exception {
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        StructuredTaskScope.Subtask<User> userTask = scope.fork(() -> findUser(userId));
        StructuredTaskScope.Subtask<List<Order>> ordersTask = scope.fork(() -> fetchOrders(userId));

        scope.join();            // wait for both subtasks
        scope.throwIfFailed();   // propagate first failure

        return new UserProfile(userTask.get(), ordersTask.get());
    }
    // If either subtask fails, the other is cancelled automatically
}

Policies control how the scope responds to subtask completion:

Policy             Behavior
ShutdownOnFailure  Cancel remaining subtasks when any subtask fails
ShutdownOnSuccess  Cancel remaining subtasks when any subtask succeeds

Subtasks forked within a scope run as virtual threads. The scope ensures that no subtask outlives the scope itself, preventing thread leaks.
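
ShutdownOnSuccess-style racing has a close analogue in the long-final ExecutorService.invokeAny, which returns the first successfully completed result and cancels the rest. A sketch under that substitution (Race and fetchFromReplica are invented names; the sleeps simulate network latency):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

// Race two replicas on virtual threads; the first success wins and the
// slower task is cancelled, mirroring ShutdownOnSuccess.
class Race {
    static String fetchFromReplica(String name, long delayMillis) throws InterruptedException {
        Thread.sleep(delayMillis);           // simulated network latency
        return "result-from-" + name;
    }

    static String fetchFastest() throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Callable<String>> tasks = List.of(
                () -> fetchFromReplica("primary", 200),
                () -> fetchFromReplica("secondary", 20));
            return executor.invokeAny(tasks); // first success; others cancelled
        }
    }
}
```

Unlike StructuredTaskScope, this runs on final Java 21 APIs without preview flags, at the cost of the scope's explicit lifecycle guarantees.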

Scoped Values (JEP 464 -- Preview)

Scoped values provide implicit, immutable context propagation through the call stack -- analogous to a Reader effect in algebraic effect systems:

java
private static final ScopedValue<String> CURRENT_USER = ScopedValue.newInstance();

// Bind a scoped value for a bounded region of code
ScopedValue.runWhere(CURRENT_USER, "alice", () -> {
    handleRequest();  // CURRENT_USER.get() returns "alice" here and in all callees
});

// Nested rebinding
void handleRequest() {
    String user = CURRENT_USER.get();  // "alice"
    ScopedValue.runWhere(CURRENT_USER, "system", () -> {
        auditLog();  // CURRENT_USER.get() returns "system"
    });
    // CURRENT_USER.get() returns "alice" again
}

Scoped values improve on ThreadLocal in several ways:

Property      ThreadLocal                    ScopedValue
Mutability    Mutable (set/get)              Immutable per binding scope
Lifetime      Unbounded (manual cleanup)     Bounded to runWhere scope
Inheritance   Copied to child threads        Shared with structured concurrency
Memory leaks  Common (forgotten remove())    Impossible by design
Performance   Hash map lookup                Cached after first access
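
The memory-leak row is straightforward to reproduce with final APIs: a pooled platform thread retains a ThreadLocal binding across unrelated tasks unless remove() is called. A minimal sketch (ThreadLocalLeak is an invented name):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A single pooled worker thread runs two unrelated tasks. Task 1 sets a
// ThreadLocal and "forgets" remove(); task 2 then observes the stale value.
class ThreadLocalLeak {
    private static final ThreadLocal<String> USER = new ThreadLocal<>();

    static String demonstrate() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // one reused worker
        try {
            pool.submit(() -> USER.set("alice")).get();  // task 1: sets, never removes
            Callable<String> read = USER::get;
            return pool.submit(read).get();              // task 2: leaked binding visible
        } finally {
            pool.shutdown();
        }
    }
}
```

A ScopedValue binding, by contrast, cannot outlive its runWhere region, so this failure mode does not exist.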

How Effects Are Declared

Loom does not expose an explicit effect declaration mechanism. Instead, effects are implicit in the JVM's threading model:

  • Blocking I/O is the primary "effect" -- virtual threads yield their carrier automatically on blocking calls
  • Context propagation uses scoped values rather than an explicit Reader effect
  • Concurrency uses structured task scopes rather than explicit Fork/Join effects
  • Error handling uses Java's existing exception mechanism

This is a deliberate design choice: Java developers write ordinary sequential code, and the JVM runtime handles the underlying continuation mechanics transparently.


How Handlers/Interpreters Work

The Hidden Continuation

Internally, virtual threads are implemented using a jdk.internal.vm.Continuation class -- a scoped, stackful, one-shot delimited continuation. This class is not part of the public API:

java
jdk.internal.vm.Continuation
    - yield(ContinuationScope scope)  // suspend execution
    - run()                            // resume execution

When a virtual thread encounters a blocking operation:

  1. The JVM calls Continuation.yield(scope), capturing the current stack
  2. The carrier thread is released to the ForkJoinPool
  3. When the blocking condition resolves, Continuation.run() resumes the virtual thread on an available carrier

This is structurally identical to how algebraic effect handlers work: an effect (blocking I/O) is "thrown" upward and caught by the nearest matching handler (the virtual thread scheduler), which decides how and when to resume the continuation.
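
Although the Continuation primitive itself is hidden, the suspend/resume pattern it implements can be approximated in user code by parking a virtual thread on a SynchronousQueue. A rough sketch (Generator and Emitter are invented names; unlike a real continuation, it offers no termination signal):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.function.Consumer;

// The producer's virtual thread blocks in queue.put(...) until the consumer
// calls next() -- a user-level stand-in for Continuation.yield / run.
class Generator<T> {
    interface Emitter<V> { void emit(V value); }

    private final SynchronousQueue<T> queue = new SynchronousQueue<>();

    Generator(Consumer<Emitter<T>> body) {
        Thread.startVirtualThread(() -> body.accept(v -> {
            try {
                queue.put(v);            // "yield": park until a consumer arrives
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));
    }

    T next() throws InterruptedException {
        return queue.take();             // "resume": unpark the producer
    }
}
```

Because each suspended generator costs only a parked virtual thread (about a kilobyte of stack), this emulation is cheap enough to be practical, which is part of the Loom team's argument that a public continuation API is unnecessary.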

Why Continuations Are Not Public

The Loom team considered exposing continuations as a public API but decided against it for several reasons:

  • Safety: Continuations can violate thread identity (Thread.currentThread() can change mid-method)
  • Complexity: Low-level continuation manipulation is error-prone and rarely needed directly
  • Sufficiency: Virtual threads, structured concurrency, and scoped values cover the primary use cases
  • Compatibility: A public continuation API would be difficult to evolve without breaking changes

The Continuation class remains in jdk.internal.vm and requires --add-exports flags to access directly.


Performance Approach

Virtual Thread Overhead

Virtual threads are extremely lightweight compared to platform threads:

Metric                   Platform Thread   Virtual Thread
Stack memory             ~2 MB (fixed)     ~1 KB (grows as needed)
Creation cost            ~1 ms             ~1 µs
Context switch           Kernel-level      User-level (JVM)
Maximum practical count  ~5,000            Millions

Benchmark Results

Performance gains depend heavily on workload type:

  • I/O-bound workloads: Virtual threads can achieve 8-10x throughput improvements over platform threads under high concurrency, because blocked virtual threads do not consume carrier threads
  • High concurrency (>5,000 connections): Platform threads degrade rapidly; virtual threads maintain consistent performance
  • CPU-bound workloads: Virtual threads offer no advantage and can underperform due to ForkJoinPool scheduling overhead (throughput as low as 50-55% of the platform-thread baseline in some benchmarks)
  • Memory: Virtual threads use roughly 1/100th the per-thread memory of platform threads

Pinning

A virtual thread becomes "pinned" to its carrier when it blocks inside a synchronized block or a native method: its continuation cannot be unmounted, so the carrier stays occupied and the effective pool shrinks. The JVM can report pinning events via -Djdk.tracePinnedThreads=full. In Java 21, replacing long-held synchronized blocks with java.util.concurrent.locks.ReentrantLock eliminates pinning; JEP 491 (delivered in JDK 24) removes the synchronized limitation entirely.
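
A minimal sketch of the recommended Java 21 fix (PinFreeCache is an invented name): guarding a slow critical section with ReentrantLock instead of synchronized, so that waiting virtual threads can unmount:

```java
import java.util.concurrent.locks.ReentrantLock;

// synchronized here would pin the virtual thread to its carrier for the
// whole sleep; ReentrantLock lets it unmount while blocked or waiting.
class PinFreeCache {
    private final ReentrantLock lock = new ReentrantLock();
    private String value;

    String load() throws InterruptedException {
        lock.lock();                // a waiting virtual thread unmounts here
        try {
            if (value == null) {
                Thread.sleep(10);   // simulated blocking I/O inside the critical section
                value = "loaded";
            }
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

The structure is deliberately identical to the synchronized version; only the locking primitive changes.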


Composability Model

Relation to Algebraic Effects

Loom's features map to a subset of what a full algebraic effect system provides:

Algebraic Effect Concept     Loom Equivalent
Async/IO effect              Virtual thread blocking (implicit yield)
Reader effect                ScopedValue
Fork/Join effect             StructuredTaskScope
Error effect                 Java exceptions
State effect                 Not provided (use AtomicReference, etc.)
Nondeterminism               Not provided
Custom user-defined effects  Not provided
Effect handlers (resume)     Not exposed (internal Continuation)

For capability-based I/O in a similar spirit, see Scala's Ox, which combines Java 21 virtual threads with Scala 3's capability system.

Loom provides the three most practically important effects (async I/O, context propagation, structured concurrency) without requiring developers to learn effect system concepts. However, it does not support user-defined effects or custom handlers.

Impact on the Java Ecosystem

Virtual threads reduce the need for reactive frameworks:

  • Before Loom: Libraries like Project Reactor and RxJava were necessary for scalable I/O because platform threads could not scale. These frameworks imposed a callback/stream-based programming model.
  • After Loom: Simple thread-per-request code achieves comparable scalability, making reactive frameworks unnecessary for many use cases. Frameworks like Spring Boot, Tomcat, and Jetty now support virtual threads natively.

However, reactive frameworks still provide value for backpressure, stream processing, and complex event-driven architectures that go beyond simple request-response patterns.
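
The thread-per-request pattern described above reduces to very little code. A sketch of a line-echo server (EchoServer is an invented name) that dedicates one virtual thread per connection:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Thread-per-request: every accepted connection gets its own virtual thread,
// which blocks freely on socket I/O without tying up an OS thread.
class EchoServer implements AutoCloseable {
    private final ServerSocket server;
    private final ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

    EchoServer(int port) throws IOException {
        server = new ServerSocket(port);
        Thread.startVirtualThread(this::acceptLoop);   // one virtual thread accepts
    }

    private void acceptLoop() {
        try {
            while (true) {
                Socket socket = server.accept();
                executor.submit(() -> handle(socket)); // one virtual thread per connection
            }
        } catch (IOException e) {
            // server socket closed; stop accepting
        }
    }

    private void handle(Socket socket) {
        try (socket;
             var in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             var out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line);                     // echo each line back
            }
        } catch (IOException ignored) {
        }
    }

    int port() {
        return server.getLocalPort();
    }

    @Override
    public void close() throws IOException {
        server.close();
        executor.shutdownNow();
    }
}
```

With ten thousand idle connections, this design holds ten thousand blocked virtual threads, not ten thousand platform threads, which is precisely the scalability reactive frameworks were previously needed to achieve.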


Strengths

  • Zero learning curve: Virtual threads are Thread instances; existing Java code works without changes
  • Ecosystem compatibility: Works with debuggers, profilers, thread dumps, existing libraries
  • Massive scalability: Millions of concurrent threads with minimal memory overhead
  • Structured concurrency: Prevents thread leaks and simplifies concurrent error handling
  • Scoped values: Safe, bounded context propagation without ThreadLocal pitfalls
  • Production-ready: Virtual threads are a final feature in Java 21 (LTS)

Weaknesses

  • No user-defined effects: Cannot extend the system with custom effect types or handlers
  • Continuations not exposed: Advanced use cases (generators, coroutines, custom schedulers) cannot be built on top of the continuation primitive
  • Pinning problem: synchronized blocks and native methods prevent virtual thread yielding
  • CPU-bound regression: Virtual threads can underperform platform threads for compute-intensive work
  • Structured concurrency still in preview: StructuredTaskScope and ScopedValue are not yet final features
  • Limited composability: No mechanism to compose effects or transform handlers like algebraic effect systems provide

Key Design Decisions and Trade-offs

  • Hidden continuations -- Rationale: safety and simplicity; avoids exposing an error-prone low-level API. Trade-off: generators, coroutines, and custom effect handlers cannot be built on top.
  • Virtual threads as Thread -- Rationale: backward compatibility; existing code benefits immediately. Trade-off: inherits Thread API baggage; no clean break from the legacy model.
  • ForkJoinPool carriers -- Rationale: work-stealing provides good load balancing for I/O workloads. Trade-off: suboptimal for CPU-bound work; scheduling latency under contention.
  • Immutable scoped values -- Rationale: eliminates ThreadLocal memory leaks and mutation bugs. Trade-off: cannot model mutable state effects; less flexible than a full Reader.
  • Structured task scopes -- Rationale: prevents thread leaks; clear ownership hierarchy. Trade-off: cannot express unstructured concurrency patterns (fire-and-forget).
  • No reactive replacement -- Rationale: Loom complements, rather than replaces, reactive libraries for backpressure and streaming. Trade-off: developers must still choose between models for different use cases.
