WebGPU

Editor’s Draft,

Issue Tracking:
GitHub
Inline In Spec
Editors:
(Mozilla)
(Apple)
(Google)
Participate:
File an issue (open issues)

Abstract

WebGPU exposes an API for performing operations, such as rendering and computation, on a Graphics Processing Unit.

Status of this document

This specification was published by the GPU for the Web Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.

1. Introduction

This section is non-normative.

Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to the Vulkan, Direct3D 12, and Metal native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.

WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via GPUDevice, which manages resources, and the device’s GPUQueues, which execute commands. GPUDevice may have its own memory with high-speed access to the processing units. GPUBuffer and GPUTexture are the physical resources backed by GPU memory. GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands. GPUShaderModule contains shader code. The other resources, such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.

GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline, which is a mix of fixed-function and programmable stages. Programmable stages execute shaders, which are special programs designed to run on GPU hardware. Most of the state of a pipeline is defined by a GPURenderPipeline or a GPUComputePipeline object. The state not included in these pipeline objects is set during encoding with commands, such as beginRenderPass() or setBlendColor().

2. Security considerations

2.1. CPU-based undefined behavior

A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.

In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input. An implementation has to validate all the input from the user and only reach the driver with the valid workloads. This document specifies all the error conditions and handling semantics. For example, specifying the same buffer with intersecting ranges in both "source" and "destination" of copyBufferToBuffer() results in GPUCommandEncoder generating an error, and no other operation occurring.
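The following non-normative sketch records a validation error instead of a copy, because the source and destination ranges intersect within the same buffer; device is assumed to be a GPUDevice that has already been obtained:

// Non-normative sketch. `device` is assumed to have been obtained via requestDevice().
const buffer = device.createBuffer({
    size: 256,
    usage: GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
});

const encoder = device.createCommandEncoder();
// Source bytes [0, 128) and destination bytes [64, 192) overlap within `buffer`,
// so the GPUCommandEncoder generates an error and no copy occurs.
encoder.copyBufferToBuffer(buffer, 0, buffer, 64, 128);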

See § 20 Errors & Debugging for more information about error handling.

2.2. GPU-based undefined behavior

WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs, some of the shader instructions may result in undefined behavior on the GPU. In order to address that, the shader instruction set and its defined behaviors are strictly defined by WebGPU. When a shader is provided, the WebGPU implementation has to validate it before doing any translation (to platform-specific shaders) or transformation passes.

2.3. Out-of-bounds access in shaders

Shaders can access physical resources either directly or via texture units, which are fixed-function hardware blocks that handle texture coordinate conversions. Validation on the API side can only guarantee that all the inputs to the shader are provided and they have the correct usage and types. The API side can not guarantee that the data is accessed within bounds if the texture units are not involved.

In order to prevent the shaders from accessing GPU memory an application doesn’t own, the WebGPU implementation may enable a special mode (called "robust buffer access") in the driver that guarantees that the access is limited to buffer bounds. Alternatively, an implementation may transform the shader code by inserting manual bounds checks.

If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:

  1. return a value at a different location within the resource bounds

  2. return a value vector of "(0, 0, 0, X)" with any "X"

  3. partially discard the draw or dispatch call

If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:

  1. write the value to a different location within the resource bounds

  2. discard the write operation

  3. partially discard the draw or dispatch call

2.4. Invalid data

When uploading floating-point data from CPU to GPU, or generating it on the GPU, we may end up with a binary representation that doesn’t correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers will only affect the results of arithmetic computations and will not have other side effects.

2.5. Driver bugs

GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) as part of their driver testing process, like it was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and support blacklisting particular drivers from using some of the native API backends.

2.6. Timing attacks

WebGPU is designed for multi-threaded use via Web Workers. Some of the objects, like GPUBuffer, have shared state which can be simultaneously accessed. This allows race conditions to occur, similar to those of accessing a SharedArrayBuffer from multiple Web Workers, which makes the thread scheduling observable and allows the creation of high-precision timers. The theoretical attack vectors are a subset of those of SharedArrayBuffer.

2.7. Denial of service

WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the available GPU memory to an application, in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that makes sure an application doesn’t cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.

2.8. Fingerprinting

WebGPU defines the required limits and capabilities of any GPUAdapter, and encourages applications to target these standard limits. The actual result from requestAdapter() may have better limits, and could be subject to fingerprinting.

3. Terminology & Conventions

3.1. Dot Syntax

In this specification, the . ("dot") syntax, common in programming languages, is used. The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo."

For example, where buffer is a GPUBuffer, buffer.[[device]].[[adapter]] means "the [[adapter]] internal slot of the [[device]] internal slot of buffer".

3.2. Coordinate Systems

WebGPU’s coordinate systems match DirectX and Metal’s coordinate systems in the graphics pipeline.

3.3. Internal Objects

An internal object is a conceptual, non-exposed WebGPU object. Internal objects track the state of an API object and hold any underlying implementation. If the state of a particular internal object can change in parallel from multiple agents, those changes are always atomic with respect to all agents.

Note: An "agent" refers to a JavaScript "thread" (i.e. main thread, or Web Worker).

3.3.1. Invalid Objects

If an object is successfully created, it is valid at that moment. An internal object may be invalid: it may become invalid during its lifetime, but it will never become valid again.

Invalid objects result from a number of situations, including:

3.4. WebGPU Interfaces

A WebGPU interface is an exposed interface which encapsulates an internal object. It provides the interface through which the internal object's state is changed.

As a matter of convention, if a WebGPU interface is referred to as invalid, it means that the internal object it encapsulates is invalid.

Any interface which includes GPUObjectBase is a WebGPU interface.

interface mixin GPUObjectBase {
    attribute DOMString? label;
};

GPUObjectBase has the following attributes:

label, of type DOMString, nullable

A label which can be used by development tools (such as error/warning messages, browser developer tools, or platform debugging utilities) to identify the underlying internal object to the developer. It has no specified format, and therefore cannot be reliably machine-parsed.

In any given situation, the user agent may or may not choose to use this label.

GPUObjectBase has the following internal slots:

[[device]], of type device, readonly

An internal slot holding the device which owns the internal object.

3.5. Object Descriptors

An object descriptor holds the information needed to create an object, which is typically done via one of the create* methods of GPUDevice.

dictionary GPUObjectDescriptorBase {
    DOMString label;
};

GPUObjectDescriptorBase has the following members:

label, of type DOMString

The initial value of GPUObjectBase.label.

4. Programming Model

4.1. Timelines

This section is non-normative.

A computer system with a user agent at the front-end and GPU at the back-end has components working on different timelines in parallel:

Content timeline

Associated with the execution of the Web script. It includes calling all methods described by this specification.

Device timeline

Associated with the GPU device operations that are issued by the user agent. It includes the creation of adapters, devices, and GPU resources and state objects. These are typically synchronous operations from the point of view of the user agent part that controls the GPU, although that part may live in a separate OS process.

Queue timeline

Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.

In this specification, asynchronous operations are used when the result value depends on work that happens on any timeline other than the Content timeline. They are represented by callbacks and promises in JavaScript.

GPUComputePassEncoder.dispatch():
  1. User encodes a dispatch command by calling a method of the GPUComputePassEncoder which happens on the Content timeline.

  2. User issues GPUQueue.submit() that hands over the GPUCommandBuffer to the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission.

  3. The submit gets dispatched by the GPU thread scheduler onto the actual compute units for execution, which happens on the Queue timeline.

GPUDevice.createBuffer():
  1. User fills out a GPUBufferDescriptor and creates a GPUBuffer with it, which happens on the Content timeline.

  2. User agent creates a low-level buffer on the Device timeline.

GPUBuffer.mapReadAsync():
  1. User requests to map a GPUBuffer on the Content timeline and gets a promise in return.

  2. User agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.

  3. After the GPU operating on Queue timeline is done using the buffer, the user agent maps it to memory and resolves the promise.
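The following non-normative sketch, assumed to run inside an async function, ties these timelines together; for brevity it substitutes a simple copy for the dispatch, and the buffer sizes and usages are arbitrary:

// Content timeline: create two buffers. The user agent allocates them on the Device timeline.
const src = device.createBuffer({ size: 256, usage: GPUBufferUsage.COPY_SRC });
const dst = device.createBuffer({
    size: 256,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});

// Content timeline: record a copy and submit it.
// The copy itself executes on the Queue timeline.
const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(src, 0, dst, 0, 256);
device.defaultQueue.submit([encoder.finish()]);

// Content timeline: request a mapping. The promise resolves only after the
// Queue timeline has finished using `dst`.
const data = await dst.mapReadAsync();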

4.2. Memory

This section is non-normative.

Once a GPUDevice has been obtained during an application initialization routine, we can describe the WebGPU platform as consisting of the following layers:

  1. User agent implementing the specification.

  2. Operating system with low-level native API drivers for this device.

  3. Actual CPU and GPU hardware.

Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:

Most physical resources are allocated in the memory of type that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the user agent part that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to the dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.

All of these transitions are done by the WebGPU implementation of the user agent.

Note: This example describes the worst case, while in practice the implementation may not need to cross the process boundary, or may be able to expose the driver-managed memory directly to the user behind an ArrayBuffer, thus avoiding any data copies.

4.3. Resource usage

Buffers and textures can be used by the GPU in multiple ways, which can be split into two groups:

Read-only usages

Usages like GPUBufferUsage.VERTEX or GPUTextureUsage.SAMPLED don’t change the contents of a resource.

Mutating usages

Usages like GPUBufferUsage.STORAGE do change the contents of a resource.

Consider merging all read-only usages. <https://github.com/gpuweb/gpuweb/issues/296>

Textures may consist of separate mipmap levels and array layers, which can be used differently at any given time. For the purposes of usage validation, we call them subresources.

The main usage rule is that any subresource at any given time can only be in either:

Enforcing this rule allows the API to limit when data races can occur when working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.

Generally, when an implementation processes an operation that uses a subresource in a different way than its current usage allows, it schedules a transition of the resource into the new state. In some cases, like within an open GPURenderPassEncoder, such a transition is impossible due to hardware limitations. We define these places as usage scopes: a subresource must not change usage within a usage scope.

For example, binding the same buffer for STORAGE as well as for VERTEX within the same GPURenderPassEncoder would put the encoder as well as the owning GPUCommandEncoder into the error state. Since STORAGE is the only mutating usage for a buffer that is valid inside a render pass, if it’s present, this buffer can’t be used in any other way within this pass.
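The following non-normative sketch illustrates this error; the pipeline, render pass descriptor, and bind group are placeholders, and storageBindGroup is assumed to bind buffer as a "storage-buffer":

const pass = encoder.beginRenderPass(renderPassDescriptor);
pass.setPipeline(renderPipeline);
pass.setBindGroup(0, storageBindGroup);  // uses `buffer` with STORAGE usage
pass.setVertexBuffer(0, buffer);         // uses `buffer` with VERTEX usage: error
pass.draw(3, 1, 0, 0);
pass.endPass();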

The subresources of textures included in the views provided to GPURenderPassColorAttachmentDescriptor.attachment and GPURenderPassColorAttachmentDescriptor.resolveTarget are considered to have the OUTPUT_ATTACHMENT usage for the usage scope of this render pass.

Document read-only states for depth views. <https://github.com/gpuweb/gpuweb/issues/514>

5. Core Internal Objects

5.1. Adapters

An adapter represents an implementation of WebGPU on the system. Each adapter identifies both an instance of a hardware accelerator (e.g. GPU or CPU) and an instance of a browser’s implementation of WebGPU on top of that accelerator.

If an adapter becomes unavailable, it becomes invalid. Once invalid, it never becomes valid again. Any devices on the adapter, and internal objects owned by those devices, also become invalid.

Note: An adapter may be a physical display adapter (GPU), but it could also be a software renderer. A returned adapter could refer to different physical adapters, or to different browser codepaths or system drivers on the same physical adapters. Applications can hold onto multiple adapters at once (via GPUAdapter) (even if some are invalid), and two of these could refer to different instances of the same physical configuration (e.g. if the GPU was reset or disconnected and reconnected).

An adapter has the following internal slots:

[[extensions]], of type sequence<GPUExtensionName>, readonly

The extensions which can be used to create devices on this adapter.

[[limits]], of type GPULimits, readonly

The best limits which can be used to create devices on this adapter.

Each adapter limit must be the same or better than its default value in GPULimits.

Adapters are exposed via GPUAdapter.

5.2. Devices

A device is the logical instantiation of an adapter, through which internal objects are created. It can be shared across multiple agents (e.g. dedicated workers).

A device is the exclusive owner of all internal objects created from it: when the device is lost, it and all objects created on it (directly, e.g. createTexture(), or indirectly, e.g. createView()) become invalid.

Define "ownership".

A device has the following internal slots:

[[adapter]], of type adapter, readonly

The adapter from which this device was created.

[[extensions]], of type sequence<GPUExtensionName>, readonly

The extensions which can be used on this device. No additional extensions can be used, even if the underlying adapter can support them.

[[limits]], of type GPULimits, readonly

The limits which can be used on this device. No better limits can be used, even if the underlying adapter can support them.

When a new device device is created from adapter adapter with GPUDeviceDescriptor descriptor:

Devices are exposed via GPUDevice.

6. Initialization

6.1. Examples

Need a robust example like the one in ErrorHandling.md, which handles all situations. Possibly also include a simple example with no handling.
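A minimal, non-normative sketch with no error handling, assumed to run inside an async function:

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// Command buffers are submitted through the device's default queue.
const queue = device.defaultQueue;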

6.2. navigator.gpu

A GPU object is available via navigator.gpu on the Window:

[Exposed=Window]
partial interface Navigator {
    [SameObject] readonly attribute GPU gpu;
};

... as well as on dedicated workers:

[Exposed=DedicatedWorker]
partial interface WorkerNavigator {
    [SameObject] readonly attribute GPU gpu;
};

6.3. GPU

GPU is the entry point to WebGPU.

[Exposed=(Window, DedicatedWorker)]
interface GPU {
    Promise<GPUAdapter> requestAdapter(optional GPURequestAdapterOptions options = {});
};

GPU has the methods defined by the following sections.

6.3.1. requestAdapter(options)

Arguments:

Returns: promise, of type Promise<GPUAdapter>.

Requests an adapter from the user agent. The user agent chooses whether to return an adapter, and, if so, chooses according to the provided options.

Returns a new promise, promise. On the Device timeline, the following steps occur:

6.3.1.1. Adapter Selection

GPURequestAdapterOptions provides hints to the user agent indicating what configuration is suitable for the application.

dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
};
enum GPUPowerPreference {
    "low-power",
    "high-performance"
};

GPURequestAdapterOptions has the following members:

powerPreference, of type GPUPowerPreference

Optionally provides a hint indicating what class of adapter should be selected from the system’s available adapters.

The value of this hint may influence which adapter is chosen, but it must not influence whether an adapter is returned or not.

Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system. For instance, some laptops have a low-power integrated GPU and a high-performance discrete GPU.

Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration and state and powerPreference, the user agent is likely to select the same adapter.

It must be one of the following values:

undefined (or not present)

Provides no hint to the user agent.

"low-power"

Indicates a request to prioritize power savings over performance.

Note: Generally, content should use this if it is unlikely to be constrained by drawing performance; for example, if it renders only one frame per second, draws only relatively simple geometry with simple shaders, or uses a small HTML canvas element. Developers are encouraged to use this value if their content allows, since it may significantly improve battery life on portable devices.

"high-performance"

Indicates a request to prioritize performance over power consumption.

Note: By choosing this value, developers should be aware that, for devices created on the resulting adapter, user agents are more likely to force device loss, in order to save power by switching to a lower-power adapter. Developers are encouraged to only specify this value if they believe it is absolutely necessary, since it may significantly decrease battery life on portable devices.
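A non-normative sketch requesting a high-performance adapter, assumed to run inside an async function:

const adapter = await navigator.gpu.requestAdapter({
    powerPreference: "high-performance",
});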

6.4. GPUAdapter

A GPUAdapter encapsulates an adapter, and describes its capabilities (extensions and limits).

To get a GPUAdapter, use requestAdapter().

interface GPUAdapter {
    readonly attribute DOMString name;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    //readonly attribute GPULimits limits; Don’t expose higher limits for now.

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

GPUAdapter has:

6.4.1. requestDevice(descriptor)

Arguments:

Returns: promise, of type Promise<GPUDevice>.

Requests a device from the adapter.

Returns a new promise, promise. On the Device timeline, the following steps occur:

requestDevice Valid Usage

Given an adapter adapter and a GPUDeviceDescriptor descriptor, the following validation rules apply:

6.4.1.1. GPUDeviceDescriptor

GPUDeviceDescriptor describes a device request.

dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUExtensionName> extensions = [];
    GPULimits limits = {};
};
extensions, of type sequence<GPUExtensionName>, defaulting to []

The set of GPUExtensionName values in this sequence defines the exact set of extensions that must be enabled on the device.

limits, of type GPULimits, defaulting to {}

Defines the exact limits that must be enabled on the device.
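A non-normative sketch requesting a device with one extension and one non-default limit; it succeeds only if the adapter actually supports them:

const device = await adapter.requestDevice({
    extensions: ["texture-compression-bc"],
    limits: { maxBindGroups: 8 },   // better than the default of 4
});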

6.4.1.2. GPUExtensionName

Each GPUExtensionName identifies a set of functionality which, if available, allows additional usages of WebGPU that would have otherwise been invalid.

enum GPUExtensionName {
    "texture-compression-bc"
};
"texture-compression-bc"

Write a spec section for this, and link to it.

6.4.1.3. GPULimits

GPULimits describes various limits in the usage of WebGPU on a device.

One limit value may be better than another. For each limit, "better" is defined.

Note: Setting "better" limits may not necessarily be desirable. While they enable strictly more programs to be valid, they may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally request the "worst" limits that work for their content.

dictionary GPULimits {
    GPUSize32 maxBindGroups = 4;
    GPUSize32 maxDynamicUniformBuffersPerPipelineLayout = 8;
    GPUSize32 maxDynamicStorageBuffersPerPipelineLayout = 4;
    GPUSize32 maxSampledTexturesPerShaderStage = 16;
    GPUSize32 maxSamplersPerShaderStage = 16;
    GPUSize32 maxStorageBuffersPerShaderStage = 4;
    GPUSize32 maxStorageTexturesPerShaderStage = 4;
    GPUSize32 maxUniformBuffersPerShaderStage = 12;
};
maxBindGroups, of type GPUSize32, defaulting to 4

The maximum number of GPUBindGroupLayouts allowed in bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxDynamicUniformBuffersPerPipelineLayout, of type GPUSize32, defaulting to 8

The maximum number of bindings for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxDynamicStorageBuffersPerPipelineLayout, of type GPUSize32, defaulting to 4

The maximum number of bindings for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxSampledTexturesPerShaderStage, of type GPUSize32, defaulting to 16

For each possible GPUShaderStage stage, the maximum number of bindings for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxSamplersPerShaderStage, of type GPUSize32, defaulting to 16

For each possible GPUShaderStage stage, the maximum number of bindings for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxStorageBuffersPerShaderStage, of type GPUSize32, defaulting to 4

For each possible GPUShaderStage stage, the maximum number of bindings for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxStorageTexturesPerShaderStage, of type GPUSize32, defaulting to 4

For each possible GPUShaderStage stage, the maximum number of bindings for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxUniformBuffersPerShaderStage, of type GPUSize32, defaulting to 12

For each possible GPUShaderStage stage, the maximum number of bindings for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

6.5. GPUDevice

A GPUDevice encapsulates a device and exposes the functionality of that device.

GPUDevice is the top-level interface through which WebGPU interfaces are created.

To get a GPUDevice, use requestDevice().

[Exposed=(Window, DedicatedWorker), Serializable]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUAdapter adapter;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    readonly attribute object limits;

    [SameObject] readonly attribute GPUQueue defaultQueue;

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUMappedBuffer createBufferMapped(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

GPUDevice has:

GPUDevice objects are serializable objects.

To serialize a GPUDevice object, given value, serialized, and forStorage, the steps are:
  1. If forStorage is true, throw a "DataCloneError".

  2. Set serialized.device to the value of value.[[device]].

To deserialize a GPUDevice object, given serialized and value, the steps are:
  1. Set value.[[device]] to serialized.device.

7. GPUBuffer

A GPUBuffer represents a block of memory that can be used in GPU operations. Data is stored in a linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the GPUBuffer, subject to alignment restrictions depending on the operation. Some GPUBuffers can be mapped, which makes the block of memory accessible via an ArrayBuffer called its mapping.

GPUBuffers can be created via the following functions:

[Serializable]
interface GPUBuffer {
    Promise<ArrayBuffer> mapReadAsync();
    Promise<ArrayBuffer> mapWriteAsync();
    void unmap();

    void destroy();
};
GPUBuffer includes GPUObjectBase;

GPUBuffer has the following internal slots:

[[size]] of type GPUSize64.

The length of the GPUBuffer allocation in bytes.

[[usage]] of type GPUBufferUsageFlags.

The allowed usages for this GPUBuffer.

[[state]] of type buffer state.

The current state of the GPUBuffer.

[[mapping]] of type ArrayBuffer or Promise or null.

The mapping for this GPUBuffer.

Each GPUBuffer has a current buffer state on the Content timeline which is one of the following:

Note: [[size]] and [[usage]] are immutable once the GPUBuffer has been created.

GPUBuffer has a state machine where the states are:

GPUBuffer is Serializable. It is a reference to an internal buffer object, and Serializable means that the reference can be copied between realms (threads/workers), allowing multiple realms to access it concurrently. Since GPUBuffer has internal state (mapped, destroyed), that state is internally-synchronized: these state changes occur atomically across realms.

7.1. Buffer creation

7.1.1. GPUBufferDescriptor

This specifies the options to use in creating a GPUBuffer.

dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
};
validating GPUBufferDescriptor(device, descriptor)
  1. If device is lost return false.

  2. If any of the bits of descriptor’s usage aren’t present in this device’s [[allowed buffer usages]] return false.

  3. If both the MAP_READ and MAP_WRITE bits of descriptor’s usage attribute are set, return false.

  4. Return true.

7.1.2. GPUDevice.createBuffer(descriptor)

createBuffer(descriptor)
  1. If the result of validating GPUBufferDescriptor(this, descriptor) is false:

    1. Record a validation error in the current scope.

    2. Create an invalid GPUBuffer and return the result.

  2. Let b be a new GPUBuffer object.

  3. Set the [[size]] slot of b to the value of the size attribute of descriptor.

  4. Set the [[usage]] slot of b to the value of the usage attribute of descriptor.

  5. Set the [[state]] internal slot of b to unmapped.

  6. Set the [[mapping]] internal slot of b to null.

  7. Set each byte of b’s allocation to zero.

  8. Return b.
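A non-normative sketch creating a zero-initialized 256-byte buffer usable as a vertex buffer and as a copy destination:

const vertexBuffer = device.createBuffer({
    size: 256,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});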

7.1.3. GPUDevice.createBufferMapped(descriptor)

createBufferMapped(descriptor)
  1. If the result of validating GPUBufferDescriptor(this, descriptor) is false:

    1. Record a validation error in the current scope.

    2. Create an invalid GPUBuffer and return the result.

  2. Let b be a new GPUBuffer object.

  3. Let m be a zero-filled ArrayBuffer whose size equals the value of the size attribute of descriptor.

  4. Set the [[size]] slot of b to the value of the size attribute of descriptor.

  5. Set the [[usage]] slot of b to the value of the usage attribute of descriptor.

  6. Set the [[state]] internal slot of b to mapped for writing.

  7. Set the [[mapping]] internal slot of b to m.

  8. Set each byte of b’s allocation to zero.

  9. Return a sequence containing b and m in that order.
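A non-normative sketch using createBufferMapped() to create a buffer with initial data; the data itself is arbitrary:

const data = new Float32Array([0, 1, 2, 3]);
const [buffer, arrayBuffer] = device.createBufferMapped({
    size: data.byteLength,
    usage: GPUBufferUsage.VERTEX,
});
new Float32Array(arrayBuffer).set(data);  // write the initial contents through the mapping
buffer.unmap();                           // after this, the buffer can be used on the Queue timeline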

7.2. Buffer Destruction

An application that no longer requires a GPUBuffer can choose to lose access to it before garbage collection by calling destroy().

Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer once all previously submitted operations using it are complete.

destroy()
  1. If the [[state]] slot of this is mapped for reading or mapped for writing:

    1. Run the steps of unmap() on this.

  2. Set the [[state]] slot of this to destroyed.

7.3. Buffer Usage

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
interface GPUBufferUsage {
    const GPUBufferUsageFlags MAP_READ  = 0x0001;
    const GPUBufferUsageFlags MAP_WRITE = 0x0002;
    const GPUBufferUsageFlags COPY_SRC  = 0x0004;
    const GPUBufferUsageFlags COPY_DST  = 0x0008;
    const GPUBufferUsageFlags INDEX     = 0x0010;
    const GPUBufferUsageFlags VERTEX    = 0x0020;
    const GPUBufferUsageFlags UNIFORM   = 0x0040;
    const GPUBufferUsageFlags STORAGE   = 0x0080;
    const GPUBufferUsageFlags INDIRECT  = 0x0100;
};

7.4. Buffer Mapping

An application can request to map a GPUBuffer to get its mapping which is an ArrayBuffer representing the GPUBuffer's allocation. Mappings are requested asynchronously so that the user agent can ensure the GPU finished using the GPUBuffer before the application gets its mapping. Mappings can be requested for reading with mapReadAsync or writing with mapWriteAsync. A mapped GPUBuffer cannot be used by the GPU and must be unmapped using unmap before it can be used on the Queue timeline.
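A non-normative sketch reading back the contents of a buffer created with MAP_READ usage, assumed to run inside an async function; the copy that fills the buffer is elided:

const readbackBuffer = device.createBuffer({
    size: 256,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});

// ... record a copy into readbackBuffer and submit the command buffer ...

const arrayBuffer = await readbackBuffer.mapReadAsync();
const result = new Uint8Array(arrayBuffer).slice();  // copy the data out before unmapping
readbackBuffer.unmap();  // detaches arrayBuffer and returns the buffer to the unmapped state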

Add client-side validation that a mapped buffer can only be unmapped and destroyed on the worker on which it was mapped.

7.4.1. GPUBuffer.mapReadAsync()

mapReadAsync()

Handle error buffers once we have a description of the error monad.

  1. If the [[usage]] slot of this doesn’t contain the MAP_READ bit or if [[state]] isn’t unmapped:

    1. Record a validation error on the current scope.

    2. Return a promise rejected with an AbortError.

    Specify that the rejection happens on the device timeline.

  2. Let p be a new Promise.

  3. Set the [[mapping]] slot of this to p.

  4. Set the [[state]] slot of this to mapping pending for reading.

  5. Enqueue an operation on the Queue timeline that will execute the following:

    1. Let m be a new ArrayBuffer of size the [[size]] of this.

    2. Set the content of m to the content of this’s allocation.

    3. Set the [[state]] slot of this to mapped for reading.

    4. If p is pending:

      1. Resolve p with m.

  6. Return p.

7.4.2. GPUBuffer.mapWriteAsync()

mapWriteAsync()

Handle error buffers once we have a description of the error monad.

  1. If the [[usage]] slot of this doesn’t contain the MAP_WRITE bit or if [[state]] isn’t unmapped:

    1. Record a validation error on the current scope.

    2. Return a promise rejected with an AbortError.

    Specify that the rejection happens on the device timeline.

  2. Let p be a new Promise.

  3. Set the [[mapping]] slot of this to p.

  4. Set the [[state]] slot of this to mapping pending for writing.

  5. Enqueue an operation on the Queue timeline that will execute the following:

    1. Let m be a new ArrayBuffer of size the [[size]] of this that is filled with zeroes.

    2. Set the [[state]] slot of this to mapped for writing.

    3. If p is pending:

      1. Resolve p with m.

  6. Return p.

7.4.3. GPUBuffer.unmap()

unmap()
  1. If the [[state]] slot of this is unmapped or destroyed:

    1. Record a validation error on the current scope.

    2. Return.

  2. If the [[mapping]] slot of this is a Promise:

    1. Reject [[mapping]] with an AbortError.

    2. Set the [[mapping]] slot of this to null.

  3. If the [[mapping]] slot of this is an ArrayBuffer:

    1. If the [[state]] slot of this is mapped for writing:

      1. Enqueue an operation on the Queue timeline that updates this’s allocation to the content of the ArrayBuffer in the [[mapping]] slot of this.

    2. Detach this.[[mapping]] from its content.

    3. Set the [[mapping]] slot of this to null.

  4. Set the [[state]] slot of this to unmapped.

8. Textures

8.1. GPUTexture

[Serializable]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    void destroy();
};
GPUTexture includes GPUObjectBase;

8.1.1. Texture Creation

dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate arrayLayerCount = 1;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
};
enum GPUTextureDimension {
    "1d",
    "2d",
    "3d"
};
typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
interface GPUTextureUsage {
    const GPUTextureUsageFlags COPY_SRC          = 0x01;
    const GPUTextureUsageFlags COPY_DST          = 0x02;
    const GPUTextureUsageFlags SAMPLED           = 0x04;
    const GPUTextureUsageFlags STORAGE           = 0x08;
    const GPUTextureUsageFlags OUTPUT_ATTACHMENT = 0x10;
};
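A non-normative sketch creating a 2D texture usable for sampling and as a copy destination, and a default view of it; the dictionary form of GPUExtent3D is assumed:

const texture = device.createTexture({
    size: { width: 256, height: 256, depth: 1 },
    format: "rgba8unorm",
    usage: GPUTextureUsage.SAMPLED | GPUTextureUsage.COPY_DST,
});
const textureView = texture.createView();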

8.2. GPUTextureView

interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

8.2.1. Texture View Creation

dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount = 0;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount = 0;
};
enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d"
};
enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only"
};

8.3. Texture Formats

The name of the format specifies the order of components, bits per component, and data type for the component.

If the format has the -srgb suffix, then sRGB gamma compression and decompression are applied during the reading and writing of color values in the pixel. Compressed texture formats are provided by extensions. Their naming should follow the convention here, with the compression format name as a prefix, e.g. "etc2-rgba8unorm".

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb10a2unorm",
    "rg11b10float",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth and stencil formats
    "depth32float",
    "depth24plus",
    "depth24plus-stencil8"
};
enum GPUTextureComponentType {
    "float",
    "sint",
    "uint"
};

9. Samplers

9.1. GPUSampler

interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

9.1.1. Creation

dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 0xffffffff; // TODO: What should this be? Was Number.MAX_VALUE.
    GPUCompareFunction compare = "never";
};
enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat"
};
enum GPUFilterMode {
    "nearest",
    "linear"
};
enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always"
};
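A non-normative sketch creating a sampler with linear filtering and repeating addressing:

const sampler = device.createSampler({
    addressModeU: "repeat",
    addressModeV: "repeat",
    magFilter: "linear",
    minFilter: "linear",
    mipmapFilter: "linear",
});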

10. Resource Binding

10.1. GPUBindGroupLayout

A GPUBindGroupLayout defines the interface between a set of resources bound in a GPUBindGroup and their accessibility in shader stages.

[Serializable]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

10.1.1. Creation

A GPUBindGroupLayout is created via GPUDevice.createBindGroupLayout().

dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutBinding> bindings;
};

A GPUBindGroupLayoutBinding describes a single shader resource binding to be included in a GPUBindGroupLayout.

dictionary GPUBindGroupLayoutBinding {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;
    required GPUBindingType type;
    GPUTextureViewDimension textureDimension = "2d";
    GPUTextureComponentType textureComponentType = "float";
    boolean multisampled = false;
    boolean hasDynamicOffset = false;
};
typedef [EnforceRange] unsigned long GPUShaderStageFlags;
interface GPUShaderStage {
    const GPUShaderStageFlags VERTEX   = 0x1;
    const GPUShaderStageFlags FRAGMENT = 0x2;
    const GPUShaderStageFlags COMPUTE  = 0x4;
};
enum GPUBindingType {
    "uniform-buffer",
    "storage-buffer",
    "readonly-storage-buffer",
    "sampler",
    "sampled-texture",
    "storage-texture"
    // TODO: other binding types
};

A GPUBindGroupLayout object has the following internal slots:

[[bindings]] of type sequence<GPUBindGroupLayoutBinding>.

The set of GPUBindGroupLayoutBindings this GPUBindGroupLayout describes.

10.1.2. GPUDevice.createBindGroupLayout(GPUBindGroupLayoutDescriptor)

The createBindGroupLayout(descriptor) method is used to create GPUBindGroupLayouts.

  1. Ensure device validation is not violated.

  2. Let layout be a new valid GPUBindGroupLayout object.

  3. For each GPUBindGroupLayoutBinding bindingDescriptor in descriptor.bindings:

    1. Ensure bindingDescriptor.binding does not violate binding validation.

    2. If bindingDescriptor.visibility includes VERTEX , ensure vertex shader binding validation is not violated.

    3. If bindingDescriptor.type is uniform-buffer:

      1. Ensure uniform buffer validation is not violated.

      2. If bindingDescriptor.hasDynamicOffset is true, ensure dynamic uniform buffer validation is not violated.

    4. If bindingDescriptor.type is storage-buffer or readonly-storage-buffer:

      1. Ensure storage buffer validation is not violated.

      2. If bindingDescriptor.hasDynamicOffset is true, ensure dynamic storage buffer validation is not violated.

    5. If bindingDescriptor.type is sampled-texture , ensure sampled texture validation is not violated.

    6. If bindingDescriptor.type is storage-texture , ensure storage texture validation is not violated.

    7. If bindingDescriptor.type is sampler , ensure sampler validation is not violated.

    8. Insert bindingDescriptor into layout.[[bindings]].

  4. Return layout.

Validation Conditions

If any of the following conditions are violated:
  1. Generate a GPUValidationError in the current scope with appropriate error message.

  2. Create a new invalid GPUBindGroupLayout and return the result.

device validation: The GPUDevice must not be lost.

binding validation: Each bindingDescriptor.binding in descriptor must be unique.

vertex shader binding validation: storage-buffer is not allowed.

uniform buffer validation: There must be GPULimits.maxUniformBuffersPerShaderStage or fewer bindingDescriptors of type uniform-buffer visible on each shader stage in descriptor.

dynamic uniform buffer validation: There must be GPULimits.maxDynamicUniformBuffersPerPipelineLayout or fewer bindingDescriptors of type uniform-buffer with hasDynamicOffset set to true in descriptor that are visible to any shader stage.

storage buffer validation: There must be GPULimits.maxStorageBuffersPerShaderStage or fewer bindingDescriptors of type storage-buffer visible on each shader stage in descriptor.

dynamic storage buffer validation: There must be GPULimits.maxDynamicStorageBuffersPerPipelineLayout or fewer bindingDescriptors of type storage-buffer with hasDynamicOffset set to true in descriptor that are visible to any shader stage.

sampled texture validation: There must be GPULimits.maxSampledTexturesPerShaderStage or fewer bindingDescriptors of type sampled-texture visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset must be false.

storage texture validation: There must be GPULimits.maxStorageTexturesPerShaderStage or fewer bindingDescriptors of type storage-texture visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset must be false.

sampler validation: There must be GPULimits.maxSamplersPerShaderStage or fewer bindingDescriptors of type sampler visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset must be false.
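A non-normative sketch of createBindGroupLayout() usage, declaring a uniform buffer visible to the vertex stage and a sampled texture plus sampler visible to the fragment stage; the binding indices are arbitrary:

const bindGroupLayout = device.createBindGroupLayout({
    bindings: [
        { binding: 0, visibility: GPUShaderStage.VERTEX,   type: "uniform-buffer" },
        { binding: 1, visibility: GPUShaderStage.FRAGMENT, type: "sampled-texture" },
        { binding: 2, visibility: GPUShaderStage.FRAGMENT, type: "sampler" },
    ],
});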

10.2. GPUBindGroup

A GPUBindGroup defines a set of resources to be bound together in a group and how the resources are used in shader stages.

interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

10.2.1. Bind Group Creation

A GPUBindGroup is created via GPUDevice.createBindGroup().

dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupBinding> bindings;
};

A GPUBindGroupBinding describes a single resource to be bound in a GPUBindGroup.

typedef (GPUSampler or GPUTextureView or GPUBufferBinding) GPUBindingResource;

dictionary GPUBindGroupBinding {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};
dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

A GPUBindGroup object has the following internal slots:

[[layout]] of type GPUBindGroupLayout.

The GPUBindGroupLayout associated with this GPUBindGroup.

[[bindings]] of type sequence<GPUBindGroupBinding>.

The set of GPUBindGroupBindings this GPUBindGroup describes.

10.2.2. GPUDevice.createBindGroup(GPUBindGroupDescriptor)

The createBindGroup(descriptor) method is used to create GPUBindGroups.

  1. Ensure bind group device validation is not violated.

  2. Ensure descriptor.layout is a valid GPUBindGroupLayout.

  3. Ensure that the number of bindings in descriptor.layout exactly equals the number of entries in descriptor.bindings.

  4. For each GPUBindGroupBinding bindingDescriptor in descriptor.bindings:

    1. Ensure there is exactly one GPUBindGroupLayoutBinding layoutBinding in the bindings of descriptor.layout such that layoutBinding.binding equals bindingDescriptor.binding.

    2. If layoutBinding.type is "sampler":

      1. Ensure bindingDescriptor.resource is a valid GPUSampler object.

    3. If layoutBinding.type is "sampled-texture" or "storage-texture":

      1. Ensure bindingDescriptor.resource is a valid GPUTextureView object.

      2. Ensure texture view binding validation is not violated.

    4. If layoutBinding.type is "uniform-buffer" or "storage-buffer" or "readonly-storage-buffer":

      1. Ensure bindingDescriptor.resource is a valid GPUBufferBinding object.

      2. Ensure buffer binding validation is not violated.

  5. Return a new GPUBindGroup object with:

Validation Conditions

If any of the following conditions are violated:
  1. Generate a GPUValidationError in the current scope with appropriate error message.

  2. Create a new invalid GPUBindGroup and return the result.

bind group device validation: The GPUDevice must not be lost.

texture view binding validation: Let view be bindingDescriptor.resource, a GPUTextureView. This layoutBinding must be compatible with this view. This requires:

  1. Its layoutBinding.textureDimension must equal view’s dimension.

  2. Its layoutBinding.textureComponentType must be compatible with view’s format.

  3. If layoutBinding.multisampled is true, view’s texture’s sampleCount must be greater than 1. Otherwise, view’s texture’s sampleCount must be 1.

  4. If layoutBinding.type is "sampled-texture", view’s texture’s usage must include SAMPLED.

  5. If layoutBinding.type is "storage-texture", view’s texture’s usage must include STORAGE.

buffer binding validation: Let bufferBinding be bindingDescriptor.resource, a GPUBufferBinding. This layoutBinding must be compatible with this bufferBinding. This requires:

  1. If layoutBinding.type is "uniform-buffer", the bufferBinding.buffer's usage must include UNIFORM.

  2. If layoutBinding.type is "storage-buffer" or "readonly-storage-buffer", the bufferBinding.buffer's usage must include STORAGE.

  3. The bound part designated by bufferBinding.offset and bufferBinding.size must reside inside the buffer.
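A non-normative sketch of createBindGroup() usage against a layout like the one sketched in § 10.1.2; uniformBuffer, textureView, and sampler are placeholders assumed to have been created with compatible usages:

const bindGroup = device.createBindGroup({
    layout: bindGroupLayout,
    bindings: [
        { binding: 0, resource: { buffer: uniformBuffer, offset: 0, size: 64 } },
        { binding: 1, resource: textureView },
        { binding: 2, resource: sampler },
    ],
});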

10.3. GPUPipelineLayout

interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

10.3.1. Creation

dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};

11. Shader Modules

11.1. GPUShaderModule

[Serializable]
interface GPUShaderModule {
};
GPUShaderModule includes GPUObjectBase;

GPUShaderModule is Serializable. It is a reference to an internal shader module object, and Serializable means that the reference can be copied between realms (threads/workers), allowing multiple realms to access it concurrently. Since GPUShaderModule is immutable, there are no race conditions.

11.1.1. Shader Module Creation

typedef (Uint32Array or DOMString) GPUShaderCode;

dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required GPUShaderCode code;
};

Note: While the choice of shader language is undecided, GPUShaderModuleDescriptor will temporarily accept both text and binary input.
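A non-normative sketch creating a shader module from a binary representation; the contents of shaderCode are a placeholder, since the shader language is undecided:

// `shaderCode` is assumed to be a Uint32Array holding the shader binary.
const shaderModule = device.createShaderModule({ code: shaderCode });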

12. Pipelines

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    required GPUPipelineLayout layout;
};
dictionary GPUProgrammableStageDescriptor {
    required GPUShaderModule module;
    required DOMString entryPoint;
    // TODO: other stuff like specialization constants?
};

12.1. GPUComputePipeline

[Serializable]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;

12.1.1. Creation

dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor computeStage;
};

12.2. GPURenderPipeline

[Serializable]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;

12.2.1. Creation

dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor vertexStage;
    GPUProgrammableStageDescriptor fragmentStage;

    required GPUPrimitiveTopology primitiveTopology;
    GPURasterizationStateDescriptor rasterizationState = {};
    required sequence<GPUColorStateDescriptor> colorStates;
    GPUDepthStencilStateDescriptor depthStencilState;
    GPUVertexStateDescriptor vertexState = {};

    GPUSize32 sampleCount = 1;
    GPUSampleMask sampleMask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
    // TODO: other properties
};

12.2.2. Primitive Topology

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip"
};

12.2.3. Rasterization State

dictionary GPURasterizationStateDescriptor {
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};
enum GPUFrontFace {
    "ccw",
    "cw"
};
enum GPUCullMode {
    "none",
    "front",
    "back"
};

12.2.4. Color State

dictionary GPUColorStateDescriptor {
    required GPUTextureFormat format;

    GPUBlendDescriptor alphaBlend = {};
    GPUBlendDescriptor colorBlend = {};
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};
typedef [EnforceRange] unsigned long GPUColorWriteFlags;
interface GPUColorWrite {
    const GPUColorWriteFlags RED   = 0x1;
    const GPUColorWriteFlags GREEN = 0x2;
    const GPUColorWriteFlags BLUE  = 0x4;
    const GPUColorWriteFlags ALPHA = 0x8;
    const GPUColorWriteFlags ALL   = 0xF;
};
12.2.4.1. Blend State
dictionary GPUBlendDescriptor {
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
    GPUBlendOperation operation = "add";
};
enum GPUBlendFactor {
    "zero",
    "one",
    "src-color",
    "one-minus-src-color",
    "src-alpha",
    "one-minus-src-alpha",
    "dst-color",
    "one-minus-dst-color",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "blend-color",
    "one-minus-blend-color"
};
enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max"
};
enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap"
};

12.2.5. Depth/Stencil State

dictionary GPUDepthStencilStateDescriptor {
    required GPUTextureFormat format;

    boolean depthWriteEnabled = false;
    GPUCompareFunction depthCompare = "always";

    GPUStencilStateFaceDescriptor stencilFront = {};
    GPUStencilStateFaceDescriptor stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;
};
dictionary GPUStencilStateFaceDescriptor {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

12.2.6. Vertex State

enum GPUIndexFormat {
    "uint16",
    "uint32"
};
12.2.6.1. Vertex formats

The name of the format specifies the data type of the component, the number of values, and whether the data is normalized.

If no number of values is given in the name, a single value is provided. If the format has the -bgra suffix, it means the values are arranged as blue, green, red and alpha values.

enum GPUVertexFormat {
    "uchar2",
    "uchar4",
    "char2",
    "char4",
    "uchar2norm",
    "uchar4norm",
    "char2norm",
    "char4norm",
    "ushort2",
    "ushort4",
    "short2",
    "short4",
    "ushort2norm",
    "ushort4norm",
    "short2norm",
    "short4norm",
    "half2",
    "half4",
    "float",
    "float2",
    "float3",
    "float4",
    "uint",
    "uint2",
    "uint3",
    "uint4",
    "int",
    "int2",
    "int3",
    "int4"
};
enum GPUInputStepMode {
    "vertex",
    "instance"
};
dictionary GPUVertexStateDescriptor {
    GPUIndexFormat indexFormat = "uint32";
    sequence<GPUVertexBufferLayoutDescriptor?> vertexBuffers = [];
};

A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride is the stride, in bytes, between elements of that array. Each element of a vertex buffer is like a structure with a memory layout defined by its attributes, which describe the members of the structure.

Each GPUVertexAttributeDescriptor describes its format and its offset, in bytes, within the structure.

Each attribute appears as a separate input in a vertex shader, each bound by a numeric location, which is specified by shaderLocation. Every location must be unique within the GPUVertexStateDescriptor.

dictionary GPUVertexBufferLayoutDescriptor {
    required GPUSize64 arrayStride;
    GPUInputStepMode stepMode = "vertex";
    required sequence<GPUVertexAttributeDescriptor> attributes;
};
dictionary GPUVertexAttributeDescriptor {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};
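Bringing the preceding dictionaries together, a non-normative sketch creating a render pipeline; the pipeline layout, shader module, and entry point names are placeholders:

const renderPipeline = device.createRenderPipeline({
    layout: pipelineLayout,
    vertexStage:   { module: shaderModule, entryPoint: "vertex_main" },
    fragmentStage: { module: shaderModule, entryPoint: "fragment_main" },
    primitiveTopology: "triangle-list",
    colorStates: [{ format: "bgra8unorm" }],
    vertexState: {
        vertexBuffers: [{
            arrayStride: 16,   // one "float4" attribute per vertex
            attributes: [{ format: "float4", offset: 0, shaderLocation: 0 }],
        }],
    },
});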

13. Command Buffers

13.1. GPUCommandBuffer

interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

13.1.1. Creation

dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {
};

14. Command Encoding

14.1. GPUCommandEncoder

interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    void copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);

    void copyBufferToTexture(
        GPUBufferCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToBuffer(
        GPUTextureCopyView source,
        GPUBufferCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToTexture(
        GPUTextureCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void pushDebugGroup(DOMString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(DOMString markerLabel);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
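A non-normative sketch recording a buffer-to-buffer copy and submitting the resulting command buffer; src and dst are placeholders assumed to have COPY_SRC and COPY_DST usage respectively:

const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(src, 0, dst, 0, 256);
const commandBuffer = encoder.finish();
device.defaultQueue.submit([commandBuffer]);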

14.1.1. Creation

dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {
    // TODO: reusability flag?
};

14.2. Copy Commands

dictionary GPUBufferCopyView {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    required GPUSize32 rowPitch;
    required GPUSize32 imageHeight;
};
dictionary GPUTextureCopyView {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUIntegerCoordinate arrayLayer = 0;
    GPUOrigin3D origin = {};
};
dictionary GPUImageBitmapCopyView {
    required ImageBitmap imageBitmap;
    GPUOrigin2D origin = {};
};

14.3. Programmable Passes

interface mixin GPUProgrammablePassEncoder {
    void setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    void setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      Uint32Array dynamicOffsetsData,
                      GPUSize64 dynamicOffsetsDataStart,
                      GPUSize32 dynamicOffsetsDataLength);

    void pushDebugGroup(DOMString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(DOMString markerLabel);
};

Debug groups in a GPUCommandEncoder or GPUProgrammablePassEncoder must be well nested.
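
For example, a non-normative sketch of well-nested debug commands around a bind group update with one dynamic offset, assuming pass is an existing pass encoder and bindGroup uses a layout with a single hasDynamicOffset binding:

pass.pushDebugGroup("shadow pass");
pass.insertDebugMarker("bind per-object data");
pass.setBindGroup(0, bindGroup, [256]);   // one dynamic offset per hasDynamicOffset binding
pass.popDebugGroup();                     // every push is matched by a pop before the pass ends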

15. Compute Passes

15.1. GPUComputePassEncoder

interface GPUComputePassEncoder {
    void setPipeline(GPUComputePipeline pipeline);
    void dispatch(GPUSize32 x, optional GPUSize32 y = 1, optional GPUSize32 z = 1);
    void dispatchIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    void endPass();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUProgrammablePassEncoder;

15.1.1. Creation

dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
};
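
For example, a non-normative sketch of encoding a compute pass that dispatches an existing computePipeline over 256 workgroups along x, assuming encoder and computeBindGroup already exist:

const computePass = encoder.beginComputePass();
computePass.setPipeline(computePipeline);
computePass.setBindGroup(0, computeBindGroup);
computePass.dispatch(256);        // y and z default to 1
computePass.endPass();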

16. Render Passes

16.1. GPURenderPassEncoder

interface mixin GPURenderEncoderBase {
    void setPipeline(GPURenderPipeline pipeline);

    void setIndexBuffer(GPUBuffer buffer, optional GPUSize64 offset = 0);
    void setVertexBuffer(GPUIndex32 slot, GPUBuffer buffer, optional GPUSize64 offset = 0);

    void draw(GPUSize32 vertexCount, GPUSize32 instanceCount,
              GPUSize32 firstVertex, GPUSize32 firstInstance);
    void drawIndexed(GPUSize32 indexCount, GPUSize32 instanceCount,
                     GPUSize32 firstIndex, GPUSignedOffset32 baseVertex, GPUSize32 firstInstance);

    void drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    void drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

interface GPURenderPassEncoder {
    void setViewport(float x, float y,
                     float width, float height,
                     float minDepth, float maxDepth);

    void setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    void setBlendColor(GPUColor color);
    void setStencilReference(GPUStencilValue reference);

    void executeBundles(sequence<GPURenderBundle> bundles);
    void endPass();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUProgrammablePassEncoder;
GPURenderPassEncoder includes GPURenderEncoderBase;

When a GPURenderPassEncoder is created, it has the following default state:

    Viewport: covers the full extent of the pass’s render targets, with minDepth 0.0 and maxDepth 1.0.
    Scissor rectangle: covers the full extent of the pass’s render targets.
    Blend color: (0, 0, 0, 0).
    Stencil reference: 0.

When a GPURenderBundle is executed, it does not inherit the pass’s pipeline, bind groups, or vertex or index buffers. After a GPURenderBundle has executed, the pass’s pipeline, bind groups, and vertex and index buffers are cleared. If zero GPURenderBundles are executed, the command buffer state is unchanged.

16.1.1. Creation

dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachmentDescriptor> colorAttachments;
    GPURenderPassDepthStencilAttachmentDescriptor depthStencilAttachment;
};
16.1.1.1. Color Attachments
dictionary GPURenderPassColorAttachmentDescriptor {
    required GPUTextureView attachment;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    GPUStoreOp storeOp = "store";
};
16.1.1.2. Depth/Stencil Attachments
dictionary GPURenderPassDepthStencilAttachmentDescriptor {
    required GPUTextureView attachment;

    required (GPULoadOp or float) depthLoadValue;
    required GPUStoreOp depthStoreOp;

    required (GPULoadOp or GPUStencilValue) stencilLoadValue;
    required GPUStoreOp stencilStoreOp;
};

16.1.2. Load & Store Operations

enum GPULoadOp {
    "load"
};
enum GPUStoreOp {
    "store",
    "clear"
};
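
For example, a non-normative sketch of a render pass that clears a single color attachment to opaque black, draws three vertices, and stores the result, assuming encoder, renderPipeline, vertexBuffer, and colorTextureView already exist:

const renderPass = encoder.beginRenderPass({
  colorAttachments: [{
    attachment: colorTextureView,
    loadValue: { r: 0, g: 0, b: 0, a: 1 },  // passing a GPUColor clears the attachment on load
    storeOp: "store",
  }],
});
renderPass.setPipeline(renderPipeline);
renderPass.setVertexBuffer(0, vertexBuffer);
renderPass.draw(3, 1, 0, 0);  // vertexCount, instanceCount, firstVertex, firstInstance
renderPass.endPass();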

17. Bundles

17.1. GPURenderBundle

interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;

17.1.1. Creation

dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {
};
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUProgrammablePassEncoder;
GPURenderBundleEncoder includes GPURenderEncoderBase;

17.1.2. Encoding

dictionary GPURenderBundleEncoderDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};
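
For example, a non-normative sketch of recording a reusable bundle and replaying it inside a compatible render pass, assuming renderPipeline targets a single bgra8unorm color attachment and renderPass has a matching attachment format:

const bundleEncoder = device.createRenderBundleEncoder({
  colorFormats: ["bgra8unorm"],
});
bundleEncoder.setPipeline(renderPipeline);
bundleEncoder.setVertexBuffer(0, vertexBuffer);
bundleEncoder.draw(3, 1, 0, 0);
const bundle = bundleEncoder.finish();

// Executing the bundle resets the pass’s pipeline, bind groups, and vertex/index buffers.
renderPass.executeBundles([bundle]);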

18. Queues

interface GPUQueue {
    void submit(sequence<GPUCommandBuffer> commandBuffers);

    GPUFence createFence(optional GPUFenceDescriptor descriptor = {});
    void signal(GPUFence fence, GPUFenceValue signalValue);

    void copyImageBitmapToTexture(
        GPUImageBitmapCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;

submit(commandBuffers) does nothing and produces an error if any of the following is true:

18.1. GPUFence

interface GPUFence {
    GPUFenceValue getCompletedValue();
    Promise<void> onCompletion(GPUFenceValue completionValue);
};
GPUFence includes GPUObjectBase;

18.1.1. Creation

dictionary GPUFenceDescriptor : GPUObjectDescriptorBase {
    GPUFenceValue initialValue = 0;
};
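
For example, a non-normative sketch of submitting work and waiting for its completion with a fence, assuming commandBuffer already exists:

const queue = device.defaultQueue;
const fence = queue.createFence();        // initialValue defaults to 0
queue.submit([commandBuffer]);
queue.signal(fence, 1);                   // the fence reaches 1 after prior submissions complete
fence.onCompletion(1).then(() => {
  console.log("GPU work done:", fence.getCompletedValue());  // 1
});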

19. Canvas Rendering and Swap Chain

interface GPUCanvasContext {
    GPUSwapChain configureSwapChain(GPUSwapChainDescriptor descriptor);

    Promise<GPUTextureFormat> getSwapChainPreferredFormat(GPUDevice device);
};
dictionary GPUSwapChainDescriptor : GPUObjectDescriptorBase {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.OUTPUT_ATTACHMENT
};
interface GPUSwapChain {
    GPUTexture getCurrentTexture();
};
GPUSwapChain includes GPUObjectBase;
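
For example, a non-normative sketch of configuring a swap chain and rendering into its current texture each frame; the "gpupresent" canvas context type is an assumption, and the render pass encoding is elided:

const context = canvas.getContext("gpupresent");
context.getSwapChainPreferredFormat(device).then((format) => {
  const swapChain = context.configureSwapChain({ device, format });  // usage defaults to OUTPUT_ATTACHMENT
  function frame() {
    const view = swapChain.getCurrentTexture().createView();
    // ... encode and submit a render pass that targets `view` ...
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
});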

20. Errors & Debugging

20.1. Fatal Errors

interface GPUDeviceLostInfo {
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};
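
For example, a non-normative sketch of observing device loss:

device.lost.then((info) => {
  console.error("WebGPU device was lost:", info.message);
  // Drop references to GPU objects and, if appropriate, request a new adapter and device.
});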

20.2. Error Scopes

enum GPUErrorFilter {
    "none",
    "out-of-memory",
    "validation"
};
interface GPUOutOfMemoryError {
    constructor();
};

interface GPUValidationError {
    constructor(DOMString message);
    readonly attribute DOMString message;
};

typedef (GPUOutOfMemoryError or GPUValidationError) GPUError;
partial interface GPUDevice {
    void pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};

popErrorScope() throws an OperationError if there are no error scopes on the stack, and rejects with an OperationError if the device is lost.
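
For example, a non-normative sketch of capturing validation errors from a single resource creation instead of letting them reach the uncaptured-error path, assuming textureDescriptor already exists:

device.pushErrorScope("validation");
const texture = device.createTexture(textureDescriptor);  // may be invalid
device.popErrorScope().then((error) => {
  if (error) {
    console.warn("createTexture failed validation:", error.message);
  }
});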

20.3. Telemetry

[
    Exposed=(Window, DedicatedWorker)
]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};
partial interface GPUDevice {
    [Exposed=(Window, DedicatedWorker)]
    attribute EventHandler onuncapturederror;
};
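
For example, a non-normative sketch of telemetry that logs errors not captured by any error scope (GPUDevice is an EventTarget, so addEventListener may be used instead of the onuncapturederror attribute):

device.addEventListener("uncapturederror", (event) => {
  // event.error is a GPUOutOfMemoryError or a GPUValidationError
  console.error("Uncaptured WebGPU error:", event.error);
});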

21. Type Definitions

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long long GPUFenceValue;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

21.1. Colors and Vectors

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

Note: double is large enough to precisely hold 32-bit signed/unsigned integers and single-precision floats.

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;
dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    required GPUIntegerCoordinate height;
    required GPUIntegerCoordinate depth;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
typedef sequence<(GPUBuffer or ArrayBuffer)> GPUMappedBuffer;

GPUMappedBuffer is always a sequence of 2 elements, of types GPUBuffer and ArrayBuffer, respectively.
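
For example, the union typedefs above accept either dictionary or sequence forms, and createBufferMapped() returns a GPUMappedBuffer pair; a non-normative sketch:

const clearColor = [0.0, 0.0, 0.0, 1.0];                // equivalent to { r: 0, g: 0, b: 0, a: 1 }
const copySize = { width: 64, height: 64, depth: 1 };   // equivalent to [64, 64, 1]

const [vertexBuffer, mapping] = device.createBufferMapped({
  size: 16,
  usage: GPUBufferUsage.VERTEX,
});
new Float32Array(mapping).set([0, 0, 0, 1]);
vertexBuffer.unmap();  // detaches `mapping`; the data now belongs to the GPUBuffer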

22. Temporary usages of non-exported dfns

Eventually all of these should disappear, but they are useful to avoid warnings while building the specification.

vertex buffer

Conformance

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119
[WebIDL]
Boris Zbarsky. Web IDL. 15 December 2016. ED. URL: https://heycam.github.io/webidl/

Informative References

[CSP3]
Mike West. Content Security Policy Level 3. 15 October 2018. WD. URL: https://www.w3.org/TR/CSP3/

IDL Index

interface mixin GPUObjectBase {
    attribute DOMString? label;
};

dictionary GPUObjectDescriptorBase {
    DOMString label;
};

[Exposed=Window]
partial interface Navigator {
    [SameObject] readonly attribute GPU gpu;
};

[Exposed=DedicatedWorker]
partial interface WorkerNavigator {
    [SameObject] readonly attribute GPU gpu;
};

[Exposed=(Window, DedicatedWorker)]
interface GPU {
    Promise<GPUAdapter> requestAdapter(optional GPURequestAdapterOptions options = {});
};

dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance"
};

interface GPUAdapter {
    readonly attribute DOMString name;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    //readonly attribute GPULimits limits; Don’t expose higher limits for now.

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUExtensionName> extensions = [];
    GPULimits limits = {};
};

enum GPUExtensionName {
    "texture-compression-bc"
};

dictionary GPULimits {
    GPUSize32 maxBindGroups = 4;
    GPUSize32 maxDynamicUniformBuffersPerPipelineLayout = 8;
    GPUSize32 maxDynamicStorageBuffersPerPipelineLayout = 4;
    GPUSize32 maxSampledTexturesPerShaderStage = 16;
    GPUSize32 maxSamplersPerShaderStage = 16;
    GPUSize32 maxStorageBuffersPerShaderStage = 4;
    GPUSize32 maxStorageTexturesPerShaderStage = 4;
    GPUSize32 maxUniformBuffersPerShaderStage = 12;
};

[Exposed=(Window, DedicatedWorker), Serializable]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUAdapter adapter;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    readonly attribute object limits;

    [SameObject] readonly attribute GPUQueue defaultQueue;

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUMappedBuffer createBufferMapped(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

[Serializable]
interface GPUBuffer {
    Promise<ArrayBuffer> mapReadAsync();
    Promise<ArrayBuffer> mapWriteAsync();
    void unmap();

    void destroy();
};
GPUBuffer includes GPUObjectBase;

dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
};

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
interface GPUBufferUsage {
    const GPUBufferUsageFlags MAP_READ  = 0x0001;
    const GPUBufferUsageFlags MAP_WRITE = 0x0002;
    const GPUBufferUsageFlags COPY_SRC  = 0x0004;
    const GPUBufferUsageFlags COPY_DST  = 0x0008;
    const GPUBufferUsageFlags INDEX     = 0x0010;
    const GPUBufferUsageFlags VERTEX    = 0x0020;
    const GPUBufferUsageFlags UNIFORM   = 0x0040;
    const GPUBufferUsageFlags STORAGE   = 0x0080;
    const GPUBufferUsageFlags INDIRECT  = 0x0100;
};

[Serializable]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    void destroy();
};
GPUTexture includes GPUObjectBase;

dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate arrayLayerCount = 1;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
};

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d"
};

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
interface GPUTextureUsage {
    const GPUTextureUsageFlags COPY_SRC          = 0x01;
    const GPUTextureUsageFlags COPY_DST          = 0x02;
    const GPUTextureUsageFlags SAMPLED           = 0x04;
    const GPUTextureUsageFlags STORAGE           = 0x08;
    const GPUTextureUsageFlags OUTPUT_ATTACHMENT = 0x10;
};

interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount = 0;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount = 0;
};

enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d"
};

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only"
};

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb10a2unorm",
    "rg11b10float",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth and stencil formats
    "depth32float",
    "depth24plus",
    "depth24plus-stencil8"
};

enum GPUTextureComponentType {
    "float",
    "sint",
    "uint"
};

interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 0xffffffff; // TODO: What should this be? Was Number.MAX_VALUE.
    GPUCompareFunction compare = "never";
};

enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat"
};

enum GPUFilterMode {
    "nearest",
    "linear"
};

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always"
};

[Serializable]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutBinding> bindings;
};

dictionary GPUBindGroupLayoutBinding {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;
    required GPUBindingType type;
    GPUTextureViewDimension textureDimension = "2d";
    GPUTextureComponentType textureComponentType = "float";
    boolean multisampled = false;
    boolean hasDynamicOffset = false;
};

typedef [EnforceRange] unsigned long GPUShaderStageFlags;
interface GPUShaderStage {
    const GPUShaderStageFlags VERTEX   = 0x1;
    const GPUShaderStageFlags FRAGMENT = 0x2;
    const GPUShaderStageFlags COMPUTE  = 0x4;
};

enum GPUBindingType {
    "uniform-buffer",
    "storage-buffer",
    "readonly-storage-buffer",
    "sampler",
    "sampled-texture",
    "storage-texture"
    // TODO: other binding types
};

interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupBinding> bindings;
};

typedef (GPUSampler or GPUTextureView or GPUBufferBinding) GPUBindingResource;

dictionary GPUBindGroupBinding {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};

[Serializable]
interface GPUShaderModule {
};
GPUShaderModule includes GPUObjectBase;

typedef (Uint32Array or DOMString) GPUShaderCode;

dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required GPUShaderCode code;
};

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    required GPUPipelineLayout layout;
};

dictionary GPUProgrammableStageDescriptor {
    required GPUShaderModule module;
    required DOMString entryPoint;
    // TODO: other stuff like specialization constants?
};

[Serializable]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;

dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor computeStage;
};

[Serializable]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;

dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor vertexStage;
    GPUProgrammableStageDescriptor fragmentStage;

    required GPUPrimitiveTopology primitiveTopology;
    GPURasterizationStateDescriptor rasterizationState = {};
    required sequence<GPUColorStateDescriptor> colorStates;
    GPUDepthStencilStateDescriptor depthStencilState;
    GPUVertexStateDescriptor vertexState = {};

    GPUSize32 sampleCount = 1;
    GPUSampleMask sampleMask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
    // TODO: other properties
};

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip"
};

dictionary GPURasterizationStateDescriptor {
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

enum GPUFrontFace {
    "ccw",
    "cw"
};

enum GPUCullMode {
    "none",
    "front",
    "back"
};

dictionary GPUColorStateDescriptor {
    required GPUTextureFormat format;

    GPUBlendDescriptor alphaBlend = {};
    GPUBlendDescriptor colorBlend = {};
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};

typedef [EnforceRange] unsigned long GPUColorWriteFlags;
interface GPUColorWrite {
    const GPUColorWriteFlags RED   = 0x1;
    const GPUColorWriteFlags GREEN = 0x2;
    const GPUColorWriteFlags BLUE  = 0x4;
    const GPUColorWriteFlags ALPHA = 0x8;
    const GPUColorWriteFlags ALL   = 0xF;
};

dictionary GPUBlendDescriptor {
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
    GPUBlendOperation operation = "add";
};

enum GPUBlendFactor {
    "zero",
    "one",
    "src-color",
    "one-minus-src-color",
    "src-alpha",
    "one-minus-src-alpha",
    "dst-color",
    "one-minus-dst-color",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "blend-color",
    "one-minus-blend-color"
};

enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max"
};

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap"
};

dictionary GPUDepthStencilStateDescriptor {
    required GPUTextureFormat format;

    boolean depthWriteEnabled = false;
    GPUCompareFunction depthCompare = "always";

    GPUStencilStateFaceDescriptor stencilFront = {};
    GPUStencilStateFaceDescriptor stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;
};

dictionary GPUStencilStateFaceDescriptor {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

enum GPUIndexFormat {
    "uint16",
    "uint32"
};

enum GPUVertexFormat {
    "uchar2",
    "uchar4",
    "char2",
    "char4",
    "uchar2norm",
    "uchar4norm",
    "char2norm",
    "char4norm",
    "ushort2",
    "ushort4",
    "short2",
    "short4",
    "ushort2norm",
    "ushort4norm",
    "short2norm",
    "short4norm",
    "half2",
    "half4",
    "float",
    "float2",
    "float3",
    "float4",
    "uint",
    "uint2",
    "uint3",
    "uint4",
    "int",
    "int2",
    "int3",
    "int4"
};

enum GPUInputStepMode {
    "vertex",
    "instance"
};

dictionary GPUVertexStateDescriptor {
    GPUIndexFormat indexFormat = "uint32";
    sequence<GPUVertexBufferLayoutDescriptor?> vertexBuffers = [];
};

dictionary GPUVertexBufferLayoutDescriptor {
    required GPUSize64 arrayStride;
    GPUInputStepMode stepMode = "vertex";
    required sequence<GPUVertexAttributeDescriptor> attributes;
};

dictionary GPUVertexAttributeDescriptor {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};

interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {
};

interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    void copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);

    void copyBufferToTexture(
        GPUBufferCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToBuffer(
        GPUTextureCopyView source,
        GPUBufferCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToTexture(
        GPUTextureCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void pushDebugGroup(DOMString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(DOMString markerLabel);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;

dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {
    // TODO: reusability flag?
};

dictionary GPUBufferCopyView {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    required GPUSize32 rowPitch;
    required GPUSize32 imageHeight;
};

dictionary GPUTextureCopyView {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUIntegerCoordinate arrayLayer = 0;
    GPUOrigin3D origin = {};
};

dictionary GPUImageBitmapCopyView {
    required ImageBitmap imageBitmap;
    GPUOrigin2D origin = {};
};

interface mixin GPUProgrammablePassEncoder {
    void setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    void setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      Uint32Array dynamicOffsetsData,
                      GPUSize64 dynamicOffsetsDataStart,
                      GPUSize32 dynamicOffsetsDataLength);

    void pushDebugGroup(DOMString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(DOMString markerLabel);
};

interface GPUComputePassEncoder {
    void setPipeline(GPUComputePipeline pipeline);
    void dispatch(GPUSize32 x, optional GPUSize32 y = 1, optional GPUSize32 z = 1);
    void dispatchIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    void endPass();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUProgrammablePassEncoder;

dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
};

interface mixin GPURenderEncoderBase {
    void setPipeline(GPURenderPipeline pipeline);

    void setIndexBuffer(GPUBuffer buffer, optional GPUSize64 offset = 0);
    void setVertexBuffer(GPUIndex32 slot, GPUBuffer buffer, optional GPUSize64 offset = 0);

    void draw(GPUSize32 vertexCount, GPUSize32 instanceCount,
              GPUSize32 firstVertex, GPUSize32 firstInstance);
    void drawIndexed(GPUSize32 indexCount, GPUSize32 instanceCount,
                     GPUSize32 firstIndex, GPUSignedOffset32 baseVertex, GPUSize32 firstInstance);

    void drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    void drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

interface GPURenderPassEncoder {
    void setViewport(float x, float y,
                     float width, float height,
                     float minDepth, float maxDepth);

    void setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    void setBlendColor(GPUColor color);
    void setStencilReference(GPUStencilValue reference);

    void executeBundles(sequence<GPURenderBundle> bundles);
    void endPass();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUProgrammablePassEncoder;
GPURenderPassEncoder includes GPURenderEncoderBase;

dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachmentDescriptor> colorAttachments;
    GPURenderPassDepthStencilAttachmentDescriptor depthStencilAttachment;
};

dictionary GPURenderPassColorAttachmentDescriptor {
    required GPUTextureView attachment;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    GPUStoreOp storeOp = "store";
};

dictionary GPURenderPassDepthStencilAttachmentDescriptor {
    required GPUTextureView attachment;

    required (GPULoadOp or float) depthLoadValue;
    required GPUStoreOp depthStoreOp;

    required (GPULoadOp or GPUStencilValue) stencilLoadValue;
    required GPUStoreOp stencilStoreOp;
};

enum GPULoadOp {
    "load"
};

enum GPUStoreOp {
    "store",
    "clear"
};

interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;

dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {
};

interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUProgrammablePassEncoder;
GPURenderBundleEncoder includes GPURenderEncoderBase;

dictionary GPURenderBundleEncoderDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};

interface GPUQueue {
    void submit(sequence<GPUCommandBuffer> commandBuffers);

    GPUFence createFence(optional GPUFenceDescriptor descriptor = {});
    void signal(GPUFence fence, GPUFenceValue signalValue);

    void copyImageBitmapToTexture(
        GPUImageBitmapCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;

interface GPUFence {
    GPUFenceValue getCompletedValue();
    Promise<void> onCompletion(GPUFenceValue completionValue);
};
GPUFence includes GPUObjectBase;

dictionary GPUFenceDescriptor : GPUObjectDescriptorBase {
    GPUFenceValue initialValue = 0;
};

interface GPUCanvasContext {
    GPUSwapChain configureSwapChain(GPUSwapChainDescriptor descriptor);

    Promise<GPUTextureFormat> getSwapChainPreferredFormat(GPUDevice device);
};

dictionary GPUSwapChainDescriptor : GPUObjectDescriptorBase {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.OUTPUT_ATTACHMENT
};

interface GPUSwapChain {
    GPUTexture getCurrentTexture();
};
GPUSwapChain includes GPUObjectBase;

interface GPUDeviceLostInfo {
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};

enum GPUErrorFilter {
    "none",
    "out-of-memory",
    "validation"
};

interface GPUOutOfMemoryError {
    constructor();
};

interface GPUValidationError {
    constructor(DOMString message);
    readonly attribute DOMString message;
};

typedef (GPUOutOfMemoryError or GPUValidationError) GPUError;

partial interface GPUDevice {
    void pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};

[
    Exposed=(Window, DedicatedWorker)
]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};

partial interface GPUDevice {
    [Exposed=(Window, DedicatedWorker)]
    attribute EventHandler onuncapturederror;
};

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long long GPUFenceValue;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;

dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;

dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    required GPUIntegerCoordinate height;
    required GPUIntegerCoordinate depth;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;

typedef sequence<(GPUBuffer or ArrayBuffer)> GPUMappedBuffer;

Issues Index

Consider merging all read-only usages. <https://github.com/gpuweb/gpuweb/issues/296>
Document read-only states for depth views. <https://github.com/gpuweb/gpuweb/issues/514>
Define "ownership".
Need a robust example like the one in ErrorHandling.md, which handles all situations. Possibly also include a simple example with no handling.
Write a spec section for this, and link to it.
Add client-side validation that a mapped buffer can only be unmapped and destroyed on the worker on which it was mapped.
Handle error buffers once we have a description of the error monad.
Specify that the rejection happens on the device timeline.
Handle error buffers once we have a description of the error monad.
Specify that the rejection happens on the device timeline.