WebGPU

Editor’s Draft,

Issue Tracking:
GitHub
Inline In Spec
Editors:
(Mozilla)
(Apple)
(Google)
Participate:
File an issue (open issues)

Abstract

WebGPU exposes an API for performing operations, such as rendering and computation, on a Graphics Processing Unit.

Status of this document

This specification was published by the GPU for the Web Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.

1. Introduction

This section is non-normative.

Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to the Vulkan, Direct3D 12, and Metal native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.

WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via GPUDevice, which manages resources, and the device’s GPUQueues, which execute commands. GPUDevice may have its own memory with high-speed access to the processing units. GPUBuffer and GPUTexture are the physical resources backed by GPU memory. GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands. GPUShaderModule contains shader code. The other resources, such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.

GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline, which is a mix of fixed-function and programmable stages. Programmable stages execute shaders, which are special programs designed to run on GPU hardware. Most of the state of a pipeline is defined by a GPURenderPipeline or a GPUComputePipeline object. The state not included in these pipeline objects is set during encoding with commands, such as beginRenderPass() or setBlendColor().

2. Security considerations

2.1. CPU-based undefined behavior

A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.

In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input. An implementation has to validate all the input from the user and only reach the driver with the valid workloads. This document specifies all the error conditions and handling semantics. For example, specifying the same buffer with intersecting ranges in both "source" and "destination" of copyBufferToBuffer() results in GPUCommandEncoder generating an error, and no other operation occurring.

See § 22 Errors & Debugging for more information about error handling.

2.2. GPU-based undefined behavior

WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs, some of the shader instructions may result in undefined behavior on the GPU. In order to address that, the shader instruction set and its defined behaviors are strictly defined by WebGPU. When a shader is provided, the WebGPU implementation has to validate it before doing any translation (to platform-specific shaders) or transformation passes.

2.3. Out-of-bounds access in shaders

Shaders can access physical resources either directly or via texture units, which are fixed-function hardware blocks that handle texture coordinate conversions. Validation on the API side can only guarantee that all the inputs to the shader are provided and that they have the correct usage and types. The API side cannot guarantee that the data is accessed within bounds if the texture units are not involved.

In order to prevent the shaders from accessing GPU memory an application doesn’t own, the WebGPU implementation may enable a special mode (called "robust buffer access") in the driver that guarantees that the access is limited to buffer bounds. Alternatively, an implementation may transform the shader code by inserting manual bounds checks.

If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:

  1. return a value at a different location within the resource bounds

  2. return a value vector of "(0, 0, 0, X)" with any "X"

  3. partially discard the draw or dispatch call

If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:

  1. write the value to a different location within the resource bounds

  2. discard the write operation

  3. partially discard the draw or dispatch call

2.4. Invalid data

When uploading floating-point data from CPU to GPU, or generating it on the GPU, we may end up with a binary representation that doesn’t correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers will only affect the results of arithmetic computations and will not have other side effects.

2.5. Driver bugs

GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) as part of their driver testing process, as was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and to support blacklisting particular drivers from using some of the native API backends.

2.6. Timing attacks

WebGPU is designed for multi-threaded use via Web Workers. Some of the objects, like GPUBuffer, have shared state which can be simultaneously accessed. This allows race conditions to occur, similar to those of accessing a SharedArrayBuffer from multiple Web Workers, which makes the thread scheduling observable and allows the creation of high-precision timers. The theoretical attack vectors are a subset of those of SharedArrayBuffer.

2.7. Denial of service

WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the available GPU memory to an application, in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that ensures an application doesn’t cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.

2.8. Fingerprinting

WebGPU defines the required limits and capabilities of any GPUAdapter, and encourages applications to target these standard limits. The actual result from requestAdapter() may have better limits, and could be subject to fingerprinting.

3. Terminology & Conventions

3.1. Dot Syntax

In this specification, the . ("dot") syntax, common in programming languages, is used. The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo."

For example, where buffer is a GPUBuffer, buffer.[[device]].[[adapter]] means "the [[adapter]] internal slot of the [[device]] internal slot of buffer."

3.2. Coordinate Systems

WebGPU’s coordinate systems match DirectX and Metal’s coordinate systems in a graphics pipeline.

3.3. Internal Objects

An internal object is a conceptual, non-exposed WebGPU object. Internal objects track the state of an API object and hold any underlying implementation. If the state of a particular internal object can change in parallel from multiple agents, those changes are always atomic with respect to all agents.

Note: An "agent" refers to a JavaScript "thread" (i.e. main thread, or Web Worker).

3.3.1. Invalid Objects

If an object is successfully created, it is valid at that moment. An internal object may become invalid during its lifetime, but it will never become valid again.

Invalid objects result from a number of situations, including:

3.4. WebGPU Interfaces

A WebGPU interface is an exposed interface which encapsulates an internal object. It provides the interface through which the internal object's state is changed.

As a matter of convention, if a WebGPU interface is referred to as invalid, it means that the internal object it encapsulates is invalid.

Any interface which includes GPUObjectBase is a WebGPU interface.

interface mixin GPUObjectBase {
    attribute USVString? label;
};

GPUObjectBase has the following attributes:

label, of type USVString, nullable

A label which can be used by development tools (such as error/warning messages, browser developer tools, or platform debugging utilities) to identify the underlying internal object to the developer. It has no specified format, and therefore cannot be reliably machine-parsed.

In any given situation, the user agent may or may not choose to use this label.

GPUObjectBase has the following internal slots:

[[device]], of type device, readonly

An internal slot holding the device which owns the internal object.

3.5. Object Descriptors

An object descriptor holds the information needed to create an object, which is typically done via one of the create* methods of GPUDevice.

dictionary GPUObjectDescriptorBase {
    USVString label;
};

GPUObjectDescriptorBase has the following members:

label, of type USVString

The initial value of GPUObjectBase.label.

4. Programming Model

4.1. Timelines

This section is non-normative.

A computer system with a user agent at the front-end and GPU at the back-end has components working on different timelines in parallel:

Content timeline

Associated with the execution of the Web script. It includes calling all methods described by this specification.

Device timeline

Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the part of the user agent that controls the GPU, though that part may live in a separate OS process.

Queue timeline

Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.

In this specification, asynchronous operations are used when the result value depends on work that happens on any timeline other than the Content timeline. They are represented by callbacks and promises in JavaScript.

GPUComputePassEncoder.dispatch():
  1. User encodes a dispatch command by calling a method of the GPUComputePassEncoder which happens on the Content timeline.

  2. User issues GPUQueue.submit() that hands over the GPUCommandBuffer to the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission.

  3. The submit gets dispatched by the GPU thread scheduler onto the actual compute units for execution, which happens on the Queue timeline.

GPUDevice.createBuffer():
  1. User fills out a GPUBufferDescriptor and creates a GPUBuffer with it, which happens on the Content timeline.

  2. User agent creates a low-level buffer on the Device timeline.

GPUBuffer.mapAsync():
  1. User requests to map a GPUBuffer on the Content timeline and gets a promise in return.

  2. User agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.

  3. After the GPU operating on Queue timeline is done using the buffer, the user agent maps it to memory and resolves the promise.

4.2. Memory

This section is non-normative.

Once a GPUDevice has been obtained during an application initialization routine, we can describe the WebGPU platform as consisting of the following layers:

  1. User agent implementing the specification.

  2. Operating system with low-level native API drivers for this device.

  3. Actual CPU and GPU hardware.

Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:

Most physical resources are allocated in the memory of type that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the user agent part that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to the dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.

All of these transitions are done by the WebGPU implementation of the user agent.

Note: This example describes the worst case, while in practice the implementation may not need to cross the process boundary, or may be able to expose the driver-managed memory directly to the user behind an ArrayBuffer, thus avoiding any data copies.

4.3. Resource usage

Buffers and textures can be used by the GPU in multiple ways, which can be split into two groups:

Read-only usages

Usages like GPUBufferUsage.VERTEX or GPUTextureUsage.SAMPLED don’t change the contents of a resource.

Mutating usages

Usages like GPUBufferUsage.STORAGE do change the contents of a resource.

Consider merging all read-only usages. <https://github.com/gpuweb/gpuweb/issues/296>

Textures may consist of separate mipmap levels and array layers, which can be used differently at any given time. Each such subresource is uniquely identified by a texture, mipmap level, and (for 2d textures only) array layer.

The main usage rule is that any subresource at any given time can only be in either:

Enforcing this rule allows the API to limit when data races can occur when working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.

Generally, when an implementation processes an operation that uses a subresource in a way its current usage doesn’t allow, it schedules a transition of the resource into the new state. In some cases, such as within an open GPURenderPassEncoder, such a transition is impossible due to hardware limitations. We define these places as usage scopes: each subresource must not change usage within a usage scope.

For example, binding the same buffer for GPUBufferUsage.STORAGE as well as for GPUBufferUsage.VERTEX within the same GPURenderPassEncoder would put the encoder as well as the owning GPUCommandEncoder into the error state. Since GPUBufferUsage.STORAGE is the only mutating usage for a buffer that is valid inside a render pass, if it’s present, this buffer can’t be used in any other way within this pass.

The subresources of textures included in the views provided to GPURenderPassColorAttachmentDescriptor.attachment and GPURenderPassColorAttachmentDescriptor.resolveTarget are considered to have OUTPUT_ATTACHMENT for the usage scope of this render pass.

The physical size of a GPUTexture subresource is the dimensions of the subresource in texels, including any extra padding needed to form complete texel blocks in the subresource.

Considering a GPUTexture in BC format whose [[textureSize]] is {60, 60, 1}, when sampling the GPUTexture at mipmap level 2, the sampling hardware uses {15, 15, 1} as the size of the subresource, while its physical size is {16, 16, 1} as the block-compression algorithm can only operate on 4x4 texel blocks.

Document read-only states for depth views. <https://github.com/gpuweb/gpuweb/issues/514>

4.4. Synchronization

For each subresource of a physical resource, its set of usage flags is tracked on the Queue timeline. Usage flags are GPUBufferUsage or GPUTextureUsage flags, according to the type of the subresource.

This section will need to be revised to support multiple queues.

On the Queue timeline, there is an ordered sequence of usage scopes. Each item on the timeline is contained within exactly one scope. For the duration of each scope, the set of usage flags of any given subresource is constant. A subresource may transition to new usages at the boundaries between usage scopes.

This specification defines the following usage scopes:

  1. an individual command on a GPUCommandEncoder, such as GPUCommandEncoder.copyBufferToTexture.

  2. an individual command on a GPUComputePassEncoder, such as GPUProgrammablePassEncoder.setBindGroup.

  3. the whole GPURenderPassEncoder.

Note: calling GPUProgrammablePassEncoder.setBindGroup adds the [[usedBuffers]] and [[usedTextures]] to the usage scope regardless of whether the shader or GPUPipelineLayout actually depends on these bindings. Similarly, GPURenderEncoderBase.setIndexBuffer adds the index buffer to the usage scope (as GPUBufferUsage.INDEX) regardless of whether any indexed draw calls follow.

The usage scopes are validated at GPUCommandEncoder.finish time. The implementation performs the usage scope validation by computing the union of the usage flags of each subresource used in the usage scope. A GPUValidationError is generated in the current scope with an appropriate error message if that union contains a mutating usage combined with any other usage.

5. Core Internal Objects

5.1. Adapters

An adapter represents an implementation of WebGPU on the system. Each adapter identifies both an instance of a hardware accelerator (e.g. GPU or CPU) and an instance of a browser’s implementation of WebGPU on top of that accelerator.

If an adapter becomes unavailable, it becomes invalid. Once invalid, it never becomes valid again. Any devices on the adapter, and internal objects owned by those devices, also become invalid.

Note: An adapter may be a physical display adapter (GPU), but it could also be a software renderer. A returned adapter could refer to different physical adapters, or to different browser codepaths or system drivers on the same physical adapters. Applications can hold onto multiple adapters at once (via GPUAdapter) (even if some are invalid), and two of these could refer to different instances of the same physical configuration (e.g. if the GPU was reset or disconnected and reconnected).

An adapter has the following internal slots:

[[extensions]], of type sequence<GPUExtensionName>, readonly

The extensions which can be used to create devices on this adapter.

[[limits]], of type GPULimits, readonly

The best limits which can be used to create devices on this adapter.

Each adapter limit must be the same or better than its default value in GPULimits.

Adapters are exposed via GPUAdapter.

5.2. Devices

A device is the logical instantiation of an adapter, through which internal objects are created. It can be shared across multiple agents (e.g. dedicated workers).

A device is the exclusive owner of all internal objects created from it: when the device is lost, it and all objects created on it (directly, e.g. createTexture(), or indirectly, e.g. createView()) become invalid.

Define "ownership".

A device has the following internal slots:

[[adapter]], of type adapter, readonly

The adapter from which this device was created.

[[extensions]], of type sequence<GPUExtensionName>, readonly

The extensions which can be used on this device. No additional extensions can be used, even if the underlying adapter can support them.

[[limits]], of type GPULimits, readonly

The limits which can be used on this device. No better limits can be used, even if the underlying adapter can support them.

When a new device device is created from adapter adapter with GPUDeviceDescriptor descriptor:

Devices are exposed via GPUDevice.

6. Initialization

6.1. Examples

Need a robust example like the one in ErrorHandling.md, which handles all situations. Possibly also include a simple example with no handling.

A GPU object is available via navigator.gpu on the Window:

[Exposed=Window]
partial interface Navigator {
    [SameObject] readonly attribute GPU gpu;
};

... as well as on dedicated workers:

[Exposed=DedicatedWorker]
partial interface WorkerNavigator {
    [SameObject] readonly attribute GPU gpu;
};

6.3. GPU

GPU is the entry point to WebGPU.

[Exposed=(Window, DedicatedWorker)]
interface GPU {
    Promise<GPUAdapter> requestAdapter(optional GPURequestAdapterOptions options = {});
};

GPU has the methods defined by the following sections.

6.3.1. requestAdapter(options)

Arguments:

Returns: promise, of type Promise<GPUAdapter>.

Requests an adapter from the user agent. The user agent chooses whether to return an adapter, and, if so, chooses according to the provided options.

Returns a new promise, promise. On the Device timeline, the following steps occur:

6.3.1.1. Adapter Selection

GPURequestAdapterOptions provides hints to the user agent indicating what configuration is suitable for the application.

dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
};
enum GPUPowerPreference {
    "low-power",
    "high-performance"
};

GPURequestAdapterOptions has the following members:

powerPreference, of type GPUPowerPreference

Optionally provides a hint indicating what class of adapter should be selected from the system’s available adapters.

The value of this hint may influence which adapter is chosen, but it must not influence whether an adapter is returned or not.

Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system. For instance, some laptops have a low-power integrated GPU and a high-performance discrete GPU.

Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration and state and powerPreference, the user agent is likely to select the same adapter.

It must be one of the following values:

undefined (or not present)

Provides no hint to the user agent.

"low-power"

Indicates a request to prioritize power savings over performance.

Note: Generally, content should use this if it is unlikely to be constrained by drawing performance; for example, if it renders only one frame per second, draws only relatively simple geometry with simple shaders, or uses a small HTML canvas element. Developers are encouraged to use this value if their content allows, since it may significantly improve battery life on portable devices.

"high-performance"

Indicates a request to prioritize performance over power consumption.

Note: By choosing this value, developers should be aware that, for devices created on the resulting adapter, user agents are more likely to force device loss, in order to save power by switching to a lower-power adapter. Developers are encouraged to only specify this value if they believe it is absolutely necessary, since it may significantly decrease battery life on portable devices.

6.4. GPUAdapter

A GPUAdapter encapsulates an adapter, and describes its capabilities (extensions and limits).

To get a GPUAdapter, use requestAdapter().

interface GPUAdapter {
    readonly attribute DOMString name;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    //readonly attribute GPULimits limits; Don’t expose higher limits for now.

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

GPUAdapter has:

6.4.1. requestDevice(optional descriptor)

this: of type GPUAdapter.

Arguments:

Returns: promise, of type Promise<GPUDevice>.

Requests a device from the adapter.

Returns a new promise, promise. On the Device timeline, the following steps occur:

Valid Usage

Let adapter be this.[[adapter]].

6.4.1.1. GPUDeviceDescriptor

GPUDeviceDescriptor describes a device request.

dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUExtensionName> extensions = [];
    GPULimits limits = {};
};
GPUDeviceDescriptor has the following members:

extensions, of type sequence<GPUExtensionName>, defaulting to []

The set of GPUExtensionName values in this sequence defines the exact set of extensions that must be enabled on the device.

limits, of type GPULimits, defaulting to {}

Defines the exact limits that must be enabled on the device.

6.4.1.2. GPUExtensionName

Each GPUExtensionName identifies a set of functionality which, if available, allows additional usages of WebGPU that would have otherwise been invalid.

enum GPUExtensionName {
    "texture-compression-bc",
    "pipeline-statistics-query"
};
"texture-compression-bc"

Write a spec section for this, and link to it.

6.4.1.3. GPULimits

GPULimits describes various limits in the usage of WebGPU on a device.

One limit value may be better than another. For each limit, "better" is defined.

Note: Setting "better" limits may not necessarily be desirable. While they enable strictly more programs to be valid, they may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally request the "worst" limits that work for their content.

dictionary GPULimits {
    GPUSize32 maxBindGroups = 4;
    GPUSize32 maxDynamicUniformBuffersPerPipelineLayout = 8;
    GPUSize32 maxDynamicStorageBuffersPerPipelineLayout = 4;
    GPUSize32 maxSampledTexturesPerShaderStage = 16;
    GPUSize32 maxSamplersPerShaderStage = 16;
    GPUSize32 maxStorageBuffersPerShaderStage = 4;
    GPUSize32 maxStorageTexturesPerShaderStage = 4;
    GPUSize32 maxUniformBuffersPerShaderStage = 12;
};
GPULimits has the following members:

maxBindGroups, of type GPUSize32, defaulting to 4

The maximum number of GPUBindGroupLayouts allowed in bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxDynamicUniformBuffersPerPipelineLayout, of type GPUSize32, defaulting to 8

The maximum number of entries for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxDynamicStorageBuffersPerPipelineLayout, of type GPUSize32, defaulting to 4

The maximum number of entries for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxSampledTexturesPerShaderStage, of type GPUSize32, defaulting to 16

For each possible GPUShaderStage stage, the maximum number of entries for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxSamplersPerShaderStage, of type GPUSize32, defaulting to 16

For each possible GPUShaderStage stage, the maximum number of entries for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxStorageBuffersPerShaderStage, of type GPUSize32, defaulting to 4

For each possible GPUShaderStage stage, the maximum number of entries for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxStorageTexturesPerShaderStage, of type GPUSize32, defaulting to 4

For each possible GPUShaderStage stage, the maximum number of entries for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

maxUniformBuffersPerShaderStage, of type GPUSize32, defaulting to 12

For each possible GPUShaderStage stage, the maximum number of entries for which:

across all bindGroupLayouts when creating a GPUPipelineLayout.

Higher is better.

6.5. GPUDevice

A GPUDevice encapsulates a device and exposes the functionality of that device.

GPUDevice is the top-level interface through which WebGPU interfaces are created.

To get a GPUDevice, use requestDevice().

[Exposed=(Window, DedicatedWorker), Serializable]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUAdapter adapter;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    readonly attribute object limits;

    [SameObject] readonly attribute GPUQueue defaultQueue;

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUMappedBuffer createBufferMapped(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

GPUDevice has:

GPUDevice objects are serializable objects.

The steps to serialize a GPUDevice object, given value, serialized, and forStorage, are:
  1. If forStorage is true, throw a "DataCloneError".

  2. Set serialized.device to the value of value.[[device]].

The steps to deserialize a GPUDevice object, given serialized and value, are:
  1. Set value.[[device]] to serialized.device.

7. Buffers

7.1. GPUBuffer

define buffer (internal object)

A GPUBuffer represents a block of memory that can be used in GPU operations. Data is stored in a linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the GPUBuffer, subject to alignment restrictions depending on the operation. Some GPUBuffers can be mapped, which makes the block of memory accessible via an ArrayBuffer called its mapping.

GPUBuffers are created via GPUDevice.createBuffer(descriptor), which returns a new buffer in the mapped or unmapped state.

[Serializable]
interface GPUBuffer {
    Promise<void> mapAsync(optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    void unmap();

    void destroy();
};
GPUBuffer includes GPUObjectBase;

GPUBuffer has the following internal slots:

[[size]] of type GPUSize64.

The length of the GPUBuffer allocation in bytes.

[[usage]] of type GPUBufferUsageFlags.

The allowed usages for this GPUBuffer.

[[state]] of type buffer state.

The current state of the GPUBuffer.

[[mapping]] of type ArrayBuffer or Promise or null.

The mapping for this GPUBuffer. The ArrayBuffer isn’t directly accessible and is instead accessed through views into it, called the mapped ranges, that are stored in [[mapped_ranges]].

Specify [[mapping]] in term of DataBlock similarly to AllocateArrayBuffer? <https://github.com/gpuweb/gpuweb/issues/605>

[[mapping_range]] of type sequence<Number> or null.

The range of this GPUBuffer that is mapped.

[[mapped_ranges]] of type sequence<ArrayBuffer> or null.

The ArrayBuffers returned via getMappedRange to the application. They are tracked so they can be detached when unmap is called.

[[usage]] is differently named from [[textureUsage]]. We should make it consistent.

Each GPUBuffer has a current buffer state on the Content timeline which is one of the following:

Note: [[size]] and [[usage]] are immutable once the GPUBuffer has been created.

Note: GPUBuffer has a state machine with the following states. ([[mapping]], [[mapping_range]], and [[mapped_ranges]] are null when not specified.)

GPUBuffer is Serializable. It is a reference to an internal buffer object, and Serializable means that the reference can be copied between realms (threads/workers), allowing multiple realms to access it concurrently. Since GPUBuffer has internal state (mapped, destroyed), that state is internally synchronized: these state changes occur atomically across realms.

7.2. Buffer Creation

7.2.1. GPUBufferDescriptor

This specifies the options to use in creating a GPUBuffer.

dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};
validating GPUBufferDescriptor(device, descriptor)
  1. If device is lost return false.

  2. If any of the bits of descriptor’s usage aren’t present in this device’s [[allowed buffer usages]] return false.

  3. If both the MAP_READ and MAP_WRITE bits of descriptor’s usage attribute are set, return false.

  4. Return true.

7.3. Buffer Usage

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
interface GPUBufferUsage {
    const GPUBufferUsageFlags MAP_READ      = 0x0001;
    const GPUBufferUsageFlags MAP_WRITE     = 0x0002;
    const GPUBufferUsageFlags COPY_SRC      = 0x0004;
    const GPUBufferUsageFlags COPY_DST      = 0x0008;
    const GPUBufferUsageFlags INDEX         = 0x0010;
    const GPUBufferUsageFlags VERTEX        = 0x0020;
    const GPUBufferUsageFlags UNIFORM       = 0x0040;
    const GPUBufferUsageFlags STORAGE       = 0x0080;
    const GPUBufferUsageFlags INDIRECT      = 0x0100;
    const GPUBufferUsageFlags QUERY_RESOLVE = 0x0200;
};
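The usage flags above are bit flags that an application combines with bitwise OR. As a non-normative sketch, with the constant values mirrored from the GPUBufferUsage interface:

```javascript
// Usage flag values mirrored from the GPUBufferUsage interface above.
const BufferUsage = {
  MAP_READ: 0x0001,
  MAP_WRITE: 0x0002,
  COPY_SRC: 0x0004,
  COPY_DST: 0x0008,
  INDEX: 0x0010,
  VERTEX: 0x0020,
  UNIFORM: 0x0040,
  STORAGE: 0x0080,
  INDIRECT: 0x0100,
  QUERY_RESOLVE: 0x0200,
};

// A staging buffer the CPU writes into and the GPU copies from:
const stagingUsage = BufferUsage.MAP_WRITE | BufferUsage.COPY_SRC;

// Testing whether a combined mask contains a given flag:
const canMapWrite = (stagingUsage & BufferUsage.MAP_WRITE) !== 0;
```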

7.3.1. createBuffer(descriptor)

Arguments:

Returns: GPUBuffer

  1. If this call doesn’t follow the createBuffer Valid Usage:

    1. Return an error buffer.

    Explain that the resulting error buffer can still be mapped at creation. <https://github.com/gpuweb/gpuweb/issues/605>

  2. Let b be a new GPUBuffer object.

  3. Set b.[[size]] to descriptor.size.

  4. Set b.[[usage]] to descriptor.usage.

  5. If descriptor.mappedAtCreation is true:

    1. Set b.[[mapping]] to a new ArrayBuffer of size b.[[size]].

    2. Set b.[[mapping_range]] to [0, descriptor.size].

    3. Set b.[[mapped_ranges]] to [].

    4. Set b.[[state]] to mapped at creation.

  6. Else:

    1. Set b.[[mapping]] to null.

    2. Set b.[[mapping_range]] to null.

    3. Set b.[[mapped_ranges]] to null.

    4. Set b.[[state]] to unmapped.

  7. Set each byte of b’s allocation to zero.

  8. Return b.

Note: it is valid to set mappedAtCreation to true without MAP_READ or MAP_WRITE in usage. This can be used to set the buffer’s initial data.
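For illustration, a minimal sketch of this pattern, assuming device is a GPUDevice obtained elsewhere; the helper name createBufferWithData is ours, not part of the API:

```javascript
// Sketch: using mappedAtCreation to set a buffer's initial data without
// requiring MAP_WRITE usage. `device` is assumed to be a GPUDevice.
function createBufferWithData(device, data /* a TypedArray */, usage) {
  const buffer = device.createBuffer({
    size: data.byteLength,
    usage,
    mappedAtCreation: true, // buffer starts in the mapped at creation state
  });
  // getMappedRange() with no arguments covers the whole buffer.
  new Uint8Array(buffer.getMappedRange()).set(
    new Uint8Array(data.buffer, data.byteOffset, data.byteLength));
  buffer.unmap(); // after this, the GPU may use the buffer
  return buffer;
}
```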

createBuffer Valid Usage: Given a GPUDevice this and a GPUBufferDescriptor descriptor, the following validation rules apply:
  1. this must be a valid GPUDevice.

  2. descriptor.usage must be a subset of this.[[allowed buffer usages]].

  3. If descriptor.usage contains MAP_READ then the only other usage it may contain is COPY_DST.

  4. If descriptor.usage contains MAP_WRITE then the only other usage it may contain is COPY_SRC.

Explain what are a GPUDevice's [[allowed buffer usages]] <https://github.com/gpuweb/gpuweb/issues/605>
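The MAP_READ/MAP_WRITE restrictions above can be sketched as a standalone check (a non-normative sketch; flag values mirrored from GPUBufferUsage):

```javascript
// Non-normative sketch of the mappability rules above
// (flag values mirrored from GPUBufferUsage).
const MAP_READ = 0x0001, MAP_WRITE = 0x0002, COPY_SRC = 0x0004, COPY_DST = 0x0008;

function validateBufferUsage(usage) {
  // MAP_READ may only be combined with COPY_DST.
  if (usage & MAP_READ && (usage & ~(MAP_READ | COPY_DST)) !== 0) return false;
  // MAP_WRITE may only be combined with COPY_SRC.
  if (usage & MAP_WRITE && (usage & ~(MAP_WRITE | COPY_SRC)) !== 0) return false;
  return true;
}
```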

7.4. Buffer Destruction

An application that no longer requires a GPUBuffer can choose to lose access to it before garbage collection by calling destroy().

Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer once all previously submitted operations using it are complete.

7.4.1. destroy()

this: of type GPUBuffer.
  1. If this.[[state]] is mapped or mapped at creation:

    1. Run the steps to unmap this.

  2. Set this.[[state]] to destroyed.

Handle error buffers once we have a description of the error monad.

7.5. Buffer Mapping

An application can request to map a GPUBuffer so that it can access its content via ArrayBuffers that represent parts of the GPUBuffer's allocation. Mapping a GPUBuffer is requested asynchronously with mapAsync so that the user agent can ensure the GPU has finished using the GPUBuffer before the application accesses its content. Once the GPUBuffer is mapped, the application can synchronously ask for access to ranges of its content with getMappedRange. A mapped GPUBuffer cannot be used by the GPU and must be unmapped using unmap before work using it can be submitted to the Queue timeline.
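The buffer lifecycle described here can be modeled, informally, as a small state machine. The following is an illustrative model only, not part of the API; the state names follow the spec text:

```javascript
// Illustrative model of the buffer state machine described in this section.
const transitions = {
  "mapped at creation": ["unmapped", "destroyed"],              // unmap() or destroy()
  "unmapped":           ["mapping pending", "destroyed"],       // mapAsync() or destroy()
  "mapping pending":    ["mapped", "unmapped", "destroyed"],    // success, unmap(), destroy()
  "mapped":             ["unmapped", "destroyed"],              // unmap() or destroy()
  "destroyed":          [],                                     // terminal state
};

function canTransition(from, to) {
  return transitions[from].includes(to);
}
```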

Add client-side validation that a mapped buffer can only be unmapped and destroyed on the worker on which it was mapped. Likewise getMappedRange can only be called on that worker. <https://github.com/gpuweb/gpuweb/issues/605>

7.5.1. mapAsync(offset, size)

There is concern that it should be clearer at a mapAsync call point if it is meant for reading or writing because the semantics are very different. Alternatives suggested include splitting into mapReadAsync vs. mapWriteAsync, or adding a GPUMapFlags as an argument to the call that can later be used to extend the method. <https://github.com/gpuweb/gpuweb/issues/605>

this: of type GPUBuffer.

Arguments:

Returns: Promise

Handle error buffers once we have a description of the error monad. <https://github.com/gpuweb/gpuweb/issues/605>

  1. If size is 0 and offset is less than this.[[size]]:

    1. Set size to this.[[size]] - offset

  2. If this call doesn’t follow mapAsync Valid Usage:

    1. Record a validation error on the current scope.

    2. Return a promise rejected with an AbortError on the Device timeline.

  3. Let p be a new Promise.

  4. Set this.[[mapping]] to p.

  5. Set this.[[state]] to mapping pending.

  6. Enqueue an operation on the default queue’s Queue timeline that will execute the following:

    1. If this.[[state]] is mapping pending:

      1. Let m be a new ArrayBuffer of size size.

      2. Set the content of m to the content of this’s allocation starting at offset offset and for size bytes.

      3. Set this.[[mapping]] to m.

      4. Set this.[[state]] to mapped.

      5. Set this.[[mapping_range]] to [offset, size].

      6. Set this.[[mapped_ranges]] to [].

      7. Resolve p.

  7. Return p.

mapAsync Valid Usage

Given a GPUBuffer this, a GPUSize64 offset and a GPUSize64 size the following validation rules apply:

  1. this must be a valid GPUBuffer.

  2. offset must be a multiple of 4.

  3. size must be a multiple of 4.

  4. offset + size must be less than or equal to this.[[size]].

  5. this.[[usage]] must contain MAP_READ or MAP_WRITE.

  6. this.[[state]] must be unmapped
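A non-normative sketch of these rules, assuming the default size of 0 has already been resolved to this.[[size]] - offset, and representing the buffer as a plain object for illustration:

```javascript
// Non-normative sketch of the mapAsync validation rules.
const MAP_READ = 0x0001, MAP_WRITE = 0x0002;

function validateMapAsync(buffer, offset, size) {
  return (
    offset % 4 === 0 &&                              // rule 2
    size % 4 === 0 &&                                // rule 3
    offset + size <= buffer.size &&                  // rule 4
    (buffer.usage & (MAP_READ | MAP_WRITE)) !== 0 && // rule 5
    buffer.state === "unmapped"                      // rule 6
  );
}
```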

7.5.2. getMappedRange(offset, size)

this: of type GPUBuffer.

Arguments:

Returns: ArrayBuffer

  1. If this call doesn’t follow the getMappedRange Valid Usage:

    1. Throw an OperationError.

  2. Let m be a new ArrayBuffer of size size pointing at the content of this.[[mapping]] at offset offset - this.[[mapping_range]][0].

  3. Append m to this.[[mapped_ranges]].

  4. Return m.

getMappedRange Valid Usage

Given a GPUBuffer this, a GPUSize64 offset and a GPUSize64 size the following validation rules apply:

  1. this.[[state]] must be mapped or mapped at creation.

  2. offset must be a multiple of 8.

  3. size must be a multiple of 4.

  4. offset must be greater than or equal to this.[[mapping_range]][0].

  5. offset + size must be less than or equal to this.[[mapping_range]][0] + this.[[mapping_range]][1].

  6. [offset, offset + size) must not overlap another range in this.[[mapped_ranges]].

Note: It is valid to get mapped ranges of an error GPUBuffer that is mapped at creation because the Content timeline might not know it is an error GPUBuffer.
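Rule 6's overlap test can be sketched as follows; for illustration each previously returned range is tracked as an [offset, length] pair, whereas the spec itself tracks the ArrayBuffers in [[mapped_ranges]]:

```javascript
// Non-normative sketch of rule 6: a requested range [offset, offset + size)
// must not overlap a range already handed out via getMappedRange.
function overlapsExisting(mappedRanges, offset, size) {
  return mappedRanges.some(([start, length]) =>
    offset < start + length && start < offset + size);
}
```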

7.5.3. unmap()

this: of type GPUBuffer.
  1. If this call doesn’t follow unmap Valid Usage:

    1. Record a validation error on the current scope.

    2. Return.

  2. If this.[[state]] is mapping pending:

    1. Reject [[mapping]] with an OperationError.

    2. Set this.[[mapping]] to null.

  3. If this.[[state]] is mapped or mapped at creation:

    1. If one of the two following conditions holds:

      1. this.[[state]] is mapped at creation

      2. this.[[state]] is mapped and this.[[usage]] contains MAP_WRITE

    2. Then:

      1. Enqueue an operation on the default queue’s Queue timeline that updates the range this.[[mapping_range]] of this’s allocation with the content of this.[[mapping]].

    3. Detach each ArrayBuffer in this.[[mapped_ranges]] from its content.

    4. Set this.[[mapping]] to null.

    5. Set this.[[mapping_range]] to null.

    6. Set this.[[mapped_ranges]] to null.

  4. Set this.[[state]] to unmapped.

Note: When a MAP_READ buffer (not currently mapped at creation) is unmapped, any local modifications made by the application to the mapped range ArrayBuffers are discarded and will not affect the content of follow-up mappings.

unmap Valid Usage

Given a GPUBuffer this, the following validation rules apply:

  1. this.[[state]] must not be unmapped

  2. this.[[state]] must not be destroyed

Note: It is valid to unmap an error GPUBuffer that is mapped at creation because the Content timeline might not know it is an error GPUBuffer.

8. Textures and Texture Views

define texture (internal object)

define mipmap level, array layer, slice (concepts)

8.1. GPUTexture

[Serializable]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    void destroy();
};
GPUTexture includes GPUObjectBase;

GPUTexture has the following internal slots:

[[textureSize]] of type GPUExtent3D.

The size of the GPUTexture in texels in mipmap level 0.

[[mipLevelCount]] of type GPUIntegerCoordinate.

The total number of the mipmap levels of the GPUTexture.

[[sampleCount]] of type GPUSize32.

The number of samples in each texel of the GPUTexture.

[[dimension]] of type GPUTextureDimension.

The dimension of the GPUTexture.

[[format]] of type GPUTextureFormat.

The format of the GPUTexture.

[[textureUsage]] of type GPUTextureUsageFlags.

The allowed usages for this GPUTexture.

8.1.1. Texture Creation

dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
};
enum GPUTextureDimension {
    "1d",
    "2d",
    "3d"
};
typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
interface GPUTextureUsage {
    const GPUTextureUsageFlags COPY_SRC          = 0x01;
    const GPUTextureUsageFlags COPY_DST          = 0x02;
    const GPUTextureUsageFlags SAMPLED           = 0x04;
    const GPUTextureUsageFlags STORAGE           = 0x08;
    const GPUTextureUsageFlags OUTPUT_ATTACHMENT = 0x10;
};

8.2. GPUTextureView

interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

8.2.1. Texture View Creation

dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount = 0;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount = 0;
};

Make this a standalone algorithm used in the createView algorithm.

The references to GPUTextureDescriptor here should actually refer to internal slots of a texture internal object once we have one.

enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d"
};
enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only"
};

8.2.2. GPUTexture.createView(descriptor)

this: of type GPUTexture.

Arguments:

Returns: view, of type GPUTextureView.

Write the definition, using this, descriptor, and view.

8.3. Texture Formats

The name of the format specifies the order of components, bits per component, and data type for the component.

If the format has the -srgb suffix, then sRGB conversions from gamma to linear and vice versa are applied during the reading and writing of color values in the shader. Compressed texture formats are provided by extensions. Their naming should follow the convention here, with the compression format name as a prefix, e.g. etc2-rgba8unorm.

The texel block is a single addressable element of the textures in pixel-based GPUTextureFormats, and a single compressed block of the textures in block-based compressed GPUTextureFormats.

The texel block width and texel block height specify the dimensions of one texel block.

The texel block size of a GPUTextureFormat is the number of bytes to store one texel block. The texel block size of each GPUTextureFormat is constant except for "depth24plus" and "depth24plus-stencil8".
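For pixel-based formats, the texel block size follows directly from the name (bits per component × number of components ÷ 8). A few non-normative examples:

```javascript
// Non-normative examples of texel block sizes (in bytes) for pixel-based
// formats, derived from the format names.
const texelBlockSize = {
  "r8unorm": 1,
  "rg8unorm": 2,
  "rgba8unorm": 4,
  "bgra8unorm": 4,
  "rg16float": 4,
  "rgba16float": 8,
  "r32float": 4,
  "rgba32float": 16,
  "rgb10a2unorm": 4, // packed: 10 + 10 + 10 + 2 bits
};

// Bytes needed for one row of a 256-texel-wide rgba8unorm image:
const bytesPerRow = 256 * texelBlockSize["rgba8unorm"];
```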

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb10a2unorm",
    "rg11b10float",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth and stencil formats
    "depth32float",
    "depth24plus",
    "depth24plus-stencil8"
};
enum GPUTextureComponentType {
    "float",
    "sint",
    "uint"
};

9. Samplers

9.1. GPUSampler

interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

GPUSampler has the following internal slots:

[[compareEnable]] of type boolean.

Whether the GPUSampler is used as a comparison sampler.

9.1.1. Creation

dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 0xffffffff; // TODO: What should this be? Was Number.MAX_VALUE.
    GPUCompareFunction compare;
};

9.1.2. GPUDevice.createSampler(descriptor)

Arguments:

Returns: GPUSampler

  1. Let s be a new GPUSampler object.

  2. Set the [[compareEnable]] slot of s to false if the compare attribute of descriptor is null or undefined. Otherwise, set it to true.

  3. Return s.

Valid Usage
enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat"
};
enum GPUFilterMode {
    "nearest",
    "linear"
};
enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always"
};
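The comparison functions can be read as predicates applied between a reference value and a stored value (for example, a depth sample) when using a comparison sampler. An illustrative, non-normative mapping:

```javascript
// Illustrative, non-normative reading of GPUCompareFunction values as
// predicates between a reference value and a stored value.
const compareFuncs = {
  "never":         () => false,
  "less":          (ref, v) => ref < v,
  "equal":         (ref, v) => ref === v,
  "less-equal":    (ref, v) => ref <= v,
  "greater":       (ref, v) => ref > v,
  "not-equal":     (ref, v) => ref !== v,
  "greater-equal": (ref, v) => ref >= v,
  "always":        () => true,
};
```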

10. Resource Binding

10.1. GPUBindGroupLayout

A GPUBindGroupLayout defines the interface between a set of resources bound in a GPUBindGroup and their accessibility in shader stages.

[Serializable]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

10.1.1. Creation

A GPUBindGroupLayout is created via GPUDevice.createBindGroupLayout().

dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};

A GPUBindGroupLayoutEntry describes a single shader resource binding to be included in a GPUBindGroupLayout.

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;
    required GPUBindingType type;
    GPUTextureViewDimension viewDimension = "2d";
    GPUTextureComponentType textureComponentType = "float";
    GPUTextureFormat storageTextureFormat;
    boolean multisampled = false;
    boolean hasDynamicOffset = false;
};
typedef [EnforceRange] unsigned long GPUShaderStageFlags;
interface GPUShaderStage {
    const GPUShaderStageFlags VERTEX   = 0x1;
    const GPUShaderStageFlags FRAGMENT = 0x2;
    const GPUShaderStageFlags COMPUTE  = 0x4;
};
enum GPUBindingType {
    "uniform-buffer",
    "storage-buffer",
    "readonly-storage-buffer",
    "sampler",
    "comparison-sampler",
    "sampled-texture",
    "readonly-storage-texture",
    "writeonly-storage-texture"
    // TODO: other binding types
};

A GPUBindGroupLayout object has the following internal slots:

[[entryMap]] of type map<GPUIndex32, GPUBindGroupLayoutEntry>.

The map from binding indices to the GPUBindGroupLayoutEntrys which this GPUBindGroupLayout describes.

10.1.2. GPUDevice.createBindGroupLayout(GPUBindGroupLayoutDescriptor)

this: of type GPUDevice.

Arguments:

Returns: GPUBindGroupLayout.

The createBindGroupLayout(descriptor) method is used to create GPUBindGroupLayouts.

  1. Ensure bind group layout device validation is not violated.

  2. Let layout be a new valid GPUBindGroupLayout object.

  3. For each GPUBindGroupLayoutEntry bindingDescriptor in descriptor.entries:

    1. Ensure bindingDescriptor.binding does not violate binding validation.

    2. If bindingDescriptor.visibility includes VERTEX, ensure vertex shader binding validation is not violated.

    3. If bindingDescriptor.type is uniform-buffer:

      1. Ensure uniform buffer validation is not violated.

      2. If bindingDescriptor.hasDynamicOffset is true, ensure dynamic uniform buffer validation is not violated.

    4. If bindingDescriptor.type is storage-buffer or readonly-storage-buffer:

      1. Ensure storage buffer validation is not violated.

      2. If bindingDescriptor.hasDynamicOffset is true, ensure dynamic storage buffer validation is not violated.

    5. If bindingDescriptor.type is sampled-texture, ensure sampled texture validation is not violated.

    6. If bindingDescriptor.type is readonly-storage-texture or writeonly-storage-texture, ensure storage texture validation is not violated.

    7. If bindingDescriptor.type is sampler, ensure sampler validation is not violated.

    8. Insert bindingDescriptor into layout.[[entryMap]] with the key of bindingDescriptor.binding.

  4. Return layout.

Valid Usage

If any of the following conditions are violated:

  1. Generate a GPUValidationError in the current scope with appropriate error message.

  2. Create a new invalid GPUBindGroupLayout and return the result.

bind group layout device validation: The GPUDevice must not be lost.

binding validation: Each bindingDescriptor.binding in descriptor must be unique.

vertex shader binding validation: storage-buffer is not allowed.

uniform buffer validation: There must be GPULimits.maxUniformBuffersPerShaderStage or fewer bindingDescriptors of type uniform-buffer visible on each shader stage in descriptor.

dynamic uniform buffer validation: There must be GPULimits.maxDynamicUniformBuffersPerPipelineLayout or fewer bindingDescriptors of type uniform-buffer with hasDynamicOffset set to true in descriptor that are visible to any shader stage.

storage buffer validation: There must be GPULimits.maxStorageBuffersPerShaderStage or fewer bindingDescriptors of type storage-buffer visible on each shader stage in descriptor.

dynamic storage buffer validation: There must be GPULimits.maxDynamicStorageBuffersPerPipelineLayout or fewer bindingDescriptors of type storage-buffer with hasDynamicOffset set to true in descriptor that are visible to any shader stage.

sampled texture validation: There must be GPULimits.maxSampledTexturesPerShaderStage or fewer bindingDescriptors of type sampled-texture visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset must be false.

storage texture validation: There must be GPULimits.maxStorageTexturesPerShaderStage or fewer bindingDescriptors of type readonly-storage-texture and writeonly-storage-texture visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset must be false.

sampler validation: There must be GPULimits.maxSamplersPerShaderStage or fewer bindingDescriptors of type sampler visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset must be false.

10.1.3. Compatibility

Two GPUBindGroupLayout objects a and b are considered group-equivalent if and only if, for any binding number binding, one of the following is true:

  1. binding is missing from the [[entryMap]] of both a and b.

  2. binding is present in the [[entryMap]] of both a and b, and the two corresponding GPUBindGroupLayoutEntrys are entry-equivalent.

Two GPUBindGroupLayoutEntry entries a and b are considered entry-equivalent if all of the following conditions are true:
  1. a.binding == b.binding

  2. a.visibility == b.visibility

  3. a.type == b.type

  4. if a.type is "uniform-buffer", "storage-buffer", or "readonly-storage-buffer", then:

    1. a.hasDynamicOffset == b.hasDynamicOffset

  5. if a.type is "sampled-texture", then:

    1. a.viewDimension == b.viewDimension

    2. a.textureComponentType == b.textureComponentType

    3. a.multisampled == b.multisampled

  6. if a.type is "readonly-storage-texture" or "writeonly-storage-texture", then:

    1. a.viewDimension == b.viewDimension

    2. a.storageTextureFormat == b.storageTextureFormat

If bind group layouts are group-equivalent, they can be used interchangeably in all contexts.
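A non-normative sketch of the entry-equivalence check; the per-type fields compared are taken from the GPUBindGroupLayoutEntry dictionary:

```javascript
// Non-normative sketch of entry-equivalence between two
// GPUBindGroupLayoutEntry dictionaries a and b.
function entryEquivalent(a, b) {
  if (a.binding !== b.binding || a.visibility !== b.visibility || a.type !== b.type)
    return false;
  switch (a.type) {
    case "uniform-buffer":
    case "storage-buffer":
    case "readonly-storage-buffer":
      return a.hasDynamicOffset === b.hasDynamicOffset;
    case "sampled-texture":
      return a.viewDimension === b.viewDimension &&
             a.textureComponentType === b.textureComponentType &&
             a.multisampled === b.multisampled;
    case "readonly-storage-texture":
    case "writeonly-storage-texture":
      return a.viewDimension === b.viewDimension &&
             a.storageTextureFormat === b.storageTextureFormat;
    default:
      return true; // samplers: no extra fields to compare
  }
}
```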

10.2. GPUBindGroup

A GPUBindGroup defines a set of resources to be bound together in a group and how the resources are used in shader stages.

interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

10.2.1. Bind Group Creation

A GPUBindGroup is created via GPUDevice.createBindGroup().

dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};

A GPUBindGroupEntry describes a single resource to be bound in a GPUBindGroup.

typedef (GPUSampler or GPUTextureView or GPUBufferBinding) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};
dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

A GPUBindGroup object has the following internal slots:

[[layout]] of type GPUBindGroupLayout.

The GPUBindGroupLayout associated with this GPUBindGroup.

[[entries]] of type sequence<GPUBindGroupEntry>.

The set of GPUBindGroupEntrys this GPUBindGroup describes.

[[usedBuffers]] of type maplike<GPUBuffer, GPUBufferUsage>.

The set of buffers used by this bind group and the corresponding usage flags.

[[usedTextures]] of type maplike<GPUTexture subresource, GPUTextureUsage>.

The set of texture subresources used by this bind group. Each subresource is stored with the union of usage flags that apply to it.

10.2.2. GPUDevice.createBindGroup(GPUBindGroupDescriptor)

Arguments:

Returns: GPUBindGroup.

The createBindGroup(descriptor) method is used to create GPUBindGroups.

If any of the conditions below are violated:

  1. Generate a GPUValidationError in the current scope with appropriate error message.

  2. Create a new invalid GPUBindGroup and return the result.

  1. Ensure bind group device validation is not violated.

  2. Ensure descriptor.layout is a valid GPUBindGroupLayout.

  3. Ensure the number of entries of descriptor.layout equals the number of descriptor.entries.

  4. For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:

    1. Ensure there is exactly one GPUBindGroupLayoutEntry layoutBinding in entries of descriptor.layout such that layoutBinding.binding equals bindingDescriptor.binding.

    2. If layoutBinding.type is "sampler":

      1. Ensure bindingDescriptor.resource is a valid GPUSampler object and [[compareEnable]] is false.

    3. If layoutBinding.type is "comparison-sampler":

      1. Ensure bindingDescriptor.resource is a valid GPUSampler object and [[compareEnable]] is true.

    4. If layoutBinding.type is "sampled-texture", "readonly-storage-texture", or "writeonly-storage-texture":

      1. Ensure bindingDescriptor.resource is a valid GPUTextureView object.

      2. Ensure texture view binding validation is not violated.

      3. Ensure layoutBinding.storageTextureFormat is a valid GPUTextureFormat.

    5. If layoutBinding.type is "uniform-buffer", "storage-buffer", or "readonly-storage-buffer":

      1. Ensure bindingDescriptor.resource is a valid GPUBufferBinding object.

      2. Ensure buffer binding validation is not violated.

  5. Return a new GPUBindGroup object with its [[layout]] set to descriptor.layout, its [[entries]] set to descriptor.entries, and its [[usedBuffers]] and [[usedTextures]] populated according to the validation rules below.

Valid Usage

bind group device validation: The GPUDevice must not be lost.

texture view binding validation: Let view be bindingDescriptor.resource, a GPUTextureView. This layoutBinding must be compatible with this view. This requires:

  1. Its layoutBinding.viewDimension must equal view’s dimension.

  2. Its layoutBinding.textureComponentType must be compatible with view’s format.

  3. If layoutBinding.multisampled is true, view’s texture’s sampleCount must be greater than 1. Otherwise, if layoutBinding.multisampled is false, view’s texture’s sampleCount must be 1.

  4. If layoutBinding.type is "sampled-texture", view’s texture’s usage must include SAMPLED. Each texture subresource seen by view is added to [[usedTextures]] with SAMPLED flag.

  5. If layoutBinding.type is "readonly-storage-texture" or "writeonly-storage-texture", view’s texture’s usage must include STORAGE. Each texture subresource seen by view is added to [[usedTextures]] with STORAGE flag.

buffer binding validation: Let bufferBinding be bindingDescriptor.resource, a GPUBufferBinding. This layoutBinding must be compatible with this bufferBinding. This requires:

  1. If layoutBinding.type is "uniform-buffer", the bufferBinding.buffer's usage must include UNIFORM. The buffer is added to the [[usedBuffers]] map with UNIFORM flag.

  2. If layoutBinding.type is "storage-buffer" or "readonly-storage-buffer", the bufferBinding.buffer's usage must include STORAGE. The buffer is added to the [[usedBuffers]] map with STORAGE flag.

  3. The bound part designated by bufferBinding.offset and bufferBinding.size must reside inside the buffer.
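Rule 3 can be sketched as a standalone bounds check; this is a non-normative sketch assuming an omitted size defaults to the rest of the buffer after offset:

```javascript
// Non-normative sketch of buffer binding validation rule 3: the bound region
// designated by offset and size must lie inside the buffer.
function bindingInBounds(bufferSize, offset, size) {
  const resolved = size === undefined ? bufferSize - offset : size;
  return offset <= bufferSize && resolved >= 0 && offset + resolved <= bufferSize;
}
```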

10.3. GPUPipelineLayout

A GPUPipelineLayout defines the mapping between resources of all GPUBindGroup objects set up during command encoding in setBindGroup, and the shaders of the pipeline set by GPURenderEncoderBase.setPipeline or GPUComputePassEncoder.setPipeline.

The full binding address of a resource can be defined as a trio of:

  1. shader stage mask, to which the resource is visible

  2. bind group index

  3. binding number

The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup (with the corresponding GPUBindGroupLayout) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.

[Serializable]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

GPUPipelineLayout has the following internal slots:

[[bindGroupLayouts]] of type sequence<GPUBindGroupLayout>.

The GPUBindGroupLayout objects provided at creation in GPUPipelineLayoutDescriptor.bindGroupLayouts.

Note: using the same GPUPipelineLayout for many GPURenderPipeline or GPUComputePipeline pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.

GPUComputePipeline object X was created with GPUPipelineLayout.bindGroupLayouts A, B, C. GPUComputePipeline object Y was created with GPUPipelineLayout.bindGroupLayouts A, D, C. Supposing the command encoding sequence has two dispatches:
  1. setBindGroup(0, ...)

  2. setBindGroup(1, ...)

  3. setBindGroup(2, ...)

  4. setPipeline(X)

  5. dispatch()

  6. setBindGroup(1, ...)

  7. setPipeline(Y)

  8. dispatch()

In this scenario, the user agent would have to re-bind the group slot 2 for the second dispatch, even though neither the GPUBindGroupLayout at index 2 of GPUPipelineLayout.bindGroupLayouts nor the GPUBindGroup at slot 2 changed.

should this example and the note be moved to some "best practices" document?

Note: the expected usage of the GPUPipelineLayout is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.

10.3.1. Creation

A GPUPipelineLayout is created via GPUDevice.createPipelineLayout().

dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};

10.3.2. GPUDevice.createPipelineLayout(descriptor)

Arguments:

Returns: GPUPipelineLayout.

  1. Ensure pipeline layout device validation is not violated.

  2. Ensure pipeline layout entries validation is not violated.

  3. Let pl be a new GPUPipelineLayout object.

  4. Set the pl.[[bindGroupLayouts]] to descriptor.bindGroupLayouts.

  5. Return pl.

Valid Usage

If any of the following conditions are violated:

  1. Generate a GPUValidationError in the current scope with appropriate error message.

  2. Create a new invalid GPUPipelineLayout and return the result.

pipeline layout device validation: The GPUDevice must not be lost.

pipeline layout entries validation: There must be GPULimits.maxBindGroups or fewer elements in descriptor.bindGroupLayouts. All of these GPUBindGroupLayout entries must be valid.

there will be more limits applicable to the whole pipeline layout.

Note: two GPUPipelineLayout objects are considered equivalent for any usage if their internal [[bindGroupLayouts]] sequences contain GPUBindGroupLayout objects that are group-equivalent.

11. Shader Modules

11.1. GPUShaderModule

enum GPUCompilationMessageType {
    "error",
    "warning",
    "info"
};

[Serializable]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
};

[Serializable]
interface GPUCompilationInfo {
    readonly attribute sequence<GPUCompilationMessage> messages;
};

[Serializable]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> compilationInfo();
};
GPUShaderModule includes GPUObjectBase;

GPUShaderModule is Serializable. It is a reference to an internal shader module object, and Serializable means that the reference can be copied between realms (threads/workers), allowing multiple realms to access it concurrently. Since GPUShaderModule is immutable, there are no race conditions.

11.1.1. Shader Module Creation

dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    object sourceMap;
};

sourceMap, if defined, MAY be interpreted as a source-map-v3 format. (https://sourcemaps.info/spec.html) Source maps are optional, but serve as a standardized way to support dev-tool integration such as source-language debugging.

12. Pipelines

A pipeline, be it GPUComputePipeline or GPURenderPipeline, represents the complete function performed by a combination of the GPU hardware, the driver, and the user agent, which processes input data in the shape of bindings and vertex buffers, and produces some output, like the colors in the output render targets.

Structurally, the pipeline consists of a sequence of programmable stages (shaders) and fixed-function states, such as the blending modes.

Note: Internally, depending on the target platform, the driver may convert some of the fixed-function states into shader code, and link it together with the shaders provided by the user. This linking is one of the reasons the object is created as a whole.

This combined state is created as a single object (by GPUDevice.createComputePipeline() or GPUDevice.createRenderPipeline()), and switched as one (by GPUComputePassEncoder.setPipeline or GPURenderEncoderBase.setPipeline correspondingly).

12.1. Base pipelines

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    GPUPipelineLayout layout;
};

interface mixin GPUPipelineBase {
    GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};

GPUPipelineBase has the following internal slots:

[[layout]] of type GPUPipelineLayout.

The definition of the layout of resources which can be used with this.

12.1.1. getBindGroupLayout(index)

Arguments:

Returns: GPUBindGroupLayout

  1. If index is greater than or equal to maxBindGroups:

    1. Throw a RangeError.

  2. If this is not valid:

    1. Return a new error GPUBindGroupLayout.

  3. Return a new GPUBindGroupLayout object that references the same internal object as this.[[layout]].[[bindGroupLayouts]][index].

Specify this more properly once we have internal objects for GPUBindGroupLayout. Alternatively, only specify it as a new internal object that's group-equivalent.

Note: Only returning new GPUBindGroupLayout objects ensures no synchronization is necessary between the Content timeline and the Device timeline.

12.1.2. Default pipeline layout

A GPUPipelineBase object that was created without a layout has a default layout created and used instead.

  1. Let groupDescs be a sequence of device.[[limits]].maxBindGroups new GPUBindGroupLayoutDescriptor objects.

  2. For each groupDesc in groupDescs:

    1. Set groupDesc.entries to an empty sequence.

  3. For each GPUProgrammableStageDescriptor stageDesc in the descriptor used to create the pipeline:

    1. Let stageInfo be the "reflection information" for stageDesc.

Define the reflection information concept so that this spec can interface with the WGSL spec and get information about what the interface is for a GPUShaderModule for a specific entry point.

    2. Let shaderStage be the GPUShaderStageFlags for stageDesc.entryPoint in stageDesc.module.

    3. For each resource resource in stageInfo’s resource interface:

      1. Let group be resource’s "group" decoration.

      2. Let binding be resource’s "binding" decoration.

      3. Let entry be a new GPUBindGroupLayoutEntry.

      4. Set entry.binding to binding.

      5. Set entry.visibility to shaderStage.

      6. If resource is for a sampler binding:

        1. Set entry.type to sampler.

      7. If resource is for a comparison sampler binding:

        1. Set entry.type to comparison-sampler.

      8. If resource is for a buffer binding:

        1. Set entry.hasDynamicOffset to false.

        2. If resource is for a uniform buffer:

          1. Set entry.type to uniform-buffer.

        3. If resource is for a read-only storage buffer:

          1. Set entry.type to readonly-storage-buffer.

        4. If resource is for a storage buffer:

          1. Set entry.type to storage-buffer.

      9. If resource is for a texture binding:

        1. Set entry.textureComponentType to resource’s component type.

        2. Set entry.viewDimension to resource’s dimension.

        3. If resource is multisampled:

          1. Set entry.multisampled to true.

        4. If resource is for a sampled texture:

          1. Set entry.type to sampled-texture.

        5. If resource is for a read-only storage texture:

          1. Set entry.type to readonly-storage-texture.

          2. Set entry.storageTextureFormat to resource’s format.

        6. If resource is for a write-only storage texture:

          1. Set entry.type to writeonly-storage-texture.

          2. Set entry.storageTextureFormat to resource’s format.

      10. If groupDescs[group] has an entry previousEntry with binding equal to binding:

        1. If previousEntry is equal to entry up to visibility:

          1. Add the bits set in entry.visibility into previousEntry.visibility

        2. Else

          1. Return null (which will cause the creation of the pipeline to fail).

      11. Else

        1. Append entry to groupDescs[group].

  4. Let groupLayouts be a new sequence.

  5. For each groupDesc in groupDescs:

    1. Append device.createBindGroupLayout(groupDesc) to groupLayouts.

  6. Let desc be a new GPUPipelineLayoutDescriptor.

  7. Set desc.bindGroupLayouts to groupLayouts.

  8. Return device.createPipelineLayout(desc).

This fills the pipeline layout with empty bindgroups. Revisit once the behavior of empty bindgroups is specified.
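Note: the entry-merging behavior of steps 3.10 and 3.11 above can be sketched, non-normatively, in Python. Entries are modeled as plain dictionaries, and visibility as a stage bit flag (GPUShaderStage.VERTEX = 0x1, FRAGMENT = 0x2, COMPUTE = 0x4); a conflicting redefinition of a binding makes the default layout, and therefore the pipeline, fail to create.

```python
def merge_default_entries(group_desc, entry):
    """Merge a reflected shader binding into a default bind group layout.

    Returns False if the new entry conflicts with an existing one at the
    same binding (which would fail pipeline creation), True otherwise.
    `group_desc` is a list of entry dicts; `entry` carries at least
    'binding', 'visibility', and 'type' keys.
    """
    for previous in group_desc:
        if previous['binding'] == entry['binding']:
            # "Equal up to visibility": compare everything but the stage flags.
            a = {k: v for k, v in previous.items() if k != 'visibility'}
            b = {k: v for k, v in entry.items() if k != 'visibility'}
            if a == b:
                previous['visibility'] |= entry['visibility']
                return True
            return False  # conflicting definitions of the same binding
    group_desc.append(dict(entry))
    return True

# A uniform buffer at binding 0, visible first to VERTEX (0x1), then FRAGMENT (0x2).
group = []
assert merge_default_entries(group, {'binding': 0, 'visibility': 0x1, 'type': 'uniform-buffer'})
assert merge_default_entries(group, {'binding': 0, 'visibility': 0x2, 'type': 'uniform-buffer'})
assert group[0]['visibility'] == 0x3
# A different type at the same binding is a conflict.
assert not merge_default_entries(group, {'binding': 0, 'visibility': 0x2, 'type': 'storage-buffer'})
```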

12.1.3. GPUProgrammableStageDescriptor

dictionary GPUProgrammableStageDescriptor {
    required GPUShaderModule module;
    required USVString entryPoint;
};

A GPUProgrammableStageDescriptor describes the entry point in the user-provided GPUShaderModule that controls one of the programmable stages of a pipeline.

validating GPUProgrammableStageDescriptor(stage, descriptor, layout) Arguments:
  1. If the descriptor.module is not a valid GPUShaderModule return false.

  2. If the descriptor.module doesn’t contain an entry point at stage named descriptor.entryPoint return false.

  3. For each binding that is statically used by the shader entry point, if the result of validating shader binding(binding, layout) is false, return false.

  4. Return true.

validating shader binding(binding, layout) Arguments:

Let bindIndex be the binding index and bindGroup be the bind group index specified by the shader binding annotation.

Return true if all of the following conditions are satisfied:

  1. layout.[[bindGroupLayouts]][bindGroup] contains a GPUBindGroupLayoutEntry entry whose entry.binding == bindIndex.

  2. If entry.type is "sampler", the binding has to be a non-comparison sampler.

  3. If entry.type is "comparison-sampler", the binding has to be a comparison sampler.

  4. If entry.type is "sampled-texture", the binding has to be a sampled texture with the component type of entry.textureComponentType, and it must be multisampled if and only if entry.multisampled is true.

  5. If entry.type is "readonly-storage-texture", the binding has to be a read-only storage texture with format of entry.storageTextureFormat.

  6. If entry.type is "writeonly-storage-texture", the binding has to be a writable storage texture with format of entry.storageTextureFormat.

  7. If entry.type is "uniform-buffer", the binding has to be a uniform buffer.

  8. If entry.type is "storage-buffer", the binding has to be a storage buffer.

  9. If entry.type is "readonly-storage-buffer", the binding has to be a read-only storage buffer.

  10. If entry.type is "sampled-texture", "readonly-storage-texture", or "writeonly-storage-texture", the shader view dimension of the texture has to match entry.viewDimension.

is there a match/switch statement in bikeshed?

A resource binding is considered to be statically used by a shader entry point if and only if it’s reachable by the control flow graph of the shader module, starting at the entry point.
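Note: the entry lookup and type matching above can be illustrated with the following non-normative Python sketch. The binding and layout shapes are simplified stand-ins for the reflection information and GPUBindGroupLayoutEntry, and the texture- and buffer-specific rules (component type, storage texture format, view dimension) are elided.

```python
def validate_shader_binding(binding, bind_group_layouts):
    """Simplified sketch of "validating shader binding".

    `binding` is a dict with 'group', 'binding', and 'kind' (the kind of
    resource declared by the shader); `bind_group_layouts` is a list of
    lists of GPUBindGroupLayoutEntry-like dicts.
    """
    # Rule 1: the layout must contain an entry at (bindGroup, bindIndex).
    group = bind_group_layouts[binding['group']]
    entry = next((e for e in group if e['binding'] == binding['binding']), None)
    if entry is None:
        return False
    # Rules 2-9 collapse here to: the entry type must match the shader's
    # resource kind (real validation also checks formats, dimensions, etc.).
    return entry['type'] == binding['kind']

layouts = [[{'binding': 0, 'type': 'uniform-buffer'}]]
assert validate_shader_binding({'group': 0, 'binding': 0, 'kind': 'uniform-buffer'}, layouts)
assert not validate_shader_binding({'group': 0, 'binding': 0, 'kind': 'storage-buffer'}, layouts)
assert not validate_shader_binding({'group': 0, 'binding': 1, 'kind': 'uniform-buffer'}, layouts)
```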

12.2. GPUComputePipeline

A GPUComputePipeline is a kind of pipeline that controls the compute shader stage, and can be used in GPUComputePassEncoder.

Compute inputs and outputs are all contained in the bindings, according to the given GPUPipelineLayout. The outputs correspond to "storage-buffer" and "writeonly-storage-texture" binding types.

Stages of a compute pipeline:

  1. Compute shader

[Serializable]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

12.2.1. Creation

dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor computeStage;
};

12.2.2. GPUDevice.createComputePipeline(GPUComputePipelineDescriptor)

Arguments:

Returns: GPUComputePipeline.

The createComputePipeline(descriptor) method is used to create GPUComputePipelines.

If any of the conditions below are violated:

  1. Generate a GPUValidationError in the current scope with an appropriate error message.

  2. Create a new invalid GPUComputePipeline and return the result.

  1. Ensure the GPUDevice is not lost.

  2. Ensure the descriptor.layout is a valid GPUPipelineLayout.

  3. Ensure that validating GPUProgrammableStageDescriptor(COMPUTE, descriptor.computeStage, descriptor.layout) succeeds.

12.3. GPURenderPipeline

A GPURenderPipeline is a kind of pipeline that controls the vertex and fragment shader stages, and can be used in GPURenderPassEncoder as well as GPURenderBundleEncoder.

Render pipeline inputs are:

Render pipeline outputs are:

Stages of a render pipeline:

  1. Vertex fetch, controlled by GPUVertexStateDescriptor

  2. Vertex shader

  3. Primitive assembly, controlled by GPUPrimitiveTopology

  4. Rasterization, controlled by GPURasterizationStateDescriptor

  5. Fragment shader

  6. Stencil test and operation, controlled by GPUDepthStencilStateDescriptor

  7. Depth test and write, controlled by GPUDepthStencilStateDescriptor

  8. Output merging, controlled by GPUColorStateDescriptor

we need a deeper description of these stages

[Serializable]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

12.3.1. Creation

dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor vertexStage;
    GPUProgrammableStageDescriptor fragmentStage;

    required GPUPrimitiveTopology primitiveTopology;
    GPURasterizationStateDescriptor rasterizationState = {};
    required sequence<GPUColorStateDescriptor> colorStates;
    GPUDepthStencilStateDescriptor depthStencilState;
    GPUVertexStateDescriptor vertexState = {};

    GPUSize32 sampleCount = 1;
    GPUSampleMask sampleMask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};

12.3.2. No Color Output

In no-color-output mode, the pipeline does not produce any color attachment outputs, and colorStates is expected to be empty.

The pipeline still performs rasterization and produces depth values based on the vertex position output. The depth testing and stencil operations can still be used.

12.3.3. Alpha to Coverage

In alpha-to-coverage mode, an additional alpha-to-coverage mask of MSAA samples is generated, based on the alpha component of the fragment shader output value for colorStates[0].

The algorithm of producing the extra mask is platform-dependent. It guarantees that:

12.3.4. Sample Masking

The final sample mask for a pixel is computed as: rasterization mask & sampleMask & shader-output mask.

Only the lower sampleCount bits of the mask are considered.

If the bit at position N (counting from the least-significant bit) of the final sample mask is 0, the color outputs of the fragment shader corresponding to sample N are discarded for all attachments. No depth test or stencil operations are executed on sample N of the depth-stencil attachment either.

Note: the color output for sample N is produced by the fragment shader execution with SV_SampleIndex == N for the current pixel. If the fragment shader doesn’t use this semantics, it’s only executed once per pixel.

The rasterization mask is produced by the rasterization stage, based on the shape of the rasterized polygon. The samples included in the shape have their corresponding bits set to 1 in the mask.

The shader-output mask takes the output value of the SV_Coverage semantics in the fragment shader. If this semantics is not statically used by the shader and alphaToCoverageEnabled is enabled, the shader-output mask becomes the alpha-to-coverage mask. Otherwise, it defaults to 0xFFFFFFFF.

link to the semantics of SV_SampleIndex and SV_Coverage in WGSL spec.
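Note: a non-normative sketch of the final-mask computation described above:

```python
def final_sample_mask(rasterization_mask, sample_mask, shader_output_mask, sample_count):
    """Compute the final per-pixel sample mask as described in 12.3.4.

    Only the lower `sample_count` bits are considered; a 0 bit at
    position N discards the color outputs, depth test, and stencil
    operations for sample N.
    """
    mask = rasterization_mask & sample_mask & shader_output_mask
    return mask & ((1 << sample_count) - 1)

# 4x MSAA: the rasterizer covered samples 0-2, the app's sampleMask drops sample 1.
assert final_sample_mask(0b0111, 0b1101, 0xFFFFFFFF, 4) == 0b0101
```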

12.3.5. GPUDevice.createRenderPipeline(GPURenderPipelineDescriptor)

Arguments:

Returns: GPURenderPipeline.

The createRenderPipeline(descriptor) method is used to create GPURenderPipelines.

If any of the conditions below are violated:

  1. Generate a GPUValidationError in the current scope with an appropriate error message.

  2. Create a new invalid GPURenderPipeline and return the result.

  1. Ensure the GPUDevice is not lost.

  2. Ensure the descriptor.layout is a valid GPUPipelineLayout.

  3. Ensure that validating GPUProgrammableStageDescriptor(VERTEX, descriptor.vertexStage, descriptor.layout) succeeds.

  4. If descriptor.fragmentStage is not null, ensure that validating GPUProgrammableStageDescriptor(FRAGMENT, descriptor.fragmentStage, descriptor.layout) succeeds.

  5. Ensure the descriptor.colorStates.length is less than or equal to 4.

  6. Ensure validating GPUVertexStateDescriptor(descriptor.vertexState, descriptor.vertexStage) passes.

  7. If descriptor.alphaToCoverageEnabled is true, ensure descriptor.sampleCount is greater than 1.

  8. If the output SV_Coverage semantics is statically used by descriptor.fragmentStage, ensure descriptor.alphaToCoverageEnabled is false.

need a proper limit for the maximum number of color targets.

need a more detailed validation of the render states.

need description of the render states.

12.3.6. Primitive Topology

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip"
};

12.3.7. Rasterization State

dictionary GPURasterizationStateDescriptor {
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};
enum GPUFrontFace {
    "ccw",
    "cw"
};
enum GPUCullMode {
    "none",
    "front",
    "back"
};

12.3.8. Color State

dictionary GPUColorStateDescriptor {
    required GPUTextureFormat format;

    GPUBlendDescriptor alphaBlend = {};
    GPUBlendDescriptor colorBlend = {};
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};
typedef [EnforceRange] unsigned long GPUColorWriteFlags;
interface GPUColorWrite {
    const GPUColorWriteFlags RED   = 0x1;
    const GPUColorWriteFlags GREEN = 0x2;
    const GPUColorWriteFlags BLUE  = 0x4;
    const GPUColorWriteFlags ALPHA = 0x8;
    const GPUColorWriteFlags ALL   = 0xF;
};
12.3.8.1. Blend State
dictionary GPUBlendDescriptor {
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
    GPUBlendOperation operation = "add";
};
enum GPUBlendFactor {
    "zero",
    "one",
    "src-color",
    "one-minus-src-color",
    "src-alpha",
    "one-minus-src-alpha",
    "dst-color",
    "one-minus-dst-color",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "blend-color",
    "one-minus-blend-color"
};
enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max"
};
enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap"
};

12.3.9. Depth/Stencil State

dictionary GPUDepthStencilStateDescriptor {
    required GPUTextureFormat format;

    boolean depthWriteEnabled = false;
    GPUCompareFunction depthCompare = "always";

    GPUStencilStateFaceDescriptor stencilFront = {};
    GPUStencilStateFaceDescriptor stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;
};
dictionary GPUStencilStateFaceDescriptor {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

12.3.10. Vertex State

enum GPUIndexFormat {
    "uint16",
    "uint32"
};
12.3.10.1. Vertex Formats

The name of the format specifies the data type of the component, the number of values, and whether the data is normalized.

If no number of values is given in the name, a single value is provided. If the format has the -bgra suffix, it means the values are arranged as blue, green, red and alpha values.
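Note: the naming convention implies the byte size of each format. The following non-normative helper derives a sizeOf(format) from the name alone (component widths: char/uchar = 1 byte, short/ushort/half = 2, int/uint/float = 4); the function name is illustrative, not part of the API.

```python
# Component byte widths implied by the base type in a GPUVertexFormat name.
_COMPONENT_BYTES = {
    'char': 1, 'uchar': 1,
    'short': 2, 'ushort': 2, 'half': 2,
    'int': 4, 'uint': 4, 'float': 4,
}

def size_of_vertex_format(fmt):
    """Derive the byte size of a GPUVertexFormat from its name: a "norm"
    suffix changes interpretation but not size, and the trailing digit
    (if any) is the component count."""
    name = fmt[:-4] if fmt.endswith('norm') else fmt
    count = 1
    if name[-1].isdigit():
        count = int(name[-1])
        name = name[:-1]
    return _COMPONENT_BYTES[name] * count

assert size_of_vertex_format('float3') == 12
assert size_of_vertex_format('uchar4norm') == 4
assert size_of_vertex_format('ushort2') == 4
assert size_of_vertex_format('uint') == 4
```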

enum GPUVertexFormat {
    "uchar2",
    "uchar4",
    "char2",
    "char4",
    "uchar2norm",
    "uchar4norm",
    "char2norm",
    "char4norm",
    "ushort2",
    "ushort4",
    "short2",
    "short4",
    "ushort2norm",
    "ushort4norm",
    "short2norm",
    "short4norm",
    "half2",
    "half4",
    "float",
    "float2",
    "float3",
    "float4",
    "uint",
    "uint2",
    "uint3",
    "uint4",
    "int",
    "int2",
    "int3",
    "int4"
};
enum GPUInputStepMode {
    "vertex",
    "instance"
};
dictionary GPUVertexStateDescriptor {
    GPUIndexFormat indexFormat = "uint32";
    sequence<GPUVertexBufferLayoutDescriptor?> vertexBuffers = [];
};

A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride is the stride, in bytes, between elements of that array. Each element of a vertex buffer is like a structure with a memory layout defined by its attributes, which describe the members of the structure.

Each GPUVertexAttributeDescriptor describes its format and its offset, in bytes, within the structure.

Each attribute appears as a separate input in a vertex shader, each bound by a numeric location, which is specified by shaderLocation. Every location must be unique within the GPUVertexStateDescriptor.

dictionary GPUVertexBufferLayoutDescriptor {
    required GPUSize64 arrayStride;
    GPUInputStepMode stepMode = "vertex";
    required sequence<GPUVertexAttributeDescriptor> attributes;
};
dictionary GPUVertexAttributeDescriptor {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};
validating GPUVertexBufferLayoutDescriptor(descriptor, vertexStage) Arguments:

Return true, if and only if, all of the following conditions are true:

  1. descriptor.attributes.length is less than or equal to 16.

  2. descriptor.arrayStride is less than or equal to 2048.

  3. Every attribute at in the list descriptor.attributes has at.offset + sizeOf(at.format) less than or equal to descriptor.arrayStride.

  4. For every vertex attribute in the shader reflection of vertexStage.module that is known to be statically used by vertexStage.entryPoint, there is a corresponding element at of descriptor.attributes such that:

    1. The shader format is at.format.

    2. The shader location is at.shaderLocation.

add a limit to the number of vertex attributes
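Note: rules 1 through 3 above can be sketched non-normatively in Python. The shader reflection check (rule 4) is elided, and sizeOf is supplied by the caller (see the vertex format naming note above).

```python
def validate_vertex_buffer_layout(descriptor, size_of):
    """Sketch of "validating GPUVertexBufferLayoutDescriptor" rules 1-3.

    `descriptor` is a dict-shaped GPUVertexBufferLayoutDescriptor;
    `size_of` maps a GPUVertexFormat name to its byte size.
    """
    if len(descriptor['attributes']) > 16:
        return False
    if descriptor['arrayStride'] > 2048:
        return False
    # Every attribute must fit within one array element.
    return all(at['offset'] + size_of(at['format']) <= descriptor['arrayStride']
               for at in descriptor['attributes'])

sizes = {'float2': 8, 'float4': 16}
layout = {'arrayStride': 24,
          'attributes': [{'format': 'float4', 'offset': 0, 'shaderLocation': 0},
                         {'format': 'float2', 'offset': 16, 'shaderLocation': 1}]}
assert validate_vertex_buffer_layout(layout, sizes.get)
layout['arrayStride'] = 20  # the float2 at offset 16 no longer fits
assert not validate_vertex_buffer_layout(layout, sizes.get)
```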

validating GPUVertexStateDescriptor(descriptor, vertexStage) Arguments:

Return true, if and only if, all of the following conditions are true:

  1. descriptor.vertexBuffers.length is less than or equal to 8.

  2. Each vertexBuffer layout descriptor in the list descriptor.vertexBuffers passes validating GPUVertexBufferLayoutDescriptor(vertexBuffer, vertexStage)

  3. Each at in the union of all GPUVertexAttributeDescriptor across descriptor.vertexBuffers has a distinct at.shaderLocation value.

add a limit to the number of vertex buffers
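Note: a non-normative sketch of rules 1 and 3; rule 2 delegates to validating GPUVertexBufferLayoutDescriptor for each buffer and is elided here. Null entries model the nullable slots in the vertexBuffers sequence.

```python
def validate_vertex_state(vertex_buffers):
    """Sketch of "validating GPUVertexStateDescriptor" rules 1 and 3:
    at most 8 vertex buffers, and shader locations distinct across the
    union of all attributes."""
    if len(vertex_buffers) > 8:
        return False
    locations = [at['shaderLocation']
                 for vb in vertex_buffers if vb is not None
                 for at in vb['attributes']]
    return len(locations) == len(set(locations))

buffers = [{'attributes': [{'shaderLocation': 0}]},
           None,  # an unused vertex buffer slot
           {'attributes': [{'shaderLocation': 1}]}]
assert validate_vertex_state(buffers)
buffers[2]['attributes'][0]['shaderLocation'] = 0  # duplicate location
assert not validate_vertex_state(buffers)
```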

13. Command Buffers

13.1. GPUCommandBuffer

interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

13.1.1. Creation

dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {
};

14. Command Encoding

14.1. GPUCommandEncoder

interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    void copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);

    void copyBufferToTexture(
        GPUBufferCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToBuffer(
        GPUTextureCopyView source,
        GPUBufferCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToTexture(
        GPUTextureCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void pushDebugGroup(USVString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(USVString markerLabel);

    void resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;

GPUCommandEncoder has the following internal slots:

[[state]] of type encoder state.

The current state of the GPUCommandEncoder, initially set to open.

[[debug_group_stack]] of type sequence<USVString>.

A stack of active debug group labels.

Each GPUCommandEncoder has a current encoder state on the Content timeline which may be one of the following:

"open"

Indicates the GPUCommandEncoder is available to begin new operations. The [[state]] is open any time the GPUCommandEncoder is valid and has no active GPURenderPassEncoder or GPUComputePassEncoder.

"encoding a render pass"

Indicates the GPUCommandEncoder has an active GPURenderPassEncoder. The [[state]] becomes encoding a render pass once beginRenderPass() is called successfully, and stays that way until endPass() is called on the returned GPURenderPassEncoder, at which point the [[state]] (if the encoder is still valid) reverts to open.

"encoding a compute pass"

Indicates the GPUCommandEncoder has an active GPUComputePassEncoder. The [[state]] becomes encoding a compute pass once beginComputePass() is called successfully, and stays that way until endPass() is called on the returned GPUComputePassEncoder, at which point the [[state]] (if the encoder is still valid) reverts to open.

"closed"

Indicates the GPUCommandEncoder is no longer available for any operations. The [[state]] becomes closed once finish() is called or the GPUCommandEncoder otherwise becomes invalid.
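Note: the encoder state transitions described above can be modeled non-normatively as a small table. Method names mirror the spec; illegal transitions, which in the real API make the encoder invalid, are reduced here to raising an error.

```python
# Allowed GPUCommandEncoder [[state]] transitions, keyed by (state, method).
TRANSITIONS = {
    ('open', 'beginRenderPass'): 'encoding a render pass',
    ('open', 'beginComputePass'): 'encoding a compute pass',
    ('encoding a render pass', 'endPass'): 'open',
    ('encoding a compute pass', 'endPass'): 'open',
    ('open', 'finish'): 'closed',
}

def transition(state, method):
    """Return the next encoder state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, method)]
    except KeyError:
        raise ValueError(f'{method}() is not allowed while {state!r}')

state = 'open'
state = transition(state, 'beginComputePass')
state = transition(state, 'endPass')
state = transition(state, 'finish')
assert state == 'closed'
```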

14.1.1. Creation

dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {
    // TODO: reusability flag?
};

14.2. Copy Commands

14.2.1. GPUTextureDataLayout

dictionary GPUTextureDataLayout {
    GPUSize64 offset = 0;
    required GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage = 0;
};

A GPUTextureDataLayout is a layout of images within some linear memory. It’s used when copying data between a texture and a buffer, or when scheduling a write into a texture from the GPUQueue.

Define images more precisely. In particular, define them as being comprised of texel blocks.

Define the exact copy semantics, by reference to common algorithms shared by the copy methods.

bytesPerRow, of type GPUSize32

The stride, in bytes, between the beginning of each row of texel blocks and the subsequent row.

rowsPerImage, of type GPUSize32, defaulting to 0

rowsPerImage ÷ texel block height × bytesPerRow is the stride, in bytes, between the beginning of each image of data and the subsequent image.
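Note: with these definitions, the byte offset of the texel block at block coordinates (x, y) within image z of the linear layout can be computed as below. This is a non-normative sketch; block_height and block_bytes stand for the texel block height and byte size of the format.

```python
def texel_block_byte_offset(layout, x_blocks, y_blocks, z, block_height, block_bytes):
    """Byte offset of the texel block at block coordinates
    (x_blocks, y_blocks) in image z of a GPUTextureDataLayout.
    `layout` carries offset, bytesPerRow, and rowsPerImage (texel rows).
    """
    # rowsPerImage / block height * bytesPerRow is the stride between images.
    image_stride = layout['rowsPerImage'] // block_height * layout['bytesPerRow']
    return (layout['offset']
            + z * image_stride
            + y_blocks * layout['bytesPerRow']
            + x_blocks * block_bytes)

# A 256x256 rgba8unorm image: 1x1 texel blocks of 4 bytes, 1024 bytes per row.
layout = {'offset': 0, 'bytesPerRow': 1024, 'rowsPerImage': 256}
assert texel_block_byte_offset(layout, 0, 0, 0, 1, 4) == 0
assert texel_block_byte_offset(layout, 3, 2, 0, 1, 4) == 2 * 1024 + 12
assert texel_block_byte_offset(layout, 0, 0, 1, 1, 4) == 256 * 1024
```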

14.2.2. GPUBufferCopyView

dictionary GPUBufferCopyView : GPUTextureDataLayout {
    required GPUBuffer buffer;
};

A GPUBufferCopyView contains the actual texture data placed in a buffer according to GPUTextureDataLayout.

validating GPUBufferCopyView

Arguments:

Returns: boolean

Return true if and only if all of the following conditions apply:

14.2.3. GPUTextureCopyView

dictionary GPUTextureCopyView {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
};

A GPUTextureCopyView is a view of a sub-region of one or multiple contiguous texture subresources, with an initial origin (GPUOrigin3D) given in texels, used when copying data from or to a GPUTexture.

validating GPUTextureCopyView

Arguments:

Returns: boolean

Let:

Return true if and only if all of the following conditions apply:

Define the copies with 1d and 3d textures. <https://github.com/gpuweb/gpuweb/issues/69>

14.2.4. GPUImageBitmapCopyView

dictionary GPUImageBitmapCopyView {
    required ImageBitmap imageBitmap;
    GPUOrigin2D origin = {};
};

14.2.5. copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size)

Arguments:

Returns: void

Encode a command into the GPUCommandEncoder that copies size bytes of data from the sourceOffset of a GPUBuffer source to the destinationOffset of another GPUBuffer destination.

Valid Usage

Given a GPUCommandEncoder encoder and the arguments GPUBuffer source, GPUSize64 sourceOffset, GPUBuffer destination, GPUSize64 destinationOffset, GPUSize64 size, the following validation rules apply:

  • encoder.[[state]] must be open.

  • source must be a valid GPUBuffer.

  • destination must be a valid GPUBuffer.

  • The [[usage]] of source must contain COPY_SRC.

  • The [[usage]] of destination must contain COPY_DST.

  • size must be a multiple of 4.

  • sourceOffset must be a multiple of 4.

  • destinationOffset must be a multiple of 4.

  • (sourceOffset + size) must not overflow a GPUSize64.

  • (destinationOffset + size) must not overflow a GPUSize64.

  • The [[size]] of source must be greater than or equal to (sourceOffset + size).

  • The [[size]] of destination must be greater than or equal to (destinationOffset + size).

  • source and destination must not be the same GPUBuffer.
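Note: the validation rules above reduce to a simple predicate. This non-normative Python sketch models buffers as dictionaries and uses the GPUBufferUsage flag values COPY_SRC = 0x0004 and COPY_DST = 0x0008.

```python
def validate_copy_buffer_to_buffer(encoder_state, src, src_off, dst, dst_off, size):
    """Sketch of the copyBufferToBuffer validation rules. Buffers are
    dicts with 'usage' flags and a 'size' in bytes; overflow of the
    64-bit offset arithmetic is not modeled here."""
    COPY_SRC, COPY_DST = 0x0004, 0x0008
    return (encoder_state == 'open'
            and src is not dst
            and src['usage'] & COPY_SRC
            and dst['usage'] & COPY_DST
            and size % 4 == 0 and src_off % 4 == 0 and dst_off % 4 == 0
            and src_off + size <= src['size']
            and dst_off + size <= dst['size'])

a = {'usage': 0x0004, 'size': 256}  # COPY_SRC
b = {'usage': 0x0008, 'size': 256}  # COPY_DST
assert validate_copy_buffer_to_buffer('open', a, 0, b, 0, 256)
assert not validate_copy_buffer_to_buffer('open', a, 0, b, 4, 256)  # overruns b
assert not validate_copy_buffer_to_buffer('open', a, 0, a, 0, 4)    # same buffer
assert not validate_copy_buffer_to_buffer('open', a, 0, b, 0, 6)    # size not 4-aligned
```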

Define the state machine for GPUCommandEncoder. <https://github.com/gpuweb/gpuweb/issues/21>

figure out how to handle overflows in the spec. <https://github.com/gpuweb/gpuweb/issues/69>

14.2.6. Copy Between Buffer and Texture

WebGPU provides copyBufferToTexture() for buffer-to-texture copies and copyTextureToBuffer() for texture-to-buffer copies.

The following definitions and validation rules apply to both copyBufferToTexture() and copyTextureToBuffer().

textureCopyView subresource size and Valid Texture Copy Range also applies to copyTextureToTexture().

textureCopyView subresource size

Arguments:

Returns:

The textureCopyView subresource size of textureCopyView is calculated as follows:

Its width, height and depth are the width, height, and depth, respectively, of the physical size of textureCopyView.texture subresource at mipmap level textureCopyView.mipLevel.

define this as an algorithm with (texture, mipmapLevel) parameters and use the call syntax instead of referring to the definition by label.

validating linear texture data(layout, byteSize, format, copyExtent)

Arguments:

Let:

The following validation rules apply:

For the copy being in-bounds:

For the texel block alignments:

For other members in layout:

Valid Texture Copy Range

Given a GPUTextureCopyView textureCopyView and a GPUExtent3D copySize, let

The following validation rules apply:

Define the copies with 1d and 3d textures. <https://github.com/gpuweb/gpuweb/issues/69>

Additional restrictions on rowsPerImage if needed. <https://github.com/gpuweb/gpuweb/issues/537>

Define the copies with "depth24plus" and "depth24plus-stencil8". <https://github.com/gpuweb/gpuweb/issues/652>

convert "Valid Texture Copy Range" into an algorithm with parameters, similar to "validating linear texture data"

14.2.6.1. copyBufferToTexture(source, destination, copySize)

Arguments:

Returns: void

Encode a command into the GPUCommandEncoder that copies data from a sub-region of a GPUBuffer to a sub-region of one or multiple contiguous GPUTexture subresources.

source and copySize define the region of the source buffer.

destination and copySize define the region of the destination texture subresource.

copyBufferToTexture Valid Usage

Given a GPUCommandEncoder encoder and the arguments GPUBufferCopyView source, GPUTextureCopyView destination and GPUExtent3D copySize, the following validation rules apply:

For encoder:

For source:

For destination:

For the copy ranges:

14.2.6.2. copyTextureToBuffer(source, destination, copySize)

Arguments:

Returns: void

Encode a command into the GPUCommandEncoder that copies data from a sub-region of one or multiple contiguous GPUTexture subresources to a sub-region of a GPUBuffer.

source and copySize define the region of the source texture subresource.

destination and copySize define the region of the destination buffer.

copyTextureToBuffer Valid Usage

Given a GPUCommandEncoder encoder and the arguments GPUTextureCopyView source, GPUBufferCopyView destination, GPUExtent3D copySize, the following validation rules apply:

For encoder:

For source:

For destination:

For the copy ranges:

14.2.7. copyTextureToTexture(source, destination, copySize)

Arguments:

Returns: void

Encode a command into the GPUCommandEncoder that copies data from a sub-region of one or multiple contiguous GPUTexture subresources to another sub-region of one or multiple contiguous GPUTexture subresources.

source and copySize define the region of the source texture subresources.

destination and copySize define the region of the destination texture subresources.

copyTextureToTexture Valid Usage

Given a GPUCommandEncoder encoder and the arguments GPUTextureCopyView source, GPUTextureCopyView destination, GPUExtent3D copySize, let:

The following validation rules apply:

For encoder:

For source:

For destination:

For the texture [[sampleCount]]:

For the texture [[format]]:

For the copy ranges:

The set of subresources for texture copy(textureCopyView, copySize) is the set containing:

14.3. Debug Markers

Both command encoders and programmable pass encoders provide methods to apply debug labels to groups of commands or insert a single label into the command sequence. Debug groups can be nested to create a hierarchy of labeled commands. These labels may be passed to the native API backends for tooling, may be used by the user agent’s internal tooling, or may be a no-op when such tooling is not available or applicable.

Debug groups in a GPUCommandEncoder or GPUProgrammablePassEncoder must be well nested.
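Note: "well nested" means every popDebugGroup() matches an earlier pushDebugGroup(), and no group is left open when encoding ends, as in this non-normative sketch of the [[debug_group_stack]] bookkeeping:

```python
def debug_groups_well_nested(commands):
    """Check that a sequence of ('push', label) / ('pop', None) commands
    keeps the debug group stack balanced: no pop without a matching push,
    and an empty stack at the end of encoding."""
    stack = []
    for op, label in commands:
        if op == 'push':
            stack.append(label)
        elif op == 'pop':
            if not stack:
                return False  # popDebugGroup() without an open group
            stack.pop()
    return not stack  # every group must be closed before finish()

assert debug_groups_well_nested([('push', 'frame'), ('push', 'shadows'),
                                 ('pop', None), ('pop', None)])
assert not debug_groups_well_nested([('pop', None)])
assert not debug_groups_well_nested([('push', 'frame')])  # left open
```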

14.3.1. pushDebugGroup(groupLabel)

this: of type GPUCommandEncoder.

Arguments:

Returns: void

Marks the beginning of a labeled group of commands for the GPUCommandEncoder.

groupLabel defines the label for the command group.

On the Device timeline, the following steps occur:

Valid Usage

14.3.2. popDebugGroup()

this: of type GPUCommandEncoder.

Returns: void

Marks the end of a labeled group of commands for the GPUCommandEncoder.

On the Device timeline, the following steps occur:

Valid Usage

14.3.3. insertDebugMarker(markerLabel)

this: of type GPUCommandEncoder.

Arguments:

Returns: void

Inserts a single debug marker label into the GPUCommandEncoder's command sequence.

markerLabel defines the label to insert.

Valid Usage

14.4. Finalization

A GPUCommandBuffer containing the commands recorded by the GPUCommandEncoder can be created by calling finish(). Once finish() has been called the command encoder can no longer be used.

14.4.1. finish(descriptor)

this: of type GPUCommandEncoder.

Arguments:

Returns: GPUCommandBuffer

Completes recording of the commands sequence and returns a corresponding GPUCommandBuffer.

Valid Usage

Add remaining validation.

15. Programmable Passes

interface mixin GPUProgrammablePassEncoder {
    void setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    void setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      Uint32Array dynamicOffsetsData,
                      GPUSize64 dynamicOffsetsDataStart,
                      GPUSize32 dynamicOffsetsDataLength);

    void pushDebugGroup(USVString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(USVString markerLabel);

    void beginPipelineStatisticsQuery(GPUQuerySet querySet, GPUSize32 queryIndex);
    void endPipelineStatisticsQuery(GPUQuerySet querySet, GPUSize32 queryIndex);
};

GPUProgrammablePassEncoder has the following internal slots:

[[debug_group_stack]] of type sequence<USVString>.

A stack of active debug group labels.

15.1. Debug Markers

Debug marker methods for programmable pass encoders provide the same functionality as command encoder debug markers while recording a programmable pass.

15.1.1. pushDebugGroup(groupLabel)

this: of type GPUProgrammablePassEncoder.

Arguments:

Returns: void

Marks the beginning of a labeled group of commands for the GPUProgrammablePassEncoder.

groupLabel defines the label for the command group.

On the Device timeline, the following steps occur:

15.1.2. popDebugGroup()

this: of type GPUProgrammablePassEncoder.

Returns: void

Marks the end of a labeled group of commands for the GPUProgrammablePassEncoder.

On the Device timeline, the following steps occur:

Valid Usage

15.1.3. insertDebugMarker(markerLabel)

Arguments:

Returns: void

Inserts a single debug marker label into the GPUProgrammablePassEncoder's command sequence.

markerLabel defines the label to insert.

16. Compute Passes

16.1. GPUComputePassEncoder

interface GPUComputePassEncoder {
    void setPipeline(GPUComputePipeline pipeline);
    void dispatch(GPUSize32 x, optional GPUSize32 y = 1, optional GPUSize32 z = 1);
    void dispatchIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    void endPass();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUProgrammablePassEncoder;

16.1.1. Creation

dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
};

16.2. Finalization

The compute pass encoder can be ended by calling endPass() once the user has finished recording commands for the pass. Once endPass() has been called, the compute pass encoder can no longer be used.

16.2.1. endPass()

this: of type GPUComputePassEncoder.

Returns: void

Completes recording of the compute pass commands sequence.

Valid Usage

Add remaining validation.
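The "can no longer be used" rule above can be modeled as a single ended flag checked by every subsequent call. This sketch is illustrative (the class and method names other than endPass()/dispatch() are hypothetical, and error reporting is simplified to a thrown exception standing in for a validation error):

```javascript
// Hypothetical sketch of a compute pass encoder's "ended" state.
class ComputePassState {
  constructor() {
    this.ended = false;
  }
  checkUsable() {
    if (this.ended) throw new Error("validation error: pass already ended");
  }
  dispatch() {
    this.checkUsable();
    // ... record a dispatch command ...
  }
  endPass() {
    this.checkUsable();
    this.ended = true; // all further calls on this encoder are errors
  }
}
```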

17. Render Passes

17.1. GPURenderPassEncoder

interface mixin GPURenderEncoderBase {
    void setPipeline(GPURenderPipeline pipeline);

    void setIndexBuffer(GPUBuffer buffer, optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    void setVertexBuffer(GPUIndex32 slot, GPUBuffer buffer, optional GPUSize64 offset = 0, optional GPUSize64 size = 0);

    void draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
              optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    void drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
                     optional GPUSize32 firstIndex = 0,
                     optional GPUSignedOffset32 baseVertex = 0,
                     optional GPUSize32 firstInstance = 0);

    void drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    void drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

interface GPURenderPassEncoder {
    void setViewport(float x, float y,
                     float width, float height,
                     float minDepth, float maxDepth);

    void setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    void setBlendColor(GPUColor color);
    void setStencilReference(GPUStencilValue reference);

    void beginOcclusionQuery(GPUSize32 queryIndex);
    void endOcclusionQuery(GPUSize32 queryIndex);

    void executeBundles(sequence<GPURenderBundle> bundles);
    void endPass();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUProgrammablePassEncoder;
GPURenderPassEncoder includes GPURenderEncoderBase;

When a GPURenderPassEncoder is created, it has the following default state:

When a GPURenderBundle is executed, it does not inherit the pass’s pipeline, bind groups, or vertex or index buffers. After a GPURenderBundle has executed, the pass’s pipeline, bind groups, and vertex and index buffers are cleared. If zero GPURenderBundles are executed, the command buffer state is unchanged.
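The state-clearing behavior described above can be sketched as a pure function over a pass-state record. This is an illustrative model (the function and field names are hypothetical, not spec text): executing one or more bundles resets the pass's pipeline, bind groups, and vertex/index buffers, while executing zero bundles leaves the state untouched.

```javascript
// Illustrative model of render pass state around executeBundles().
// Names are hypothetical; this is not the spec's algorithm.
function executeBundles(passState, bundles) {
  if (bundles.length === 0) return passState; // zero bundles: state unchanged
  // Bundles do not inherit pass state, and executing any bundle
  // clears the pass's pipeline, bind groups, and vertex/index buffers.
  return {
    pipeline: null,
    bindGroups: new Map(),
    vertexBuffers: new Map(),
    indexBuffer: null,
  };
}
```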

17.1.1. Creation

dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachmentDescriptor> colorAttachments;
    GPURenderPassDepthStencilAttachmentDescriptor depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
};
17.1.1.1. Color Attachments
dictionary GPURenderPassColorAttachmentDescriptor {
    required GPUTextureView attachment;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    GPUStoreOp storeOp = "store";
};
17.1.1.2. Depth/Stencil Attachments
dictionary GPURenderPassDepthStencilAttachmentDescriptor {
    required GPUTextureView attachment;

    required (GPULoadOp or float) depthLoadValue;
    required GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    required (GPULoadOp or GPUStencilValue) stencilLoadValue;
    required GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};

17.1.2. Load & Store Operations

enum GPULoadOp {
    "load"
};
enum GPUStoreOp {
    "store",
    "clear"
};
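Because loadValue (see § 17.1.1) is a union of GPULoadOp and a clear value, an attachment's contents at the start of a pass can be sketched as follows (a hypothetical helper, not spec text):

```javascript
// Hypothetical helper: what an attachment holds at the start of a pass.
// loadValue is either the string "load" (keep prior contents) or a
// clear value (e.g. a GPUColor for color attachments).
function initialContents(loadValue, previousContents) {
  if (loadValue === "load") return previousContents;
  return loadValue; // treated as the value to clear to
}
```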

17.2. Finalization

The render pass encoder can be ended by calling endPass() once the user has finished recording commands for the pass. Once endPass() has been called, the render pass encoder can no longer be used.

17.2.1. endPass()

this: of type GPURenderPassEncoder.

Returns: void

Completes recording of the render pass commands sequence.

Valid Usage

Add remaining validation.

18. Bundles

18.1. GPURenderBundle

interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;

18.1.1. Creation

dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {
};
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUProgrammablePassEncoder;
GPURenderBundleEncoder includes GPURenderEncoderBase;

18.1.2. Encoding

dictionary GPURenderBundleEncoderDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};

19. Queues

interface GPUQueue {
    void submit(sequence<GPUCommandBuffer> commandBuffers);

    GPUFence createFence(optional GPUFenceDescriptor descriptor = {});
    void signal(GPUFence fence, GPUFenceValue signalValue);

    void writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        ArrayBuffer data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);

    void writeTexture(
      GPUTextureCopyView destination,
      ArrayBuffer data,
      GPUTextureDataLayout dataLayout,
      GPUExtent3D size);

    void copyImageBitmapToTexture(
        GPUImageBitmapCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;
writeBuffer(buffer, bufferOffset, data, dataOffset, size)

Takes size bytes of data’s contents, starting at byte offset dataOffset, and schedules a write operation of these contents into buffer on the Queue timeline, starting at bufferOffset. Any subsequent modifications to data do not affect what is written at the time that the scheduled operation runs.

If size is missing, it is set to data.byteLength - dataOffset if the result is non-negative; otherwise an OperationError is thrown.

The operation throws OperationError if any of the following is true:

  • buffer isn’t in the "unmapped" buffer state.

  • bufferOffset is not a multiple of 4.

  • size is not a positive multiple of 4.

  • dataOffset + size exceeds data.byteLength.

The operation does nothing and produces an error if any of the following is true:
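The size defaulting and alignment rules above can be sketched as a pure validation function. This is a hypothetical helper, not spec text; a thrown exception stands in for OperationError:

```javascript
// Hypothetical sketch of writeBuffer's argument validation.
// Returns the effective write size, or throws (standing in for OperationError).
function validateWriteBuffer(dataByteLength, bufferOffset, dataOffset, size) {
  if (size === undefined) {
    size = dataByteLength - dataOffset;
    if (size < 0) throw new Error("OperationError: dataOffset out of range");
  }
  if (bufferOffset % 4 !== 0)
    throw new Error("OperationError: bufferOffset must be a multiple of 4");
  if (size <= 0 || size % 4 !== 0)
    throw new Error("OperationError: size must be a positive multiple of 4");
  if (dataOffset + size > dataByteLength)
    throw new Error("OperationError: write range exceeds data.byteLength");
  return size;
}
```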

writeTexture(destination, data, dataLayout, size)

Takes the data contents and schedules a write operation of these contents to the destination texture copy view in the queue. Any subsequent modifications to data do not affect what is written at the time that the scheduled operation runs.

The operation throws OperationError if dataLayout.offset exceeds data.byteLength.

The operation does nothing and produces an error if any of the following is true:

Note: unlike GPUCommandEncoder.copyBufferToTexture, there is no alignment requirement on dataLayout.bytesPerRow.

copyImageBitmapToTexture(source, destination, copySize)

Schedules a copy operation of the contents of an image bitmap into the destination texture.

The operation throws OperationError if any of the following requirements are unmet:

submit(commandBuffers)

Schedules the execution of the command buffers by the GPU on this queue.

Does nothing and produces an error if any of the following is true:

19.1. GPUFence

interface GPUFence {
    GPUFenceValue getCompletedValue();
    Promise<void> onCompletion(GPUFenceValue completionValue);
};
GPUFence includes GPUObjectBase;

19.1.1. Creation

dictionary GPUFenceDescriptor : GPUObjectDescriptorBase {
    GPUFenceValue initialValue = 0;
};

20. Queries

20.1. QuerySet

interface GPUQuerySet {
    void destroy();
};
GPUQuerySet includes GPUObjectBase;

20.1.1. Creation

dictionary GPUQuerySetDescriptor : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
    sequence<GPUPipelineStatisticName> pipelineStatistics = [];
};
pipelineStatistics, of type sequence<GPUPipelineStatisticName>, defaulting to []

The set of GPUPipelineStatisticName values in this sequence defines which pipeline statistics will be returned in the new query set.

Valid Usage
  1. pipelineStatistics is ignored if type is not pipeline-statistics.

  2. If pipeline-statistics-query is not available, type must not be pipeline-statistics.

  3. If type is pipeline-statistics, pipelineStatistics must be a sequence of GPUPipelineStatisticName values which cannot be duplicated.
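The three rules above can be sketched as a single check. This is a hypothetical helper, not spec text; availability of the pipeline-statistics-query extension is passed in as a flag, and a boolean return stands in for generating a validation error:

```javascript
// Hypothetical sketch of GPUQuerySetDescriptor validation.
function validateQuerySet(descriptor, hasPipelineStatisticsQuery) {
  const { type, pipelineStatistics = [] } = descriptor;
  if (type !== "pipeline-statistics") return true; // pipelineStatistics ignored
  if (!hasPipelineStatisticsQuery) return false;   // extension must be enabled
  // The sequence must contain no duplicated GPUPipelineStatisticName values.
  return new Set(pipelineStatistics).size === pipelineStatistics.length;
}
```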

20.2. QueryType

enum GPUQueryType {
    "occlusion",
    "pipeline-statistics"
};

20.3. Pipeline Statistics Query

enum GPUPipelineStatisticName {
    "vertex-shader-invocations",
    "clipper-invocations",
    "clipper-primitives-out",
    "fragment-shader-invocations",
    "compute-shader-invocations"
};

When resolving a pipeline statistics query, each result is written as a uint64 value, and the number and order of the results written to the GPU buffer match the number and order of the GPUPipelineStatisticName values specified in pipelineStatistics.
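Since each statistic resolves to one uint64 (8 bytes), the destination space needed by resolveQuerySet() for a pipeline statistics query set can be computed as follows (an illustrative, hypothetical helper):

```javascript
// Hypothetical helper: bytes needed to resolve `queryCount` results of a
// pipeline statistics query set. Each statistic is written as a uint64
// (8 bytes), in the order of the descriptor's pipelineStatistics sequence.
function resolvedSize(pipelineStatistics, queryCount) {
  const bytesPerQuery = pipelineStatistics.length * 8;
  return bytesPerQuery * queryCount;
}
```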

21. Canvas Rendering & Swap Chains

interface GPUCanvasContext {
    GPUSwapChain configureSwapChain(GPUSwapChainDescriptor descriptor);

    Promise<GPUTextureFormat> getSwapChainPreferredFormat(GPUDevice device);
};
dictionary GPUSwapChainDescriptor : GPUObjectDescriptorBase {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.OUTPUT_ATTACHMENT
};
interface GPUSwapChain {
    GPUTexture getCurrentTexture();
};
GPUSwapChain includes GPUObjectBase;

In the "update the rendering [of the] Document" step of the "Update the rendering" HTML processing model, the contents of the GPUTexture most recently returned by getCurrentTexture() are used to update the rendering for the canvas, and it is as if destroy() were called on it (making it unusable elsewhere in WebGPU).

Before this drawing buffer is presented for compositing, the implementation shall ensure that all rendering operations have been flushed to the drawing buffer.

22. Errors & Debugging

22.1. Fatal Errors

interface GPUDeviceLostInfo {
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};

22.2. Error Scopes

enum GPUErrorFilter {
    "none",
    "out-of-memory",
    "validation"
};
interface GPUOutOfMemoryError {
    constructor();
};

interface GPUValidationError {
    constructor(DOMString message);
    readonly attribute DOMString message;
};

typedef (GPUOutOfMemoryError or GPUValidationError) GPUError;
partial interface GPUDevice {
    void pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};

popErrorScope() throws an OperationError if there are no error scopes on the stack, and rejects with an OperationError if the device is lost.
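The scope stack behavior can be modeled as follows. This sketch is illustrative (the class and capture() method are hypothetical, filter matching is omitted for brevity, and the real popErrorScope() returns a Promise rather than resolving synchronously):

```javascript
// Illustrative model of a device's error scope stack (hypothetical names).
class ErrorScopes {
  constructor() {
    this.stack = [];
  }
  pushErrorScope(filter) {
    this.stack.push({ filter, error: null });
  }
  capture(error) {
    // An error is recorded by the innermost scope, if any
    // (filter matching omitted for brevity).
    if (this.stack.length > 0) this.stack[this.stack.length - 1].error = error;
  }
  popErrorScope() {
    // The real API returns a Promise; this sketch resolves synchronously.
    if (this.stack.length === 0)
      throw new Error("OperationError: no error scopes on the stack");
    return this.stack.pop().error;
  }
}
```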

22.3. Telemetry

[
    Exposed=(Window, DedicatedWorker)
]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};
partial interface GPUDevice {
    [Exposed=(Window, DedicatedWorker)]
    attribute EventHandler onuncapturederror;
};

23. Type Definitions

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long long GPUFenceValue;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

23.1. Colors & Vectors

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

Note: double is large enough to precisely hold 32-bit signed/unsigned integers and single-precision floats.

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;
dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;

An Origin3D is a GPUOrigin3D. Origin3D is a spec namespace for the following definitions:

For a given GPUOrigin3D value origin, depending on its type, the syntax:
dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    required GPUIntegerCoordinate height;
    required GPUIntegerCoordinate depth;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;

An Extent3D is a GPUExtent3D. Extent3D is a spec namespace for the following definitions:

For a given GPUExtent3D value extent, depending on its type, the syntax:
typedef sequence<(GPUBuffer or ArrayBuffer)> GPUMappedBuffer;

GPUMappedBuffer is always a sequence of 2 elements, of types GPUBuffer and ArrayBuffer, respectively.

24. Temporary usages of non-exported dfns

Eventually all of these should disappear, but they are useful to avoid warnings while building the specification.

vertex buffer

Conformance

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119
[WebIDL]
Boris Zbarsky. Web IDL. 15 December 2016. ED. URL: https://heycam.github.io/webidl/

Informative References

[CSP3]
Mike West. Content Security Policy Level 3. 15 October 2018. WD. URL: https://www.w3.org/TR/CSP3/

IDL Index

interface mixin GPUObjectBase {
    attribute USVString? label;
};

dictionary GPUObjectDescriptorBase {
    USVString label;
};

[Exposed=Window]
partial interface Navigator {
    [SameObject] readonly attribute GPU gpu;
};

[Exposed=DedicatedWorker]
partial interface WorkerNavigator {
    [SameObject] readonly attribute GPU gpu;
};

[Exposed=(Window, DedicatedWorker)]
interface GPU {
    Promise<GPUAdapter> requestAdapter(optional GPURequestAdapterOptions options = {});
};

dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance"
};

interface GPUAdapter {
    readonly attribute DOMString name;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    //readonly attribute GPULimits limits; Don’t expose higher limits for now.

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUExtensionName> extensions = [];
    GPULimits limits = {};
};

enum GPUExtensionName {
    "texture-compression-bc",
    "pipeline-statistics-query"
};

dictionary GPULimits {
    GPUSize32 maxBindGroups = 4;
    GPUSize32 maxDynamicUniformBuffersPerPipelineLayout = 8;
    GPUSize32 maxDynamicStorageBuffersPerPipelineLayout = 4;
    GPUSize32 maxSampledTexturesPerShaderStage = 16;
    GPUSize32 maxSamplersPerShaderStage = 16;
    GPUSize32 maxStorageBuffersPerShaderStage = 4;
    GPUSize32 maxStorageTexturesPerShaderStage = 4;
    GPUSize32 maxUniformBuffersPerShaderStage = 12;
};

[Exposed=(Window, DedicatedWorker), Serializable]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUAdapter adapter;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    readonly attribute object limits;

    [SameObject] readonly attribute GPUQueue defaultQueue;

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUMappedBuffer createBufferMapped(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

[Serializable]
interface GPUBuffer {
    Promise<void> mapAsync(optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    void unmap();

    void destroy();
};
GPUBuffer includes GPUObjectBase;

dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
interface GPUBufferUsage {
    const GPUBufferUsageFlags MAP_READ      = 0x0001;
    const GPUBufferUsageFlags MAP_WRITE     = 0x0002;
    const GPUBufferUsageFlags COPY_SRC      = 0x0004;
    const GPUBufferUsageFlags COPY_DST      = 0x0008;
    const GPUBufferUsageFlags INDEX         = 0x0010;
    const GPUBufferUsageFlags VERTEX        = 0x0020;
    const GPUBufferUsageFlags UNIFORM       = 0x0040;
    const GPUBufferUsageFlags STORAGE       = 0x0080;
    const GPUBufferUsageFlags INDIRECT      = 0x0100;
    const GPUBufferUsageFlags QUERY_RESOLVE = 0x0200;
};

[Serializable]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    void destroy();
};
GPUTexture includes GPUObjectBase;

dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
};

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d"
};

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
interface GPUTextureUsage {
    const GPUTextureUsageFlags COPY_SRC          = 0x01;
    const GPUTextureUsageFlags COPY_DST          = 0x02;
    const GPUTextureUsageFlags SAMPLED           = 0x04;
    const GPUTextureUsageFlags STORAGE           = 0x08;
    const GPUTextureUsageFlags OUTPUT_ATTACHMENT = 0x10;
};

interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount = 0;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount = 0;
};

enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d"
};

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only"
};

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb10a2unorm",
    "rg11b10float",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth and stencil formats
    "depth32float",
    "depth24plus",
    "depth24plus-stencil8"
};

enum GPUTextureComponentType {
    "float",
    "sint",
    "uint"
};

interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 0xffffffff; // TODO: What should this be? Was Number.MAX_VALUE.
    GPUCompareFunction compare;
};

enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat"
};

enum GPUFilterMode {
    "nearest",
    "linear"
};

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always"
};

[Serializable]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;
    required GPUBindingType type;
    GPUTextureViewDimension viewDimension = "2d";
    GPUTextureComponentType textureComponentType = "float";
    GPUTextureFormat storageTextureFormat;
    boolean multisampled = false;
    boolean hasDynamicOffset = false;
};

typedef [EnforceRange] unsigned long GPUShaderStageFlags;
interface GPUShaderStage {
    const GPUShaderStageFlags VERTEX   = 0x1;
    const GPUShaderStageFlags FRAGMENT = 0x2;
    const GPUShaderStageFlags COMPUTE  = 0x4;
};

enum GPUBindingType {
    "uniform-buffer",
    "storage-buffer",
    "readonly-storage-buffer",
    "sampler",
    "comparison-sampler",
    "sampled-texture",
    "readonly-storage-texture",
    "writeonly-storage-texture"
    // TODO: other binding types
};

interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};

typedef (GPUSampler or GPUTextureView or GPUBufferBinding) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

[Serializable]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};

enum GPUCompilationMessageType {
    "error",
    "warning",
    "info"
};

[Serializable]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
};

[Serializable]
interface GPUCompilationInfo {
    readonly attribute sequence<GPUCompilationMessage> messages;
};

[Serializable]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> compilationInfo();
};
GPUShaderModule includes GPUObjectBase;

dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    object sourceMap;
};

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    GPUPipelineLayout layout;
};

interface mixin GPUPipelineBase {
    GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};

dictionary GPUProgrammableStageDescriptor {
    required GPUShaderModule module;
    required USVString entryPoint;
};

[Serializable]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor computeStage;
};

[Serializable]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor vertexStage;
    GPUProgrammableStageDescriptor fragmentStage;

    required GPUPrimitiveTopology primitiveTopology;
    GPURasterizationStateDescriptor rasterizationState = {};
    required sequence<GPUColorStateDescriptor> colorStates;
    GPUDepthStencilStateDescriptor depthStencilState;
    GPUVertexStateDescriptor vertexState = {};

    GPUSize32 sampleCount = 1;
    GPUSampleMask sampleMask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip"
};

dictionary GPURasterizationStateDescriptor {
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

enum GPUFrontFace {
    "ccw",
    "cw"
};

enum GPUCullMode {
    "none",
    "front",
    "back"
};

dictionary GPUColorStateDescriptor {
    required GPUTextureFormat format;

    GPUBlendDescriptor alphaBlend = {};
    GPUBlendDescriptor colorBlend = {};
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};

typedef [EnforceRange] unsigned long GPUColorWriteFlags;
interface GPUColorWrite {
    const GPUColorWriteFlags RED   = 0x1;
    const GPUColorWriteFlags GREEN = 0x2;
    const GPUColorWriteFlags BLUE  = 0x4;
    const GPUColorWriteFlags ALPHA = 0x8;
    const GPUColorWriteFlags ALL   = 0xF;
};

dictionary GPUBlendDescriptor {
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
    GPUBlendOperation operation = "add";
};

enum GPUBlendFactor {
    "zero",
    "one",
    "src-color",
    "one-minus-src-color",
    "src-alpha",
    "one-minus-src-alpha",
    "dst-color",
    "one-minus-dst-color",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "blend-color",
    "one-minus-blend-color"
};

enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max"
};

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap"
};

dictionary GPUDepthStencilStateDescriptor {
    required GPUTextureFormat format;

    boolean depthWriteEnabled = false;
    GPUCompareFunction depthCompare = "always";

    GPUStencilStateFaceDescriptor stencilFront = {};
    GPUStencilStateFaceDescriptor stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;
};

dictionary GPUStencilStateFaceDescriptor {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

enum GPUIndexFormat {
    "uint16",
    "uint32"
};

enum GPUVertexFormat {
    "uchar2",
    "uchar4",
    "char2",
    "char4",
    "uchar2norm",
    "uchar4norm",
    "char2norm",
    "char4norm",
    "ushort2",
    "ushort4",
    "short2",
    "short4",
    "ushort2norm",
    "ushort4norm",
    "short2norm",
    "short4norm",
    "half2",
    "half4",
    "float",
    "float2",
    "float3",
    "float4",
    "uint",
    "uint2",
    "uint3",
    "uint4",
    "int",
    "int2",
    "int3",
    "int4"
};

enum GPUInputStepMode {
    "vertex",
    "instance"
};

dictionary GPUVertexStateDescriptor {
    GPUIndexFormat indexFormat = "uint32";
    sequence<GPUVertexBufferLayoutDescriptor?> vertexBuffers = [];
};

dictionary GPUVertexBufferLayoutDescriptor {
    required GPUSize64 arrayStride;
    GPUInputStepMode stepMode = "vertex";
    required sequence<GPUVertexAttributeDescriptor> attributes;
};

dictionary GPUVertexAttributeDescriptor {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};

interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {
};

interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    void copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);

    void copyBufferToTexture(
        GPUBufferCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToBuffer(
        GPUTextureCopyView source,
        GPUBufferCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToTexture(
        GPUTextureCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void pushDebugGroup(USVString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(USVString markerLabel);

    void resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;

dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {
    // TODO: reusability flag?
};

dictionary GPUTextureDataLayout {
    GPUSize64 offset = 0;
    required GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage = 0;
};

dictionary GPUBufferCopyView : GPUTextureDataLayout {
    required GPUBuffer buffer;
};

dictionary GPUTextureCopyView {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
};

dictionary GPUImageBitmapCopyView {
    required ImageBitmap imageBitmap;
    GPUOrigin2D origin = {};
};

interface mixin GPUProgrammablePassEncoder {
    void setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    void setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      Uint32Array dynamicOffsetsData,
                      GPUSize64 dynamicOffsetsDataStart,
                      GPUSize32 dynamicOffsetsDataLength);

    void pushDebugGroup(USVString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(USVString markerLabel);

    void beginPipelineStatisticsQuery(GPUQuerySet querySet, GPUSize32 queryIndex);
    void endPipelineStatisticsQuery(GPUQuerySet querySet, GPUSize32 queryIndex);
};
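The two `setBindGroup()` overloads carry the same information: the second passes the dynamic offsets as a window into a `Uint32Array` instead of a sequence. A sketch of how the typed-array form maps onto the sequence form:

```javascript
// Extract the dynamic offsets described by (data, start, length) as a
// plain array, equivalent to the sequence<GPUBufferDynamicOffset> overload.
function extractDynamicOffsets(dynamicOffsetsData,
                               dynamicOffsetsDataStart,
                               dynamicOffsetsDataLength) {
  return Array.from(dynamicOffsetsData.subarray(
    dynamicOffsetsDataStart,
    dynamicOffsetsDataStart + dynamicOffsetsDataLength));
}
```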

interface GPUComputePassEncoder {
    void setPipeline(GPUComputePipeline pipeline);
    void dispatch(GPUSize32 x, optional GPUSize32 y = 1, optional GPUSize32 z = 1);
    void dispatchIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    void endPass();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUProgrammablePassEncoder;
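Note that `dispatch()` counts workgroups, not individual shader invocations. A common pattern is to round the problem size up to a whole number of workgroups; the workgroup size here (64 is only an example) must match what the compute shader declares:

```javascript
// Number of workgroups needed to cover `invocations` items when each
// workgroup processes `workgroupSize` of them.
function workgroupCount(invocations, workgroupSize) {
  return Math.ceil(invocations / workgroupSize);
}
```

The shader is then expected to guard against the overshoot (invocations past the end of the data) itself.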

dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
};

interface mixin GPURenderEncoderBase {
    void setPipeline(GPURenderPipeline pipeline);

    void setIndexBuffer(GPUBuffer buffer, optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    void setVertexBuffer(GPUIndex32 slot, GPUBuffer buffer, optional GPUSize64 offset = 0, optional GPUSize64 size = 0);

    void draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
              optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    void drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
                     optional GPUSize32 firstIndex = 0,
                     optional GPUSignedOffset32 baseVertex = 0,
                     optional GPUSize32 firstInstance = 0);

    void drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    void drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};
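A sketch of how `drawIndexed()` resolves each vertex, as a host-side model: indices are fetched from the bound index buffer starting at `firstIndex`, and each fetched index is offset by `baseVertex` before it addresses the vertex buffers:

```javascript
// Model of drawIndexed() index resolution (one instance).
// indexData stands in for the contents of the bound index buffer.
function resolveVertexIndices(indexData, firstIndex, indexCount, baseVertex) {
  const resolved = [];
  for (let i = 0; i < indexCount; i++) {
    resolved.push(indexData[firstIndex + i] + baseVertex);
  }
  return resolved;
}
```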

interface GPURenderPassEncoder {
    void setViewport(float x, float y,
                     float width, float height,
                     float minDepth, float maxDepth);

    void setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    void setBlendColor(GPUColor color);
    void setStencilReference(GPUStencilValue reference);

    void beginOcclusionQuery(GPUSize32 queryIndex);
    void endOcclusionQuery(GPUSize32 queryIndex);

    void executeBundles(sequence<GPURenderBundle> bundles);
    void endPass();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUProgrammablePassEncoder;
GPURenderPassEncoder includes GPURenderEncoderBase;

dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachmentDescriptor> colorAttachments;
    GPURenderPassDepthStencilAttachmentDescriptor depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
};

dictionary GPURenderPassColorAttachmentDescriptor {
    required GPUTextureView attachment;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    GPUStoreOp storeOp = "store";
};

dictionary GPURenderPassDepthStencilAttachmentDescriptor {
    required GPUTextureView attachment;

    required (GPULoadOp or float) depthLoadValue;
    required GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    required (GPULoadOp or GPUStencilValue) stencilLoadValue;
    required GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};

enum GPULoadOp {
    "load"
};

enum GPUStoreOp {
    "store",
    "clear"
};
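A plain-object sketch of a `GPURenderPassDescriptor` built from these dictionaries. `colorView` and `depthView` stand in for `GPUTextureView`s obtained elsewhere; passing a `GPUColor` (or `float`/`GPUStencilValue`) as a load value clears the attachment, while the `"load"` op preserves its previous contents:

```javascript
// Descriptor for a pass with one color attachment cleared to opaque black
// and a depth/stencil attachment cleared to depth 1.0 / stencil 0.
function makeRenderPassDescriptor(colorView, depthView) {
  return {
    colorAttachments: [{
      attachment: colorView,
      loadValue: { r: 0, g: 0, b: 0, a: 1 },  // GPUColor => clear
      storeOp: "store",
    }],
    depthStencilAttachment: {
      attachment: depthView,
      depthLoadValue: 1.0,      // float => clear depth
      depthStoreOp: "store",
      stencilLoadValue: 0,      // GPUStencilValue => clear stencil
      stencilStoreOp: "store",
    },
  };
}
```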

interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;

dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {
};

interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUProgrammablePassEncoder;
GPURenderBundleEncoder includes GPURenderEncoderBase;

dictionary GPURenderBundleEncoderDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};

interface GPUQueue {
    void submit(sequence<GPUCommandBuffer> commandBuffers);

    GPUFence createFence(optional GPUFenceDescriptor descriptor = {});
    void signal(GPUFence fence, GPUFenceValue signalValue);

    void writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        ArrayBuffer data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);

    void writeTexture(
      GPUTextureCopyView destination,
      ArrayBuffer data,
      GPUTextureDataLayout dataLayout,
      GPUExtent3D size);

    void copyImageBitmapToTexture(
        GPUImageBitmapCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;
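When `writeBuffer()`'s optional `size` is omitted, the write is expected to extend from `dataOffset` to the end of the supplied data. A sketch of that resolution as a plain function:

```javascript
// Effective byte count written by writeBuffer(buffer, bufferOffset, data,
// dataOffset, size), given data.byteLength. An explicit size wins;
// otherwise the remainder of the data past dataOffset is written.
function resolveWriteSize(dataByteLength, dataOffset = 0, size = undefined) {
  return size !== undefined ? size : dataByteLength - dataOffset;
}
```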

interface GPUFence {
    GPUFenceValue getCompletedValue();
    Promise<void> onCompletion(GPUFenceValue completionValue);
};
GPUFence includes GPUObjectBase;

dictionary GPUFenceDescriptor : GPUObjectDescriptorBase {
    GPUFenceValue initialValue = 0;
};
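A host-side model of the fence semantics (not the real implementation, where signaling happens on a `GPUQueue` as work completes): the completed value only ever increases, and `onCompletion()` settles once the completed value reaches the requested one.

```javascript
// Model of GPUFence: monotonically increasing completed value plus
// completion promises. signal() here plays the role of GPUQueue.signal().
class FenceModel {
  constructor(initialValue = 0) {
    this.completed = initialValue;
    this.waiters = [];
  }
  getCompletedValue() { return this.completed; }
  onCompletion(completionValue) {
    if (this.completed >= completionValue) return Promise.resolve();
    return new Promise(resolve =>
      this.waiters.push({ value: completionValue, resolve }));
  }
  signal(signalValue) {
    // The completed value never decreases.
    this.completed = Math.max(this.completed, signalValue);
    this.waiters = this.waiters.filter(w => {
      if (this.completed >= w.value) { w.resolve(); return false; }
      return true;
    });
  }
}
```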

interface GPUQuerySet {
    void destroy();
};
GPUQuerySet includes GPUObjectBase;

dictionary GPUQuerySetDescriptor : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
    sequence<GPUPipelineStatisticName> pipelineStatistics = [];
};

enum GPUQueryType {
    "occlusion",
    "pipeline-statistics"
};

enum GPUPipelineStatisticName {
    "vertex-shader-invocations",
    "clipper-invocations",
    "clipper-primitives-out",
    "fragment-shader-invocations",
    "compute-shader-invocations"
};

interface GPUCanvasContext {
    GPUSwapChain configureSwapChain(GPUSwapChainDescriptor descriptor);

    Promise<GPUTextureFormat> getSwapChainPreferredFormat(GPUDevice device);
};

dictionary GPUSwapChainDescriptor : GPUObjectDescriptorBase {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.OUTPUT_ATTACHMENT
};

interface GPUSwapChain {
    GPUTexture getCurrentTexture();
};
GPUSwapChain includes GPUObjectBase;

interface GPUDeviceLostInfo {
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};

enum GPUErrorFilter {
    "none",
    "out-of-memory",
    "validation"
};

interface GPUOutOfMemoryError {
    constructor();
};

interface GPUValidationError {
    constructor(DOMString message);
    readonly attribute DOMString message;
};

typedef (GPUOutOfMemoryError or GPUValidationError) GPUError;

partial interface GPUDevice {
    void pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};
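A sketch of the error-scope stack these methods manage (the real `popErrorScope()` returns a `Promise`, since errors are reported asynchronously): `pushErrorScope()` opens a scope with a filter, an error is captured by the innermost enclosing scope whose filter matches it, and an error with no matching scope escapes, surfacing via `onuncapturederror` in the real API. Each scope retains only the first error it captures.

```javascript
// Synchronous model of GPUDevice error scopes.
class ErrorScopeModel {
  constructor() { this.stack = []; }
  pushErrorScope(filter) { this.stack.push({ filter, error: null }); }
  popErrorScope() { return this.stack.pop().error; }  // null if none captured
  // errorType is "out-of-memory" or "validation".
  // Returns true if some scope captured the error.
  capture(errorType, error) {
    for (let i = this.stack.length - 1; i >= 0; i--) {
      if (this.stack[i].filter === errorType) {
        if (this.stack[i].error === null) this.stack[i].error = error;
        return true;
      }
    }
    return false;  // uncaptured: would fire onuncapturederror
  }
}
```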

[
    Exposed=(Window, DedicatedWorker)
]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};

partial interface GPUDevice {
    [Exposed=(Window, DedicatedWorker)]
    attribute EventHandler onuncapturederror;
};

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long long GPUFenceValue;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;

dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;

dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    required GPUIntegerCoordinate height;
    required GPUIntegerCoordinate depth;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
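Several of these typedefs accept either a sequence or a dictionary. A sketch of normalizing the two shorthand forms into their dictionary equivalents (the sequence forms are expected to supply every component, since the dictionary members are `required`):

```javascript
// [r, g, b, a] or GPUColorDict -> GPUColorDict
function normalizeColor(color) {
  if (Array.isArray(color)) {
    const [r, g, b, a] = color;  // exactly 4 elements expected
    return { r, g, b, a };
  }
  return color;
}

// [width, height, depth] or GPUExtent3DDict -> GPUExtent3DDict
function normalizeExtent3D(extent) {
  if (Array.isArray(extent)) {
    const [width, height, depth] = extent;  // exactly 3 elements expected
    return { width, height, depth };
  }
  return extent;
}
```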

typedef sequence<(GPUBuffer or ArrayBuffer)> GPUMappedBuffer;

Issues Index

Consider merging all read-only usages. <https://github.com/gpuweb/gpuweb/issues/296>
Document read-only states for depth views. <https://github.com/gpuweb/gpuweb/issues/514>
This section will need to be revised to support multiple queues.
Define "ownership".
Need a robust example like the one in ErrorHandling.md, which handles all situations. Possibly also include a simple example with no handling.
Write a spec section for this, and link to it.
define buffer (internal object)
Specify [[mapping]] in term of DataBlock similarly to AllocateArrayBuffer? <https://github.com/gpuweb/gpuweb/issues/605>
[[usage]] is differently named from [[textureUsage]]. We should make it consistent.
Explain that the resulting error buffer can still be mapped at creation. <https://github.com/gpuweb/gpuweb/issues/605>
Explain what a GPUDevice's [[allowed buffer usages]] are <https://github.com/gpuweb/gpuweb/issues/605>
Handle error buffers once we have a description of the error monad.
Add client-side validation that a mapped buffer can only be unmapped and destroyed on the worker on which it was mapped. Likewise getMappedRange can only be called on that worker. <https://github.com/gpuweb/gpuweb/issues/605>
There is concern that it should be clearer at a mapAsync call point if it is meant for reading or writing because the semantics are very different. Alternatives suggested include splitting into mapReadAsync vs. mapWriteAsync, or adding a GPUMapFlags as an argument to the call that can later be used to extend the method. <https://github.com/gpuweb/gpuweb/issues/605>
Handle error buffers once we have a description of the error monad. <https://github.com/gpuweb/gpuweb/issues/605>
define texture (internal object)
define mipmap level, array layer, slice (concepts)
Make this a standalone algorithm used in the createView algorithm.
The references to GPUTextureDescriptor here should actually refer to internal slots of a texture internal object once we have one.
write definition. this descriptor view
should this example and the note be moved to some "best practices" document?
there will be more limits applicable to the whole pipeline layout.
Specify this more properly once we have internal objects for GPUBindGroupLayout. Alternatively only spec is as a new internal objects that’s group-equivalent
Define the reflection information concept so that this spec can interface with the WGSL spec and get information what the interface is for a GPUShaderModule for a specific entrypoint.
This fills the pipeline layout with empty bindgroups. Revisit once the behavior of empty bindgroups is specified.
is there a match/switch statement in bikeshed?
we need a deeper description of these stages
link to the semantics of SV_SampleIndex and SV_Coverage in WGSL spec.
need a proper limit for the maximum number of color targets.
need a more detailed validation of the render states.
need description of the render states.
add a limit to the number of vertex attributes
add a limit to the number of vertex buffers
Define images more precisely. In particular, define them as being comprised of texel blocks.
Define the exact copy semantics, by reference to common algorithms shared by the copy methods.
Define the copies with 1d and 3d textures. <https://github.com/gpuweb/gpuweb/issues/69>
Define the state machine for GPUCommandEncoder. <https://github.com/gpuweb/gpuweb/issues/21>
figure out how to handle overflows in the spec. <https://github.com/gpuweb/gpuweb/issues/69>
define this as an algorithm with (texture, mipmapLevel) parameters and use the call syntax instead of referring to the definition by label.
Define the copies with 1d and 3d textures. <https://github.com/gpuweb/gpuweb/issues/69>
Additional restrictions on rowsPerImage if needed. <https://github.com/gpuweb/gpuweb/issues/537>
Define the copies with "depth24plus" and "depth24plus-stencil8". <https://github.com/gpuweb/gpuweb/issues/652>
convert "Valid Texture Copy Range" into an algorithm with parameters, similar to "validating linear texture data"
Add remaining validation.
Add remaining validation.
Add remaining validation.