
Vulkan Compute and Cross-Platform GPU

Vulkan is the only GPU compute API that runs on every major platform: NVIDIA, AMD, Intel, Apple (via MoltenVK), Android, and even the browser (via WebGPU). This document covers Vulkan's architecture, compute pipelines, writing GLSL compute shaders, the complete C++ setup for a GPU compute program, shared memory and synchronization, browser GPU compute with WebGPU, and practical ML inference examples.

  • CUDA dominates ML training on NVIDIA hardware. But not every deployment target has an NVIDIA GPU. Mobile apps run on Qualcomm Adreno or ARM Mali GPUs. Web apps run in the browser. Game engines must support AMD, Intel, and NVIDIA simultaneously. For all of these, Vulkan is the answer.

  • Vulkan is verbose: a "hello world" compute program is roughly 300 lines of C++. That verbosity is the price of explicit control: you personally manage every GPU resource (memory, pipelines, command buffers). This control trades development speed for maximum performance and portability.

Vulkan Architecture Overview

  • Vulkan is a low-level GPU API created by the Khronos Group (the same body behind OpenGL). Unlike CUDA, which hides GPU resource management, Vulkan requires you to manage explicitly:

    • Instance and device: create a Vulkan instance, enumerate the available GPUs, pick one.
    • Memory: allocate GPU memory explicitly, specifying the memory type (device-local for speed, host-visible for CPU access).
    • Buffers: create buffer objects that reference the allocated memory.
    • Descriptor sets: bind buffers to shader inputs (like function arguments for the compute shader).
    • Compute pipeline: compile the shader and create a pipeline object.
    • Command buffers: record sequences of GPU commands (bind pipeline, bind descriptors, dispatch compute).
    • Queue submission: submit the command buffer to the GPU for execution.
    • Synchronization: use fences and barriers to enforce correct ordering.
  • This is completely different from CUDA's cudaMalloc + kernel-launch model. In CUDA, the driver handles most of this behind the scenes. In Vulkan, you do it yourself.

Why So Verbose?

  • Vulkan's explicitness exists for two reasons:

    1. Driver simplicity: OpenGL drivers are enormously complex (they must guess the application's intent and optimize accordingly). Vulkan shifts that responsibility to the application, making drivers thinner, more predictable, and easier to implement correctly across vendors.

    2. Performance: explicit control over memory layout, synchronization, and command batching lets the application make optimal decisions. In CUDA, the driver may insert unnecessary synchronization. In Vulkan, you synchronize only where needed.

GLSL Compute Shaders

  • A compute shader is a program that runs on the GPU, analogous to a CUDA kernel. It is written in GLSL (the OpenGL Shading Language) and compiled to SPIR-V bytecode (a portable binary format).

Vector Addition

// add.comp - compile with: glslangValidator -V add.comp -o add.spv
#version 450

// Workgroup size: 256 invocations per workgroup (= threads per block in CUDA)
layout(local_size_x = 256) in;

// Buffer bindings (similar to kernel arguments)
layout(set = 0, binding = 0) buffer InputA { float a[]; };
layout(set = 0, binding = 1) buffer InputB { float b[]; };
layout(set = 0, binding = 2) buffer Output { float c[]; };

// Push constants: small uniform data (similar to kernel parameters)
layout(push_constant) uniform PushConstants {
    uint n;  // element count
};

void main() {
    uint idx = gl_GlobalInvocationID.x;  // global thread index
    if (idx < n) {
        c[idx] = a[idx] + b[idx];
    }
}
  • Mapping to CUDA concepts:

    Vulkan                     CUDA                               Meaning
    Workgroup                  Block                              Group of threads that can share memory
    Invocation                 Thread                             A single execution unit
    gl_GlobalInvocationID      blockIdx * blockDim + threadIdx    Global thread index
    gl_LocalInvocationID       threadIdx                          Thread index within the workgroup
    gl_WorkGroupID             blockIdx                           Workgroup index
    local_size_x               blockDim.x                         Threads per workgroup
    Storage buffer             Global memory                      Read/write GPU memory
    Shared memory (shared)     __shared__                         Fast per-workgroup memory
    Push constant              Kernel argument                    Small uniform data
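The index arithmetic in this table can be sanity-checked on the CPU. A small C++ sketch (helper names are illustrative, not part of any Vulkan API):

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the table above: gl_GlobalInvocationID.x is derived as
// gl_WorkGroupID.x * local_size_x + gl_LocalInvocationID.x,
// the same relation as CUDA's blockIdx * blockDim + threadIdx.
uint32_t global_invocation_id(uint32_t workgroup_id, uint32_t local_size,
                              uint32_t local_id) {
    return workgroup_id * local_size + local_id;
}

// Workgroups needed to cover n elements (ceiling division), the same
// arithmetic as the vkCmdDispatch((N + 255) / 256, 1, 1) call used later.
uint32_t workgroup_count(uint32_t n, uint32_t local_size) {
    return (n + local_size - 1) / local_size;
}
```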

ReLU with Shared Memory

// relu_shared.comp
#version 450

layout(local_size_x = 256) in;

layout(set = 0, binding = 0) buffer Input  { float input_data[]; };
layout(set = 0, binding = 1) buffer Output { float output_data[]; };

layout(push_constant) uniform PushConstants { uint n; };

// Shared memory (equivalent to CUDA __shared__)
shared float tile[256];

void main() {
    uint gid = gl_GlobalInvocationID.x;
    uint lid = gl_LocalInvocationID.x;

    // Load into shared memory
    if (gid < n) {
        tile[lid] = input_data[gid];
    }

    // Barrier: wait for all invocations in the workgroup to finish loading
    barrier();  // equivalent to CUDA __syncthreads()

    // Compute ReLU
    if (gid < n) {
        output_data[gid] = max(tile[lid], 0.0);
    }
}
  • For ReLU, shared memory is not strictly necessary (the operation is element-wise). But this demonstrates the pattern: load into shared memory → barrier → compute → store. For operations that need data from neighboring threads (convolution, reduction, softmax), shared memory is essential.
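Since shared memory does not change ReLU's result, the shader is easy to validate against a CPU reference after readback. A minimal sketch (function name is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// CPU reference for relu_shared.comp: output[i] = max(input[i], 0).
// Compare this against the buffer mapped back from the GPU.
std::vector<float> relu_reference(const std::vector<float>& input) {
    std::vector<float> out(input.size());
    for (std::size_t i = 0; i < input.size(); i++)
        out[i] = std::max(input[i], 0.0f);
    return out;
}
```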

Parallel Reduction (Sum)

// reduce_sum.comp
#version 450

layout(local_size_x = 256) in;

layout(set = 0, binding = 0) buffer Input  { float input_data[]; };
layout(set = 0, binding = 1) buffer Output { float partial_sums[]; };

layout(push_constant) uniform PushConstants { uint n; };

shared float sdata[256];

void main() {
    uint gid = gl_GlobalInvocationID.x;
    uint lid = gl_LocalInvocationID.x;
    uint wgid = gl_WorkGroupID.x;

    // Load into shared memory
    sdata[lid] = (gid < n) ? input_data[gid] : 0.0;
    barrier();

    // Tree reduction within the workgroup
    for (uint stride = 128; stride > 0; stride >>= 1) {
        if (lid < stride) {
            sdata[lid] += sdata[lid + stride];
        }
        barrier();
    }

    // Thread 0 writes this workgroup's partial sum
    if (lid == 0) {
        partial_sums[wgid] = sdata[0];
    }
}
  • This is the classic parallel reduction pattern (the same as in CUDA). Each workgroup produces one partial sum; a second dispatch reduces the partial sums to the final result. The tree reduction halves the number of active threads at every step: 256 → 128 → 64 → ... → 1.
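The two-dispatch scheme can be emulated on the CPU to check the partial-sum logic before debugging it on the GPU. A sketch that mirrors the shader's load and tree-reduction phases (names and the default workgroup size are illustrative):

```cpp
#include <cstdint>
#include <vector>

// Pass 1 mirrors the shader: each "workgroup" loads a chunk (out-of-range
// lanes contribute 0), then a tree reduction halves the active lanes each
// step. Pass 2 stands in for the second dispatch and sums the partials.
float reduce_sum_two_pass(const std::vector<float>& data,
                          uint32_t wg_size = 256) {
    uint32_t n = static_cast<uint32_t>(data.size());
    uint32_t groups = (n + wg_size - 1) / wg_size;
    std::vector<float> partials(groups, 0.0f);
    std::vector<float> sdata(wg_size);
    for (uint32_t g = 0; g < groups; g++) {
        for (uint32_t lid = 0; lid < wg_size; lid++) {
            uint32_t gid = g * wg_size + lid;
            sdata[lid] = (gid < n) ? data[gid] : 0.0f;
        }
        for (uint32_t stride = wg_size / 2; stride > 0; stride >>= 1)
            for (uint32_t lid = 0; lid < stride; lid++)
                sdata[lid] += sdata[lid + stride];
        partials[g] = sdata[0];
    }
    float total = 0.0f;  // second "dispatch": reduce the partial sums
    for (float p : partials) total += p;
    return total;
}
```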

Tiled Matrix Multiplication

// matmul_tiled.comp
#version 450

#define TILE_SIZE 16

layout(local_size_x = TILE_SIZE, local_size_y = TILE_SIZE) in;

layout(set = 0, binding = 0) buffer MatA { float A[]; };
layout(set = 0, binding = 1) buffer MatB { float B[]; };
layout(set = 0, binding = 2) buffer MatC { float C[]; };

layout(push_constant) uniform PushConstants {
    uint M, N, K;
};

shared float tileA[TILE_SIZE][TILE_SIZE];
shared float tileB[TILE_SIZE][TILE_SIZE];

void main() {
    uint row = gl_GlobalInvocationID.y;
    uint col = gl_GlobalInvocationID.x;
    uint lr = gl_LocalInvocationID.y;
    uint lc = gl_LocalInvocationID.x;

    float sum = 0.0;

    for (uint t = 0; t < (K + TILE_SIZE - 1) / TILE_SIZE; t++) {
        // Load tiles of A and B into shared memory
        uint aCol = t * TILE_SIZE + lc;
        uint bRow = t * TILE_SIZE + lr;

        tileA[lr][lc] = (row < M && aCol < K) ? A[row * K + aCol] : 0.0;
        tileB[lr][lc] = (bRow < K && col < N) ? B[bRow * N + col] : 0.0;

        barrier();

        // Compute the partial dot product
        for (uint k = 0; k < TILE_SIZE; k++) {
            sum += tileA[lr][k] * tileB[k][lc];
        }

        barrier();
    }

    if (row < M && col < N) {
        C[row * N + col] = sum;
    }
}
  • This is the same tiled algorithm as the CUDA version (file 04), just in GLSL syntax. The concepts are identical: load a tile into shared memory, barrier, compute, barrier, repeat.
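To verify the shader's output, compare against a naive CPU reference that uses the same row-major indexing (function name is illustrative):

```cpp
#include <cstdint>
#include <vector>

// Naive CPU reference for matmul_tiled.comp: C (MxN) = A (MxK) * B (KxN),
// row-major, using the same A[row * K + k] / B[k * N + col] indexing as
// the shader.
std::vector<float> matmul_reference(const std::vector<float>& A,
                                    const std::vector<float>& B,
                                    uint32_t M, uint32_t N, uint32_t K) {
    std::vector<float> C(M * N, 0.0f);
    for (uint32_t row = 0; row < M; row++)
        for (uint32_t col = 0; col < N; col++) {
            float sum = 0.0f;
            for (uint32_t k = 0; k < K; k++)
                sum += A[row * K + k] * B[k * N + col];
            C[row * N + col] = sum;
        }
    return C;
}
```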

C++ Vulkan Setup

  • The compute shader is the easy part. The hard part is the C++ boilerplate that creates the Vulkan instance, allocates memory, binds buffers, and submits commands. Below is a condensed but complete pipeline:
// vulkan_compute.cpp - a minimal but complete Vulkan compute example
// Compile with: g++ -O3 -o vulkan_compute vulkan_compute.cpp -lvulkan
// Requires: the Vulkan SDK installed, and add.spv compiled from add.comp

#include <vulkan/vulkan.h>
#include <iostream>
#include <vector>
#include <fstream>
#include <cassert>

// Helper: read a SPIR-V file
std::vector<uint32_t> readSPIRV(const std::string& filename) {
    std::ifstream file(filename, std::ios::ate | std::ios::binary);
    size_t fileSize = file.tellg();
    std::vector<uint32_t> buffer(fileSize / sizeof(uint32_t));
    file.seekg(0);
    file.read(reinterpret_cast<char*>(buffer.data()), fileSize);
    return buffer;
}

int main() {
    const uint32_t N = 1024;
    const size_t bufferSize = N * sizeof(float);

    // ========== 1. Create the Vulkan instance ==========
    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_2;

    VkInstanceCreateInfo instanceInfo{};
    instanceInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    instanceInfo.pApplicationInfo = &appInfo;

    VkInstance instance;
    vkCreateInstance(&instanceInfo, nullptr, &instance);

    // ========== 2. Pick a physical device (GPU) ==========
    uint32_t deviceCount = 0;
    vkEnumeratePhysicalDevices(instance, &deviceCount, nullptr);
    std::vector<VkPhysicalDevice> devices(deviceCount);
    vkEnumeratePhysicalDevices(instance, &deviceCount, devices.data());
    VkPhysicalDevice physicalDevice = devices[0];

    // Print the GPU name
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(physicalDevice, &props);
    std::cout << "Using GPU: " << props.deviceName << "\n";

    // ========== 3. Find a compute queue family ==========
    uint32_t queueFamilyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &queueFamilyCount, nullptr);
    std::vector<VkQueueFamilyProperties> queueFamilies(queueFamilyCount);
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &queueFamilyCount, queueFamilies.data());

    uint32_t computeFamily = 0;
    for (uint32_t i = 0; i < queueFamilyCount; i++) {
        if (queueFamilies[i].queueFlags & VK_QUEUE_COMPUTE_BIT) {
            computeFamily = i;
            break;
        }
    }

    // ========== 4. Create the logical device and queue ==========
    float queuePriority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo{};
    queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = computeFamily;
    queueInfo.queueCount = 1;
    queueInfo.pQueuePriorities = &queuePriority;

    VkDeviceCreateInfo deviceInfo{};
    deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos = &queueInfo;

    VkDevice device;
    vkCreateDevice(physicalDevice, &deviceInfo, nullptr, &device);

    VkQueue computeQueue;
    vkGetDeviceQueue(device, computeFamily, 0, &computeQueue);

    // ========== 5. Allocate the buffers (A, B, C) ==========
    auto createBuffer = [&](VkBuffer& buffer, VkDeviceMemory& memory) {
        VkBufferCreateInfo bufInfo{};
        bufInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
        bufInfo.size = bufferSize;
        bufInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;
        vkCreateBuffer(device, &bufInfo, nullptr, &buffer);

        VkMemoryRequirements memReqs;
        vkGetBufferMemoryRequirements(device, buffer, &memReqs);

        // Find host-visible memory type
        VkPhysicalDeviceMemoryProperties memProps;
        vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);
        uint32_t memType = 0;
        const VkMemoryPropertyFlags wanted =
            VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
        for (uint32_t i = 0; i < memProps.memoryTypeCount; i++) {
            // Require BOTH flags: `flags & mask` alone is true if either bit is set
            if ((memReqs.memoryTypeBits & (1 << i)) &&
                (memProps.memoryTypes[i].propertyFlags & wanted) == wanted) {
                memType = i;
                break;
            }
        }

        VkMemoryAllocateInfo allocInfo{};
        allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
        allocInfo.allocationSize = memReqs.size;
        allocInfo.memoryTypeIndex = memType;
        vkAllocateMemory(device, &allocInfo, nullptr, &memory);
        vkBindBufferMemory(device, buffer, memory, 0);
    };

    VkBuffer bufA, bufB, bufC;
    VkDeviceMemory memA, memB, memC;
    createBuffer(bufA, memA);
    createBuffer(bufB, memB);
    createBuffer(bufC, memC);

    // ========== 6. Fill the input buffers ==========
    float* ptrA;
    vkMapMemory(device, memA, 0, bufferSize, 0, (void**)&ptrA);
    for (uint32_t i = 0; i < N; i++) ptrA[i] = 1.0f;
    vkUnmapMemory(device, memA);

    float* ptrB;
    vkMapMemory(device, memB, 0, bufferSize, 0, (void**)&ptrB);
    for (uint32_t i = 0; i < N; i++) ptrB[i] = 2.0f;
    vkUnmapMemory(device, memB);

    // ========== 7. Create the compute pipeline ==========
    auto spirvCode = readSPIRV("add.spv");
    VkShaderModuleCreateInfo shaderInfo{};
    shaderInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    shaderInfo.codeSize = spirvCode.size() * sizeof(uint32_t);
    shaderInfo.pCode = spirvCode.data();
    VkShaderModule shaderModule;
    vkCreateShaderModule(device, &shaderInfo, nullptr, &shaderModule);

    // Descriptor set layout (tells Vulkan about the buffer bindings)
    VkDescriptorSetLayoutBinding bindings[3] = {};
    for (int i = 0; i < 3; i++) {
        bindings[i].binding = i;
        bindings[i].descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
        bindings[i].descriptorCount = 1;
        bindings[i].stageFlags = VK_SHADER_STAGE_COMPUTE_BIT;
    }

    VkDescriptorSetLayoutCreateInfo layoutInfo{};
    layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layoutInfo.bindingCount = 3;
    layoutInfo.pBindings = bindings;
    VkDescriptorSetLayout descLayout;
    vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &descLayout);

    VkPushConstantRange pushRange{};
    pushRange.stageFlags = VK_SHADER_STAGE_COMPUTE_BIT;
    pushRange.offset = 0;
    pushRange.size = sizeof(uint32_t);

    VkPipelineLayoutCreateInfo pipeLayoutInfo{};
    pipeLayoutInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
    pipeLayoutInfo.setLayoutCount = 1;
    pipeLayoutInfo.pSetLayouts = &descLayout;
    pipeLayoutInfo.pushConstantRangeCount = 1;
    pipeLayoutInfo.pPushConstantRanges = &pushRange;
    VkPipelineLayout pipelineLayout;
    vkCreatePipelineLayout(device, &pipeLayoutInfo, nullptr, &pipelineLayout);

    VkComputePipelineCreateInfo pipeInfo{};
    pipeInfo.sType = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO;
    pipeInfo.stage.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    pipeInfo.stage.stage = VK_SHADER_STAGE_COMPUTE_BIT;
    pipeInfo.stage.module = shaderModule;
    pipeInfo.stage.pName = "main";
    pipeInfo.layout = pipelineLayout;
    VkPipeline pipeline;
    vkCreateComputePipelines(device, VK_NULL_HANDLE, 1, &pipeInfo, nullptr, &pipeline);

    // ========== 8. Descriptor set ==========
    VkDescriptorPoolSize poolSize{};
    poolSize.type = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
    poolSize.descriptorCount = 3;

    VkDescriptorPoolCreateInfo poolInfo{};
    poolInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
    poolInfo.maxSets = 1;
    poolInfo.poolSizeCount = 1;
    poolInfo.pPoolSizes = &poolSize;
    VkDescriptorPool descPool;
    vkCreateDescriptorPool(device, &poolInfo, nullptr, &descPool);

    VkDescriptorSetAllocateInfo descAllocInfo{};
    descAllocInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
    descAllocInfo.descriptorPool = descPool;
    descAllocInfo.descriptorSetCount = 1;
    descAllocInfo.pSetLayouts = &descLayout;
    VkDescriptorSet descSet;
    vkAllocateDescriptorSets(device, &descAllocInfo, &descSet);

    VkDescriptorBufferInfo bufInfos[3] = {
        {bufA, 0, bufferSize}, {bufB, 0, bufferSize}, {bufC, 0, bufferSize}
    };
    VkWriteDescriptorSet writes[3] = {};
    for (int i = 0; i < 3; i++) {
        writes[i].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
        writes[i].dstSet = descSet;
        writes[i].dstBinding = i;
        writes[i].descriptorCount = 1;
        writes[i].descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
        writes[i].pBufferInfo = &bufInfos[i];
    }
    vkUpdateDescriptorSets(device, 3, writes, 0, nullptr);

    // ========== 9. Record and submit the command buffer ==========
    VkCommandPoolCreateInfo cmdPoolInfo{};
    cmdPoolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
    cmdPoolInfo.queueFamilyIndex = computeFamily;
    VkCommandPool cmdPool;
    vkCreateCommandPool(device, &cmdPoolInfo, nullptr, &cmdPool);

    VkCommandBufferAllocateInfo cmdAllocInfo{};
    cmdAllocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    cmdAllocInfo.commandPool = cmdPool;
    cmdAllocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    cmdAllocInfo.commandBufferCount = 1;
    VkCommandBuffer cmdBuf;
    vkAllocateCommandBuffers(device, &cmdAllocInfo, &cmdBuf);

    VkCommandBufferBeginInfo beginInfo{};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    vkBeginCommandBuffer(cmdBuf, &beginInfo);

    vkCmdBindPipeline(cmdBuf, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
    vkCmdBindDescriptorSets(cmdBuf, VK_PIPELINE_BIND_POINT_COMPUTE,
                            pipelineLayout, 0, 1, &descSet, 0, nullptr);
    vkCmdPushConstants(cmdBuf, pipelineLayout, VK_SHADER_STAGE_COMPUTE_BIT,
                       0, sizeof(uint32_t), &N);
    vkCmdDispatch(cmdBuf, (N + 255) / 256, 1, 1);

    vkEndCommandBuffer(cmdBuf);

    VkFenceCreateInfo fenceInfo{};
    fenceInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
    VkFence fence;
    vkCreateFence(device, &fenceInfo, nullptr, &fence);

    VkSubmitInfo submitInfo{};
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &cmdBuf;
    vkQueueSubmit(computeQueue, 1, &submitInfo, fence);
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);

    // ========== 10. Read back the results ==========
    float* ptrC;
    vkMapMemory(device, memC, 0, bufferSize, 0, (void**)&ptrC);
    std::cout << "Result: c[0]=" << ptrC[0] << " c[1]=" << ptrC[1]
              << " (expected 3.0)\n";
    bool correct = true;
    for (uint32_t i = 0; i < N; i++) {
        if (ptrC[i] != 3.0f) { correct = false; break; }
    }
    std::cout << (correct ? "All correct" : "Errors found") << "\n";
    vkUnmapMemory(device, memC);

    // ========== Cleanup (abridged) ==========
    vkDestroyFence(device, fence, nullptr);
    vkDestroyCommandPool(device, cmdPool, nullptr);
    vkDestroyPipeline(device, pipeline, nullptr);
    vkDestroyPipelineLayout(device, pipelineLayout, nullptr);
    vkDestroyDescriptorPool(device, descPool, nullptr);
    vkDestroyDescriptorSetLayout(device, descLayout, nullptr);
    vkDestroyShaderModule(device, shaderModule, nullptr);
    vkDestroyBuffer(device, bufA, nullptr); vkFreeMemory(device, memA, nullptr);
    vkDestroyBuffer(device, bufB, nullptr); vkFreeMemory(device, memB, nullptr);
    vkDestroyBuffer(device, bufC, nullptr); vkFreeMemory(device, memC, nullptr);
    vkDestroyDevice(device, nullptr);
    vkDestroyInstance(instance, nullptr);

    return 0;
}
  • Yes, roughly 200 lines for vector addition, versus about 30 in CUDA. That is the price of explicitness. But notice: every line serves a purpose. There are no hidden driver decisions, no implicit synchronization, no surprise allocations. You control everything.

  • In practice, you would wrap this boilerplate in a helper library (or use an existing one, such as vk-bootstrap, VMA (Vulkan Memory Allocator) for memory management, or Kompute for ML-focused Vulkan compute).

Kompute: Simplified Vulkan for ML

  • Kompute is an open-source C++ library that wraps Vulkan's GPU compute boilerplate. The same vector addition becomes:
#include <kompute/Kompute.hpp>

int main() {
    kp::Manager mgr;

    auto tensorA = mgr.tensor({1, 1, 1, 1, 1});
    auto tensorB = mgr.tensor({2, 2, 2, 2, 2});
    auto tensorC = mgr.tensor({0, 0, 0, 0, 0});

    std::string shader = R"(
        #version 450
        layout(local_size_x = 1) in;
        layout(set=0, binding=0) buffer A { float a[]; };
        layout(set=0, binding=1) buffer B { float b[]; };
        layout(set=0, binding=2) buffer C { float c[]; };
        void main() {
            uint i = gl_GlobalInvocationID.x;
            c[i] = a[i] + b[i];
        }
    )";

    auto algorithm = mgr.algorithm({tensorA, tensorB, tensorC},
                                   kp::Shader::compile_source(shader));

    mgr.sequence()
        ->record<kp::OpTensorSyncDevice>({tensorA, tensorB, tensorC})
        ->record<kp::OpAlgoDispatch>(algorithm)
        ->record<kp::OpTensorSyncLocal>({tensorC})
        ->eval();

    // tensorC now contains [3, 3, 3, 3, 3]
}
  • Far more readable. Kompute handles instance creation, device selection, memory allocation, descriptor sets, and command buffer management. You focus on the shader and the data.

WebGPU: GPU Compute in the Browser

  • WebGPU is the successor to WebGL, exposing modern GPU access from JavaScript. It is built on Vulkan (Linux/Android), Metal (macOS/iOS), and DirectX 12 (Windows), abstracting away the platform differences.

  • WebGPU uses WGSL (the WebGPU Shading Language) instead of GLSL:

// add.wgsl - a WebGPU compute shader
@group(0) @binding(0) var<storage, read> a: array<f32>;
@group(0) @binding(1) var<storage, read> b: array<f32>;
@group(0) @binding(2) var<storage, read_write> c: array<f32>;

@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    let i = id.x;
    c[i] = a[i] + b[i];
}
  • The JavaScript setup (abridged):
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// Create the buffers
const bufferA = device.createBuffer({
    size: N * 4, usage: GPUBufferUsage.STORAGE, mappedAtCreation: true
});
new Float32Array(bufferA.getMappedRange()).fill(1.0);
bufferA.unmap();

// ... (B and C are similar)

// Create a pipeline from the WGSL shader
const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: {
        module: device.createShaderModule({ code: wgslSource }),
        entryPoint: 'main'
    }
});

// Dispatch
const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(Math.ceil(N / 256));
pass.end();
device.queue.submit([encoder.finish()]);
  • Why WebGPU matters for ML: running inference in the browser means no server costs, no network latency, and user data that never leaves the device. Libraries such as ONNX Runtime Web and Transformers.js use WebGPU to run models (including small LLMs) entirely client-side.

When to Use Vulkan

Scenario                      Use Vulkan?   Reason / Alternative
ML training                   No            CUDA/Triton is simpler and faster on NVIDIA
Inference on NVIDIA GPUs      No            TensorRT or CUDA is a better fit
Inference on AMD/Intel GPUs   Yes           The only cross-vendor GPU compute option
Mobile inference (Android)    Yes           Vulkan is the standard GPU API on Android
Mobile inference (iOS)        No            Use Metal directly (MoltenVK adds overhead)
Browser inference             Via WebGPU    WebGPU is built on Vulkan/Metal/DX12
Game engine + ML              Yes           The engine already uses Vulkan for rendering
Cross-platform libraries      Yes           One codebase covers every GPU vendor
Learning GPU programming      Maybe         CUDA is an easier start; Vulkan teaches more

Coding Tasks (compile with g++ -lvulkan; requires the Vulkan SDK)

  1. Compile and run the vector addition example above. Modify the shader to compute c[i] = a[i] * b[i] + a[i] (a fused multiply-add) and verify the result.

  2. Write a compute shader that applies softmax to a row of data, using shared memory for the reduction steps (max and sum). Test it with known values.

// softmax.comp — compile with: glslangValidator -V softmax.comp -o softmax.spv
#version 450

#define WG_SIZE 256

layout(local_size_x = WG_SIZE) in;

layout(set = 0, binding = 0) buffer Input  { float input_data[]; };
layout(set = 0, binding = 1) buffer Output { float output_data[]; };

layout(push_constant) uniform PC { uint n; };

// Note: as written this assumes the whole row fits in one workgroup
// (n <= WG_SIZE); larger rows need additional cross-workgroup passes.
shared float sdata[WG_SIZE];

void main() {
    uint gid = gl_GlobalInvocationID.x;
    uint lid = gl_LocalInvocationID.x;

    // Step 1: find max (for numerical stability)
    sdata[lid] = (gid < n) ? input_data[gid] : -1e30;
    barrier();
    for (uint s = WG_SIZE / 2; s > 0; s >>= 1) {
        if (lid < s) sdata[lid] = max(sdata[lid], sdata[lid + s]);
        barrier();
    }
    float maxVal = sdata[0];
    barrier();

    // Step 2: compute exp(x - max)
    float expVal = (gid < n) ? exp(input_data[gid] - maxVal) : 0.0;
    sdata[lid] = expVal;
    barrier();

    // Step 3: sum of exp values
    for (uint s = WG_SIZE / 2; s > 0; s >>= 1) {
        if (lid < s) sdata[lid] += sdata[lid + s];
        barrier();
    }
    float sumExp = sdata[0];

    // Step 4: normalise
    if (gid < n) {
        output_data[gid] = expVal / sumExp;
    }
}
  3. Modify the C++ host code to benchmark the compute shader: time the dispatch (excluding setup) using Vulkan timestamp queries or a CPU-side fence wait, and compute the achieved bandwidth (GB/s).
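For the bandwidth part of task 3, the arithmetic is simple: vector addition moves two reads and one write per element. A sketch of the calculation (helper name is illustrative; elapsed_s would come from your timer or timestamp query):

```cpp
#include <cstdint>

// Achieved bandwidth for the vector-add dispatch: each element reads a[i]
// and b[i] and writes c[i], i.e. 3 * n * sizeof(float) bytes in total.
// elapsed_s would come from a Vulkan timestamp query or a CPU timer wrapped
// around vkQueueSubmit + vkWaitForFences.
double bandwidth_gbs(uint32_t n, double elapsed_s) {
    double bytes = 3.0 * static_cast<double>(n) * sizeof(float);
    return bytes / elapsed_s / 1e9;  // GB/s (decimal gigabytes)
}
```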