shithub: gpufswip

Download patch

ref: e265c0569049c056349d8c1f1cbcf69e660517d5
parent: 1676716413ae6e2368025dfc98cc9212f6e18dfb
author: sirjofri <[email protected]>
date: Sat Feb 17 11:19:17 EST 2024

adds descriptor pools and sets

--- a/gpufs.txt
+++ b/gpufs.txt
@@ -36,7 +36,7 @@
 # The implementation
 
 Since driver development is hard and it's hard to get reference implementations for common GPU drivers, this first implementation will be fully CPU based.
-The interfaces however should look as similar as possible to a true GPU implementation.
+The interfaces however should look as similar as possible to a potential GPU implementation.
 
 Due to the nature of filesystems, this should make it easy to ``upgrade'' applications to actual GPU hardware: Using a different GPU filesystem is all that's needed.
 The software itself doesn't need to change.
@@ -59,10 +59,15 @@
 
 Management and control is handled via a console like interface as a control file.
 Using this interface, it is possible to initialize new shaders and allocate new buffers, as well as control binding and program execution.
+For debugging purposes, an additional
+[[[ms
+.CW desc
+]]]
+file is used to display all shader and buffer bindings.
 
 Shaders and buffers are represented as the general concept of ``objects''.
 Each object has its own subdirectory within the GPU filesystem (1).
-Initializing a new shader or buffer using the control file, we can read back the ID of the object.
+After initializing a new shader or buffer using the control file, we can read back the ID of the object.
 With that, our application can know which object directory to access.
 
 [[[ms
@@ -71,8 +76,11 @@
 .B1
 .CW
 /ctl        (control file)
+/desc       (descriptors file)
 /0/buffer   (sample buffer file)
+/0/ctl      (sample buffer ctl file)
 /1/shader   (sample shader file)
+/1/ctl      (sample shader ctl file)
 .B2
 .DE
 ]]]
@@ -87,6 +95,7 @@
 cat myfile.spv > /dev/gpu/1/shader
 cat mydata.bin > /dev/gpu/0/buffer
+# bindings, see (4)
 # compile shader and run, see (3)
 cp /dev/gpu/0/buffer result.bin
@@ -105,29 +114,94 @@
 3. Compiling and running a shader.
 .B1
 .CW
-echo c > /dev/gpu/1/ctl
-echo r main > /dev/gpu/1/ctl
+echo c       > /dev/gpu/1/ctl
+echo r main  > /dev/gpu/1/ctl
 .B2
 .DE
 ]]]
 
-Binding buffers and shaders is something I still have to think about.
+# Binding shaders and buffers
 
+Shaders and buffers need to be bound together so that shaders can access the buffers.
+It's hard to understand all the requirements of actual hardware without diving deep into GPU architecture and existing APIs. [desc]
 
+Our implementation provides a simple abstraction based on Vulkan's concept of ``descriptor pools'' and ``descriptor sets'' with their bindings.
+Ideally, the same abstraction can be used for GPU hardware.
+
+Each shader is bound to a descriptor pool.
+A descriptor pool can describe many descriptor sets, which in turn hold the buffer bindings.
+
+While shaders are bound to a full descriptor pool, buffers are bound to a single slot within a descriptor set.
+Shaders have everything needed to access a specific buffer compiled into their code: they know the set and binding of the buffer they want to access.
+
+Even though this information is compiled into the shader, it is still possible to switch buffers by changing the binding itself.
+
+(4) shows how to create a new descriptor pool and set up bindings.
+In this example, buffer 0 is bound to the second (index 1) binding of the first (index 0) descriptor set of descriptor pool 0.
+
+[[[ms
+.DS B
+4. Binding shaders and buffers.
+.B1
+.CW
+# set up descriptor pool with 2 descriptor sets
+echo n p 2      > /dev/gpu/ctl
+# allocate 4 bindings in set 0 of pool 0
+echo s 0 0 4    > /dev/gpu/ctl
+# bind buffer 0 to binding 1 of set 0 in pool 0
+echo b 0 0 0 1  > /dev/gpu/ctl
+# bind shader to pool 0
+echo b 0        > /dev/gpu/1/ctl
+.B2
+.DE
+]]]
+
+Reading the file
+[[[ms
+.CW desc
+]]]
+shows us the layout of this structure (5).
+We can see that only one binding is set (it shows the number of the bound buffer), while the other bindings are unset (-1).
+
+[[[ms
+.DS B
+5. Example descriptor table.
+.B1
+.CW
+DescPool 0
+    Set 0
+        0  -1
+        1   0
+        2  -1
+        3  -1
+    Set 1
+.B2
+.DE
+]]]
+
+While the
+[[[ms
+.CW desc
+]]]
+file can be parsed and interpreted, it is meant only for debugging and reviewing.
+Applications should use the interface provided by the control files.
+
+
 # State of code and future work
 
 The code currently covers the described filesystem interface completely; however, not all functionality is implemented.
 Furthermore, bugs are to be expected. [gpufs]
 
-There's a SPIR-V assembler as well as a SPIR-V disassembler.
+There's a rudimentary SPIR-V assembler as well as a SPIR-V disassembler.
 Both are far from feature complete according to the SPIR-V specification, but missing instructions can be added easily.
 
 It is planned to build the embedded SPIR-V compiler as soon as possible, as well as the runtime engine, so we can finally run shaders and use the filesystem as intended.
 
-Due to the lack of actual GPU hardware support and the fact that the first implementation is single threaded I don't expect much performance gain compared to other implementations of the same logic.
+Due to the lack of actual GPU hardware support, I don't expect much performance gain compared to other implementations with the same logic.
 However, the interface is generic enough to allow applications to use different GPU implementations: GPU hardware, CPU hardware (single or multi threaded), network scenarios.
 
-It also makes sense to think about future integrations into devdraw: the GPU filesystem could control actual images of devdraw and enable faster draw times for graphics rendering.
+It makes sense to think about future integrations into devdraw: the GPU filesystem could control actual images of devdraw and enable faster draw times for graphics rendering.
 
 Since SPIR-V is very low-level, it also makes sense to develop shader compilers for higher level languages like GLSL or HLSL.
 Applications are developed by different people and for different reasons, so those compilers should not be part of the specific filesystem implementations.
@@ -139,18 +213,23 @@
 .nr PS -1
 .nr VS -2
 .IP "[Nanite]" 10
-Epic Games, ``Unreal Engine Public Roadmap: Nanite - Optimized Shading'',
+Epic Games. ``Unreal Engine Public Roadmap: Nanite - Optimized Shading'',
 .CW https://portal.productboard.com/epicgames/1-unreal-engine-
 .CW public-roadmap/c/1250-nanite-optimized-shading ,
 2024.
 .IP "[SPIR-V]" 10
-The Khronos Group Inc., ``SPIR-V Specification'',
+The Khronos Group Inc. ``SPIR-V Specification'',
 .CW https://registry.khronos.org/SPIR-V/specs/unified1/SPIRV.html ,
 2024.
 .IP "[gpufs]" 10
-Meyer, ``gpufs'' and ``spirva'',
+Meyer, Joel. ``gpufs'' and ``spirva'',
 .CW https://shithub.us/sirjofri/gpufs/HEAD/info.html
 and
 .CW https://shithub.us/sirjofri/spirva/HEAD/info.html ,
+2024.
+.IP "[desc]" 10
+Vulkan Tutorial. ``Descriptor pool and sets'',
+.CW https://vulkan-tutorial.com/Uniform_buffers/
+.CW Descriptor_pool_and_sets ,
 2024.
 ]]]