```diff
-struct D3D12_RT_FORMAT_ARRAY
+typedef struct D3D12_RT_FORMAT_ARRAY
 {
     DXGI_FORMAT RTFormats[D3D12_SIMULTANEOUS_RENDER_TARGET_COUNT];
     UINT NumRenderTargets;
-};
+} D3D12_RT_FORMAT_ARRAY;
```
That should be a separate change.
```diff
-    if (FAILED(hr = d3d12_pipeline_state_create_graphics(device, desc, &object)))
+    if (FAILED(hr = pipeline_state_desc_from_d3d12_graphics_desc(&pipeline_desc, desc)))
+        return hr;
+
+    if (FAILED(hr = d3d12_pipeline_state_create_graphics(device, &pipeline_desc, &object)))
         return hr;
```
It seems a fair bit nicer to keep d3d12_device_CreateGraphicsPipelineState() the way it is, and then do the conversion from D3D12_GRAPHICS_PIPELINE_STATE_DESC to d3d12_pipeline_state_desc inside d3d12_pipeline_state_create_graphics(). Likewise for d3d12_pipeline_state_create_compute() and d3d12_pipeline_state_create().
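I.e., roughly the following; this is just a sketch, and the exact signature is whatever d3d12_pipeline_state_create_graphics() already has:

```c
HRESULT d3d12_pipeline_state_create_graphics(struct d3d12_device *device,
        const D3D12_GRAPHICS_PIPELINE_STATE_DESC *desc, struct d3d12_pipeline_state **state)
{
    struct d3d12_pipeline_state_desc pipeline_desc;
    HRESULT hr;

    if (FAILED(hr = pipeline_state_desc_from_d3d12_graphics_desc(&pipeline_desc, desc)))
        return hr;

    ...
}
```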
This commit can probably also be split a little; for example, the d3d12_pipeline_state_init_compute() changes don't depend on the d3d12_pipeline_state_init_graphics() changes, and the d3d12_pipeline_state_init_graphics() changes don't depend on pipeline_state_desc_from_d3d12_stream_desc().
```diff
+static void d3d12_promote_depth_stencil_desc(D3D12_DEPTH_STENCIL_DESC1 *dst, const D3D12_DEPTH_STENCIL_DESC *src)
+{
+    dst->DepthEnable = src->DepthEnable;
+    dst->DepthWriteMask = src->DepthWriteMask;
+    dst->DepthFunc = src->DepthFunc;
+    dst->StencilEnable = src->StencilEnable;
+    dst->StencilReadMask = src->StencilReadMask;
+    dst->StencilWriteMask = src->StencilWriteMask;
+    dst->FrontFace = src->FrontFace;
+    dst->BackFace = src->BackFace;
+    dst->DepthBoundsTestEnable = FALSE;
+}
```
We'd typically implement that as something like this:
```c
memcpy(dst, src, sizeof(*src));
dst->DepthBoundsTestEnable = FALSE;
```
and then likely just inline it in pipeline_state_desc_from_d3d12_graphics_desc() where we could drop the "DepthBoundsTestEnable" assignment because "desc" was zero-initialised.
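I.e., something like the following inside pipeline_state_desc_from_d3d12_graphics_desc(); "depth_stencil_state" is just a guess at the destination field name here:

```c
memset(desc, 0, sizeof(*desc));
...
memcpy(&desc->depth_stencil_state, &d3d12_desc->DepthStencilState, sizeof(d3d12_desc->DepthStencilState));
```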
```diff
+static VkShaderStageFlags pipeline_state_desc_get_shader_stages(const struct d3d12_pipeline_state_desc *desc)
+{
+    VkShaderStageFlags result = 0;
+
+    if (desc->vs.BytecodeLength && desc->vs.pShaderBytecode)
+        result |= VK_SHADER_STAGE_VERTEX_BIT;
+    if (desc->hs.BytecodeLength && desc->hs.pShaderBytecode)
+        result |= VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT;
+    if (desc->ds.BytecodeLength && desc->ds.pShaderBytecode)
+        result |= VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT;
+    if (desc->gs.BytecodeLength && desc->gs.pShaderBytecode)
+        result |= VK_SHADER_STAGE_GEOMETRY_BIT;
+    if (desc->ps.BytecodeLength && desc->ps.pShaderBytecode)
+        result |= VK_SHADER_STAGE_FRAGMENT_BIT;
+    if (desc->cs.BytecodeLength && desc->cs.pShaderBytecode)
+        result |= VK_SHADER_STAGE_COMPUTE_BIT;
+
+    /* If we use rasterizer discard, force fragment shader to not exist.
+     * See VUID-VkGraphicsPipelineCreateInfo-pStages-06894. */
+    if (desc->stream_output.NumEntries &&
+            desc->stream_output.RasterizedStream == D3D12_SO_NO_RASTERIZED_STREAM)
+    {
+        result &= ~VK_SHADER_STAGE_FRAGMENT_BIT;
+    }
```
How would we trigger that? We use this to set "defined_stages" in pipeline_state_desc_from_d3d12_stream_desc(), but that never checks VK_SHADER_STAGE_FRAGMENT_BIT. In fact, it only really seems to care about VK_SHADER_STAGE_VERTEX_BIT and VK_SHADER_STAGE_COMPUTE_BIT.
```diff
+#define DCL_SUBOBJECT_INFO(type, field) {__alignof__(type), sizeof(type), offsetof(struct d3d12_pipeline_state_desc, field)}
```
I don't think we need this outside subobject_info[]. I.e., we can do something like this:
```c
static const struct
{
    ...
}
subobject_info[] =
{
#define DCL_SUBOBJECT_INFO(type, field) ...
    ...
#undef DCL_SUBOBJECT_INFO
};
```
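Spelled out with the fields from DCL_SUBOBJECT_INFO above, that might look roughly like this; the field names and the "root_signature" entry are purely illustrative, and I'm assuming the array is indexed by subobject type:

```c
static const struct
{
    size_t alignment;
    size_t size;
    size_t dst_offset;
}
subobject_info[] =
{
#define DCL_SUBOBJECT_INFO(type, field) {__alignof__(type), sizeof(type), offsetof(struct d3d12_pipeline_state_desc, field)}
    [D3D12_PIPELINE_STATE_SUBOBJECT_TYPE_ROOT_SIGNATURE] = DCL_SUBOBJECT_INFO(ID3D12RootSignature *, root_signature),
    ...
#undef DCL_SUBOBJECT_INFO
};
```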
```diff
+    const char *stream_ptr, *stream_end;
```
uint8_t, probably.
```diff
+    /* Structs are packed, but padded so that their size
+     * is always a multiple of the size of a pointer. */
+    stream_ptr = d3d12_desc->pPipelineStateSubobjectStream;
+    stream_end = stream_ptr + d3d12_desc->SizeInBytes;
+    desc_char = (char *)desc;
```
I.e., "Stream packets are aligned to the size of pointers.", right? In any case, this doesn't necessarily seem like the ideal place for this comment; it would seem more appropriate right before we align(i + size, ...) in the loop further below.
```diff
+        subobject_type = *(const D3D12_PIPELINE_STATE_SUBOBJECT_TYPE *)stream_ptr;
+        subobject_bit = 1ull << subobject_type;
```
This only works for subobject types smaller than 64, of course. In principle that's fine, but we only validate the subobject type a few lines later, which is a little ugly.
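I.e., it would read a little better to move the validation up before computing the bit, roughly like this; the ARRAY_SIZE(subobject_info) check is just a stand-in for whatever the existing validation does:

```c
subobject_type = *(const D3D12_PIPELINE_STATE_SUBOBJECT_TYPE *)stream_ptr;
if (subobject_type >= ARRAY_SIZE(subobject_info))
{
    WARN("Unhandled pipeline subobject type %#x.\n", subobject_type);
    return E_INVALIDARG;
}
subobject_bit = 1ull << subobject_type;
```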
```diff
+        if (stream_ptr + i + size > stream_end)
+        {
+            WARN("Invalid pipeline state stream.\n");
+            return E_INVALIDARG;
+        }
```
It's probably fine in practice, but this kind of thing is always a bit of a trigger when reviewing code. It seems nicer to use vkd3d_bound_range(), here and in the similar comparison at the start of the loop.
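I.e., something like this, with the exact arguments depending on how the offsets end up being tracked:

```c
if (!vkd3d_bound_range(i, size, d3d12_desc->SizeInBytes))
{
    WARN("Invalid pipeline state stream.\n");
    return E_INVALIDARG;
}
```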
```diff
+union d3d12_root_signature_subobject
+{
+    struct
+    {
+        D3D12_PIPELINE_STATE_SUBOBJECT_TYPE type;
+        ID3D12RootSignature *root_signature;
+    };
+    void *dummy_align;
+};
```
Do we need "dummy_align"? Could we use DECLSPEC_ALIGN? Could we make these unions local to test_create_pipeline_state()? I don't think we need them at the top level, right?
The placement of "{" and "}" is a bit unusual in a few places in the test; for example, in the tests[] declaration.
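We'd usually write that as something like:

```c
static const struct
{
    ...
}
tests[] =
{
    ...
};
```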