Status: By Design
Votes: 0
Found in: 2018.4, 2019.4, 2020.3, 2020.3.12f1, 2021.1, 2021.2, 2022.1
Issue ID: 1352733
Regression: No
SetRenderTarget does not correctly display the texture when depthSlice is set to -1
Reproduction steps:
1. Open the attached project "texarraytest" and load Scene "SampleScene"
2. Enter Play Mode
3. Observe the Game View
Expected result: the screen turns red when entering Play Mode
Actual result: the screen turns black when entering Play Mode
Reproducible with: 2018.4.36f1, 2019.4.29f1, 2020.3.15f1, 2021.1.16f1, 2021.2.0b5, 2022.1.0a3
Notes:
- In the reproduction project, the issue can be worked around by making the following two values match (as sketched below):
  - the depthSlice parameter of SetRenderTarget() in Line 43 of Tex2DArrayTest.cs, and
  - the Index variable in Line 104 of ArrayTest.shader (setting Index to 0 allows depthSlice to be left at -1).
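For illustration, here is a minimal sketch of that workaround. The class name, texture size, and layer count are assumptions rather than the actual repro project code, but the SetRenderTarget call shows the pattern: pass an explicit slice index instead of -1, and keep it in sync with the slice the shader samples.

    using UnityEngine;

    // Hypothetical reconstruction of the workaround; names and sizes are assumed.
    public class Tex2DArrayWorkaround : MonoBehaviour
    {
        const int Slice = 0;   // must match the Index variable in the shader
        RenderTexture rtArray;

        void Start()
        {
            var desc = new RenderTextureDescriptor(256, 256, RenderTextureFormat.ARGB32, 0)
            {
                dimension = UnityEngine.Rendering.TextureDimension.Tex2DArray,
                volumeDepth = 4
            };
            rtArray = new RenderTexture(desc);
            rtArray.Create();

            // Workaround: bind one explicit slice instead of -1.
            Graphics.SetRenderTarget(rtArray, 0, CubemapFace.Unknown, Slice);
            GL.Clear(false, true, Color.red);
        }
    }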
Resolution Note:
Closing this as By Design because you cannot set a TextureArray as the active RenderTexture in this way.
First off, a TextureArray is a single resource (with multiple subresources, one per layer), just as a pixel shader output (SV_TARGET0, for example) is a single resource. Even if binding an array to each SV_TARGET in this way were allowed, it still would not tell the GPU which layer you want to render to.
Typically you would call SetRenderTarget once per layer (N calls for an array with N layers), each time specifying the slice index, as in the sketch below.
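A minimal sketch of that per-slice loop, assuming rtArray is a Tex2DArray RenderTexture like the one in the earlier sketch:

    // Render the array one layer at a time: N SetRenderTarget calls for N layers.
    for (int slice = 0; slice < rtArray.volumeDepth; slice++)
    {
        // Bind exactly one layer of the array as the color target.
        Graphics.SetRenderTarget(rtArray, 0, CubemapFace.Unknown, slice);
        GL.Clear(false, true, Color.red);
        // ...issue whatever draw calls should end up in this layer here.
    }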
Specifying depthSlice as -1 (meaning all slices) is only supported when "Layered Rendering" is available (see https://docs.unity3d.com/Manual/class-Texture2DArray.html). Layered rendering in this context simply means that the shader can use SV_RenderTargetArrayIndex to specify which slice of the array it renders to.
While this was initially supported only inside geometry shaders, more recent hardware also allows it in the vertex shader (see SystemInfo.supportsRenderTargetArrayIndexFromVertexShader).
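On the C# side, the capability check and the all-slices bind might look like the sketch below (inside a method body, continuing the earlier sketches). The shader itself must write SV_RenderTargetArrayIndex, which is not shown here.

    // Layered rendering path: bind every slice at once with depthSlice = -1.
    // The bound shader picks the target slice per vertex (or per primitive,
    // in a geometry shader) by writing SV_RenderTargetArrayIndex.
    bool fromVertexShader = SystemInfo.supportsRenderTargetArrayIndexFromVertexShader;
    bool hasGeometryShaders = SystemInfo.supportsGeometryShaders;

    if (fromVertexShader || hasGeometryShaders)
    {
        Graphics.SetRenderTarget(rtArray, 0, CubemapFace.Unknown, -1);
        // ...draw with a shader that outputs SV_RenderTargetArrayIndex...
    }
    else
    {
        // Fall back to the per-slice loop shown above.
    }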
Here is an excellent blog post about how to achieve this: http://xdpixel.com/how-to-render-to-a-texture-array-in-unity/