
Status: By Design (By Design in 6000.5.X)
Votes: 0
Found in: 6000.0.66f2, 6000.3.5f1, 6000.4.0a6, 6000.5.0a6
Issue ID: UUM-133615
Regression: No

Memoryless writes to color backbuffer target fail when depth and shadow map depth are not cleared by the same camera

Component: Metal

How to reproduce:
1. Open the "MemorylessStopRender.zip" project
2. Build for iOS
3. Open the "Unity-iPhone.xcodeproj" in Xcode
4. Build and run on a device
5. Observe the device screen

Actual Result: Only the Cube is visible
Expected Result: The Cube, Sphere, and Plane are visible

Reproducible with: 6000.0.67f1, 6000.3.7f1, 6000.4.0b7, 6000.5.0a6

Testing environment: macOS 26.2 (M1 Max)

Reproducible with devices:
VLNQA00632, iPhone 16e (MD1Q4QN/A), CPU: Apple A18 Pro, OS: 18.3
VLNQA00358, iPhone 12 (MGJ93ET/A), CPU: Apple A14 Bionic, OS: 26.0.1
VLNQA00416, iPhone 13 Pro (MLVA3ET/A), CPU: Apple A15 Bionic, OS: 17.6.1
VLNQA00543, iPad mini 4 (WiFi) (MK9N2B/A), CPU: Apple A8, OS: 15.8.3

Not reproducible with devices:
VLNQA00612, Samsung Galaxy S24 (SM-S921B), CPU: Exynos 2400, OS: Android 15

Notes:
- Turning Memoryless Depth off in the iOS Player Settings fixes the issue
- Making sure the depth buffer and shadow map are cleared by the same camera also fixes the issue (see the sketch below)
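
A minimal sketch of the second workaround, assuming the built-in render pipeline and two stacked cameras (the component and its name are illustrative, not part of the project): force the later camera to clear depth so the memoryless depth buffer never feeds discarded contents into a depth test.

```csharp
using UnityEngine;

// Illustrative sketch: attach to the second (later-rendering) camera.
// Clearing depth is effectively free on tile-based GPUs and guarantees
// the memoryless depth buffer holds valid values instead of whatever
// was left behind by the shadow-map render-target switch.
[RequireComponent(typeof(Camera))]
public class ClearDepthOnSecondCamera : MonoBehaviour
{
    void Awake()
    {
        // Keep the color written by the first camera, but reset depth.
        GetComponent<Camera>().clearFlags = CameraClearFlags.Depth;
    }
}
```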

Resolution Note (also applies to 6000.5.X):

Memoryless depth implies that the depth contents are discarded on every render-target change. What happens here is that one camera first renders without a shadow map (so it renders straight to the backbuffer), and then the camera with shadows is rendered. This second camera first renders its shadow map (this is where the render-target change happens and the depth contents are discarded) and then renders to the backbuffer depth. In an Xcode frame capture, on both 2022.3 and newer Unity versions, you can see that the depth buffer is filled with NaN values (as expected), so the result of any depth comparison is undefined and subject to just about anything (maybe even the phase of the moon). Joking aside, the fact that it renders "fine" on 2022.3 is pure accident: checking the frame capture, you can clearly see that the rendering differs from the "normal rendering on device". There are several possible workarounds (though they may not be viable in the full project):
1. The second camera can clear depth (this is essentially free, so no performance impact): it ensures the depth buffer is filled with non-NaN values, making depth comparisons predictable (this is the workaround sketched under the notes above)
2. More specific to this repro: reorder the cameras so that shadow rendering happens in the first camera; the actual rendering for the first and second cameras can then be shared (though this approach is quite fragile; see the sketch below)
3. Switch to URP with RenderGraph, which makes these decisions (which render surfaces to keep memoryless) with full knowledge of their global usage
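
A hedged sketch of workaround 2, again assuming the built-in render pipeline (both camera fields are hypothetical): cameras render in ascending Camera.depth order, so giving the shadow-rendering camera the lower depth value makes the shadow map render before anything touches the memoryless backbuffer depth.

```csharp
using UnityEngine;

// Illustrative sketch for workaround 2: reorder cameras so the one that
// renders shadows goes first. In the built-in pipeline, cameras render
// in ascending Camera.depth order.
public class ReorderCameras : MonoBehaviour
{
    public Camera shadowCamera; // hypothetical: the camera whose pass renders the shadow map
    public Camera plainCamera;  // hypothetical: the camera that renders without shadows

    void Awake()
    {
        shadowCamera.depth = 0f; // renders first: shadow map pass happens up front
        plainCamera.depth = 1f;  // renders second, after the shadow map exists
    }
}
```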

To summarize: a memoryless depth backbuffer is well suited to the setup where one camera renders the final result to the backbuffer, since that avoids all of these pitfalls. A camera that renders shadows before doing its "normal" rendering does not interact well with memoryless depth.
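
For context, a sketch of how the same memoryless semantics are expressed for script-created render textures via RenderTexture.memorylessMode (this is the general Unity API, not the backbuffer path this issue covers; the helper below is illustrative): a depth buffer flagged memoryless lives only in tile memory, so its contents cannot survive a render-target change.

```csharp
using UnityEngine;

// Illustrative sketch: an off-screen target with a memoryless depth buffer.
// On tile-based GPUs (e.g. Apple GPUs under Metal) the depth contents stay
// in tile memory and are discarded on every render-target change - the same
// behaviour the resolution note describes for the memoryless depth backbuffer.
public static class MemorylessTargetExample
{
    public static RenderTexture Create(int width, int height)
    {
        var rt = new RenderTexture(width, height, 24, RenderTextureFormat.ARGB32);
        rt.memorylessMode = RenderTextureMemoryless.Depth; // must be set before Create()
        rt.Create();
        return rt;
    }
}
```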

