Fixed in 2018.1.X
Votes
2
Found in
5.0.0b22
Issue ID
674107
Regression
No
[Shadows] Camera.Render() clears _CameraDepthTexture if there's a shadow casting directional light in the scene
To reproduce:
1. Open attached project
2. Open scene "scene"
3. Comment out line 47 "QualitySettings.shadowDistance = 0;" in DeferredParticles.cs
4. Notice that the edge detection effect renders incorrectly
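The workaround the repro project toggles at step 3 can be sketched as follows. This is a hypothetical illustration (the class and field names are assumptions, not the attached project's code): zeroing `QualitySettings.shadowDistance` around a manual `Camera.Render()` suppresses the directional shadow pass, so it cannot clear `_CameraDepthTexture`.

```csharp
using UnityEngine;

// Hypothetical workaround sketch: disable the shadow pass for the
// duration of a manual render, then restore the previous setting.
public class DepthSafeRender : MonoBehaviour
{
    public Camera effectCamera; // camera used to render data into a buffer

    void RenderWithoutShadows()
    {
        float savedDistance = QualitySettings.shadowDistance;
        QualitySettings.shadowDistance = 0f; // no shadow pass, so _CameraDepthTexture is not cleared
        effectCamera.Render();
        QualitySettings.shadowDistance = savedDistance; // restore shadows for normal rendering
    }
}
```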
Comments (1)
jbooth
Dec 02, 2015 16:45
So, this just cost me a day of fooling around trying to figure out why my image processing effect (which renders some data with a camera) wasn't working correctly.
I get the desire to share buffers between things; but as long as Unity supports multiple cameras, it shouldn't be assuming it can share the depth buffer between the shadow pass and the regular pass. It's also incredibly frustrating when you come across these types of optimizations, because figuring out what is going on can be very difficult. I'd much prefer the camera system be more explicit about what you want to do, such as being able to call Camera.Render(CameraPass.ForwardBase); and know that shadows, etc. aren't going to be rendered. I often just use cameras to render things into buffers.
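The camera-into-a-buffer pattern described in this comment, which is what triggers the bug, looks roughly like the sketch below (class and field names are hypothetical):

```csharp
using UnityEngine;

// Minimal sketch of using a camera purely to render data into a
// RenderTexture via a manual Camera.Render() call.
public class BufferRender : MonoBehaviour
{
    public Camera dataCamera;
    private RenderTexture target;

    void Start()
    {
        target = new RenderTexture(256, 256, 24);
        dataCamera.targetTexture = target;
        dataCamera.enabled = false; // render manually instead of every frame
    }

    void Update()
    {
        // With a shadow-casting directional light in the scene, this call
        // clears _CameraDepthTexture — the bug reported in this issue.
        dataCamera.Render();
    }
}
```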