
By Design

Votes

6

Found in

5.6.1f1

Issue ID

924815

Regression

No

CUDA Graphics Interop fails in the Native Plugin when using RenderTexture

Graphics - General


To reproduce:
1. Open the attached project;
2. Open the "CudaTest" scene;
3. Enter Play mode;
- notice that a CUDA error is thrown in the Console and the Texture is null.
4. Exit Play mode;
5. Change the graphics API to Direct3D11 (Edit > Project Settings > Player > Other Settings > Graphics APIs for Windows);
6. In the Hierarchy window, select the Main Camera;
7. In the Inspector, change the Texture from the RenderTexture to the "goya" texture (Main Camera > Cuda Test (Script) > Texture);
8. Enter Play mode;
- notice that no CUDA errors are thrown in the Console.

Expected result: CUDA graphics interop does not fail with a RenderTexture.
Actual result: CUDA graphics interop calls in the native plugin fail for all tested graphics APIs (D3D11, D3D9, OpenGLCore and OpenGLES2) with a RenderTexture.

Note: CUDA graphics interop works only when testing with a normal Texture and Direct3D11.

Reproduced on versions: 5.6.1f1, 5.6.2p1, 2017.1.0f1, 2017.2.0b1.

  1. Resolution Note (2019.3.X):

    On OpenGL, CUDA interop with a render target works, but you need to perform the CUDA resource registration call on the render thread (more specifically, on the same thread the texture was created on; otherwise you get a cudaErrorOperatingSystem error). Use GL.IssuePluginEvent() to issue a call on the render thread.
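
    A minimal native-plugin sketch of that pattern, assuming the texture's GL name was passed in from C# beforehand (e.g. via Texture.GetNativeTexturePtr()); the exports SetTextureID and GetRegisterTextureCallback are illustrative names, not part of the attached project:

```cpp
// Runs cudaGraphicsGLRegisterImage on Unity's render thread. From C#, trigger it
// with GL.IssuePluginEvent(GetRegisterTextureCallback(), 1) so the call executes
// on the thread that owns the GL context and created the texture.
#include <windows.h>            // required before GL/gl.h on Windows
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

static GLuint g_glTexId = 0;
static cudaGraphicsResource* g_cudaResource = nullptr;

// Called from C# with the value of Texture.GetNativeTexturePtr().
extern "C" __declspec(dllexport) void SetTextureID(unsigned int texId)
{
    g_glTexId = texId;
}

static void OnRenderEvent(int /*eventId*/)
{
    // On the render thread this succeeds; on the main thread it fails with
    // cudaErrorOperatingSystem, as described above.
    cudaGraphicsGLRegisterImage(&g_cudaResource, g_glTexId, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsNone);
}

extern "C" __declspec(dllexport) void* GetRegisterTextureCallback()
{
    return reinterpret_cast<void*>(&OnRenderEvent);
}
```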

    On Direct3D11, CUDA interop with a render target is not possible because Unity creates its render targets in a typeless format (of type DXGI_FORMAT_{XYZ}_TYPELESS, e.g., DXGI_FORMAT_R32G32B32A32_TYPELESS) and CUDA only supports interop with strongly typed textures. A workaround would be to share a strongly typed texture with CUDA and copy the render target to that texture.
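
    A sketch of that workaround, assuming the ID3D11Device comes from Unity's IUnityGraphicsD3D11 plugin interface and the render target is in the R32G32B32A32 format family; the helper names are hypothetical:

```cpp
// Create a strongly typed texture CUDA can register, then copy the typeless
// Unity render target into it each frame before mapping it for CUDA access.
#include <d3d11.h>
#include <cuda_runtime.h>
#include <cuda_d3d11_interop.h>

static ID3D11Texture2D*      g_typedCopy    = nullptr;
static cudaGraphicsResource* g_cudaResource = nullptr;

bool CreateTypedCopyTarget(ID3D11Device* device, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R32G32B32A32_FLOAT; // strongly typed
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    if (FAILED(device->CreateTexture2D(&desc, nullptr, &g_typedCopy)))
        return false;

    // Succeeds because the format is not TYPELESS.
    return cudaGraphicsD3D11RegisterResource(&g_cudaResource, g_typedCopy,
               cudaGraphicsRegisterFlagsNone) == cudaSuccess;
}

void CopyAndMap(ID3D11DeviceContext* ctx, ID3D11Texture2D* unityRenderTarget)
{
    // CopyResource is legal here: the _TYPELESS and _FLOAT variants of the
    // same format family are copy-compatible.
    ctx->CopyResource(g_typedCopy, unityRenderTarget);

    cudaGraphicsMapResources(1, &g_cudaResource);
    cudaArray_t array = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&array, g_cudaResource, 0, 0);
    // ... run CUDA kernels against 'array' ...
    cudaGraphicsUnmapResources(1, &g_cudaResource);
}
```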

    CUDA Runtime Documentation: https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__D3D11.html#group__CUDART__D3D11_1g85d07753780643584b8febab0370623b

Comments (4)

  1. Varaughe

    Jun 05, 2019 08:05

    cudaGraphicsD3D11RegisterResource cannot be used on a RenderTexture created by Unity. If you want to read that RenderTexture, first create another D3D11 texture inside your plugin (with the appropriate parameters, so that it is compatible with cudaGraphicsD3D11RegisterResource). On the rendering event coming from Unity, copy Unity's render texture to that newly created texture, map the CUDA resource, get the CUDA array for that texture (not the pointer to its data), and use surface reads. These are the steps for reading. For writing, you can only call cudaGraphicsD3D11RegisterResource on a Texture2D created in Unity (it can be a blank .jpeg imported into Unity), and then you can write to that texture with surface writes.
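
    A sketch of the mapping and surface-read steps this comment describes, assuming the plugin's copy texture was registered with cudaGraphicsRegisterFlagsSurfaceLoadStore and already mapped to a cudaArray_t:

```cpp
#include <cuda_runtime.h>

__global__ void ReadPixels(cudaSurfaceObject_t surf, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float4 texel;
    // Surface reads address x in bytes, hence the sizeof(float4) scaling.
    surf2Dread(&texel, surf, x * (int)sizeof(float4), y);
    // ... consume texel; surf2Dwrite works the same way for the write path ...
}

void LaunchRead(cudaArray_t mappedArray, int width, int height)
{
    // Wrap the mapped array (from cudaGraphicsSubResourceGetMappedArray)
    // in a surface object rather than reading its data pointer directly.
    cudaResourceDesc resDesc = {};
    resDesc.resType         = cudaResourceTypeArray;
    resDesc.res.array.array = mappedArray;

    cudaSurfaceObject_t surf = 0;
    cudaCreateSurfaceObject(&surf, &resDesc);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    ReadPixels<<<grid, block>>>(surf, width, height);

    cudaDestroySurfaceObject(surf);
}
```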

  2. Yashiz

    Oct 10, 2017 11:18

    Same issue here.

    It seems Unity creates render textures with a TYPELESS format, such as DXGI_FORMAT_R32G32B32A32_TYPELESS, which cannot be shared with CUDA according to the doc below:

    http://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__D3D11.html#group__CUDART__D3D11_1g85d07753780643584b8febab0370623b

    Where it says DXGI_FORMAT_R32G32B32A32_{FLOAT,SINT,UINT} is supported.

    Not sure if this is the issue for D3D11.
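
    A minimal check makes that failure mode visible (unityRenderTarget being the RenderTexture's native pointer); for a TYPELESS format, the register call returns the cudaErrorInvalidValue (error 11) reported in the comment below:

```cpp
#include <d3d11.h>
#include <cuda_runtime.h>
#include <cuda_d3d11_interop.h>
#include <cstdio>

void TryRegister(ID3D11Texture2D* unityRenderTarget)
{
    cudaGraphicsResource* res = nullptr;
    cudaError_t err = cudaGraphicsD3D11RegisterResource(
        &res, unityRenderTarget, cudaGraphicsRegisterFlagsNone);

    if (err != cudaSuccess)
        // For Unity's TYPELESS render targets this prints "invalid argument"
        // (cudaErrorInvalidValue, error 11).
        printf("register failed: %s\n", cudaGetErrorString(err));
}
```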

  3. eehyc

    Sep 16, 2017 08:04

    It is pretty annoying.

  4. luyangliu123

    Aug 16, 2017 06:53

    I got the same issue, which is really weird. It works if I render directly on the D3D resources, but when I try to interop using functions such as cudaGraphicsD3D11RegisterResource, it just fails with error 11: cudaErrorInvalidValue.
