
Track depth textures for VR #205

Open · fholger wants to merge 10 commits into main
Conversation

@fholger (Contributor) commented Mar 13, 2021

This took a while, but I now have depth support working on a range of Unity and Unreal VR games, and Skyrim.

Disclaimer: many effects using depth are probably too expensive for VR, anyway, and are also prone to shimmering artefacts due to slight mismatches in calculation between the two eyes. Therefore, this feature is probably not as useful for VR as for flat games. So not supporting depth for VR games in favour of a simpler VR integration is definitely an option :)

Anyway, here is a rough rundown of what I had to do to get depth working:

Getting color and depth texture dimensions to match

The majority of games I encountered seem to use a single big texture for both eyes during submit, and consequently their depth texture is laid out the same way. So far, ReShade has copied the submitted region for each eye, processed the regions separately, and then copied them back. To get a matching depth texture during post-processing, this would also require a regional copy of the depth texture. Unfortunately, D3D11's CopySubresourceRegion does not support partial copies of depth/stencil textures, so you'd have to use a (compute) shader for the copy.
I did originally plan to add a copy_resource_region function to the command_list interface, but I found no clean way to implement a compute shader call within d3d11::device_context_impl.
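
For reference, the limitation looks like this in practice (a minimal D3D11 sketch; the function and variable names are illustrative, not code from this PR):

```cpp
#include <d3d11.h>

// D3D11 only permits full-subresource copies for depth-stencil resources,
// so a per-eye sub-region copy via CopySubresourceRegion fails validation.
void copy_depth(ID3D11DeviceContext *ctx, ID3D11Texture2D *dst,
                ID3D11Texture2D *src, UINT width, UINT height)
{
    // Invalid for depth-stencil formats: pSrcBox selects only the left-eye half.
    D3D11_BOX left_eye = { 0, 0, 0, width / 2, height, 1 };
    ctx->CopySubresourceRegion(dst, 0, 0, 0, 0, src, 0, &left_eye);

    // Valid: copy the entire subresource (pSrcBox == nullptr).
    ctx->CopySubresourceRegion(dst, 0, 0, 0, 0, src, 0, nullptr);
}
```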

Instead, I now always give the full color texture to the runtime, so that the original depth texture will match. If the game sends the same texture for both eyes with different regions, I only call the runtime on the first submit, so that the full texture (which contains both eyes) isn't processed twice per frame. It's a bit less elegant, but it also appears to be ever so slightly faster for games that use the single big texture approach.

(This also invalidates my statistics hack in the other PR, but that's probably a good thing :D )

Communicating with the depth plugin

To properly track the depth textures for VR, the depth addon needs a few extra pieces of information from the runtime. For starters, it needs to recognize that it is dealing with a VR runtime at all, and it also needs the current eye and the submitted region to differentiate between the possible depth setups in a VR renderer. I added a small struct that is stored on the runtime via a set_data call and that the depth addon can retrieve.
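
For illustration, the struct could look roughly like this; the field names and the set_data/get_data usage shown in the comments are assumptions based on the description above, not the PR's actual code:

```cpp
// Hypothetical shape of the VR info the runtime publishes for the depth addon.
struct vr_eye_info
{
    bool  is_vr = false;  // lets the addon recognize a VR runtime at all
    int   eye = 0;        // 0 = left, 1 = right
    float region[4] = {}; // submitted texture bounds (uMin, vMin, uMax, vMax)
};

// Publisher side (in the VR runtime, illustrative):
//   vr_eye_info info = { true, eye_index, { b.uMin, b.vMin, b.uMax, b.vMax } };
//   runtime->set_data(vr_eye_info_guid, sizeof(info), &info);
//
// Consumer side (in the depth addon):
//   vr_eye_info info;
//   if (runtime->get_data(vr_eye_info_guid, sizeof(info), &info) && info.is_vr)
//       /* apply VR-specific depth tracking */;
```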

Tracking the depth textures

Given that there can be up to two separate depth textures, the state in the state_tracking_context (selected texture, view, potential backup) needed to be replicated for the VR eyes, so I extracted it into its own struct. I also added a timestamp of the last draw call to the counters so that we can decide which depth texture belongs to which eye (if separate textures are used). Most games probably render the left eye first, so that's the default assumption, but I also added a config option to swap the eyes if necessary.
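
Roughly, the extraction could look like this (a sketch based on the description above; member names are approximate, not the PR's actual code):

```cpp
#include <cstdint>
#include <unordered_map>
#include <d3d11.h>

// Per-eye selection state, pulled out of state_tracking_context so that it
// can exist once per VR eye.
struct depth_stencil_selection
{
    ID3D11Texture2D          *texture = nullptr; // selected depth texture
    ID3D11ShaderResourceView *view = nullptr;    // view handed to the runtime
    ID3D11Texture2D          *backup = nullptr;  // optional copy made on clear
};

struct state_tracking_context
{
    depth_stencil_selection eye[2]; // index 0 = left, 1 = right
    bool swap_eyes = false;         // config option if the default guess is wrong
    // Last-draw-call timestamp per candidate texture, used to decide which
    // depth texture belongs to which eye when separate textures are used.
    std::unordered_map<ID3D11Texture2D *, uint64_t> last_draw_time;
};
```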

Dealing with the options for depth setup in VR

I've encountered a couple of different ways that depth textures may be used in VR games, and this is how I deal with each of them:

  • single big texture for color and depth containing both eyes: this one is straightforward, as it works pretty much the same as for flat games. Just find the best match for the color texture resolution and provide that. The majority of Unity and Unreal games, as well as Skyrim/FO4, use this approach.
  • single small texture reused for both eyes: in this case, you have to make an (extra) copy of the depth texture on clear for the first eye. When the addon finds only a single matching depth texture, it will automatically set this one up to be copied on clear. If needed, there's also an extra config option to force the clear index for this first eye. I've encountered this setup in 'Elven Assassin', and with this approach depth is working.
  • separate small textures for each eye: this is where the timestamp of the last draw call comes into play to assign the textures to each eye (see the sketch after this list). Aside from that, it's straightforward.
  • separate small textures for color, single big depth texture: 'Talos Principle' does this, and I assume other Croteam games (Serious Sam) probably do as well. They presumably also have a single big color buffer somewhere, but they submit individual color textures for each eye. Therefore, the depth texture will not match the color texture size, and without implementing a region copy for depth textures, this case is not solvable and hence not supported in this PR...
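
For the separate-textures case, the assignment could use those timestamps like this (assumed logic following the description, reusing the struct names from the earlier sketch):

```cpp
#include <utility> // std::swap

// Assign two candidate depth textures to the eyes: the texture whose last
// draw call happened earlier in the frame is assumed to be the left eye,
// unless the swap_eyes config option says otherwise.
void assign_eye_textures(state_tracking_context &ctx,
                         ID3D11Texture2D *a, ID3D11Texture2D *b)
{
    if (ctx.last_draw_time[a] > ctx.last_draw_time[b])
        std::swap(a, b); // ensure 'a' finished rendering first
    if (ctx.swap_eyes)
        std::swap(a, b); // override for games that render the right eye first
    ctx.eye[0].texture = a; // left
    ctx.eye[1].texture = b; // right
}
```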

@crosire (Owner) commented Mar 16, 2021

This is a great start! I'm not fully happy with the way presentation is handled yet, though. I think it would be better to integrate stereo support more deeply and keep the separate present events for the left/right eye submits (and instead pass a value that indicates whether this is a mono present, or a left or right stereo present). That way, in the future we can e.g. also add support for the GUI in VR by rendering it in stereo (since the runtime would be aware of whether it should render the left or right part in the current present).
We can keep using the full color/depth texture to avoid the copies, but then some trickery is needed: either render effects on each submit, with a viewport that only covers half of the texture (plus passing some info to the effects so that they only sample their half too), or, since that would require effects to support this, render them only once for the entire texture (like you do right now). That logic should ideally be part of the effect runtime, though.
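
A rough sketch of that idea; the enum and the half-texture viewport helper are hypothetical, not actual ReShade API:

```cpp
#include <d3d11.h>

// A value passed along with each present event to indicate the submit kind.
enum class present_kind { mono, stereo_left, stereo_right };

// Restrict effect rendering to one half of a side-by-side texture
// (D3D11 shown for illustration).
void set_eye_viewport(ID3D11DeviceContext *ctx, present_kind kind,
                      UINT tex_width, UINT tex_height)
{
    D3D11_VIEWPORT vp = {};
    vp.Width = tex_width / 2.0f;
    vp.Height = static_cast<FLOAT>(tex_height);
    vp.TopLeftX = (kind == present_kind::stereo_right) ? tex_width / 2.0f : 0.0f;
    vp.MaxDepth = 1.0f;
    ctx->RSSetViewports(1, &vp);
    // Effects would additionally need an offset/scale uniform so that they
    // only sample their own half of the texture.
}
```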
I'll think about this a bit and get back.

@fholger (Contributor, Author) commented Mar 16, 2021

Sounds good. I did actually briefly consider using the viewport/scissor approach, but I didn't want to go that deep without your approval, and I wasn't sure whether my (lack of) knowledge of Vulkan and DX12 would even allow me to implement it properly :D Still, I think that approach could have merit, and it also has the advantage that shaders would not have to be specialized to deal with single- vs. two-eye views - if that makes a difference for them.

On that topic, there is one potential issue that might be worth considering. There are some effects out there that accumulate information over multiple frames (e.g. TAA from AstrayFX) by storing copies of previous frames in separate textures. These effects are obviously not aware of the possibility that on_present might be called twice per frame and are therefore incompatible with this approach. The easiest fix would be to use separate runtimes for each eye, but perhaps you have a better idea. If the runtimes do get a deeper support for VR rendering, they might also be able to manage resources per eye if necessary.

@crosire (Owner) commented Mar 21, 2021

Went with a single present call per VR frame now (f986346), as you had envisioned, since it simplifies backwards compatibility with effects and avoids problems with effects that accumulate information (as you pointed out). Both OculusVR and OpenXR have a single submit call that submits both eyes in one go, so this is more future-proof too.
To emulate this in OpenVR, I'm now always creating a side-by-side stereo texture in the effect runtime and copying each eye submitted by the application into that texture. But instead of passing what the application submitted on to OpenVR, I'm passing this side-by-side stereo texture (by skipping the application's left-eye submit and submitting both left and right eye in one go from that texture on the application's right-eye submit). This assumes that the application submits the left eye first and then the right eye, but that's the case in all applications I have encountered so far.
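
In outline, the interception described above works like this (a sketch using real OpenVR types, but the helper functions are hypothetical placeholders for what the actual hooks do):

```cpp
#include <openvr.h>

// Hypothetical helpers standing in for the actual implementation:
void copy_eye_to_stereo_texture(vr::EVREye eye, const vr::Texture_t *texture,
                                const vr::VRTextureBounds_t *bounds);
void run_effects_on_stereo_texture();
void submit_eye_from_stereo_texture(vr::EVREye eye);

// Intercepted IVRCompositor::Submit: copy each submitted eye into one
// side-by-side texture, run effects once, then submit both eyes from it.
vr::EVRCompositorError on_submit(vr::EVREye eye, const vr::Texture_t *texture,
                                 const vr::VRTextureBounds_t *bounds)
{
    copy_eye_to_stereo_texture(eye, texture, bounds);

    if (eye == vr::Eye_Left)
        return vr::VRCompositorError_None; // skip: wait for the right-eye submit

    run_effects_on_stereo_texture();               // post-process both eyes in one go
    submit_eye_from_stereo_texture(vr::Eye_Left);  // left half of the stereo texture
    submit_eye_from_stereo_texture(vr::Eye_Right); // right half
    return vr::VRCompositorError_None;
}
```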

This has several advantages:

  • Eliminates the second copy from the runtime texture back into the application texture
  • Both left and right eye can be post-processed in one go, without needing to set viewports or similar
  • Runtime dimensions will match those of the depth texture if a single large depth texture is used
  • Taking a screenshot in the VR effect runtime will work and capture both eyes
  • Solves the problem with missing image usage flags in Vulkan (OpenVR only requires submitted images to have the VK_IMAGE_USAGE_TRANSFER_SRC_BIT flag, so ReShade can't copy back into them; since the application's texture is now only used as a copy source, this is no longer a problem)

One disadvantage is that post-processing may bleed from one eye to the other at the seam between them. But that shouldn't be that big of a problem.

I'll rebase the pull request later.

@fholger (Contributor, Author) commented Apr 10, 2021

Seems like we've both been busy for a while :)

I actually just looked into rebasing this PR, but the more I think about it, the more I feel that the work here is now largely unnecessary. The majority of it was there to support different potential ways in which depth textures might be used, but with the changed submission process, we are "limited" to the single big texture case anyway :) Given that this appears to cover the vast majority of existing games, that's not a huge loss.

I think all that's really needed now is a simple check in the generic_depth addon's on_present call to see whether there is a VR runtime present, and if so, then only proceed if it's been called for the VR runtime. This is required so that the default runtime does not interfere with the VR runtime's depth texture selection. The rest should work out of the box, I think. I don't know how you'd like to do this check (or if you have a better idea); in my other PR I added a simple function to access the active VR runtime. That'd be one way to do it.
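
Something along these lines; the runtime type and the accessor are approximations of what the other PR exposes, not verbatim code:

```cpp
class effect_runtime; // ReShade's runtime type (approximate)

// Accessor added in the other PR (declaration assumed for this sketch):
effect_runtime *get_active_vr_runtime();

// Proposed guard in the generic_depth present callback: if a VR runtime
// exists, only let that runtime drive the depth texture selection.
void on_present(effect_runtime *runtime)
{
    const effect_runtime *vr_runtime = get_active_vr_runtime();
    if (vr_runtime != nullptr && runtime != vr_runtime)
        return; // the default runtime must not interfere with VR depth selection

    // ... existing generic_depth present logic ...
}
```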

@crosire (Owner) commented Apr 12, 2021

Yeah, sorry, didn't forget about this, but didn't get to do much ReShade work in the last couple of weeks.

The changed submission still supports games that submit individual eye textures (they are copied to a single big side-by-side texture in runtime_impl::on_layer_submit), etc. We could of course decide not to care about those for the depth add-on. But we would still need to incorporate these changes if we do, I think.

The problem with picking a runtime in the depth add-on is a more generic one that needs to be solved anyway, I think. It currently falls apart whenever there is more than one runtime. E.g. in a racing-simulator set-up with 3 monitors, where each monitor gets a separate runtime, the depth add-on will only pick a single depth buffer, rather than three different ones (since presumably racing games render those in separate steps). In VR we have the same problem (except with just two potential depth buffers, plus the additional complication that there is a non-VR runtime instance that is entirely unimportant for rendering). Simply using only the VR runtime if there is one would fix it for this case, but wouldn't help in the general case, so I will need to think about whether there is a more generic solution that could tackle them all.

@fholger (Contributor, Author) commented Apr 12, 2021

Yes, but for depth support to work, you'll need the depth texture to match the single side-by-side texture, because otherwise you'd have to similarly copy the individual depth textures, and that's unfortunately not so straightforward to do (at least not with D3D11).

I suppose you could use the extracted depth_stencil_selection from this PR and store it in a map with the effect_runtime as key. That way, each runtime could select its own texture, and the depth plugin could track all of them. VR is a bit of a special case, though, in that we'd actually want the other runtimes to do as little as possible, or ideally not even be present, so as not to waste valuable processing time. For now, though, the primary runtime is still useful for the UI and for configuring the effect chain.
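
For illustration, the map idea could look like this (a sketch reusing the depth_stencil_selection struct from the earlier sketch; type names are approximate):

```cpp
#include <unordered_map>

class effect_runtime; // ReShade's runtime type (approximate)

// Reuses the depth_stencil_selection struct sketched earlier: one selection
// per effect runtime, so a triple-monitor or VR setup can track several
// depth buffers at once.
static std::unordered_map<effect_runtime *, depth_stencil_selection> s_selection;

void on_present(effect_runtime *runtime)
{
    depth_stencil_selection &sel = s_selection[runtime]; // created on first use
    // ... pick the best depth texture for this runtime into 'sel' ...
}
```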

@StarGate01 commented
Great work! Are there any updates on the progress of this PR? Have the changes since been merged into main?

@outNEXT commented May 2, 2024

What's the status of this PR?
