WebXR Snap Rotation #2195
base: master
Conversation
Force-pushed from 4cee5d7 to 5afcbcc
Hi @martinken, would you mind weighing in on these changes? I'm running into difficulty introducing VR snap rotation. Pinging @floryst per our earlier discussion on the matter.
I'm a bit confused. When you snap rotate the camera, what does that mean? Normally I think of the camera in VR as always being at 0,0,0 in view (or pose) coordinates and always facing -Z, Y up, and X right. That's the physicalToView matrix's job to me. The camera has a physical location and orientation in physical coordinates as well. To rotate the camera means turning your head. But you can rotate the world, which I think is what you are talking about here.

If we have actor -> world -> physical -> view -> projection, the physical-to-view matrix is completely determined by the HMD position and orientation, and we cannot really change that (well, some VR systems allow you to redefine Front, but we handle that in the world-to-physical matrix). But we can change how the world coordinates map to the physical space. In C++ we allow the controllers to adjust the PhysicalViewDirection; this feels like rotating the world about the physical up vector, and I think that is what you are talking about here. We rotate the worldToPhysical matrix, and now the HMD is looking in a different direction without the user turning their head.

There is a very brief README on the C++ side that talks about some of this, as we just revamped the code and naming recently. It can be found here: https://gitlab.kitware.com/vtk/vtk/-/blob/master/Rendering/VR/README.md The JS code is older and pre-revamp, so it is not as clear how it all fits together.
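The world-rotation idea above (rotating worldToPhysical about the physical up vector while leaving the HMD pose untouched) can be sketched in plain JavaScript. Column-major flat 4x4 arrays and +Y as the physical up axis are assumptions here, and the helper names are illustrative, not vtk.js API:

```javascript
// Build a rotation of `deg` degrees about the physical up (+Y) axis,
// as a column-major flat 4x4 matrix.
function rotationAboutUp(deg) {
  const r = (deg * Math.PI) / 180;
  const c = Math.cos(r);
  const s = Math.sin(r);
  // columns: (c,0,-s), (0,1,0), (s,0,c), translation (0,0,0)
  return [c, 0, -s, 0, 0, 1, 0, 0, s, 0, c, 0, 0, 0, 0, 1];
}

// Multiply two column-major 4x4 matrices: out = a * b.
function mat4Multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      for (let k = 0; k < 4; k++) {
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
      }
    }
  }
  return out;
}

// "Snap" by pre-applying the rotation to worldToPhysical; the HMD pose
// (physicalToView) is left alone, so the user's head tracking is unaffected.
function snapRotateWorld(worldToPhysical, deg) {
  return mat4Multiply(rotationAboutUp(deg), worldToPhysical);
}
```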
Thanks for the explanation, I agree that rotating the world around the user is a better way to put it. For testing I've essentially been trying to compose a simple worldToPhysical matrix as the identity matrix plus some fixed rotation.

The C++ README is a good introduction. Do you have suggestions as to what other pieces of the revamped OpenXR code would be useful to duplicate in vtk.js with WebXR? I think there is merit to following a similar structure, with documentation currently centered on the C++ side, though obviously there will be places where WebXR and OpenXR diverge.
The JS code isn't too bad in terms of the actual functions, but their naming and documentation are rough and need improvement. For example, computeViewParametersFromPhysicalMatrix(mat) really does something like setCameraViewParametersFromAWorldToEyeMatrix(worldToEyeMatrix).

Originally I tended to name matrices based on one coordinate system, like the "view" or "projection" matrix. The C++ VR code now tends to name matrices based on which two coordinate systems they convert between. For example, instead of the "view" matrix we have the WorldToLeftEyeMatrix, which is much more explicit about what the matrix represents. The JS code could definitely use some of that to help all of us keep track of what is going on when we start working in that code. I tend to get lost with matrix operations easily, so clear naming and row/column order conventions are a huge help.

In C++ all the matrices are VTK style until you get to the key matrices. Even there I wonder if they would be better off staying in VTK style and then transposing at the last second when sending to OpenGL. In JS we have the same mix of OpenGL and VTK style, which can be confusing, so anything to help keep that clear is also a win IMO.
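On the style-mixing point above: VTK-style (row-major) and OpenGL-style (column-major) flat 4x4 matrices differ only by a transpose, so one well-named helper applied at the last moment before upload lets the rest of the code stay in a single convention. A minimal sketch in plain JavaScript (names are illustrative):

```javascript
// Transpose a flat 4x4 matrix, converting between row-major (VTK style)
// and column-major (OpenGL style) layouts.
function transposeMat4(m) {
  const out = new Array(16);
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[col * 4 + row] = m[row * 4 + col];
    }
  }
  return out;
}

// Keep everything VTK style internally, and convert only when handing
// the matrix to the GL layer (hypothetical call site):
//   gl.uniformMatrix4fv(loc, false, transposeMat4(worldToLeftEyeMatrix));
```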
Found additional discussion of handling viewer rotation in the WebXR samples: https://github.com/immersive-web/webxr-samples/blob/main/teleportation.html#L313
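For reference, the linked sample snaps the viewer by replacing its XRReferenceSpace with an offset reference space built from an XRRigidTransform. A sketch of computing that transform's init values for a rotation about the up (+Y) axis; the getOffsetReferenceSpace usage in the comment assumes a live WebXR session and an existing xrRefSpace, so it is browser-only:

```javascript
// Compute XRRigidTransform init values (position + unit quaternion) for a
// snap rotation of `deg` degrees about the up (+Y) axis.
function snapRotationInit(deg) {
  const half = (deg * Math.PI) / 360; // half-angle in radians
  return {
    position: { x: 0, y: 0, z: 0 },
    orientation: { x: 0, y: Math.sin(half), z: 0, w: Math.cos(half) },
  };
}

// Usage inside a WebXR session (browser only; `xrRefSpace` assumed):
//   const { position, orientation } = snapRotationInit(45);
//   xrRefSpace = xrRefSpace.getOffsetReferenceSpace(
//     new XRRigidTransform(position, orientation));
```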
Note that I do not expect to be able to revisit this PR. I'll leave it open in case others would like to take up this development; otherwise, project maintainers are welcome to close it.
PR and Code Checklist

- Run `npm run reformat` to have correctly formatted code

Context
Draft PR for implementing VR snap rotation. Depends on #2184.
"Snap rotation" is a useful operation in VR where the user presses left or right on their gamepad thumbstick to rotate their view in-place by a fixed angle (usually ~30-45 degrees). For user comfort it is best that rotation happens immediately ("snaps") without interpolation.
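A sketch of the input side of this behavior, assuming a thumbstick X axis in [-1, 1] as exposed by gamepad.axes; the thresholds and the 45-degree step are illustrative. The stick must return to center before the next snap fires, so holding it deflected doesn't spin continuously:

```javascript
const SNAP_DEGREES = 45;      // fixed snap angle (illustrative)
const PRESS_THRESHOLD = 0.8;  // deflection needed to trigger a snap
const RELEASE_THRESHOLD = 0.2; // re-arm once the stick returns near center

// Returns an update function to call each frame with the thumbstick X axis.
// It returns the signed degrees to rotate this frame (0 for no rotation).
function createSnapController() {
  let armed = true;
  return function update(axisX) {
    if (armed && Math.abs(axisX) > PRESS_THRESHOLD) {
      armed = false; // latch until the stick recenters
      return Math.sign(axisX) * SNAP_DEGREES;
    }
    if (Math.abs(axisX) < RELEASE_THRESHOLD) {
      armed = true;
    }
    return 0;
  };
}
```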
Draft code is currently unstable. My understanding is that accomplishing a camera rotation at a fixed position should consist of translating the camera to the origin, performing the rotation, and then translating back to the designated position. In practice this gives the initial appearance of rotating the camera; however, when additional translation is introduced post-rotation, the camera does not move in the appropriate world direction.
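The translate-rotate-translate composition described above, applied to a single world-space point with the camera position as the pivot. This is a plain math sketch assuming rotation about the +Y (up) axis, not vtk.js API:

```javascript
// Rotate point [x, y, z] by `deg` degrees about the +Y axis through the
// pivot [px, py, pz]: translate to the origin, rotate, translate back.
function rotateAboutPivot([x, y, z], deg, [px, py, pz]) {
  const r = (deg * Math.PI) / 180;
  const c = Math.cos(r);
  const s = Math.sin(r);
  // translate so the pivot sits at the origin
  const dx = x - px;
  const dz = z - pz;
  // rotate about Y
  const rx = c * dx + s * dz;
  const rz = -s * dx + c * dz;
  // translate back to the pivot
  return [rx + px, y, rz + pz];
}
```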
For simplicity of experimentation, the "world-to-physical" transformation step has been removed, such that the view matrix we get from the VR headset is treated as the "world-to-view" matrix instead of the "physical-to-view" matrix (equivalently, the world-to-physical matrix is always the identity). The `Geometry/VR` example has been updated with a smaller and closer cone for testing. These changes will be reverted in the final PR once appropriate rotation steps have been introduced.

The following steps recreate the existing issue:
What I've tried so far:
Changes
Results
Testing
This contribution is funded by Example.