
Show passthrough on excessive dropped frames (or if anchor lookup fails) #25

Open
shinyquagsire23 opened this issue Feb 8, 2024 · 5 comments
Labels: enhancement, feature

Comments

shinyquagsire23 (Collaborator) commented Feb 8, 2024

If frame drops get bad enough, the head anchor API returns nil once too much time has passed, and the image becomes head-locked. We should consider dropping the immersive opacity (if possible) in this case, to prevent nausea.

If we cannot drop the opacity, we should file Feedback with Apple about it, because the Control Center overlay does it just fine.
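
A minimal sketch of how that fallback check might look, assuming a running `WorldTrackingProvider`; the `onAnchorLookupFailed()` hook is hypothetical, since (per the comments below) there is currently no app-side way to drop the immersion opacity:

```swift
import ARKit
import simd

/// Hypothetical fallback hook; with no public API to lower immersion
/// opacity today, this might just log, or freeze the last good pose.
func onAnchorLookupFailed() {
    print("Head anchor lookup failed; passthrough fallback wanted here")
}

func headTransform(at time: TimeInterval,
                   using worldTracking: WorldTrackingProvider) -> simd_float4x4? {
    guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: time) else {
        // Anchor lookup failed (e.g. after sustained frame drops):
        // react instead of letting the image go head-locked.
        onAnchorLookupFailed()
        return nil
    }
    return anchor.originFromAnchorTransform
}
```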

shinyquagsire23 added the enhancement and feature labels on Feb 8, 2024
zarik5 (Member) commented Feb 8, 2024

What alvr_client_openxr does is record the history of sent timestamps + poses in a HashMap, and use the pose matching the timestamp of the video frame. This way you don't have to query the pose again. This also matters because, between the first time you poll the pose and the time you draw the image, the pose might be interpolated/extrapolated slightly differently.
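
alvr_client_openxr is Rust, but a minimal Swift sketch of the same scheme might look like this; `PoseHistory` and its names are illustrative, not ALVR's actual types:

```swift
import simd

/// Remember the pose each sent frame was rendered for, then look it up
/// again by the timestamp echoed back with the decoded video frame.
final class PoseHistory {
    private var poses: [UInt64: simd_float4x4] = [:]  // timestampNs -> head pose

    func record(timestampNs: UInt64, pose: simd_float4x4) {
        poses[timestampNs] = pose
        // Evict the oldest entry so the map stays bounded if frames are dropped.
        if poses.count > 64, let oldest = poses.keys.min() {
            poses.removeValue(forKey: oldest)
        }
    }

    /// Returns (and consumes) the exact pose the frame was rendered with,
    /// avoiding a second query that could be interpolated/extrapolated
    /// slightly differently.
    func pose(for timestampNs: UInt64) -> simd_float4x4? {
        poses.removeValue(forKey: timestampNs)
    }
}
```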

shinyquagsire23 (Collaborator, Author) commented
I grepped around in CompositorServices, and also tried setting the alpha to 0.5/0.0 in the fragment shader, and it genuinely seems like there's no app-side control over the opacity. So I guess I'll go ahead and submit Feedback for that, because it'd also be useful for, e.g., a flat game streaming app that allows keyboard peeking.

shinyquagsire23 (Collaborator, Author) commented
@zarik5 Yeah, I recorded the timestamps+poses initially, but short of some unsafe Swift, the anchor pose is get-only and has no setter. I might actually try that now, just to see whether it's even possible to set it, or whether the compositor re-fetches the timestamp somehow.

zarik5 (Member) commented Feb 8, 2024

Ah damn, this is the same issue as WebXR, IIRC. This is pretty limiting: no setter means it won't support apps with multi-frame pipeline depth (like VR streaming), unless late-latching is implemented, but that is fundamentally incompatible with VR streaming. The solution is to reimplement reprojection/ATW. This would also be useful for PhoneVR.
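
A rough sketch of the rotation-only delta such a reprojection/ATW pass would compute, using simd quaternions; sign conventions and the positional component are glossed over, and a real implementation would apply this in the layer transform or vertex shader:

```swift
import simd

/// Rotation that carries the orientation a frame was rendered with onto the
/// latest head orientation at display time; the stale frame is then warped
/// by this delta. Positional timewarp is ignored in this sketch.
func timewarpRotation(rendered: simd_quatf, latest: simd_quatf) -> simd_quatf {
    simd_mul(latest.inverse, rendered)
}

/// Same delta as a 4x4 matrix, ready to fold into a quad/layer transform.
func timewarpMatrix(rendered: simd_quatf, latest: simd_quatf) -> simd_float4x4 {
    simd_float4x4(timewarpRotation(rendered: rendered, latest: latest))
}
```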

panthurrr (Contributor) commented
What we do have is access to the spatial environment during full immersion; it may be possible to place a spatially accurate floor onto the ground if we cannot dismiss the immersive view, and that object would not be affected by any lag.

Currently this is a volumetric window, which means it can be moved, so it's not a great ground replacement, but it's at least a spatial anchor that is stable.
[Image: IMG_0024]
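
A hedged sketch of what a floor-locked marker could look like with RealityKit plane anchoring, assuming plane anchors keep updating in the current immersion mode; all names here are illustrative:

```swift
import RealityKit

/// Anchor a simple plane mesh to the detected floor so it stays
/// world-stable even when the streamed view lags.
func makeFloorMarker() -> AnchorEntity {
    let floorAnchor = AnchorEntity(.plane(.horizontal,
                                          classification: .floor,
                                          minimumBounds: [0.5, 0.5]))
    let marker = ModelEntity(mesh: .generatePlane(width: 1.0, depth: 1.0),
                             materials: [SimpleMaterial(color: .gray,
                                                        isMetallic: false)])
    floorAnchor.addChild(marker)
    return floorAnchor
}
```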
