I'm trying to position objects in the scene, and I'm finding it unintuitive.
I would like to understand the role of the camera. My first guess was that the camera is controlled by the glasses' position, but then I saw that what the camera 'sees' looks completely different from the final render. And if I click on the camera in the Scene window, the frustum shown there doesn't match what I see in the game.
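For context, here's roughly how I've been sanity-checking the camera at runtime (plain Unity calls, nothing zSpace-specific; I may well be probing the wrong camera, since the zSpace rig seems to manage its own):

```csharp
using UnityEngine;

// Attached to any object in the scene; logs where the main camera actually
// is each frame so I can compare it against the Scene-view frustum.
public class CameraProbe : MonoBehaviour
{
    void Update()
    {
        // Assumes the active camera is tagged "MainCamera" -- not sure the
        // zSpace rig does this.
        Camera cam = Camera.main;
        if (cam == null) return;

        Debug.Log($"cam pos={cam.transform.position} " +
                  $"rot={cam.transform.rotation.eulerAngles} " +
                  $"fov={cam.fieldOfView} near={cam.nearClipPlane} far={cam.farClipPlane}");
    }
}
```

The values it logs don't line up with what I'd expect from the frustum I see in the editor.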
When I test without the glasses, I expected to see exactly what the camera captures, but I don't.
I read that the viewport does no rendering itself and is only used to get the calculations right.
I read all the docs and couldn't find a clear explanation of how to position objects in the world and predict what I'll see when I run the app. There's too much magic going on, and I'd like to really understand what's happening so I can build exactly what the designers want, instead of running the app a thousand times with small tweaks to get there.
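Right now my "positioning" is pure trial and error, along these lines (again a sketch with standard Unity calls; the `distance` value is just a guess I keep tweaking, and I don't even know what units it's effectively in on the device):

```csharp
using UnityEngine;

// Places this object some distance in front of whatever camera is active,
// then I run the app to see where it actually lands on the display.
public class PlaceInFrontOfCamera : MonoBehaviour
{
    public float distance = 0.3f; // meters? display units? unclear to me

    void Start()
    {
        Camera cam = Camera.main;
        if (cam == null) return;

        // Put the object straight ahead of the camera and face it back
        // toward the viewer.
        transform.position = cam.transform.position + cam.transform.forward * distance;
        transform.rotation = Quaternion.LookRotation(-cam.transform.forward);
    }
}
```

What I'm missing is the mental model that would let me predict the result of code like this without deploying it.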
Is there any doc that explains this clearly? (I checked http://developer.zspace.com/docs/coordinate-spaces/, but I still don't understand what the camera is doing.)