Object positioning - Relationship between viewport, camera and glasses

I’m trying to position objects in the scene, and I’m finding it unintuitive.

I would like to understand the role of the camera. My first guess was that the camera is controlled by the glasses’ position, but then I noticed that what the camera ‘sees’ looks completely different from the final render. And if I click on the camera in the Scene window, the frustum shown there is not what I see in the game.

When I test without the glasses, I expected to see what the camera was capturing, but that’s not the case either.

I read that the viewport does no rendering itself and is only there so the calculations come out correct.

I’ve read all the docs and couldn’t find a clear explanation of how to position objects in the world and predict what I will see when I run the app. There’s too much magic going on, and I’d like to really understand it so I can build exactly what the designers want, without having to run the app a thousand times with small tweaks to get there.

Is there any doc that explains this clearly? (I checked http://developer.zspace.com/docs/coordinate-spaces/ but I still don’t understand what the camera is doing)

Hi there,

I can’t explain how the coordinate spaces are translated any better than the document you linked does, but I do understand your frustration with camera positioning.

If you’re using Unity3D and our zCore package, the following is what I’d suggest. If not, I’d still recommend taking a look at the package, since it has some valuable demos and helper functions to reference.

The “CameraNavigationSample” in our Unity zCore package demonstrates how to position the camera so that a specified world coordinate lands dead center in viewport space. It uses a function in ZCore.cs called “SetViewportWorldTransform”.

Positioning the camera by deciding where the viewport center should sit in world space, as this helper function does, is the best approach I’ve come across so far.
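
To make that concrete, here’s a rough sketch of how the helper might be wired up from your own script. The zSpace.Core namespace, the ZCore lookup, and the (position, rotation, scale) signature of SetViewportWorldTransform are my assumptions based on the sample, so treat this as a starting point and check it against the ZCore.cs that ships with your package version.

```csharp
using UnityEngine;
using zSpace.Core; // namespace assumed -- adjust to match your zCore package

// Sketch: center the viewport on a chosen world-space anchor at startup, so
// that object lands dead center on the zSpace display regardless of where
// the Unity camera rig ends up.
public class CenterViewportOnTarget : MonoBehaviour
{
    public Transform target;       // the object the designers want front and center
    public float viewerScale = 1f; // 1 = real-world scale; larger values shrink the scene

    private ZCore _core;

    void Start()
    {
        // ZCore is assumed to be a MonoBehaviour living somewhere in the scene,
        // as in the zCore sample scenes.
        _core = FindObjectOfType<ZCore>();
        if (_core == null || target == null)
        {
            Debug.LogWarning("ZCore instance or target not assigned; viewport not repositioned.");
            return;
        }

        // Signature assumed from CameraNavigationSample: place the viewport center
        // at the target's position and orientation at the given viewer scale.
        _core.SetViewportWorldTransform(target.position, target.rotation, viewerScale);
    }
}
```

The key mental shift is the one the sample makes: instead of moving the camera directly and guessing what ends up in front of the viewer, you decide which world point should sit at the viewport center and let the helper derive the camera transform from that.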