Hi everyone, it’s Jakub from Capturing Reality. This video is a sequel to the camera projections in Blender tutorial that we published a few weeks back. If you haven’t watched it, the link is in the description. In the first tutorial I processed a head scan in RealityCapture, and after the processing was finished I exported the mesh together with the cameras in the Alembic format. I also exported the undistorted images using the Keep intrinsics option while preserving the original image resolution. Then we used the cameras with the undistorted images in Blender 2.81 for
camera projection painting. I briefly mentioned that we have a dedicated Maya scene export and I would like to talk about that today. I would also like to show you how to set up a simple projection shader in Maya using the
cameras and undistorted images. I will quickly jump back to RealityCapture,
open the head scan project, and this time export the camera alignment into a Maya scene. This option only exports the cameras, so I will need to export the
mesh separately. Then I will open the scene directly in Maya, check how the images align with the mesh, and set up a projection shader from one camera. You may ask yourself: what is the benefit of exporting to a Maya scene? We already exported the model with the cameras last time. The benefit is that all the cameras in the Maya scene will already have the undistorted images assigned to them, and that is a huge time-saver. Without any further ado, let’s start with the tutorial. This is the project I processed in the previous video. If you want to see the processing, I highly recommend watching it
because I will not be repeating the process this time. The registration can be exported from the Workflow tab or from the Alignment tab; both work the same. After specifying where I want to save the registration, I need to choose the file type, and I will select the Maya 2013 ASCII scene export. In the export dialog I need to check the following settings. Export image planes needs to be set to True; this setting ensures that the undistorted images will be assigned as image planes to the cameras. For the undistortion, the Fit needs to be set to Keep intrinsics to preserve the camera calibration parameters. I want to keep the original resolution of the images, so I will set the resolution to Preserve. Undistort principal point is set to Yes. In the export image settings, I will change the Naming convention to an image sequence starting from number 1. This really depends on your preference; you can leave it on the original file name if you want to.
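As a side note, if you ever export with the original file names and later need the same kind of 1-based numbered sequence, a few lines of plain Python can build the mapping. This is just an illustrative sketch, not part of the RealityCapture workflow; the prefix and file names below are made up for the example.

```python
from pathlib import Path

def sequence_names(filenames, prefix="image", start=1):
    """Map original file names to a 1-based numbered sequence,
    keeping each file's extension and the sorted input order."""
    ordered = sorted(filenames)
    return {
        name: f"{prefix}.{i:04d}{Path(name).suffix}"
        for i, name in enumerate(ordered, start=start)
    }

# Example: three undistorted exports renamed into a sequence.
mapping = sequence_names(["DSC_0102.png", "DSC_0100.png", "DSC_0101.png"])
# mapping["DSC_0100.png"] -> "image.0001.png"
```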
In the export transformation settings I will change the coordinate system to Same as XMP. In the first video, I mentioned that I was using XMPs with the images to preserve the coordinate system of the camera rig. That is why I will choose this option. After everything is set, just click on OK. As I mentioned in the beginning, I need to export the mesh separately because it is not contained
in the Maya scene. To export the mesh I will go to the Reconstruction tab and
click on Export model. This time I can choose OBJ for a change. The most important setting, in this case, is that the coordinate system of the exported mesh is the same as for the registration; otherwise the image planes will not match the mesh in Maya. Now I have everything I need, I am done with RealityCapture, and I’m ready to jump to Maya. Now that we are in Maya, the first step
is to open our scene with the aligned cameras. I will click on File, Open scene and navigate to the folder where I exported the registration from RealityCapture. I don’t want to save the changes to the untitled scene so I will click on
Don’t Save. The first time you load the scene, it can take a little longer to open. Now the scene is open and we can see the cameras. They are too big and overlap each other, so first I will change their locator scale so I can distinguish individual cameras from each other. I will select all the cameras, go to the Layer editor, and change the scale; 0.1 looks good. Now I will add the OBJ mesh: click on File, Import and select the mesh. Now everything is in the scene, but the scene is rotated sideways. To fix this I can group the mesh with the cameras by pressing CTRL + G and rotate the group by -90 degrees around the X-axis. To view the mesh with the texture applied to it, click on Textured and use No lights to see the textured mesh with flat shading, like in RealityCapture. Everything is prepared, and now I will save the scene just in case. Each camera has an image plane with the corresponding undistorted image linked
to it. By default, they are only visible when we look through the selected camera. We can also enable their visibility in all views in the Attribute editor: click on the Attribute editor, select the imagePlaneShape, and check Display in all views. Now we can switch to the camera view to check how our mesh lines up with the image plane. With camera number 7 selected, go to Panels and Look through selected.
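If you prefer scripting, the viewport housekeeping above (locator scale, image plane visibility, grouping, looking through a camera) can also be done with Maya’s Python commands. The snippet below is a rough sketch of those steps, not taken from the video; it must be run inside Maya, and names like scan_grp and camera7 are assumptions you should adapt to your own scene.

```python
import maya.cmds as cmds

# Shrink every camera locator so individual cameras are easy to tell apart.
for shape in cmds.ls(type='camera'):
    cmds.setAttr(shape + '.locatorScale', 0.1)

# Show every image plane in all views, not only through its own camera.
for plane in cmds.ls(type='imagePlane'):
    cmds.setAttr(plane + '.displayOnlyIfCurrent', 0)

# With the scan cameras and the imported mesh selected, group them and
# rotate the group -90 degrees around X to fix the sideways orientation.
grp = cmds.group(cmds.ls(selection=True), name='scan_grp')
cmds.rotate(-90, 0, 0, grp, relative=True)

# Look through camera number 7 in the active panel.
cmds.lookThru('camera7')
```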
It looks like we are looking at the image, but actually it is the textured mesh with the image plane in the background. I will disable the textured view and also switch back to the default lighting
to see the mesh. I will turn on the Film gate and Gate mask. As you can see, the mesh lines up with the image plane perfectly. I can also check multiple cameras by switching between them in the Panels menu. Everything is lining up, and now I will set up a very simple projection shader and project one of the undistorted images from the corresponding camera onto the mesh. To set up the
projection I will open Hypershade and create a new material. I will start by
adding the Lambert shader. I will be projecting image number 7 through camera number 7. For the sake of organization and keeping things tidy, I’ve renamed the Lambert material to Projection_7. In Color I will select File, and by right-clicking on File I will select Create as projection.
I need to switch the projection type to Perspective and underneath it in Camera
Projection Attributes, I have to select the camera that I want to project from.
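For reference, the same projection shader can be built in script. This is a minimal sketch mirroring the Hypershade steps described above, not the video’s own method; it must be run inside Maya, and cameraShape7 and the image path are assumptions from this scene.

```python
import maya.cmds as cmds

# Lambert material plus a shading group so it can be assigned to the mesh.
mat = cmds.shadingNode('lambert', asShader=True, name='Projection_7')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
               name='Projection_7SG')
cmds.connectAttr(mat + '.outColor', sg + '.surfaceShader')

# File texture created as a projection: file -> projection -> material color.
tex = cmds.shadingNode('file', asTexture=True, name='undistorted_7')
proj = cmds.shadingNode('projection', asUtility=True, name='proj_7')
cmds.connectAttr(tex + '.outColor', proj + '.image')
cmds.connectAttr(proj + '.outColor', mat + '.color')

# Perspective projection driven by the scan camera.
cmds.setAttr(proj + '.projType', 8)  # 8 = perspective
cmds.connectAttr('cameraShape7.message', proj + '.linkedCamera')

# Point the file node at the undistorted image and turn filtering off.
cmds.setAttr(tex + '.fileTextureName', 'image.0007.png', type='string')
cmds.setAttr(tex + '.filterType', 0)  # 0 = off

# Assign the material to the currently selected mesh.
cmds.sets(cmds.ls(selection=True), edit=True, forceElement=sg)
```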
In this case it’s cameraShape7. In the file node I will select the image that I want to project. I can turn the filtering off because I
don’t want to apply any kind of filtering on the projected image. The projection setup is done so now I can close Hypershade. To apply the projection on the mesh I will click on it to select it and right-click to apply an existing
material. To better view the projection I will
switch to the Perspective view, turn on texture visibility and use flat
lighting. Now when I zoom and rotate the view around the head, you may notice that a projection from a single camera will not be enough. Fortunately, in this case we have a lot of cameras to project from. You can create a layered texture and plug multiple projections into it, using masks to define what to project and what not to project. But that is a topic for a more advanced tutorial. I demonstrated the projection on this head scan, but you could also do the same with a building or a landscape. That’s it for today. Thank you very much for watching, and see you in the next video. Bye.