OK, this took a while to figure out. We thought we'd upload a tutorial for anyone wondering why their Maya depth passes don't play nicely with Nuke. You need more maths!

In theory you should just be able to paste the block below into Nuke to get the expression node, connect an image with a Maya (in this case Mental Ray) depth channel, hook the camera attributes up with expressions, and it should spit out the image, now with a safe-for-Nuke depth channel. Yeah! We've put the maths at the bottom, if anyone's interested (or can suggest improvements!). Good luck dudes!

set cut_paste_input [stack 0]
version 8.0 v5
push $cut_paste_input
Expression {
temp_name0 pixelAngleX
temp_expr0 (x-halfPixelSize.w)/pixelSize.w*cameraHorizontalFOV
temp_name1 pixelAngleY
temp_expr1 (y-halfPixelSize.h)/pixelSize.h*cameraVerticalFOV
temp_name2 correctionFactor
temp_expr2 cos(pixelAngleY)*cos(pixelAngleX)
channel0 depth
expr0 1/(depth.Z*correctionFactor)
channel1 none
channel2 none
name MayaDepthToNukeDepth
selected true
xpos -299
ypos -303
addUserKnob {20 Camera_settings l "Camera Settings"}
addUserKnob {26 text1 l "" +STARTLINE T "Use expressions to connect these\nto your camera\n\n"}
addUserKnob {7 cameraHorizontalAperture}
cameraHorizontalAperture {{parent.parent.CameraImport.haperture}}
addUserKnob {7 cameraVerticalAperture}
cameraVerticalAperture {{parent.parent.CameraImport.vaperture}}
addUserKnob {7 cameraFocalLength}
cameraFocalLength {{parent.parent.CameraImport.focal}}
addUserKnob {20 endGroup n -1}
addUserKnob {26 "" +STARTLINE}
addUserKnob {26 Text2 l "" +STARTLINE T "These are calculated automatically\nbased on the settings above. Yeah!"}
addUserKnob {7 cameraHorizontalFOV}
cameraHorizontalFOV {{2*atan(0.5*(cameraHorizontalAperture/cameraFocalLength))}}
addUserKnob {7 cameraVerticalFOV}
cameraVerticalFOV {{2*atan(0.5*(cameraVerticalAperture/cameraFocalLength))}}
addUserKnob {14 pixelSize R 0 100}
pixelSize {{width} {height}}
addUserKnob {14 halfPixelSize R 0 100}
halfPixelSize {{pixelSize.w/2} {pixelSize.h/2}}
}

The Math(s):

Right, so there are a few problems here. The most obvious one is that Nuke's depth channel is stored as 1/depth, whereas Maya's depth is just the depth, so you already need to invert it with an expression node. BUT. Even if you do that, you'll notice a weird spherical distortion: that's because Maya measures depth from the camera POINT, while Nuke measures it from the camera PLANE, so you need to correct for that too. I think there may still be a subtle error here, because there's a very slight distortion in the corners, but it's a million times better than it was. Suggestions welcome though!
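If it helps, here's roughly what the naive fix looks like in Python with numpy. Just a sketch with made-up numbers, not the node above:

import numpy as np

# Pretend Maya depth pass: distance from the camera POINT, in scene units
# (hypothetical values, just to show the naive conversion).
maya_depth = np.array([[10.0, 10.2],
                       [10.2, 10.5]])

# Naive fix: Nuke stores depth as 1/z, so just invert it.
naive_nuke_depth = 1.0 / np.maximum(maya_depth, 1e-6)  # guard against zeros

# This gets the format right, but the spherical bulge is still there,
# because the distances were measured from the camera point, not the plane.
print(naive_nuke_depth)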

First, work out the camera angles from the focal length and aperture in Nuke (remember, if you do this with Maya's camera values, you need to correct for the ridiculous fact that focal length is in mm while aperture is in inches! RIDICULOUS):
cameraHorizontalFOV = 2*atan(0.5*(cameraHorizontalAperture/cameraFocalLength))
cameraVerticalFOV = 2*atan(0.5*(cameraVerticalAperture/cameraFocalLength))
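In Python that works out as something like this (a sketch; the function name and the 35mm-back numbers are just for illustration):

import math

# Nuke keeps haperture/vaperture and focal in the same unit (mm), so the
# formula can be used directly. If you read the values off a Maya camera
# instead, multiply the aperture by 25.4 first (inches -> mm).
def fov_radians(aperture_mm, focal_mm):
    return 2.0 * math.atan(0.5 * aperture_mm / focal_mm)

# Example: a 36 x 24 mm back with a 50 mm lens.
h_fov = fov_radians(36.0, 50.0)
v_fov = fov_radians(24.0, 50.0)
print(math.degrees(h_fov), math.degrees(v_fov))  # roughly 39.6 and 27.0 degrees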

Then work out the angle of whatever pixel we’re looking at:
pixelAngleX = (x-halfPixelSize.w)/pixelSize.w*cameraHorizontalFOV
pixelAngleY = (y-halfPixelSize.h)/pixelSize.h*cameraVerticalFOV
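Same thing in Python (again just a sketch, reusing the FOVs from the step above):

import math

h_fov = 2.0 * math.atan(0.5 * 36.0 / 50.0)  # from the previous step
v_fov = 2.0 * math.atan(0.5 * 24.0 / 50.0)

def pixel_angles(x, y, width, height):
    # Offset from the frame centre, as a fraction of the frame, times the FOV.
    ax = (x - width / 2.0) / width * h_fov
    ay = (y - height / 2.0) / height * v_fov
    return ax, ay

print(pixel_angles(0, 0, 1920, 1080))      # a corner pixel: about half the FOV off-axis
print(pixel_angles(960, 540, 1920, 1080))  # the centre pixel: roughly zero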

Then, finally, work out the correction factor to apply for the difference between the Maya depth and the Nuke depth:

correctionFactor = cos(pixelAngleY)*cos(pixelAngleX)
nukeDepth = 1/(mayaDepth.Z*correctionFactor)
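Put together, the whole conversion for a full-frame depth image looks something like this in Python/numpy. It's a sketch of the same maths as the expression node, with assumed names, not a drop-in tool:

import numpy as np

def maya_depth_to_nuke_depth(maya_depth, h_fov, v_fov):
    # maya_depth: 2D array of distances from the camera POINT.
    # Returns a Nuke-style depth channel: 1 / distance from the camera PLANE.
    height, width = maya_depth.shape
    ys, xs = np.mgrid[0:height, 0:width]
    ax = (xs - width / 2.0) / width * h_fov
    ay = (ys - height / 2.0) / height * v_fov
    correction = np.cos(ax) * np.cos(ay)
    return 1.0 / np.maximum(maya_depth * correction, 1e-6)

# Example: a constant point-distance of 10 units everywhere. After correction
# the plane-distance (and so the 1/z value) varies slightly across the frame,
# which is exactly the spherical bulge we're undoing.
depth = np.full((1080, 1920), 10.0)
nuke_depth = maya_depth_to_nuke_depth(depth, 0.69, 0.47)
print(nuke_depth[540, 960], nuke_depth[0, 0])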

Anyway, that wasn’t quite as straightforward as it could have been, but it seems to work, so…on with the animating! Yeah!

🙂
