OpenCV - How do I get the projection matrix of a camera after stereo rectification?


I'm able to get the projection matrix out of a monocular setup after using calibrateCamera.

This Stack Overflow answer explains how.

Now, following the stereo calibration sample, I'd like to do the same for both cameras after stereo rectification (OpenCV - stereoRectify). This method gives me the Q, R1, R2, P1 and P2 matrices.

void stereoRectify(InputArray cameraMatrix1, InputArray distCoeffs1,
                   InputArray cameraMatrix2, InputArray distCoeffs2,
                   Size imageSize, InputArray R, InputArray T,
                   OutputArray R1, OutputArray R2,
                   OutputArray P1, OutputArray P2, OutputArray Q,
                   int flags=CALIB_ZERO_DISPARITY, double alpha=-1,
                   Size newImageSize=Size(),
                   Rect* validPixROI1=0, Rect* validPixROI2=0)

I assume I have to combine them somehow, but I don't understand how these output matrices relate to the intrinsics and extrinsics of the camera.

Thanks in advance!

Edit: Let's assume the cameras have no distortion. I understand that I can remap the images using initUndistortRectifyMap and remap. But I'm interested in writing some of my own code using the projection matrix, i.e. in the single-camera calibration case I have the camera matrix C and the rotation and translation vectors, and I combine them into the projection matrix C * [R|t]. I'd like the same for the rectified camera position.

What kind of projection matrix do you need?

The stereoRectify function computes the rotation matrices that, for each camera, transform both image planes onto a common image plane. This makes the epipolar lines parallel, so you only have to find point correspondences along raster lines: if you have a 2D point x1 = (x1, y1) on the image plane of camera #1, the corresponding point on camera #2 is located on the raster line with the same y1 component. The search is simplified to one dimension.

If you are interested in computing the joint undistortion and rectification transformation, you should use the output of stereoRectify as input to initUndistortRectifyMap and then remap to apply the projection, i.e.:

stereoRectify(M1, D1, M2, D2, img_size, R, T, R1, R2, P1, P2, Q,
              CALIB_ZERO_DISPARITY, -1, img_size, &roi1, &roi2);

Mat map11, map12, map21, map22;
initUndistortRectifyMap(M1, D1, R1, P1, img_size, CV_16SC2, map11, map12);
initUndistortRectifyMap(M2, D2, R2, P2, img_size, CV_16SC2, map21, map22);

Mat img1r, img2r;
remap(img1, img1r, map11, map12, INTER_LINEAR);
remap(img2, img2r, map21, map22, INTER_LINEAR);

Update #1:

Say you have a point in the world coordinate system: P_W. It can be transformed into the camera coordinate system by multiplying with the extrinsic parameters, i.e. P_C = R*P_W + T, or P_C = [R|T] * P_W in homogeneous coordinates.

After rectification you get two matrices for each camera:

  • a rotation matrix for each camera (R1, R2) that brings both camera image planes onto the same plane, and
  • a projection matrix in the new (rectified) coordinate system for each camera (P1, P2). As you can see, the first three columns of P1 and P2 are the new rectified camera matrices.

The transformation of a point into the rectified camera coordinate system is a simple matrix multiplication: P_R = R1*P_C.
And the transformation onto the rectified image plane works as above: p_r = P1 * [R1*P_C; 1] (P1 is 3×4, so the rotated point is used in homogeneous form; equivalently p_r = C1_new * R1 * P_C for the first camera, where C1_new is the 3×3 matrix formed by the first three columns of P1).