RFC: Replace 2D calib with 3D calib
Our extrinsic calibration has two modes: 2D and 3D.

Technically, the 2D calibration can be expressed as a 3D calibration; it is just a special case. I propose using only the 3D calibration, as that leaves only one place in the code where the 2D-3D correspondence is calculated, in contrast to the current two places (ExtrCalibration and ImageItem).

To do this, we'd need to translate our 2D parameters into 3D parameters so that we can still work with old projects.
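For intuition, a minimal sketch of that special case (hedged; this assumes the 2D mode composes translation (t_x, t_y), rotation theta, and scale s exactly as the snippet further below does, with the camera looking straight down onto the plane): undoing the 2D calibration maps a pixel (u, v) to plane coordinates via

```math
\begin{pmatrix} x \\ y \end{pmatrix}
= \frac{1}{s} \, R(\theta)^{-1} \left( \begin{pmatrix} u \\ v \end{pmatrix} - \begin{pmatrix} t_x \\ t_y \end{pmatrix} \right)
```

which is precisely what a 3D extrinsic whose rotation is a pure rotation about the optical axis does for the plane z = 0. So the 2D parameters can always be packed into such a (degenerate) 3D calibration.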
Of course there are a bunch of questions when we want to do this, but I want to highlight two:
- Do we want to only load old projects, or do we want to support changing the values as well? (Remember: the 3D calib works with points in a plane -> one would probably never use this in a new project)
- Has the `use intrinsic center for calculating real position` option ever *not* been used for a project with intrinsic calibration? (See the note after this list.)
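For context on the second question (my reading, hedged): whether the intrinsic principal point (cx, cy) or the geometric image midpoint (mx, my) is used as the projection center enters the snippet below only through the terms

```math
d_x = \frac{c_x - m_x}{h}, \qquad d_y = \frac{c_y - m_y}{h},
```

which vanish when the two coincide; the snippet simply assumes that they do.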
Here is a small snippet I used to transform one specific 2D experiment. IMPORTANT: This does not give the exact same rotation and translation as PeTrack uses internally, but an affine transformation from camera to world coordinates (see #327 (comment 239902) for how PeTrack stores it). It also uses m instead of cm and ignores borderSize.
```cpp
#include <memory>

#include <Eigen/Geometry>
#include <triclops.h> // Point Grey Triclops SDK

// forward declaration of the overload defined below
Eigen::Affine3f transformHermesToWorld(std::shared_ptr<HermesSource> &videoSource,
                                       Eigen::Affine2f transformPixel);

/// calls transformHermesToWorld with hardcoded 2D-coordinates from 240-240-240
Eigen::Affine3f transformHermesToWorld(std::shared_ptr<HermesSource> &videoSource)
{
    constexpr float t_x      = 678.8; // px
    constexpr float t_y      = 254.6; // px
    constexpr float rotation = 270.3 / 180. * EIGEN_PI;
    constexpr float scale    = 1.254;

    // compose the 2D calibration (translation, then rotation, then scale) ...
    auto pixelTransform = Eigen::Affine2f::Identity();
    pixelTransform.translation() << t_x, t_y;
    pixelTransform.rotate(Eigen::Rotation2Df(rotation));
    pixelTransform.scale(scale);
    // ... and invert it, so it maps from pixel coordinates instead of to them
    pixelTransform = pixelTransform.inverse();

    return transformHermesToWorld(videoSource, pixelTransform);
}
Eigen::Affine3f transformHermesToWorld(std::shared_ptr<HermesSource> &videoSource,
                                       Eigen::Affine2f transformPixel)
{
    const float altitude = CAMERA_HEIGHT; // m

    // image middle pixel
    float mx = 1280. / 2. - 0.5;
    float my = 960. / 2. - 0.5;
    // since not explicitly given: assume principal point in image center
    float cx = mx;
    float cy = my;

    float f = 0;
    triclopsGetFocalLength(videoSource->getTriclopsContext(), &f);
    float h = altitude;

    // affine approximation of the pinhole projection for points at depth h;
    // the z-coordinate is passed through unchanged
    auto camToPixel = Eigen::Affine3f::Identity();
    camToPixel.translation() << mx, my, 0;
    float fh = f / h;
    float dx = (cx - mx) / h; // cx - mx = 0; same for cy - my; may *slightly*
                              // differ for the real principal point
    float dy = (cy - my) / h;
    // clang-format off
    camToPixel.linear() << fh,  0, dx,
                            0, fh, dy,
                            0,  0,  1;
    // clang-format on

    // embed the (inverted) 2D calibration into 3D; it acts in the x-y plane only
    auto pixelToPixel = Eigen::Affine3f::Identity();
    auto t_pixel      = transformPixel.translation();
    pixelToPixel.translation() << t_pixel(0), t_pixel(1), 0;
    pixelToPixel.linear().topLeftCorner(2, 2) = transformPixel.linear();

    // scale to metres (PeTrack itself uses cm), flip the y- and z-axis, and
    // move the origin down to the ground plane (z = 0, z pointing up)
    auto pixelToWorld = Eigen::Affine3f::Identity();
    pixelToWorld.linear().diagonal() << (1. / 100.), (-1. / 100.), -1.;
    pixelToWorld.translation() << 0, 0, altitude;

    auto camToWorld = pixelToWorld * pixelToPixel * camToPixel;
    return camToWorld;
}
```
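For completeness, a small usage sketch (hypothetical; `videoSource` is assumed to be a valid `HermesSource`, and note again that the result is affine, so the extracted rotation comes from Eigen's SVD-based polar decomposition and is *not* what PeTrack stores internally):

```cpp
// map a camera-space point to world coordinates; a point on the optical
// axis at distance CAMERA_HEIGHT should land on the ground plane (z = 0)
Eigen::Affine3f camToWorld = transformHermesToWorld(videoSource);
Eigen::Vector3f worldPoint = camToWorld * Eigen::Vector3f(0.F, 0.F, CAMERA_HEIGHT);

// affine parts: translation, and the closest pure rotation (via SVD);
// only illustrative, not the internally stored rotation/translation
Eigen::Vector3f t = camToWorld.translation();
Eigen::Matrix3f R = camToWorld.rotation();
```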