OpenVINS Code Walkthrough (2): Feature-Related Classes - by anson2004110 on Zhihu https://zhuanlan.zhihu.com/p/95975654
Contents
1. class Feature
1.1 Member variables
1.2 Removing measurements that no longer correspond to valid times
2. class FeatureDatabase
2.1 update_feature: update a feature's coordinates (create the feature if it is new, otherwise append to the end of its containers)
2.2 features_not_containing_newer: return features lost by timestamp, i.e. with no measurement at or after timestamp
2.3 features_containing_older: return features with measurements before timestamp, i.e. < timestamp (unused in the codebase)
2.4 features_containing: return features tracked at timestamp, i.e. = timestamp (used to marginalize the features observed at the timestamp of the oldest MSCKF state in the sliding window)
2.5 get_feature: return the feature with the queried ID; if remove is true, also erase the entry from features_idlookup
2.6 cleanup: runs only after the MSCKF update; deletes all features whose to_delete flag is true from features_idlookup
3. class FeatureInitializer
3.1 Member variables
3.2 Triangulating a feature
3.3 Gauss-Newton refinement of the coarse triangulated feature position
3.4 Reprojection error computation
1. class Feature
* This feature class allows for holding of all tracking information for a given feature.
* Each feature has a unique ID assigned to it, and should have a set of feature tracks alongside it.
* See the FeatureDatabase class for details on how we load information into this, and how we delete features.
1.1 Member variables
/// Unique ID of this feature
size_t featid;
/// If this feature should be deleted
bool to_delete;
/// UV coordinates that this feature has been seen from (mapped by camera ID)
std::unordered_map<size_t, std::vector<Eigen::VectorXf>> uvs;
/// UV normalized coordinates that this feature has been seen from (mapped by camera ID)
std::unordered_map<size_t, std::vector<Eigen::VectorXf>> uvs_norm;
/// Timestamps of each UV measurement (mapped by camera ID)
std::unordered_map<size_t, std::vector<double>> timestamps;
/// Timestamp of anchor clone
double anchor_clone_timestamp;
/// Triangulated position of this feature, in the anchor frame
Eigen::Vector3d p_FinA;
/// Triangulated position of this feature, in the global frame
Eigen::Vector3d p_FinG;
1.2 Removing measurements that no longer correspond to valid times
* @brief Remove measurements that do not occur at passed timestamps
* Given a series of valid timestamps, this will remove all measurements that have not occurred at these times.
* This would normally be used to ensure that the measurements that we have occur at our clone times.
* @param valid_times Vector of timestamps that our measurements must occur at
void Feature::clean_old_measurements(std::vector<double> valid_times)
// Loop through each camera; it1/it2/it3 walk timestamps/uvs/uvs_norm in lock-step
for (auto const &pair : timestamps) {
    auto it1 = timestamps[pair.first].begin();
    auto it2 = uvs[pair.first].begin();
    auto it3 = uvs_norm[pair.first].begin();
    // Loop through measurement times, remove ones that are not in our valid_times
    while (it1 != timestamps[pair.first].end()) {
        if (std::find(valid_times.begin(),valid_times.end(),*it1) == valid_times.end()) {
            it1 = timestamps[pair.first].erase(it1);
            it2 = uvs[pair.first].erase(it2);
            it3 = uvs_norm[pair.first].erase(it3);
        } else {
            ++it1;
            ++it2;
            ++it3;
        }
    }
}
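A minimal usage sketch (the values below are placeholders, not from OpenVINS): keep only the measurements taken at the current clone times so the feature stays aligned with the sliding window.
// Hypothetical caller: times of the clones currently in the sliding window
std::vector<double> clone_times = {1.00, 1.05, 1.10};
feature->clean_old_measurements(clone_times); // uvs, uvs_norm and timestamps stay aligned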
2. class FeatureDatabase
* Each visual tracker has this database in it and it contains all features that we are tracking.
* The trackers will insert information into this database when they get new measurements from doing tracking.
* A user would then query this database for features that can be used for update and remove them after they have been processed.
The most important member is the feature container features_idlookup, a map from feature ID to the corresponding Feature object.
/// Our lookup array that allows us to query based on ID
std::unordered_map<size_t, Feature *> features_idlookup;
2.1 update_feature: update a feature's coordinates (create the feature if it is new, otherwise append to the end of its containers)
* @brief Update a feature object
* @param id ID of the feature we will update
* @param timestamp time that this measurement occurred at
* @param cam_id which camera this measurement was from
* @param u raw u coordinate
* @param v raw v coordinate
* @param u_n undistorted/normalized u coordinate
* @param v_n undistorted/normalized v coordinate
* This will update a given feature based on the passed ID it has.
* It will create a new feature, if it is an ID that we have not seen before.
*/
void update_feature(size_t id, double timestamp, size_t cam_id, float u, float v, float u_n, float v_n)
step 1: if the ID already exists, append the image coordinates (raw and undistorted/normalized) and the timestamp to that feature
// Find this feature using the ID lookup
std::unique_lock<std::mutex> lck(mtx);
if (features_idlookup.find(id) != features_idlookup.end()) {
// Get our feature
Feature *feat = features_idlookup[id];
// Append this new information to it!
feat->uvs[cam_id].emplace_back(Eigen::Vector2f(u, v));
feat->uvs_norm[cam_id].emplace_back(Eigen::Vector2f(u_n, v_n));
feat->timestamps[cam_id].emplace_back(timestamp);
return;
}
step 2: if the ID is new, create the feature and insert it into the lookup container features_idlookup
// Else we have not found the feature, so lets make it be a new one!
Feature *feat = new Feature();
feat->featid = id;
feat->uvs[cam_id].emplace_back(Eigen::Vector2f(u, v));
feat->uvs_norm[cam_id].emplace_back(Eigen::Vector2f(u_n, v_n));
feat->timestamps[cam_id].emplace_back(timestamp);
// Append this new feature into our database
features_idlookup.insert({id, feat});
}
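A hedged usage sketch (variable names are placeholders): in OpenVINS the visual trackers call this once per tracked keypoint after each tracking step.
// Push one observation of feature `feat_id` seen by camera `cam_id`:
// raw pixel coordinates plus their undistorted/normalized counterparts
database.update_feature(feat_id, timestamp, cam_id, u, v, u_n, v_n);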
2.2 features_not_containing_newer: return features lost by timestamp, i.e. with no measurement at or after timestamp
* @brief Get features that do not have a newer measurement than the specified time.
* This function will return all features that do not have a measurement at a time greater than the specified time.
* For example this could be used to get features that have not been successfully tracked into the newest frame.
* All features returned will not have any measurements occurring at a time greater than the specified one.
std::vector<Feature *> features_not_containing_newer(double timestamp, bool remove=false)
step 1: check whether the feature has a measurement newer than the given timestamp
// Now lets loop through all features, and just make sure they are not old
std::unique_lock<std::mutex> lck(mtx);
for (auto it = features_idlookup.begin(); it != features_idlookup.end();) {
// Loop through each camera
bool has_newer_measurement = false;
for (auto const &pair : (*it).second->timestamps) {
    // If we have a measurement greater than or equal to the specified time, this feature is still being tracked
    if (!pair.second.empty() && pair.second.at(pair.second.size() - 1) >= timestamp) {
        has_newer_measurement = true; // still tracked, so mark it
        break;
    }
}
step 2: if the feature has no newer measurement, add it to the returned set and, if remove is set, erase it from features_idlookup
// If it is not being actively tracked, then it is old
if (!has_newer_measurement) { // note the negation (!) here
    feats_old.push_back((*it).second);
    if(remove) features_idlookup.erase(it++);
    else it++;
} else {
    it++;
}
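A hedged sketch of how this is typically consumed (names assumed, not verbatim OpenVINS): features that failed to track into the newest frame become candidates for an MSCKF update, and remove=true hands ownership of the pointers to the caller.
// Features with no measurement at or after the newest state time are "lost"
std::vector<Feature*> feats_lost = database.features_not_containing_newer(newest_state_time, true);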
2.3 features_containing_older: return features with measurements before timestamp, i.e. < timestamp (unused in the codebase)
* @brief Get features that have measurements older than the specified time.
* This will collect all features that have measurements occurring before the specified timestamp.
* For example, we would want to remove all features older than the last clone/state in our sliding window.
std::vector<Feature *> features_containing_older(double timestamp, bool remove=false)
step 1: check whether the feature has a measurement older than the given timestamp
// Now lets loop through all features, and just make sure they are not old
std::unique_lock<std::mutex> lck(mtx);
for (auto it = features_idlookup.begin(); it != features_idlookup.end();) {
// Loop through each camera
bool found_containing_older = false;
for (auto const &pair : (*it).second->timestamps) {
if (!pair.second.empty() && pair.second.at(0) < timestamp) {
found_containing_older = true;
break;
}
}
step 2: if so, add the feature to the old set and, if remove is set, erase it from features_idlookup
// If it has an older timestamp, then add it
if(found_containing_older) { // note: no negation here, unlike 2.2
feats_old.push_back((*it).second);
if(remove) features_idlookup.erase(it++);
else it++;
} else {
it++;
}
2.4 features_containing: return features tracked at timestamp, i.e. = timestamp (used to marginalize the features observed at the timestamp of the oldest MSCKF state in the sliding window)
* @brief Get features that have measurements at the specified time.
* This function will return all features that contain the specified time.
* This would be used to get all features that occurred at a specific clone/state.
std::vector<Feature *> features_containing(double timestamp, bool remove=false)
step 1: check whether the feature has a measurement exactly at the given timestamp
// Our vector of old features
std::vector<Feature *> feats_has_timestamp;
// Now lets loop through all features, and check whether they contain the timestamp
std::unique_lock<std::mutex> lck(mtx);
for (auto it = features_idlookup.begin(); it != features_idlookup.end();) {
// Boolean if it has the timestamp
bool has_timestamp = false;
for (auto const &pair : (*it).second->timestamps) {
    // Loop through all timestamps, and see if it has it
    for (auto &timefeat : pair.second) {
        if (timefeat == timestamp) {
            has_timestamp = true;
            break;
        }
    }
}
step 2: if so, add the feature to the returned set and, if remove is set, erase it from features_idlookup
// Remove this feature if it contains the specified timestamp
if (has_timestamp) {
feats_has_timestamp.push_back((*it).second);
if(remove) features_idlookup.erase(it++);
else it++;
} else {
it++;
}
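A hedged sketch of the marginalization use mentioned in the heading (names assumed): before the oldest clone is marginalized, collect the features observed at its timestamp.
// Features seen at the clone being marginalized; remove=false keeps them in the database
std::vector<Feature*> feats_marg = database.features_containing(marg_timestep, false);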
2.5 get_feature: return the feature with the queried ID; if remove is true, also erase the entry from features_idlookup
* @brief Get a specified feature
* @param id What feature we want to get
* @param remove Set to true if you want to remove the feature from the database (you will need to handle the freeing of memory)
* @return Either a feature object, or null if it is not in the database.
*/
Feature *get_feature(size_t id, bool remove=false) {
std::unique_lock<std::mutex> lck(mtx);
if (features_idlookup.find(id) != features_idlookup.end()) {
Feature* temp = features_idlookup[id];
if(remove) features_idlookup.erase(id);
return temp;
} else {
return nullptr;
}
}
2.6 cleanup: runs only after the MSCKF update; deletes all features whose to_delete flag is true from features_idlookup
* @brief This function will delete all features that have been used up.
* If a feature was unable to be used, it will still remain since it will not have a delete flag set
void cleanup() {
// Debug
//int sizebefore = (int)features_idlookup.size();
// Loop through all features
std::unique_lock<std::mutex> lck(mtx);
for (auto it = features_idlookup.begin(); it != features_idlookup.end();) {
// If delete flag is set, then delete it
if ((*it).second->to_delete) {
delete (*it).second;
features_idlookup.erase(it++);
} else {
it++;
}
}
3. class FeatureInitializer
@brief Class that triangulates features
* This class has the functions needed to triangulate and then refine a given 3D feature.
* As in the standard MSCKF, we know the clones of the camera from propagation and past updates.
* Thus, we just need to triangulate a feature in 3D with the known poses and then refine it.
* One should first call the single_triangulation() function, after which single_gaussnewton() allows for refinement.
* Please see the @ref update-featinit page for detailed derivations.
3.1 Member variables
a. FeatureInitializerOptions _options;
/// Max runs for Gauss Newton
int max_runs = 20;
/// Init lambda for LM optimization
double init_lamda = 1e-3;
/// Max lambda for LM optimization
double max_lamda = 1e10;
/// Cutoff for dx increment to consider as converged
double min_dx = 1e-6;
/// Cutoff for cost decrement to consider as converged
double min_dcost = 1e-6;
/// Multiplier to increase/decrease lambda
double lam_mult = 10;
/// Minimum distance to accept triangulated features
double min_dist = 0.25;
/// Maximum distance to accept triangulated features
double max_dist = 40;
/// Max baseline ratio to accept triangulated features
double max_baseline = 40;
/// Max condition number of the linear triangulation matrix to accept triangulated features
double max_cond_number = 1000;
b. ClonePose
* @brief Structure which stores pose estimates for use in triangulation
* - R_GtoC - rotation from global to camera
* - p_CinG - position of camera in global frame
*/
struct ClonePose {
/// Rotation
Eigen::Matrix<double,3,3> _Rot;
/// Position
Eigen::Matrix<double,3,1> _pos;
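// (Hedged completion, not verbatim source: the accessors below match how
// ClonePose is used later via .Rot() and .pos(); the constructor signature
// is an assumption of this sketch.)
ClonePose(const Eigen::Matrix<double,3,3> &R, const Eigen::Matrix<double,3,1> &p) : _Rot(R), _pos(p) {}
/// Accessor for the rotation
Eigen::Matrix<double,3,3> &Rot() { return _Rot; }
/// Accessor for the position
Eigen::Matrix<double,3,1> &pos() { return _pos; }
};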
3.2 Triangulating a feature
* @brief Uses a linear triangulation to get initial estimate for the feature
* @param feat Pointer to feature
* @param clonesCAM Map between camera ID to map of timestamp to camera pose estimate (rotation from global to camera, position of camera in global frame)
* @return Returns false if it fails to triangulate (based on the thresholds)
*/
OpenVINS does not initialize a feature's 3D position directly from a single stereo match. Instead, it treats the stereo pair as two monocular cameras with a known relative pose, and recovers the feature's 3D position by solving a small SfM problem built from all pairwise monocular observations with known poses.
bool single_triangulation(Feature* feat, std::unordered_map<size_t,std::unordered_map<double,ClonePose>> &clonesCAM);
step 1: count the observations per camera; the camera with the most observations becomes the anchor
for (auto const& pair : feat->timestamps) { // feat->timestamps maps camera ID to the timestamps at which that camera observed this feature
    total_meas += (int)pair.second.size();
    if(pair.second.size() > most_meas) {
        anchor_most_meas = pair.first; // camera ID
        most_meas = pair.second.size(); // number of observations from that camera
    }
}
feat->anchor_cam_id = anchor_most_meas; // anchor camera ID (left or right camera; usually the one that observed this feature most often)
feat->anchor_clone_timestamp = feat->timestamps.at(feat->anchor_cam_id).back(); // the most recent timestamp from the anchor camera becomes the anchor clone timestamp
step 2: get the anchor pose
// Get the position of the anchor pose
ClonePose anchorclone = clonesCAM.at(feat->anchor_cam_id).at(feat->anchor_clone_timestamp); // clonesCAM maps camera ID and timestamp to the corresponding camera pose
Eigen::Matrix<double,3,3> &R_GtoA = anchorclone.Rot(); // anchor pose
Eigen::Matrix<double,3,1> &p_AinG = anchorclone.pos();
step 3: compute the relative pose between each frame and the anchor frame
// Loop through each camera for this feature
for (auto const& pair : feat->timestamps) {
// Add CAM_I features
for (size_t m = 0; m < feat->timestamps.at(pair.first).size(); m++) {
// Get the position of this clone in the global
Eigen::Matrix<double, 3, 3> &R_GtoCi = clonesCAM.at(pair.first).at(feat->timestamps.at(pair.first).at(m)).Rot();
Eigen::Matrix<double, 3, 1> &p_CiinG = clonesCAM.at(pair.first).at(feat->timestamps.at(pair.first).at(m)).pos();
// Convert current position relative to anchor
Eigen::Matrix<double,3,3> R_AtoCi; // relative rotation from the anchor frame to frame Ci
R_AtoCi.noalias() = R_GtoCi*R_GtoA.transpose();
Eigen::Matrix<double,3,1> p_CiinA;
p_CiinA.noalias() = R_GtoA*(p_CiinG-p_AinG);
step 4: for every observation Ci, stack the 3D-2D constraints relative to the anchor into a linear system Ax = b and solve it in the least-squares sense
// Get the UV coordinate normal
Eigen::Matrix<double, 3, 1> b_i;
b_i << feat->uvs_norm.at(pair.first).at(m)(0), feat->uvs_norm.at(pair.first).at(m)(1), 1;
b_i = R_AtoCi.transpose() * b_i; // rotate the bearing Ci_b_f into the anchor frame, giving A_b_f (see the derivation below)
b_i = b_i / b_i.norm();
Eigen::Matrix<double,2,3> Bperp = Eigen::Matrix<double,2,3>::Zero();
Bperp << -b_i(2, 0), 0, b_i(0, 0), 0, b_i(2, 0), -b_i(1, 0); // the two rows n1^T and n2^T, each orthogonal to b_i
// Append to our linear system
A.block(2 * c, 0, 2, 3) = Bperp;
b.block(2 * c, 0, 2, 1).noalias() = Bperp * p_CiinA;
c++;
The derivation: any camera $C_i$ is related to the anchor $A$ by ${}^{C_i}_{A}R$ (R_AtoCi) and ${}^{A}p_{C_i}$ (p_CiinA). Writing the feature in camera $C_i$ as ${}^{C_i}p_f = z_f^{C_i}\,{}^{C_i}b_f$, where ${}^{C_i}b_f \propto [u_n\ v_n\ 1]^\top$ is the unit bearing built from the normalized coordinates, and rotating the bearing into the anchor frame, ${}^{A}b_i = {}^{A}_{C_i}R\,{}^{C_i}b_f$, the feature position ${}^{A}p_f$ in the anchor frame satisfies
$$ {}^{A}p_f = z_f^{C_i}\,{}^{A}b_i + {}^{A}p_{C_i}. $$
To eliminate the unknown middle term $z_f^{C_i}\,{}^{A}b_i$, pick two rows $n_1^\top, n_2^\top$ orthogonal to ${}^{A}b_i$ (the matrix Bperp above), so that
$$ N_i\,{}^{A}b_i = 0 \quad\Rightarrow\quad N_i\,{}^{A}p_f = N_i\,{}^{A}p_{C_i}, \qquad N_i = \begin{bmatrix} n_1^\top \\ n_2^\top \end{bmatrix}. $$
Stacking all $c$ observations gives a $2c \times 3$ matrix $A$ (the stacked $N_i$) and a $2c \times 1$ vector $b$ (the stacked $N_i\,{}^{A}p_{C_i}$), so the unknown $x = {}^{A}p_f$, the feature position in the anchor frame, is $3 \times 1$.
step 5: solve the least-squares system Ax = b via QR decomposition
// Solve the linear system
Eigen::MatrixXd p_f = A.colPivHouseholderQr().solve(b);
step 6: check the conditioning of A via an SVD, and validate p_f against the thresholds from the configuration file
// Check A and p_f
Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
Eigen::MatrixXd singularValues;
singularValues.resize(svd.singularValues().rows(), 1);
singularValues = svd.singularValues();
double condA = singularValues(0, 0) / singularValues(singularValues.rows() - 1, 0);
// If we have a bad condition number, or it is too close
// Then set the flag for bad (i.e. set z-axis to nan)
if (std::abs(condA) > _options.max_cond_number || p_f(2,0) < _options.min_dist || p_f(2,0) > _options.max_dist || std::isnan(p_f.norm())){
return false;
}
About the condition number:
The condition number measures how ill-conditioned a matrix is: the larger it is, the more ill-conditioned the matrix. It quantifies how sensitive a computation is to errors. For a linear system Ax = b, if the condition number of A is large, a tiny change in b can cause a large change in the solution x, i.e. poor numerical stability; if it is small, a small change in b causes only a small change in x, i.e. good numerical stability. It equally describes how x changes when A is slightly perturbed with b held fixed.
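A small self-contained sketch (not OpenVINS code) showing the condition number computed with Eigen exactly as in step 6:
#include <Eigen/Dense>
#include <iostream>

int main() {
    // A nearly rank-deficient 4x3 matrix: one tiny singular value
    Eigen::MatrixXd A(4, 3);
    A << 1, 0, 0,
         0, 1, 0,
         0, 0, 1e-4,
         1, 1, 0;
    // Singular values are always computed, even without requesting U/V
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A);
    double condA = svd.singularValues()(0) / svd.singularValues()(svd.singularValues().rows() - 1);
    std::cout << "cond(A) = " << condA << std::endl; // large => ill-conditioned, triangulation would be rejected
    return 0;
}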
The relevant parameters in the configuration file:
nh.param<double>("fi_min_dist", featinit_options.min_dist, 0.25); z最近距离0.25m
nh.param<double>("fi_max_dist", featinit_options.max_dist, 40);z最远距离40m
nh.param<double>("fi_max_cond_number", featinit_options.max_cond_number, 1000);
step 7: store the result
// Store it in our feature object
feat->p_FinA = p_f; // position in the anchor frame
feat->p_FinG = R_GtoA.transpose()*feat->p_FinA + p_AinG; // position in the global frame
3.3 Gauss-Newton refinement of the coarse triangulated feature position
* @brief Uses a nonlinear triangulation to refine initial linear estimate of the feature
* @param feat Pointer to feature
* @param clonesCAM Map between camera ID to map of timestamp to camera pose estimate (rotation from global to camera, position of camera in global frame)
* @return Returns false if the optimization fails (based on the thresholds)
*/
bool single_gaussnewton(Feature* feat, std::unordered_map<size_t,std::unordered_map<double,ClonePose>> &clonesCAM);
step 1: switch to the inverse-depth parametrization (see the formula below)
//Get into inverse depth
double rho = 1/feat->p_FinA(2); // inverse depth in the anchor frame
double alpha = feat->p_FinA(0)/feat->p_FinA(2); // normalized u coordinate in the anchor frame
double beta = feat->p_FinA(1)/feat->p_FinA(2); // normalized v coordinate in the anchor frame
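In formulas, the anchor-frame position and its inverse-depth parameters are related by
$$ \alpha = \frac{{}^{A}p_{f,x}}{{}^{A}p_{f,z}}, \qquad \beta = \frac{{}^{A}p_{f,y}}{{}^{A}p_{f,z}}, \qquad \rho = \frac{1}{{}^{A}p_{f,z}}, \qquad {}^{A}p_f = \frac{1}{\rho}\begin{bmatrix} \alpha \\ \beta \\ 1 \end{bmatrix}. $$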
step 2: compute the sum of squared 2D reprojection errors as the initial cost; every subsequent iteration compares its cost against this value to check that the error is decreasing
// Cost at the last iteration
double cost_old = compute_error(clonesCAM,feat,alpha,beta,rho);
See the definition of compute_error in Section 3.4.
step 3: iteration conditions for the nonlinear optimization
// Loop till we have either
// 1. Reached our max iteration count
// 2. System is unstable
// 3. System has converged
while (runs < _options.max_runs && lam < _options.max_lamda && eps > _options.min_dx) {
nh.param<int>("fi_max_runs", featinit_options.max_runs, 20);
nh.param<double>("fi_init_lamda", featinit_options.init_lamda, 1e-3);
nh.param<double>("fi_max_lamda", featinit_options.max_lamda, 1e10);
nh.param<double>("fi_min_dx", featinit_options.min_dx, 1e-6);
step 4: compute the Jacobian
// Middle variables of the system: the feature observed in camera Ci expressed through h
double hi1 = R_AtoCi(0, 0) * alpha + R_AtoCi(0, 1) * beta + R_AtoCi(0, 2) + rho * p_AinCi(0, 0); // note: the + sign works because p_AinCi.noalias() = -R_AtoCi*p_CiinA;
double hi2 = R_AtoCi(1, 0) * alpha + R_AtoCi(1, 1) * beta + R_AtoCi(1, 2) + rho * p_AinCi(1, 0);
double hi3 = R_AtoCi(2, 0) * alpha + R_AtoCi(2, 1) * beta + R_AtoCi(2, 2) + rho * p_AinCi(2, 0);
In matrix form, with ${}^{C_i}p_f$ the feature in camera frame $C_i$, ${}^{A}p_f$ the feature in the anchor frame, and ${}^{A}p_{C_i}$ the origin of camera $C_i$ in the anchor frame:
$$ \begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix} = {}^{C_i}_{A}R \begin{bmatrix} \alpha \\ \beta \\ 1 \end{bmatrix} + \rho\,{}^{C_i}p_A = \rho\,{}^{C_i}p_f, $$
so the prediction $\hat z = [h_1/h_3,\ h_2/h_3]^\top$ is unaffected by the overall scale $\rho$.
// Calculate jacobian
double d_z1_d_alpha = (R_AtoCi(0, 0) * hi3 - hi1 * R_AtoCi(2, 0)) / (pow(hi3, 2));
double d_z1_d_beta = (R_AtoCi(0, 1) * hi3 - hi1 * R_AtoCi(2, 1)) / (pow(hi3, 2));
double d_z1_d_rho = (p_AinCi(0, 0) * hi3 - hi1 * p_AinCi(2, 0)) / (pow(hi3, 2));
double d_z2_d_alpha = (R_AtoCi(1, 0) * hi3 - hi2 * R_AtoCi(2, 0)) / (pow(hi3, 2));
double d_z2_d_beta = (R_AtoCi(1, 1) * hi3 - hi2 * R_AtoCi(2, 1)) / (pow(hi3, 2));
double d_z2_d_rho = (p_AinCi(1, 0) * hi3 - hi2 * p_AinCi(2, 0)) / (pow(hi3, 2));
Eigen::Matrix<double, 2, 3> H;
H << d_z1_d_alpha, d_z1_d_beta, d_z1_d_rho, d_z2_d_alpha, d_z2_d_beta, d_z2_d_rho;
// Calculate residual
Eigen::Matrix<float, 2, 1> z;
z << hi1 / hi3, hi2 / hi3;
Eigen::Matrix<float, 2, 1> res = feat->uvs_norm.at(pair.first).at(m) - z;
step 5: accumulate the gradient H^T r and the Gauss-Newton Hessian H^T H over all camera observations of this feature
// Append to our summation variables
err += std::pow(res.norm(), 2);
grad.noalias() += H.transpose() * res.cast<double>();
Hess.noalias() += H.transpose() * H;
step 6: Levenberg-Marquardt damping
// Solve Levenberg iteration
Eigen::Matrix<double,3,3> Hess_l = Hess;
for (size_t r=0; r < (size_t)Hess.rows(); r++){
Hess_l(r,r) *= (1.0+lam);
}
Eigen::Matrix<double,3,1> dx = Hess_l.colPivHouseholderQr().solve(grad);
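In formula form, each iteration solves the damped normal equations, where $\lambda$ inflates the diagonal:
$$ \left( H^\top H + \lambda\,\mathrm{diag}(H^\top H) \right) \delta x = H^\top r, \qquad \delta x = \begin{bmatrix} \delta\alpha \\ \delta\beta \\ \delta\rho \end{bmatrix}. $$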
step 7: compute the reprojection error after applying the increment
// Check if error has gone down
double cost = compute_error(clonesCAM,feat,alpha+dx(0,0),beta+dx(1,0),rho+dx(2,0));
step 8: if converged, accept the increment and stop
// Check if converged
if (cost <= cost_old && (cost_old-cost)/cost_old < _options.min_dcost) {
alpha += dx(0, 0);
beta += dx(1, 0);
rho += dx(2, 0);
eps = 0;
break;
}
nh.param<double>("fi_min_dcost", featinit_options.min_dcost, 1e-6);
step 9: otherwise adjust lambda, trading off convergence speed against accuracy
// If cost is lowered, accept step
// Else inflate lambda (try to make more stable)
if (cost <= cost_old) {
    recompute = true; // the residual and Jacobian must be recomputed
    cost_old = cost;
    alpha += dx(0, 0);
    beta += dx(1, 0);
    rho += dx(2, 0);
    runs++;
    lam = lam/_options.lam_mult; // decrease lambda: closer to Gauss-Newton, larger and more accurate steps
    eps = dx.norm();
} else {
    recompute = false; // no need to recompute the residual and Jacobian
    lam = lam*_options.lam_mult; // increase lambda: smaller, more stable steps
    continue;
}
step 10: use a QR decomposition to find this feature's maximum baseline across all camera observations
// Get tangent plane to x_hat
Eigen::HouseholderQR<Eigen::MatrixXd> qr(feat->p_FinA);
Eigen::MatrixXd Q = qr.householderQ();
Reference: https://blog.csdn.net/flyfish5/article/details/52315062
// Max baseline we have between poses
double base_line_max = 0.0;
// Check maximum baseline
// Loop through each camera for this feature
for (auto const& pair : feat->timestamps) {
// Loop through the other clones to see what the max baseline is
for (size_t m = 0; m < feat->timestamps.at(pair.first).size(); m++) {
// Get the position of this clone in the global
Eigen::Matrix<double,3,1> &p_CiinG = clonesCAM.at(pair.first).at(feat->timestamps.at(pair.first).at(m)).pos();
// Convert current position relative to anchor
Eigen::Matrix<double,3,1> p_CiinA = R_GtoA*(p_CiinG-p_AinG);
// Dot product camera pose and nullspace
double base_line = ((Q.block(0,1,3,2)).transpose() * p_CiinA).norm(); // norm of the camera position projected onto the plane orthogonal to the feature direction
if (base_line > base_line_max) base_line_max = base_line;
}
}
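The idea: the QR decomposition of the $3 \times 1$ matrix ${}^{A}p_f$ yields $Q = [q_1\ q_2\ q_3]$ with $q_1$ parallel to ${}^{A}p_f$, so the last two columns span the plane orthogonal to the feature direction, and
$$ \text{baseline}_i = \left\| \begin{bmatrix} q_2^\top \\ q_3^\top \end{bmatrix} {}^{A}p_{C_i} \right\| $$
is the component of camera $C_i$'s position perpendicular to the feature ray; the maximum over all clones is base_line_max.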
step 11: check the refined estimate against the thresholds; compared with step 6 there is an extra baseline-ratio check
// Check if this feature is bad or not
// 1. If the feature is too close
// 2. If the feature is invalid
// 3. If the baseline ratio is large
if(feat->p_FinA(2) < _options.min_dist
    || feat->p_FinA(2) > _options.max_dist
    || (feat->p_FinA.norm() / base_line_max) > _options.max_baseline // the ratio of feature distance to maximum baseline exceeds the limit (default 40; a ratio, not meters)
    || std::isnan(feat->p_FinA.norm())) {
    return false;
}
nh.param<double>("fi_max_baseline", featinit_options.max_baseline, 40);
step 12: convert to the global frame
// Finally get position in global frame
feat->p_FinG = R_GtoA.transpose()*feat->p_FinA + p_AinG;
return true;
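Putting 3.2 and 3.3 together, a hedged usage sketch following the documented call order (the constructor signature and surrounding names are assumptions of this sketch):
// Hypothetical caller: triangulate linearly first, then refine nonlinearly
FeatureInitializer initializer(options); // assumed to take a FeatureInitializerOptions
if (initializer.single_triangulation(feat, clonesCAM) &&
    initializer.single_gaussnewton(feat, clonesCAM)) {
    // feat->p_FinA / feat->p_FinG now hold the refined position
}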
3.4 Reprojection error computation
@brief Helper function for the gauss newton method that computes error of the given estimate
@param clonesCAM Map between camera ID to map of timestamp to camera pose estimate
* @param feat Pointer to the feature
* @param alpha x/z in anchor
* @param beta y/z in anchor
* @param rho 1/z inverse depth
double compute_error(std::unordered_map<size_t,std::unordered_map<double,ClonePose>> &clonesCAM,Feature* feat,double alpha,double beta,double rho);
step 1: get the anchor pose
// Get the position of the anchor pose
Eigen::Matrix<double,3,3> &R_GtoA = clonesCAM.at(feat->anchor_cam_id).at(feat->anchor_clone_timestamp).Rot();
Eigen::Matrix<double,3,1> &p_AinG = clonesCAM.at(feat->anchor_cam_id).at(feat->anchor_clone_timestamp).pos();
step 2: compute each camera's pose relative to the anchor
// Loop through each camera for this feature
for (auto const& pair : feat->timestamps) {
// Add CAM_I features
for (size_t m = 0; m < feat->timestamps.at(pair.first).size(); m++) {
// Get the position of this clone in the global
Eigen::Matrix<double, 3, 3> &R_GtoCi = clonesCAM.at(pair.first).at(feat->timestamps.at(pair.first).at(m)).Rot();
Eigen::Matrix<double, 3, 1> &p_CiinG = clonesCAM.at(pair.first).at(feat->timestamps.at(pair.first).at(m)).pos();
// Convert current position relative to anchor
Eigen::Matrix<double,3,3> R_AtoCi;
R_AtoCi.noalias() = R_GtoCi*R_GtoA.transpose();
Eigen::Matrix<double,3,1> p_CiinA;
p_CiinA.noalias() = R_GtoA*(p_CiinG-p_AinG);
Eigen::Matrix<double,3,1> p_AinCi;
p_AinCi.noalias() = -R_AtoCi*p_CiinA; // follows by substituting p_CiinA into the frame transform
step 3: compute the predicted 2D projection in frame Ci
// Middle variables of the system
double hi1 = R_AtoCi(0, 0) * alpha + R_AtoCi(0, 1) * beta + R_AtoCi(0, 2) + rho * p_AinCi(0, 0);
double hi2 = R_AtoCi(1, 0) * alpha + R_AtoCi(1, 1) * beta + R_AtoCi(1, 2) + rho * p_AinCi(1, 0);
double hi3 = R_AtoCi(2, 0) * alpha + R_AtoCi(2, 1) * beta + R_AtoCi(2, 2) + rho * p_AinCi(2, 0);
// Calculate residual
Eigen::Matrix<float, 2, 1> z;
z << hi1 / hi3, hi2 / hi3;
step 4: accumulate the residual
Eigen::Matrix<float, 2, 1> res = feat->uvs_norm.at(pair.first).at(m) - z;
// Append to our summation variables
err += pow(res.norm(), 2);
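The value returned is therefore the total squared reprojection error over all cameras $i$ and measurements $m$:
$$ \mathrm{err}(\alpha,\beta,\rho) = \sum_i \sum_m \left\| z_{i,m} - \hat z_{i,m}(\alpha,\beta,\rho) \right\|^2, \qquad \hat z_{i,m} = \begin{bmatrix} h_1/h_3 \\ h_2/h_3 \end{bmatrix}. $$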