Object Detection Metrics

2019-03-11
cwlseu

Definitions related to object detection evaluation [1]

Intersection Over Union (IOU)

Intersection Over Union (IOU) is a measure based on the Jaccard Index that evaluates the overlap between two bounding boxes. It requires a ground truth bounding box $B_{gt}$ and a predicted bounding box $B_p$. By applying the IOU we can tell whether a detection is valid (True Positive) or not (False Positive).
IOU is given by the overlapping area between the predicted bounding box and the ground truth bounding box, divided by the area of their union:

$IOU = \frac{area(B_p \cap B_{gt})}{area(B_p \cup B_{gt})}$

The image below illustrates the IOU between a ground truth bounding box (in green) and a detected bounding box (in red).
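The IOU formula above can be sketched directly for axis-aligned boxes. A minimal example, assuming boxes are given as `(x1, y1, x2, y2)` corner coordinates (the function name `iou` is my own, not from the reference implementation):

```python
def iou(box_a, box_b):
    """Intersection Over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```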

True Positive, False Positive, False Negative and True Negative [2]

Some basic concepts used by the metrics:

• True Positive (TP): A correct detection. Detection with IOU ≥ _threshold_
• False Positive (FP): A wrong detection. Detection with IOU < _threshold_
• False Negative (FN): A ground truth not detected
• True Negative (TN): Does not apply. It would represent a correctly rejected detection. In the object detection task there are many possible bounding boxes that should not be detected within an image. Thus, TN would be all the possible bounding boxes that were correctly not detected (a huge number of boxes within an image). That is why it is not used by the metrics.

_threshold_: depending on the metric, it is usually set to 50%, 75% or 95%.
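Putting the definitions above together, detections can be classified into TP/FP (and unmatched ground truths counted as FN) by greedy matching at a chosen IOU threshold. A sketch under the usual convention that detections are processed in descending confidence order and each ground truth can be matched at most once; the helper `_iou` and the function name `classify_detections` are assumptions of this example, not part of any specific library:

```python
def _iou(a, b):
    # Intersection over union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def classify_detections(detections, ground_truths, threshold=0.5):
    """detections: list of (confidence, box); ground_truths: list of boxes.
    Returns (TP, FP, FN) at the given IOU threshold."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = [False] * len(ground_truths)
    tp = fp = 0
    for conf, box in detections:
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(ground_truths):
            if not matched[j]:
                v = _iou(box, gt)
                if v > best_iou:
                    best_iou, best_j = v, j
        if best_iou >= threshold:
            matched[best_j] = True  # each ground truth matches once
            tp += 1
        else:
            fp += 1
    fn = matched.count(False)  # ground truths never matched
    return tp, fp, fn
```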

Precision

Precision is the ability of a model to identify only the relevant objects. It is the percentage of correct positive predictions and is given by: $Precision = \frac{TP}{TP + FP} = \frac{TP}{\text{all detections}}$

Recall

Recall is the ability of a model to find all the relevant cases (all ground truth bounding boxes). It is the percentage of true positives detected among all relevant ground truths and is given by: $Recall = \frac{TP}{TP + FN} = \frac{TP}{\text{all ground truths}}$
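The two formulas above translate directly into code. A minimal sketch (the zero-denominator handling is my own convention, not mandated by the definitions):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    return precision, recall

# 7 correct detections, 3 wrong ones, 2 missed ground truths
print(precision_recall(7, 3, 2))  # (0.7, 0.777...)
```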

Evaluation Methods (Metrics) [3][4][5]

• Receiver operating characteristics (ROC) curve
• Precision x Recall curve
• Average Precision
• 11-point interpolation
• Interpolating all points
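Of the methods listed above, 11-point interpolation (as used in the PASCAL VOC challenge) averages the maximum precision attainable at 11 equally spaced recall levels. A sketch, assuming the precision/recall pairs come from sweeping a confidence threshold over the detections; the function name is my own:

```python
def ap_11_point(recalls, precisions):
    """11-point interpolated Average Precision: for each recall level
    r in {0, 0.1, ..., 1.0}, take the maximum precision achieved at any
    recall >= r, then average the 11 values."""
    ap = 0.0
    for t in [i / 10 for i in range(11)]:
        candidates = [p for r, p in zip(recalls, precisions) if r >= t]
        ap += max(candidates) if candidates else 0.0
    return ap / 11

# Toy precision x recall curve with three operating points
print(ap_11_point([0.2, 0.4, 1.0], [1.0, 0.5, 0.25]))  # 0.5
```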

Loss Functions in Object Detection

From "Five loss functions every machine learning practitioner should know!" [6]: suppose we have $n$ samples, where sample $x_i$ has ground-truth value $y_i$ and the model $f(x)$ predicts $y_i^p$.

Huber Loss: Smooth Mean Absolute Error

Compared with squared loss, the Huber loss is insensitive to outliers while remaining differentiable. It is based on absolute error but switches to squared error when the error is small. A hyperparameter $\delta$ controls this error threshold: as $\delta \to 0$ the Huber loss degenerates to MAE, and as $\delta \to \infty$ it degenerates to MSE. Its expression is the following continuously differentiable piecewise function:

$L_\delta(y, f(x)) = \begin{cases} \frac{1}{2}\left(y - f(x)\right)^2 & \text{for } |y - f(x)| \le \delta \\ \delta\,|y - f(x)| - \frac{1}{2}\delta^2 & \text{otherwise} \end{cases}$

The Huber loss overcomes the drawbacks of both MAE and MSE: it keeps the loss continuously differentiable, exploits the MSE-like property that the gradient shrinks as the error decreases (yielding a more precise minimum), and at the same time stays more robust to outliers.
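The piecewise definition above can be sketched for a single residual $e = y - f(x)$ (the linear branch includes the $-\frac{1}{2}\delta^2$ offset so the two pieces join continuously at $|e| = \delta$):

```python
def huber(residual, delta=1.0):
    """Huber loss for one residual e = y - y_pred.
    Quadratic (MSE-like) for |e| <= delta, linear (MAE-like) beyond it."""
    e = abs(residual)
    if e <= delta:
        return 0.5 * e ** 2                   # quadratic region
    return delta * e - 0.5 * delta ** 2       # linear region, shifted to match

print(huber(0.5))  # 0.125  (quadratic branch)
print(huber(3.0))  # 2.5    (linear branch)
```

Note that at the junction `|e| == delta` both branches give the same value ($\frac{1}{2}\delta^2$) and the same slope ($\delta$), which is exactly the continuous-derivative property discussed above.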

References

1. https://github.com/cwlseu/Object-Detection-Metrics “评估标准”

2. https://acutecaretesting.org/en/articles/precision-recall-curves-what-are-they-and-how-are-they-used “Precision-recall curves – what are they and how are they used”

3. https://www.jianshu.com/p/c61ae11cc5f6 “机器学习之分类性能度量指标 : ROC曲线、AUC值、正确率、召回率”

4. https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/ “How and When to Use ROC Curves and Precision-Recall Curves for Classification in Python”

5. https://www.zhihu.com/question/30643044 “精确率、召回率、F1 值、ROC、AUC 各自的优缺点是什么？”

6. https://yq.aliyun.com/articles/602858 “机器学习者都应该知道的五种损失函数！”

7. http://rishy.github.io/ml/2015/07/28/l1-vs-l2-loss/

Copyright notice: this is the blogger's original article and may not be reproduced without permission.
