Qualitative comparison of anomaly localization on query images: results of OmniPoseAD, SplatPose, and ours.
We introduce the Pose and Illumination agnostic Anomaly Detection (PIAD) problem, a generalization of pose-agnostic anomaly detection (PAD). Being illumination agnostic is critical, as it relaxes the assumption that training data for an object must be acquired under the same lighting configuration as the query images we want to test. Moreover, even if the object is placed within the same capture environment, being illumination agnostic relaxes the assumption that the relative pose between the environment light and the query object must match that of the training data.
We introduce a new dataset to study this problem, containing both synthetic and real-world examples, propose a new baseline for PIAD, and demonstrate that our baseline achieves state-of-the-art results in both PAD and PIAD, not only on the newly proposed dataset but also on existing datasets designed for the simpler PAD problem.
We present camera pose estimation results using the RGB&Reflectance Gaussians and the RGB Gaussians separately. When the lighting of the query images is inconsistent with that of the training set, using the RGB&Reflectance Gaussians is more robust.
For each example: training data, query image, pose estimation by RGB Gaussian with its AD heat map, and pose estimation by RGB&Refl. Gaussian with its AD heat map.
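To make the role of pose estimation concrete, the sketch below shows gradient-based pose refinement against a differentiable Gaussian renderer, followed by a simple pixel-wise heat map. This is a minimal illustration under stated assumptions: the `render_fn` callable, the se(3) pose parameterization, and the raw-pixel heat map are hypothetical placeholders, not the exact pipeline used in the paper.

```python
import torch
import torch.nn.functional as F

def refine_pose(query_img, render_fn, init_pose, steps=200, lr=1e-2):
    """Refine a camera pose by minimizing a photometric loss between the
    query image and a differentiable rendering of the Gaussian model.

    query_img : (H, W, C) tensor; with an RGB&Reflectance model, C can
                include reflectance channels that are less sensitive to
                the illumination of the query capture.
    render_fn : callable(pose) -> (H, W, C) differentiable rendering
                (hypothetical placeholder for a splatting renderer).
    init_pose : (6,) tensor, a coarse se(3) pose initialization.
    """
    pose = init_pose.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = render_fn(pose)              # render at the current pose
        loss = F.l1_loss(rendered, query_img)   # photometric alignment loss
        loss.backward()
        opt.step()
    return pose.detach()

def anomaly_heat_map(query_img, render_fn, pose):
    """Per-pixel discrepancy between the query and the aligned rendering,
    used here as a stand-in for the AD heat map."""
    with torch.no_grad():
        reference = render_fn(pose)
    return (query_img - reference).abs().mean(dim=-1)   # (H, W)
```

A full pipeline would typically add a coarse initialization stage before such refinement and compute anomaly scores on deep features rather than raw pixels; the sketch only conveys where the illumination-insensitive reflectance channels enter the alignment.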
We introduce a new dataset to study the PIAD problem, containing both synthetic and real-world examples. It comprises 11,268 multi-view images of 30 distinct industrial products: 16 synthetic and 14 real-world. The synthetic data are rendered with Blender, and the real-world data are captured with a smartphone (Redmi K40) mounted on a gimbal. The dataset is designed to be challenging, covering a variety of poses, illuminations, materials, and anomaly types. The dataset will be released after acceptance.
We visualize part of our dataset to demonstrate its diversity of views, materials, and anomaly types.
Objects shown: Motor, Spring, Keyring, Axletree, Can, Gear, Sprockets, Picker, Box, Parts, Valve, Tube, Cup, USB, Cube, PaperCup, Lighter, Filter, Wheel, Bearing.
@inproceedings{yang2025piad,
  title={PIAD: Pose and Illumination agnostic Anomaly Detection},
  author={Kaichen Yang and Junjie Cao and Zeyu Bai and Zhixun Su and Andrea Tagliasacchi},
  booktitle={The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR)},
  year={2025},
}