Perspective Fields for Single Image Camera Calibration


Linyi Jin1 Jianming Zhang2 Yannick Hold-Geoffroy2 Oliver Wang2 Kevin Matzen2 Matthew Sticha1 David F. Fouhey1

1University of Michigan

2Adobe Research


[arXiv]
[code]
[video]


(A): a photo (credit David Clapp) with an off-centered principal point. (B), (C): assuming a traditional pinhole model with the principal point at the center, there is no way to correctly represent both the up directions (wrong in B) and the horizon (wrong in C). (D): Our proposed Perspective Fields correctly model the Up-vectors (green arrows), which align with gravity, and the Latitude values (shown as contour lines from -π/2 to π/2), with 0° on the horizon. From the prediction we can further recover the camera parameters: Roll -0.5°, Pitch 1.7°, FoV 64.6°, and the principal point at ×.

Geometric camera calibration is often required for applications that reason about the perspective of an image. We propose Perspective Fields as a representation that models the local perspective properties of an image. Perspective Fields contain per-pixel information about the camera view, parameterized as an up vector and a latitude value. This representation has a number of advantages: it makes minimal assumptions about the camera model and is invariant or equivariant to common image editing operations like cropping, warping, and rotation. It is also more interpretable and better aligned with human perception. We train a neural network to predict Perspective Fields, and the predicted Perspective Fields can easily be converted to calibration parameters. We demonstrate the robustness of our approach under various scenarios compared with camera calibration-based methods and show example applications in image compositing.
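As a simple illustration of why the conversion back to calibration parameters is easy, consider a centered pinhole camera: the Perspective Field at the principal point already determines roll and pitch in closed form. The sketch below is a hypothetical helper under that assumption (the paper's recovery handles the general case); the function name is ours, not from the paper's code.

```python
import numpy as np

def roll_pitch_from_center(up_center, lat_center):
    """Read roll and pitch off the Perspective Field at the principal
    point, assuming a centered pinhole camera (hypothetical helper).

    up_center: 2D unit up-vector at the image center (x right, y down)
    lat_center: latitude (radians) at the image center
    """
    # For a centered pinhole camera, the up-vector at the principal
    # point is (sin(roll), -cos(roll)), so roll follows directly.
    roll = np.arctan2(up_center[0], -up_center[1])
    # The latitude of the central viewing ray equals the camera pitch.
    pitch = lat_center
    return roll, pitch
```

For example, an up-vector of (0, -1) at the center (pointing straight up in image space) with a center latitude of 1.7° gives roll 0° and pitch 1.7°, matching the figure above.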




Video


Perspective Fields on Pinhole Camera

Check out how Perspective Fields change w.r.t. traditional camera parameters.


For each pixel location, the Perspective Field consists of a unit Up-vector and a Latitude. The Up-vector is the projection of the up direction, shown as green arrows; in perspective projection, it points toward the vertical vanishing point. The Latitude of each pixel is defined as the angle between the incoming ray and the horizontal plane, shown as contour lines from -π/2 to π/2. Note that 0° lies on the horizon.
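For a pinhole camera with known roll, pitch, and FoV, these two quantities can be computed directly from the definition above. The following is a minimal sketch (our own, not the paper's code), assuming a centered principal point, camera coordinates with x right, y down, z forward, and a particular sign convention for roll:

```python
import numpy as np

def perspective_field(x, y, width, height, roll_deg, pitch_deg, fov_deg):
    """Up-vector and latitude at pixel (x, y) for an assumed centered
    pinhole camera (illustrative sketch, not the paper's implementation)."""
    r, p = np.deg2rad(roll_deg), np.deg2rad(pitch_deg)
    f_px = (width / 2) / np.tan(np.deg2rad(fov_deg) / 2)  # focal length in pixels
    cx, cy = width / 2, height / 2

    # World up expressed in camera coordinates: pitch tilts the camera up,
    # roll spins it about the optical axis (roll sign is an assumption).
    u = np.array([0.0, -np.cos(p), np.sin(p)])
    u = np.array([u[0] * np.cos(r) - u[1] * np.sin(r),
                  u[0] * np.sin(r) + u[1] * np.cos(r),
                  u[2]])

    # Viewing ray through the pixel.
    d = np.array([x - cx, y - cy, f_px], dtype=float)
    d /= np.linalg.norm(d)

    # Latitude: angle between the incoming ray and the horizontal plane.
    latitude = np.arcsin(np.clip(d @ u, -1.0, 1.0))

    # Up-vector: image-space direction of a small step along world up.
    def project(v):
        return np.array([v[0], v[1]]) * f_px / v[2]

    up = project(d + 1e-6 * u) - project(d)
    up /= np.linalg.norm(up)
    return up, latitude
```

For example, with roll 0° and pitch 0°, the center pixel has latitude 0° (it lies on the horizon) and an up-vector pointing straight up in the image; raising the pitch to 30° raises the center pixel's latitude to 30°.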


Qualitative Results

We can train a neural network to predict Perspective Fields from images in the wild. Below are some examples.

Input

Perspective Fields

Moreover, the network generalizes to non-perspective projections, such as the multi-perspective scene from Inception or artworks made with various camera models.




Paper and Supplementary Material

Linyi Jin, Jianming Zhang, Yannick Hold-Geoffroy, Oliver Wang, Kevin Matzen, Matthew Sticha, David F. Fouhey
Perspective Fields for Single Image Camera Calibration.
arXiv 2022.
(Paper)


[Bibtex]



Acknowledgements

This webpage template was originally made by some colorful folks.