Linyi Jin^{1}  Jianming Zhang^{2}  Yannick Hold-Geoffroy^{2}  Oliver Wang^{2}  Kevin Matzen^{2}  Matthew Sticha^{1}  David F. Fouhey^{1}






(A): a photo (credit David Clapp) with an off-centered principal point. (B), (C): assuming a traditional pinhole model with the principal point at the image center, there is no way to correctly represent both the up directions (wrong in B) and the horizon (wrong in C). (D): Our proposed Perspective Fields correctly model the Up-vectors (green arrows) aligned with gravity and the Latitude values (contour lines from −π/2 to π/2), with 0° on the horizon. From the prediction we can further recover the camera parameters: Roll 0.5°, Pitch 1.7°, FoV 64.6°, and the principal point at ×.
Geometric camera calibration is often required for applications that reason about the perspective of an image. We propose Perspective Fields, a representation that models the local perspective properties of an image. Perspective Fields contain per-pixel information about the camera view, parameterized as an up vector and a latitude value. This representation makes minimal assumptions about the camera model and is invariant or equivariant to common image editing operations such as cropping, warping, and rotation. It is also more interpretable and better aligned with human perception. We train a neural network to predict Perspective Fields, and the predicted fields can be easily converted to calibration parameters. We demonstrate the robustness of our approach under various scenarios compared with camera-calibration-based methods and show example applications in image compositing.
Check out how Perspective Fields change w.r.t. traditional camera parameters: Roll, Pitch, and FoV.

For each pixel location, the Perspective Field consists of a unit Up-vector and a Latitude. The Up-vector is the projection of the up direction at that pixel, shown as green arrows; in perspective projection, it points toward the vertical vanishing point. The Latitude of a pixel is the angle between its incoming ray and the horizontal plane, shown as contour lines ranging from −π/2 to π/2. Note that 0° is at the horizon.
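This definition can be made concrete for an idealized pinhole camera. The sketch below is an illustration under assumptions not taken from the paper's code: principal point at the image center, image y-axis pointing down, world up = +y, camera looking down −z, and positive pitch tilting the camera up.

```python
import numpy as np

def perspective_field(h, w, fov_deg, roll_deg, pitch_deg):
    """Per-pixel up-vectors and latitudes for a pinhole camera (sketch)."""
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    u, v = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    rays = np.stack([u, -v, -f * np.ones_like(u)], axis=-1)  # camera coords
    r, p = np.radians(roll_deg), np.radians(pitch_deg)
    Rx = np.array([[1, 0, 0],                       # pitch about the x-axis
                   [0, np.cos(p), np.sin(p)],
                   [0, -np.sin(p), np.cos(p)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],      # roll about the z-axis
                   [np.sin(r), np.cos(r), 0],
                   [0, 0, 1]])
    R = Rz @ Rx                                     # world -> camera rotation
    rays_world = rays @ R                           # apply R^T to each ray
    rays_world /= np.linalg.norm(rays_world, axis=-1, keepdims=True)
    latitude = np.arcsin(rays_world[..., 1])        # angle to horizontal plane
    # Up-vector: nudge each viewed point toward world up, reproject both,
    # and take the normalized 2D image-space difference.
    def project(pw):                                # world point -> pixel offset
        pc = pw @ R.T
        return np.stack([-f * pc[..., 0] / pc[..., 2],
                         f * pc[..., 1] / pc[..., 2]], axis=-1)
    nudged = rays_world + 1e-4 * np.array([0.0, 1.0, 0.0])
    up = project(nudged) - project(rays_world)
    up /= np.linalg.norm(up, axis=-1, keepdims=True)
    return up, latitude
```

With zero roll and pitch, every up-vector comes out as (0, −1), i.e. straight up in image coordinates, and the latitude at the principal point is 0°, matching the definitions above.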
We can train a neural network to predict Perspective Fields from images in the wild. Below are some examples.
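As noted above, predicted Perspective Fields can be converted back to calibration parameters. As a hypothetical illustration (a closed-form sketch, not necessarily the paper's recovery method), the conversion is simple for a pinhole camera with a centered principal point: read roll and pitch off the field at the image center, and the field of view from the latitude change along the central column (that last step is exact only for zero roll).

```python
import numpy as np

def field_to_camera(up, latitude):
    """Recover roll, pitch, and FoV in degrees from a Perspective Field.

    Sketch assuming a pinhole camera with the principal point at the image
    center and the image y-axis pointing down; `up` has shape (h, w, 2),
    `latitude` has shape (h, w) and is in radians.
    """
    h, w = latitude.shape
    cy, cx = h // 2, w // 2
    # Roll: angle between the central up-vector and image "up" (0, -1).
    ux, uy = up[cy, cx]
    roll = np.degrees(np.arctan2(-ux, -uy))
    # Pitch: the central ray's elevation above the horizon is the pitch.
    pitch = np.degrees(latitude[cy, cx])
    # Focal length: along the central column, lat(row) - lat(center)
    # equals arctan(dv / f), where dv is the pixel offset above the center.
    f = cy / np.tan(latitude[0, cx] - latitude[cy, cx])
    fov = np.degrees(2 * np.arctan(w / (2 * f)))
    return roll, pitch, fov
```

Feeding this a synthetic zero-roll, zero-pitch field generated with a 60° FoV recovers (0°, 0°, 60°), so the round trip is consistent under these assumptions.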
Input | Perspective Fields
Moreover, it generalizes to non-perspective projections, such as the multi-perspective scene from Inception or artworks made with various camera models.

Linyi Jin, Jianming Zhang, Yannick Hold-Geoffroy, Oliver Wang, Kevin Matzen, Matthew Sticha, David F. Fouhey. Perspective Fields for Single Image Camera Calibration. arXiv 2022. (Paper)
Acknowledgements: This webpage template was originally made by some colorful folks.