Towards Universal Computational Aberration Correction in Photographic Cameras: A Comprehensive Benchmark Analysis

Conference: CVPR 2026
arXiv: 2603.12083
Code: https://github.com/XiaolongQian/UniCAC
Area: Image Restoration
Keywords: Computational Aberration Correction, Optical Degradation Evaluation, Benchmark, Automatic Optical Design, Image Restoration

TL;DR

This paper presents UniCAC, the first large-scale universal benchmark for Computational Aberration Correction (CAC). It introduces an Optical Degradation Evaluator (ODE) to quantify aberration difficulty and comprehensively evaluates 24 image restoration/CAC algorithms, revealing the impact of three key factors—prior utilization, network architecture, and training strategy—on CAC performance.

Background & Motivation

  1. Background: Computational Aberration Correction (CAC) is a classical problem in computational imaging. Existing methods are typically designed for specific optical systems and require retraining when applied to new lenses.
  2. Limitations of Prior Work: The absence of a comprehensive benchmark covering sufficiently diverse optical aberrations has hindered the development of universal CAC; traditional metrics such as RMS radius fail to accurately quantify task difficulty.
  3. Key Challenge: Universal CAC demands zero-shot generalization to unseen lenses, yet commercial lens design parameters are rarely publicly available, making it difficult to construct large-scale, diverse training and test data.
  4. Goal: (1) Construct a large-scale CAC benchmark covering both spherical and aspherical lenses; (2) propose a more reliable aberration quantification framework; (3) systematically evaluate existing methods and summarize key findings.
  5. Key Insight: Automatic Optical Design (AOD) methods are leveraged to generate a large number of physically constrained lens prescriptions, circumventing the inaccessibility of commercial lens designs.
  6. Core Idea: Diverse lens libraries are generated by extending the OptiFusion automatic design method; an ODE framework is proposed to quantify aberration severity; and the UniCAC benchmark is constructed for comprehensive evaluation.

Method

Overall Architecture

The construction of the UniCAC benchmark consists of three stages: (1) generating a large-scale lens library via an extended automatic optical design method; (2) sampling lenses using the proposed ODE framework to ensure uniform aberration distribution; and (3) comprehensively evaluating 24 methods on the constructed benchmark. The input is aberrated imagery from various lenses, and the output is the corresponding corrected sharp image.

Key Designs

  1. Extended Automatic Optical Design (Extended OptiFusion):

    • Function: Automatically designs a large number of spherical and aspherical lenses to construct the lens library.
    • Mechanism: Extends the OptiFusion method by redefining spherical parameters to encompass aspherical parameters. Four key specifications are considered—number of lens elements, aperture stop position, half field-of-view angle, and F-number—and a heuristic global search algorithm is employed to generate diverse lens samples.
    • Design Motivation: Manual lens design is time-consuming and commercial configurations are inaccessible; automatic design enables large-scale generation of physically plausible lenses.
  2. Optical Degradation Evaluator (ODE):

    • Function: Quantifies the severity of optical degradation in a lens to guide benchmark sampling.
    • Mechanism: \(ODE = \lambda_{oiq} \cdot OIQ + \lambda_s \cdot U_s + \lambda_c \cdot U_c\), where \(OIQ = \alpha \frac{PSNR}{50} + \beta \frac{SSIM-0.5}{0.5} + \gamma \cdot OIQE\) combines PSNR, SSIM, and the MTF-based OIQE into an overall image-quality term, \(U_s\) measures spatial uniformity from the coefficient of variation (CV) of quality across field positions, and \(U_c\) measures chromatic uniformity from the CV across color channels; both uniformity terms take the form \(U_{s,c} = e^{-\sigma \cdot CV_{s,c}}\). See the sketch after this list.
    • Design Motivation: The traditional RMS radius correlates only weakly with actual CAC performance (\(R^2 = 0.30\)), whereas ODE exhibits a substantially stronger linear correlation (\(R^2 = 0.84\)).
  3. Comprehensive Evaluation Metric (Overall Performance):

    • Function: Evaluates CAC performance from three dimensions: image fidelity, optical quality, and perceptual quality.
    • Mechanism: \(O.P. = 4 \times \frac{PSNR}{50} + 3 \times \frac{SSIM-0.5}{0.5} + 4 \times \frac{1-LPIPS}{0.4} + 3 \times OIQE + 1 \times \frac{100-FID}{100} + 1 \times ClipIQA\) (implemented verbatim in the sketch after this list)
    • Design Motivation: A single metric cannot comprehensively assess CAC effectiveness; balancing fidelity, optical quality, and perceptual quality is necessary.
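
To make the two scoring formulas concrete, here is a minimal NumPy sketch of both ODE and \(O.P.\). The \(O.P.\) function follows the formula above verbatim; the ODE weights \(\lambda_{oiq}, \lambda_s, \lambda_c\) and constants \(\alpha, \beta, \gamma, \sigma\) are not reported in this summary, so the uniform placeholder values below are assumptions, not the paper's settings.

```python
import numpy as np

# Placeholder weights, assumed for illustration only; the paper's actual
# lambda/alpha/beta/gamma/sigma values are not reported in this summary.
L_OIQ, L_S, L_C = 1.0, 1.0, 1.0
ALPHA, BETA, GAMMA, SIGMA = 1.0, 1.0, 1.0, 1.0

def oiq(psnr, ssim, oiqe):
    """Overall image-quality term combining PSNR, SSIM, and MTF-based OIQE."""
    return ALPHA * psnr / 50 + BETA * (ssim - 0.5) / 0.5 + GAMMA * oiqe

def uniformity(values, sigma=SIGMA):
    """U = exp(-sigma * CV), with CV = std/mean of a quality measure taken
    across field positions (U_s) or across color channels (U_c)."""
    v = np.asarray(values, dtype=float)
    return np.exp(-sigma * (v.std() / v.mean()))

def ode_score(psnr, ssim, oiqe, per_field_quality, per_channel_quality):
    """ODE = lambda_oiq * OIQ + lambda_s * U_s + lambda_c * U_c."""
    return (L_OIQ * oiq(psnr, ssim, oiqe)
            + L_S * uniformity(per_field_quality)
            + L_C * uniformity(per_channel_quality))

def overall_performance(psnr, ssim, lpips, oiqe, fid, clipiqa):
    """O.P. exactly as defined in the Mechanism bullet above."""
    return (4 * psnr / 50
            + 3 * (ssim - 0.5) / 0.5
            + 4 * (1 - lpips) / 0.4
            + 3 * oiqe
            + 1 * (100 - fid) / 100
            + 1 * clipiqa)
```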

Loss & Training

As a benchmark paper, this work does not propose new training methods. The evaluation covers three training paradigms: regression-based training (improving image fidelity), GAN-based training (improving perceptual quality), and diffusion-based training (improving perceptual quality with limited gains in optical quality).
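
For reference, here is a minimal PyTorch-style sketch of the canonical objective in each paradigm. These are generic textbook forms given for orientation, not the specific loss configurations of any benchmarked method:

```python
import torch
import torch.nn.functional as F

def regression_loss(pred, target, eps=1e-3):
    # Charbonnier (smooth L1) loss: the typical fidelity-oriented objective.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def gan_generator_loss(pred, target, disc, adv_weight=0.01):
    # Pixel loss plus a non-saturating adversarial term; `disc` is any
    # discriminator returning real/fake logits for an image batch.
    pix = F.l1_loss(pred, target)
    adv = F.softplus(-disc(pred)).mean()
    return pix + adv_weight * adv

def diffusion_loss(denoiser, x0, cond, alphas_cumprod, t):
    # DDPM-style noise-prediction MSE for a conditional diffusion model:
    # form x_t via the forward process, then regress the injected noise.
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return F.mse_loss(denoiser(x_t, t, cond), noise)
```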

Key Experimental Results

Main Results

| Method | Type | PSNR↑ | SSIM↑ | OIQE↑ | O.P.↑ |
| --- | --- | --- | --- | --- | --- |
| PART (Non-blind CAC) | Transformer + Regression | 28.10 | 0.866 | 0.608 | 1.494 |
| FOV-KPN (Blind CAC) | CNN + Regression | 26.34 | 0.824 | 0.631 | 1.502 |
| MPRNet (Blind IR) | CNN + Regression | 27.64 | 0.860 | 0.651 | 1.519 |
| FeMaSR (Blind IR) | Transformer + GAN | 23.65 | 0.749 | 0.501 | 1.363 |
| DiffBIR (Blind IR) | CNN + Diffusion | 22.50 | 0.706 | 0.455 | 1.394 |

Ablation Study

| Configuration | Key Metric | Description |
| --- | --- | --- |
| ODE vs. RMS radius | \(R^2 = 0.84\) vs. \(0.30\) | ODE demonstrates a far stronger linear correlation with CAC performance than RMS radius |
| With FoV prior vs. without | Significant improvement | Field-of-view information is critical for handling spatially varying aberrations |
| With PSF prior vs. without | Significant improvement | PSF cues aid in understanding aberration patterns |
| CNN vs. Transformer | CNN offers better efficiency | Convolutions efficiently capture local features, well matched to the nature of aberration degradation |
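
The \(R^2\) values in the first row come from a linear fit of post-correction performance against each difficulty predictor. A minimal sketch of that computation follows; the lens-wise inputs (e.g., `ode_scores`, `rms_radii`, `cac_psnr`) are hypothetical names for illustration:

```python
import numpy as np

def r_squared(predictor, performance):
    # Coefficient of determination of the OLS fit: performance ~ predictor.
    x = np.asarray(predictor, dtype=float)
    y = np.asarray(performance, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = ((y - (slope * x + intercept)) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

# e.g., compare r_squared(ode_scores, cac_psnr)
# against r_squared(rms_radii, cac_psnr) over the benchmark lenses.
```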

Key Findings

  • Optical priors (FoV and PSF) play a critical role in handling spatially varying aberrations; both FoV information and PSF cues yield significant performance improvements (see the FoV-prior sketch after this list).
  • Clean image priors (e.g., FeMaSR's codebook and DiffBIR's diffusion prior) are highly beneficial for CAC.
  • CNN architectures offer a better trade-off between CAC performance and inference time, as convolutions efficiently capture local features in a manner consistent with the nature of aberration degradation.
  • Regression training improves fidelity, while GAN/diffusion training improves perceptual quality; how to achieve comprehensive gains across all dimensions remains an open question.
  • The benchmark comprises 120 sampled lenses divided into 5 difficulty levels according to ODE scores.
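
One common way to expose an FoV prior to a network, referenced in the first finding above, is to concatenate a normalized field-radius map as an extra input channel so that convolutions can condition on field position. The sketch below illustrates this generic pattern; it is an assumption for illustration, not the exact mechanism of FOV-KPN or any other benchmarked method.

```python
import math
import torch

def add_fov_channel(img):
    # img: (B, C, H, W) aberrated input. Appends a field-radius map that is
    # 0 at the image center and 1 at the corners, so spatially varying
    # aberrations can be modeled as a function of field position.
    b, _, h, w = img.shape
    ys = torch.linspace(-1, 1, h, device=img.device)
    xs = torch.linspace(-1, 1, w, device=img.device)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    radius = torch.sqrt(xx ** 2 + yy ** 2) / math.sqrt(2.0)
    fov = radius.to(img.dtype).expand(b, 1, h, w)  # broadcast over batch
    return torch.cat([img, fov], dim=1)            # (B, C+1, H, W)
```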

Highlights & Insights

  • The ODE framework design is particularly elegant: optical degradation is decomposed into three orthogonal dimensions—overall quality, spatial uniformity, and chromatic aberration—yielding a more comprehensive and accurate predictor of CAC difficulty than traditional single metrics.
  • The paradigm of automatic optical design combined with benchmark construction is broadly transferable: when real-world data is scarce, physics-based simulation can be used to generate large-scale benchmarks.
  • The finding that IR methods can be directly transferred to CAC, in some cases outperforming dedicated CAC methods, demonstrates that knowledge from general image restoration can be effectively leveraged.

Limitations & Future Work

  • The benchmark covers only consumer photographic lenses and does not include specialized optical systems such as microscopes or telescopes.
  • A domain gap remains between simulated aberrated images and real-world captures; more accurate simulation or additional real-world data validation is needed in future work.
  • Combining regression and GAN/diffusion training to achieve simultaneous improvements in fidelity, perceptual quality, and optical quality is an important open problem.
  • The weight assignments in the Overall Performance metric \(O.P.\) require further experimental validation.

Comparison with Prior Work

  • Simulation accuracy is verified against Zemax ray-tracing results, with an average error of only 1 μm, confirming reliable simulation fidelity.
  • vs. traditional CAC methods (e.g., FOV-KPN): traditional methods are trained for a specific lens and require retraining for new ones; this work demonstrates the feasibility and necessity of universal training.
  • vs. general IR methods (e.g., NAFNet/Restormer): under unified training, IR methods can match or even surpass dedicated CAC methods, yet they lack explicit utilization of optical priors.

Rating

  • Novelty: ⭐⭐⭐⭐ — First large-scale universal CAC benchmark; the ODE framework is well-motivated and highly correlated with CAC performance.
  • Experimental Thoroughness: ⭐⭐⭐⭐⭐ — Comprehensive evaluation of 24 methods with multi-dimensional analysis covering 120 sampled lenses.
  • Writing Quality: ⭐⭐⭐⭐ — Clear structure, in-depth analysis, and rich figures and tables.
  • Value: ⭐⭐⭐⭐ — Establishes an important foundation for universal CAC research; dataset and code will be publicly released.