Project overview

Unifying plug-in methods for inverse problem solving

This project organizes our group's plug-in inverse problem solvers under one shared language. The central idea is to separate the measurement model from the pretrained generative prior, then solve the inverse problem by optimizing through the prior rather than interleaving ad hoc correction steps with sampling.

Two master equations

i. Typical MAP formulation

\[ \widehat{x} = \arg\min_x \ell\!\left(y, \mathcal{A}(x)\right) + \lambda \Omega(x) \]

Here \(y\) is the measurement, \(\mathcal{A}\) is the forward operator, \(\ell(y,\mathcal{A}(x))\) enforces measurement fidelity, and \(\Omega(x)\) is a handcrafted or learned prior regularizer.
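As one concrete, illustrative instance (not specific to any method in this project): with Gaussian measurement noise and a total-variation regularizer, the MAP formulation reads

\[ \widehat{x} = \arg\min_x \tfrac{1}{2}\,\big\| y - \mathcal{A}(x) \big\|_2^2 + \lambda\,\mathrm{TV}(x). \]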

ii. Plug-in formulation

\[ \widehat{z} = \arg\min_z \ell\!\left(y, \mathcal{A}(G_\theta(z))\right) + \lambda R(z), \qquad \widehat{x}=G_\theta(\widehat{z}) \]

The pretrained model \(G_\theta\) is kept fixed and plugged into the inverse problem as the prior. Optimization happens over the latent variable \(z\), while the measurement loss remains explicit.
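A minimal optimization sketch of this formulation, assuming a frozen generator `G_theta`, a differentiable forward operator `A`, and a measurement `y` are available as PyTorch callables and tensors (all hypothetical placeholder names, not part of any released code):

```python
import torch

def plug_in_solve(y, A, G_theta, z_dim, lam=1e-3, steps=500, lr=1e-2):
    """Optimize the latent z of a frozen generator so that A(G_theta(z)) matches y."""
    z = torch.randn(z_dim, requires_grad=True)       # latent variable to optimize
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = G_theta(z)                               # decode through the frozen prior
        loss = torch.sum((y - A(x)) ** 2)            # measurement fidelity ell(y, A(x))
        loss = loss + lam * torch.sum(z ** 2)        # simple latent regularizer R(z)
        loss.backward()                              # gradients flow through G_theta and A
        opt.step()
    return G_theta(z).detach()                       # x_hat = G_theta(z_hat)
```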

Interleaving vs. plug-in methods

Figure: interleaving methods contrasted with plug-in methods. Interleaving methods repeatedly mix generative steps with measurement corrections; plug-in methods instead optimize a measurement-aware objective through the pretrained prior.

Interleaving methods

Alternate between generative sampling steps and measurement-correction steps. The prior and measurement model interact through a schedule of local corrections.

Plug-in methods

Treat the pretrained generative process as a differentiable prior function and optimize a measurement-aware objective through that function.
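For contrast with the plug-in sketch above, here is the generic shape of an interleaving loop, with `sample_step` and `correct` as hypothetical stand-ins for one reverse generative step and one measurement-correction step:

```python
def interleaving_reconstruct(y, A, sample_step, correct, x_init, num_steps):
    """Alternate reverse generative steps with local measurement corrections."""
    x = x_init
    for t in reversed(range(num_steps)):
        x = sample_step(x, t)      # one step of the pretrained sampler (prior)
        x = correct(x, y, A, t)    # local push toward measurement consistency
    return x
```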

Why interleaving can be fragile

Manifold feasibility

Issue in interleaving methods: Measurement corrections can push samples away from the learned signal manifold, so the reconstruction may satisfy some measurements while becoming less realistic or less physically plausible.

How plug-in methods address it: Plug-in methods parameterize the solution through the pretrained model, keeping reconstruction tied to the learned prior throughout optimization.

Measurement feasibility

Issue in interleaving methods: Reverse generative steps may undo progress made by correction steps, causing oscillation between plausible samples and measurement-consistent samples.

How plug-in methods address it: The measurement loss is placed directly in the outer objective, so every update can be evaluated against the forward model.

Noise instability

Issue in interleaving methods: Correction schedules can be sensitive to unknown or mismatched noise levels, especially when the measurement process differs from the assumptions used to design the sampler.

How plug-in methods address it: Plug-in methods can incorporate noise handling, regularization, and stopping criteria in the optimization objective instead of relying on fixed sampling-time heuristics.

FMPlug strategy

From prior mismatch to time-dependent warm-start

The motivation for FMPlug is that foundation flow-matching models are powerful priors, but their general-purpose generation capability undermines their ability to reconstruct a specific image. FMPlug resolves this tension by injecting the measurement at a learnable time on the flow path and constraining the latent seed to remain Gaussian-like.

Figure: FMPlug warm-start strategy.

Simple-distortion objective

\[ \min_{z,\,t\in[0,1]} \ell\!\left( y,\, \mathcal{A}\circ G_\theta(\alpha_t y+\beta_t z, t) \right) \quad \mathrm{s.t.}\quad z\in S^{d-1}_{\epsilon}(0,\sqrt{d}) \]

For image restoration, \(y\) is close enough to the unknown object to serve as an instance guide. The learnable time \(t\) decides how much of that guide enters the trajectory, while the shell constraint preserves the Gaussian concentration structure expected by the foundation FM model.
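A hedged sketch of this objective, assuming `G_theta(x_t, t)` maps a point at time `t` on the flow path to a clean image and `alpha`, `beta` return the interpolation coefficients \(\alpha_t,\beta_t\) (hypothetical callables; the exact interface of a given flow-matching model will differ). Projecting `z` back to radius exactly \(\sqrt{d}\) is a simplifying stand-in for the \(\epsilon\)-shell constraint.

```python
import torch

def fmplug_warm_start(y, A, G_theta, alpha, beta, d, steps=300, lr=1e-2):
    """Optimize the latent seed z and injection time t; keep z on the Gaussian shell."""
    z = torch.randn(d, requires_grad=True)
    t_raw = torch.zeros((), requires_grad=True)          # unconstrained; sigmoid maps it into (0, 1)
    opt = torch.optim.Adam([z, t_raw], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        t = torch.sigmoid(t_raw)
        x_t = alpha(t) * y + beta(t) * z                 # inject the measurement on the flow path
        loss = torch.sum((y - A(G_theta(x_t, t))) ** 2)  # ell(y, A o G_theta(x_t, t))
        loss.backward()
        opt.step()
        with torch.no_grad():                            # shell constraint: keep ||z|| near sqrt(d)
            z.mul_(d ** 0.5 / z.norm())
    t = torch.sigmoid(t_raw)
    return G_theta(alpha(t) * y + beta(t) * z, t).detach()
```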

Image restoration visualizations

Measurement and reconstruction comparisons

Each method shows its available image-restoration tasks in one row. Drag each slider to compare the degraded measurement with the reconstruction.

DMPlug

Inpainting


Nonlinear deblur


Super-resolution


Turbulence


FMPlug

Gaussian deblur


Inpainting


Motion deblur


Super-resolution


FMPlug expands plug-in methods to scientific inverse problems

Scientific inverse problems often have sparse paired data and a strong domain shift between measurements and objects: the measurement may be Fourier, scattered-field, or nonlinear sensor data, while the object is the scientific image we want to recover. In this regime, the measurement itself may not be a good warm-start guide. FMPlug uses a few object-domain instances instead, turning scarce scientific examples into an instance prior for the foundation FM model.

Figure: few-shot scientific inverse problem intuition. Few-shot scientific domains often concentrate around a small visual family, making related instances useful guides for reconstruction.

Figure: few-shot instance prior solution. FMPlug forms an instance-guided warm start from sparse samples and optimizes the instance weights jointly with the latent seed and time.

Few-shot scientific objective

\[ \min_{z,\,t,\,w} \ell\!\left( y,\, \mathcal{A}\circ G_\theta \left(\alpha_t\sum_{k=1}^{K}w_k x_k+\beta_t z,t\right) \right) \quad \mathrm{s.t.}\quad z\in S^{d-1}_{\epsilon}(0,\sqrt{d}),\, t\in[0,1],\,w\in\Delta_K \]

The simplex weights \(w\) select and combine the most relevant few-shot instances \(\{x_k\}_{k=1}^{K}\). This replaces direct measurement warm-starting with object-domain guidance, which is better aligned with scientific reconstruction under measurement-object domain shift.
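A similarly hedged sketch of the few-shot objective, reusing the hypothetical `G_theta`, `alpha`, and `beta` interfaces from above and parameterizing the simplex weights with a softmax:

```python
import torch

def fmplug_few_shot(y, A, G_theta, alpha, beta, x_refs, d, steps=300, lr=1e-2):
    """Jointly optimize latent seed z, time t, and simplex weights w over K instances."""
    refs = torch.stack(x_refs)                            # (K, d) object-domain instances x_1..x_K
    z = torch.randn(d, requires_grad=True)
    t_raw = torch.zeros((), requires_grad=True)
    w_raw = torch.zeros(len(x_refs), requires_grad=True)  # softmax keeps w on the simplex Delta_K
    opt = torch.optim.Adam([z, t_raw, w_raw], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        t = torch.sigmoid(t_raw)
        w = torch.softmax(w_raw, dim=0)
        guide = (w[:, None] * refs).sum(dim=0)            # object-domain warm start sum_k w_k x_k
        loss = torch.sum((y - A(G_theta(alpha(t) * guide + beta(t) * z, t))) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            z.mul_(d ** 0.5 / z.norm())                   # Gaussian-shell constraint on z
    t = torch.sigmoid(t_raw)
    w = torch.softmax(w_raw, dim=0)
    return G_theta(alpha(t) * (w[:, None] * refs).sum(dim=0) + beta(t) * z, t).detach()
```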

Figure: scientific inverse problem comparisons. FMPlug recovers the main structures across linear inverse scattering, compressed-sensing MRI, and black hole imaging.

References

Related papers