i. Typical MAP formulation
\[
\min_{x} \; \ell\big(y, \mathcal{A}(x)\big) + \lambda\, \Omega(x)
\]
Here \(y\) is the measurement, \(\mathcal{A}\) is the forward operator, \(\ell(y,\mathcal{A}(x))\) enforces measurement fidelity, and \(\Omega(x)\) is a handcrafted or learned prior regularizer with weight \(\lambda\).
Project overview
This project organizes our group's plug-in inverse problem solvers under one shared language. The central idea is to separate the measurement model from the pretrained generative prior, then solve the inverse problem by optimizing through the prior rather than interleaving ad hoc correction steps with sampling.
ii. Plug-in formulation
\[
\min_{z} \; \ell\big(y, \mathcal{A}(G_\theta(z))\big)
\]
The pretrained model \(G_\theta\) is kept fixed and plugged into the inverse problem as the prior. Optimization happens over the latent variable \(z\), while the measurement loss remains explicit.
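As a concrete illustration, here is a minimal sketch of that outer loop in PyTorch. The generator `G`, forward operator `A`, and `latent_dim` are hypothetical placeholders standing in for whatever pretrained prior and measurement model are plugged in; this is a sketch of the pattern, not our released implementation.

```python
import torch

def plug_in_solve(y, G, A, latent_dim, num_steps=500, lr=1e-2, lam=1e-3):
    """Minimal plug-in reconstruction sketch (all names are illustrative).

    y          : measurement tensor
    G          : frozen pretrained generator (torch.nn.Module), z -> image x
    A          : differentiable forward operator, image x -> measurement
    latent_dim : dimensionality of the latent variable z
    """
    # Keep the prior fixed; only the latent seed is optimized.
    for p in G.parameters():
        p.requires_grad_(False)

    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(num_steps):
        opt.zero_grad()
        x = G(z)                               # reconstruction through the prior
        loss = torch.sum((y - A(x)) ** 2)      # explicit measurement fidelity
        loss = loss + lam * torch.sum(z ** 2)  # keep z Gaussian-like (optional)
        loss.backward()                        # gradients flow through G and A
        opt.step()

    return G(z).detach()
```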
Sampling-with-correction methods:
- Alternate between generative sampling steps and measurement-correction steps; the prior and the measurement model interact only through a schedule of local corrections.
- Measurement corrections can push samples away from the learned signal manifold, so the reconstruction may satisfy some measurements while becoming less realistic or less physically plausible.
- Reverse generative steps may undo progress made by correction steps, causing oscillation between plausible samples and measurement-consistent samples.
- Correction schedules can be sensitive to unknown or mismatched noise levels, especially when the measurement process differs from the assumptions used to design the sampler.

Plug-in methods:
- Treat the pretrained generative process as a differentiable prior function and optimize a measurement-aware objective through that function.
- Parameterize the solution through the pretrained model, keeping the reconstruction tied to the learned prior throughout optimization.
- Place the measurement loss directly in the outer objective, so every update can be evaluated against the forward model.
- Incorporate noise handling, regularization, and stopping criteria in the optimization objective instead of relying on fixed sampling-time heuristics.
FMPlug strategy
The motivation behind FMPlug is that foundation flow-matching (FM) models are powerful priors, but their broad, general-purpose generation capability undermines their ability to reconstruct one specific image. FMPlug resolves this tension by injecting the measurement at a learnable time on the flow path and constraining the latent seed to remain Gaussian-like.
For image restoration, \(y\) is close enough to the unknown object to serve as an instance guide. The learnable time \(t\) decides how much of that guide enters the trajectory, while the shell constraint preserves the Gaussian concentration structure expected by the foundation FM model.
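A minimal sketch of this idea follows, assuming a rectified-flow-style straight path \(x_t = (1-t)\,z + t\,x\) with \(t=0\) pure noise and \(t=1\) data, and a frozen pretrained velocity field `v_theta`. All names, the plain Euler integrator, and the path convention are illustrative assumptions, not the released implementation:

```python
import torch

def fmplug_restore(y, v_theta, A, num_steps=300, lr=1e-2, ode_steps=20):
    """Sketch of FMPlug-style restoration (illustrative, not the official code).

    y       : degraded measurement, assumed to live in image space so it can
              double as the instance guide on the flow path
    v_theta : frozen pretrained flow-matching velocity field v(x, t)
    A       : differentiable forward (degradation) operator
    """
    d = y.numel()
    z = torch.randn_like(y, requires_grad=True)    # latent seed
    t_logit = torch.zeros((), requires_grad=True)  # learnable injection time
    opt = torch.optim.Adam([z, t_logit], lr=lr)

    for _ in range(num_steps):
        opt.zero_grad()
        # Shell constraint: keep the seed near the Gaussian sphere ||z|| = sqrt(d).
        z_shell = z * (d ** 0.5 / z.norm())
        t0 = torch.sigmoid(t_logit)                # injection time in (0, 1)
        # Inject the measurement as the instance guide at time t0 on the
        # (assumed) straight path from noise (t = 0) to data (t = 1).
        x = (1.0 - t0) * z_shell + t0 * y
        # Integrate the frozen flow from t0 to 1 with plain Euler steps.
        dt = (1.0 - t0) / ode_steps
        t = t0
        for _ in range(ode_steps):
            x = x + dt * v_theta(x, t)
            t = t + dt
        loss = torch.sum((A(x) - y) ** 2)          # measurement fidelity
        loss.backward()
        opt.step()

    return x.detach()
```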
Image restoration visualizations
Each method shows its available image-restoration tasks in one row. Drag each slider to compare the degraded measurement with the reconstruction.
Scientific inverse problems often have sparse paired data and a strong domain shift between measurements and objects: the measurement may be Fourier, scattered-field, or nonlinear sensor data, while the object is the scientific image we want to recover. In this regime, the measurement itself may not be a good warm-start guide. FMPlug uses a few object-domain instances instead, turning scarce scientific examples into an instance prior for the foundation FM model.
The simplex weights \(w\) select and combine the most relevant few-shot instances \(\{x_k\}_{k=1}^{K}\). This replaces direct measurement warm-starting with object-domain guidance, which is better aligned with scientific reconstruction under measurement-object domain shift.
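As a sketch, the simplex constraint \(w \in \Delta^{K-1}\) can be realized with a softmax over learnable logits, so the guide is \(\bar{x} = \sum_{k=1}^{K} w_k x_k\). The names below are illustrative placeholders:

```python
import torch

def instance_guide(instances, logits):
    """Combine K few-shot object-domain instances with simplex weights.

    instances : tensor of shape (K, *image_shape), the few-shot examples x_k
    logits    : learnable tensor of shape (K,)
    """
    w = torch.softmax(logits, dim=0)   # w_k >= 0 and sum_k w_k = 1 (simplex)
    # Weighted combination sum_k w_k * x_k in the object domain.
    return torch.einsum('k,k...->...', w, instances)

# Usage sketch: optimize `logits` jointly with the latent seed and the
# injection time, and inject this guide on the flow path in place of y.
K = 5
instances = torch.randn(K, 3, 64, 64)  # placeholder few-shot instance set
logits = torch.zeros(K, requires_grad=True)
guide = instance_guide(instances, logits)
```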