🧮 Scientific Computing
🔬 ICLR 2026 · 10 paper notes
- Astral: Training Physics-Informed Neural Networks with Error Majorants

This paper proposes the Astral loss function — based on a functional a posteriori error majorant — as a replacement for the conventional residual loss in training physics-informed neural networks (PINNs). The approach enables reliable error estimation throughout training and achieves superior or comparable accuracy across multiple PDE types, including the diffusion and Maxwell equations.
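
The note does not reproduce the Astral majorant itself; as a rough, self-contained illustration of the idea, here is a sketch of the classical Repin-type functional majorant for the 1D Poisson problem \(-u'' = f\) on \((0,1)\), the kind of a posteriori bound such losses build on. The networks, source term, and training loop are all illustrative, not the paper's code.

```python
# Sketch: training a 1D PINN on a Repin-style functional error majorant
# instead of the plain residual loss. Problem: -u'' = f on (0, 1), u(0) = u(1) = 0.
# For any approximation v and any auxiliary flux y, the bound
#   ||(u - v)'||^2 <= (1 + b) ||v' - y||^2 + (1 + 1/b) C_F^2 ||f + y'||^2
# holds with Friedrichs constant C_F = 1/pi, so the loss value is itself a
# certified upper bound on the energy error during training.
import math
import torch

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

v_net, y_net = mlp(), mlp()                    # solution and auxiliary flux nets
f = lambda x: (math.pi ** 2) * torch.sin(math.pi * x)  # manufactured source
C_F = 1.0 / math.pi

def grad(out, x):
    return torch.autograd.grad(out.sum(), x, create_graph=True)[0]

opt = torch.optim.Adam(list(v_net.parameters()) + list(y_net.parameters()), lr=1e-3)
for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)
    v = x * (1 - x) * v_net(x)                 # hard-enforce boundary conditions
    y = y_net(x)
    r1 = ((grad(v, x) - y) ** 2).mean()        # flux-mismatch term ||v' - y||^2
    r2 = ((f(x) + grad(y, x)) ** 2).mean()     # equilibrium term ||f + y'||^2
    # optimal weight beta = sqrt(C_F^2 r2 / r1); detached, since the gradient
    # through the minimizing beta vanishes anyway
    beta = torch.sqrt(C_F**2 * r2 / (r1 + 1e-12)).detach()
    majorant = (1 + beta) * r1 + (1 + 1 / beta) * C_F**2 * r2
    opt.zero_grad(); majorant.backward(); opt.step()
```

Unlike the plain residual, the majorant upper-bounds the true energy error for any \(v\) and \(y\), which is what makes error estimation during training reliable.
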
- Deep Learning for Subspace Regression

This paper formalizes the subspace prediction problem in Reduced Order Modeling (ROM) as a regression task on the Grassmann manifold. It proposes dedicated loss functions and a subspace embedding technique — predicting a higher-dimensional subspace containing the target — to reduce mapping complexity. The approach achieves significant improvements across eigenvalue problems, parametric PDEs, and iterative solver acceleration.
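
As context for "regression on the Grassmann manifold", here is a minimal sketch of the geodesic distance between two subspaces via principal angles; this is standard geometry, not the paper's specific loss functions.

```python
# Geodesic distance on the Grassmann manifold Gr(k, n): the natural notion
# of error when the regression target is a k-dimensional subspace of R^n.
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between span(A) and span(B); A, B are (n, k)."""
    Qa, _ = np.linalg.qr(A)                   # orthonormal bases of each span
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))  # principal angles
    return np.linalg.norm(theta)

# Two random 3-dimensional subspaces of R^10
rng = np.random.default_rng(0)
d = grassmann_distance(rng.standard_normal((10, 3)), rng.standard_normal((10, 3)))
print(f"geodesic distance: {d:.3f}")
```

In practice a differentiable surrogate (for instance the Frobenius distance between the two projection matrices) is often trained instead; the paper's dedicated losses are not reproduced here.
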
- DGNet: Discrete Green Networks for Data-Efficient Learning of Spatiotemporal PDEs

Grounded in Green's function theory, DGNet embeds the superposition principle into a physics-neural hybrid architecture, achieving state-of-the-art accuracy with only tens of training trajectories and demonstrating robust zero-shot generalization to unseen source terms.
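
A minimal sketch of the superposition idea (names and architecture are illustrative, not DGNet's): once a Green's function surrogate \(G_\theta(x,\xi)\) is learned, the solution for any source \(f\) follows by quadrature, which is why generalization to unseen source terms comes essentially for free.

```python
# Learn G_theta(x, xi) once, then solve for ANY source f via the
# superposition principle: u(x) = integral of G(x, xi) f(xi) d(xi).
import torch

class GreensNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, 1))

    def forward(self, x, xi):                 # x: (N, 1), xi: (M, 1)
        X = x.repeat_interleave(len(xi), 0)   # all (x, xi) pairs
        Xi = xi.repeat(len(x), 1)
        return self.net(torch.cat([X, Xi], 1)).view(len(x), len(xi))

G = GreensNet()
x = torch.linspace(0, 1, 50).unsqueeze(1)     # evaluation points
xi = torch.linspace(0, 1, 100).unsqueeze(1)   # quadrature nodes
w = 1.0 / len(xi)                             # uniform quadrature weight
f = torch.sin(torch.pi * xi)                  # an arbitrary (unseen) source
u = G(x, xi) @ (w * f)                        # u(x) ~ sum_j w G(x, xi_j) f(xi_j)
```
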
- DRIFT-Net: A Spectral-Coupled Neural Operator for PDEs Learning

DRIFT-Net is a dual-branch neural operator that addresses the autoregressive drift caused by insufficient global spectral coupling in window attention, via controlled low-frequency mixing (spectral branch), local detail fidelity (image branch), and bandwidth fusion through radial gating. It reduces error by 7%–54% on Navier-Stokes benchmarks.
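
A heavily hedged sketch of what controlled low-frequency mixing with band gating could look like, showing only the spectral branch; module names, mode counts, and the simplified per-band gate are my inventions, not DRIFT-Net's actual design.

```python
# Mix only the lowest Fourier modes globally, gate how much of the mixed
# low band is written back, and pass high frequencies through untouched.
# Assumes inputs with H, W >= 2 * modes.
import torch

class SpectralMix(torch.nn.Module):
    def __init__(self, channels, modes=8):
        super().__init__()
        self.modes = modes                    # number of low-frequency modes kept
        self.weight = torch.nn.Parameter(
            0.02 * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat))
        self.gate = torch.nn.Parameter(torch.zeros(modes))  # per-band gates

    def forward(self, u):                     # u: (B, C, H, W)
        U = torch.fft.rfft2(u)
        out = torch.zeros_like(U)
        lo = U[:, :, :self.modes, :self.modes]
        mixed = torch.einsum("bixy,ioxy->boxy", lo, self.weight)
        # gate each low-frequency band (a simplified stand-in for radial gating)
        g = torch.sigmoid(self.gate).view(1, 1, -1, 1)
        out[:, :, :self.modes, :self.modes] = g * mixed + (1 - g) * lo
        out[:, :, self.modes:, :] = U[:, :, self.modes:, :]          # pass-through
        out[:, :, :self.modes, self.modes:] = U[:, :, :self.modes, self.modes:]
        return torch.fft.irfft2(out, s=u.shape[-2:])
```
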
- Empirical Stability Analysis of Kolmogorov-Arnold Networks in Hard-Constrained Recurrent Physics-Informed Discovery

This paper systematically evaluates a vanilla KAN as a drop-in replacement for the MLP in the residual branch of Hard-Constrained Recurrent Physics-Informed Neural Networks (HRPINN), across 3 complementary studies × 100 random seeds. It finds that KAN is competitive on univariate, separable residuals (the Duffing term \(-0.3x^3\)) but systematically fails on multiplicatively coupled residuals (the Van der Pol term \((1-x^2)v\)), with extreme hyperparameter fragility, while a standard MLP exhibits substantially superior stability across nearly all configurations.
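
For concreteness, the two residual structures the study contrasts, written out as code; both terms are quoted directly from the note.

```python
def duffing_residual(x, v):
    return -0.3 * x**3        # separable: depends on a single state variable

def van_der_pol_residual(x, v):
    return (1.0 - x**2) * v   # multiplicatively coupled in x and v
```
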
- HyperKKL: Enabling Non-Autonomous State Estimation through Dynamic Weight Conditioning

This paper proposes HyperKKL, which uses a hypernetwork to encode exogenous input signals and dynamically generate the transformation mapping parameters of a KKL observer, enabling state estimation for non-autonomous nonlinear systems without retraining or online gradient updates. The method is validated on four classical nonlinear systems: Duffing, Van der Pol, Lorenz, and Rössler.
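
A generic sketch of the dynamic-weight-conditioning pattern (all sizes and names invented for illustration): an encoder summarizes the exogenous input signal, and a hypernetwork emits the weights of the observer's transformation map, so no retraining is needed when the input changes.

```python
# Hypernetwork conditioning: a GRU encodes the input signal u(t); a linear
# head emits the per-sample weights of a small z -> x-hat transformation map.
import torch

class HyperMap(torch.nn.Module):
    def __init__(self, z_dim=3, x_dim=2, hidden=32):
        super().__init__()
        self.encoder = torch.nn.GRU(1, hidden, batch_first=True)
        n_params = (z_dim + 1) * hidden + (hidden + 1) * x_dim
        self.hyper = torch.nn.Linear(hidden, n_params)
        self.z_dim, self.x_dim, self.hidden = z_dim, x_dim, hidden

    def forward(self, z, u_signal):           # z: (B, z_dim), u_signal: (B, T, 1)
        _, h = self.encoder(u_signal)
        p = self.hyper(h[-1])                 # generated weights, one set per sample
        h1 = self.z_dim * self.hidden
        W1 = p[:, :h1].view(-1, self.hidden, self.z_dim)
        b1 = p[:, h1:h1 + self.hidden]
        off = h1 + self.hidden
        W2 = p[:, off:off + self.hidden * self.x_dim].view(-1, self.x_dim, self.hidden)
        b2 = p[:, off + self.hidden * self.x_dim:]
        y = torch.tanh(torch.bmm(W1, z.unsqueeze(-1)).squeeze(-1) + b1)
        return torch.bmm(W2, y.unsqueeze(-1)).squeeze(-1) + b2  # state estimate
```
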
- Learning-guided Kansa Collocation for Forward and Inverse PDE Problems

This work extends the meshfree radial basis function (RBF)-based Kansa collocation method from single-variable linear PDEs to coupled multi-variable and nonlinear PDE settings. It incorporates automatic shape-parameter tuning and multiple time-stepping schemes, and provides a systematic comparison against neural PDE solvers such as PINNs and FNO on both forward and inverse problems.
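
For reference, textbook Kansa collocation in the simplest setting the paper extends from (a single-variable linear PDE): \(-u'' = f\) on \((0,1)\) with multiquadric RBFs. The shape parameter \(\varepsilon\) is fixed here, whereas the paper tunes it automatically.

```python
# Kansa collocation: expand u as a sum of RBFs centered at the nodes, impose
# the PDE at interior nodes and the boundary conditions at the endpoints.
import numpy as np

n, eps = 30, 3.0
x = np.linspace(0, 1, n)                      # collocation points = RBF centers
r = x[:, None] - x[None, :]
phi = np.sqrt(1 + (eps * r) ** 2)             # multiquadric kernel
phi_xx = eps**2 / phi - (eps**4 * r**2) / phi**3   # its second derivative

A = -phi_xx                                   # interior rows enforce -u'' = f
A[0], A[-1] = phi[0], phi[-1]                 # boundary rows enforce u = 0
rhs = np.pi**2 * np.sin(np.pi * x)            # f chosen so that u = sin(pi x)
rhs[0] = rhs[-1] = 0.0
coef = np.linalg.solve(A, rhs)

u = phi @ coef                                # reconstruct u at the nodes
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```
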
- One Operator to Rule Them All? On Boundary-Indexed Operator Families in Neural PDE Solvers

This paper argues that neural PDE solvers, when trained under varying boundary conditions, do not learn a single solution operator but rather a family of operators indexed by the boundary conditions. From a learning-theoretic perspective, it formalizes the non-identifiability problem that boundary distribution shift induces under empirical risk minimization (ERM).
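
In notation of my own choosing (the paper's formalism may differ): each boundary condition \(b\) induces its own solution operator, while ERM fits one model to data pooled across boundary conditions.

```latex
% Each boundary condition b induces its own solution operator:
\[
  \mathcal{G}_b : f \mapsto u_b,
  \qquad \mathcal{N}(u_b) = f \ \text{in } \Omega,
  \qquad \mathcal{B}_b(u_b) = 0 \ \text{on } \partial\Omega .
\]
% ERM over data pooled across boundary conditions fits one model to a mixture:
\[
  \min_{\theta}\ \mathbb{E}_{b \sim p(b)}\,\mathbb{E}_{f}\,
  \bigl\| F_{\theta}(f) - \mathcal{G}_{b}(f) \bigr\|^{2} .
\]
```

If \(b\) is not supplied to \(F_\theta\) as an input, distinct operators in the family cannot be told apart from the pooled data, and a shift in \(p(b)\) changes the effective learning target itself; this is, roughly, the non-identifiability at issue.
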
- Policy Myopia as a Mechanism of Gradual Disempowerment in Post-AGI Governance

This paper argues that policy myopia is not an attention-allocation problem but an institutional mechanism that systematically and irreversibly strips humans of the capacity to participate in governance in the post-AGI era — through three coupled positive feedback loops: salience capture, capability cascades, and value lock-in. Standard mitigation measures can delay, but not prevent, this process.
- Supervised Metric Regularization Through Alternating Optimization for Multi-Regime PINNs

This paper proposes a Topology-Aware PINN (TAPINN) that structures the latent space via supervised metric regularization (a triplet loss) and stabilizes training through an alternating optimization schedule. On the multi-regime Duffing oscillator benchmark, TAPINN reduces the physics residual by approximately 49% (0.082 vs. 0.160) and gradient variance by a factor of 2.18 compared to baselines.
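
A sketch of how the two ingredients could fit together (regime batches, Duffing coefficients, and architecture are placeholders, not TAPINN's actual setup): a triplet loss shapes the encoder's latent space on even steps, and the physics residual is minimized on odd steps.

```python
# Alternating optimization: metric step on the encoder's latent space,
# physics step on the full PINN. Input columns: (t, regime parameter).
import torch

encoder = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 16))
head = torch.nn.Linear(16, 1)
triplet = torch.nn.TripletMarginLoss(margin=1.0)
opt_metric = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_physics = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

def physics_residual(pts):
    """Duffing residual x'' + d x' + a x + b x^3 - F cos(w t); placeholder coefficients."""
    t = pts.requires_grad_(True)
    x = head(encoder(t))
    dx = torch.autograd.grad(x.sum(), t, create_graph=True)[0][:, :1]
    d2x = torch.autograd.grad(dx.sum(), t, create_graph=True)[0][:, :1]
    return d2x + 0.3 * dx + x + 5.0 * x**3 - 8.0 * torch.cos(3.0 * t[:, :1])

for step in range(1000):
    if step % 2 == 0:   # metric step: pull same-regime latents together
        # placeholder batches; in practice anchor/positive share a regime
        # label and the negative comes from a different regime
        anc, pos, neg = (torch.rand(64, 2) for _ in range(3))
        loss = triplet(encoder(anc), encoder(pos), encoder(neg))
        opt_metric.zero_grad(); loss.backward(); opt_metric.step()
    else:               # physics step: minimize the PDE residual
        loss = (physics_residual(torch.rand(256, 2)) ** 2).mean()
        opt_physics.zero_grad(); loss.backward(); opt_physics.step()
```
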