Discovering invariants via machine learning
Invariants and conservation laws convey critical information about the underlying dynamics of a system, yet it is generally infeasible to find them from large-scale data without prior knowledge or human insight. We propose ConservNet to achieve this goal: a neural network that spontaneously discovers a conserved quantity from grouped data in which the members of each group share an invariant, analogous to a typical experimental setting where trajectories from different trials are observed. Trained with an intuitive loss function called the noise-variance loss, ConservNet learns the hidden invariant in each group of multidimensional observables in a data-driven, end-to-end manner.
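To make the grouped-data idea concrete, here is a minimal PyTorch sketch of an objective in the spirit of the noise-variance loss: the predicted quantity is pushed to be constant within each group, while a noise-based term keeps the output sensitive to perturbed inputs so the network cannot collapse to a trivial constant. The network architecture, the `noise_scale` and `spread_target` parameters, and the exact form of the noise term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConservNet(nn.Module):
    """Small MLP mapping a state vector to a scalar candidate invariant."""
    def __init__(self, state_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def noise_variance_loss(model, batch, noise_scale=0.1, spread_target=1.0):
    """Sketch of a noise-variance-style objective (details are assumptions).

    batch: tensor of shape (groups, samples_per_group, state_dim); every
    sample in a group comes from the same trial, so the true invariant is
    constant within a group.
    """
    g, n, d = batch.shape
    pred = model(batch.reshape(-1, d)).reshape(g, n)

    # Variance term: the predicted quantity should be constant inside each group.
    var_loss = pred.var(dim=1, unbiased=False).mean()

    # Noise term: require a nonzero target spread under input perturbation,
    # which rules out the degenerate solution f(x) = const.
    noisy = batch + noise_scale * torch.randn_like(batch)
    pred_noisy = model(noisy.reshape(-1, d)).reshape(g, n)
    spread = (pred_noisy - pred).abs().mean()
    noise_loss = (spread_target - spread).abs()

    return var_loss + noise_loss
```

In training, mini-batches of grouped trajectory points would be fed to this loss and minimized with a standard optimizer; the learned scalar is then a candidate invariant, determined up to a monotonic transformation of the true conserved quantity.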
Our model successfully discovers the underlying invariants of simulated systems with known conserved quantities, as well as of a real-world double-pendulum trajectory. Because the model is robust to various noise levels and data conditions compared with the baseline, our approach is directly applicable to experimental data for discovering hidden conservation laws and, more generally, relationships between variables.