diff --git a/docs/source/_tutorials b/docs/source/_tutorials new file mode 120000 index 00000000..8b8f5e24 --- /dev/null +++ b/docs/source/_tutorials @@ -0,0 +1 @@ +../../tutorials \ No newline at end of file diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_enrich.png b/docs/source/_tutorials/pictures/EZyRB_gui_enrich.png deleted file mode 100644 index fec609ea..00000000 Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_enrich.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_new_value.png b/docs/source/_tutorials/pictures/EZyRB_gui_new_value.png deleted file mode 100644 index 9125c368..00000000 Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_new_value.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_online.png b/docs/source/_tutorials/pictures/EZyRB_gui_online.png deleted file mode 100644 index 5064c35a..00000000 Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_online.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_set_params.png b/docs/source/_tutorials/pictures/EZyRB_gui_set_params.png deleted file mode 100644 index 28b5866c..00000000 Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_set_params.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_start.png b/docs/source/_tutorials/pictures/EZyRB_gui_start.png deleted file mode 100644 index 924d5de5..00000000 Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_start.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/field_04.png b/docs/source/_tutorials/pictures/field_04.png deleted file mode 100644 index 2febc009..00000000 Binary files a/docs/source/_tutorials/pictures/field_04.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/high_fidelity_solution.png b/docs/source/_tutorials/pictures/high_fidelity_solution.png deleted file mode 100644 index 79ff5839..00000000 Binary files a/docs/source/_tutorials/pictures/high_fidelity_solution.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/mapped_solution.png b/docs/source/_tutorials/pictures/mapped_solution.png deleted file mode 100644 index f501a783..00000000 Binary files a/docs/source/_tutorials/pictures/mapped_solution.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/online_evaluation.png b/docs/source/_tutorials/pictures/online_evaluation.png deleted file mode 100644 index ca968fb8..00000000 Binary files a/docs/source/_tutorials/pictures/online_evaluation.png and /dev/null differ diff --git a/docs/source/_tutorials/pictures/pressure_in_corners.png b/docs/source/_tutorials/pictures/pressure_in_corners.png deleted file mode 100644 index 7a48c821..00000000 Binary files a/docs/source/_tutorials/pictures/pressure_in_corners.png and /dev/null differ diff --git a/docs/source/_tutorials/tutorial-1.html b/docs/source/_tutorials/tutorial-1.html deleted file mode 100644 index 7f42afc3..00000000 --- a/docs/source/_tutorials/tutorial-1.html +++ /dev/null @@ -1,13551 +0,0 @@ - - - - -tutorial-1 - - - - - - - - - - - - - - - - - - - - - - -
EZyRB Tutorial 1
Build and query a simple reduced order model
In this tutorial we show the typical workflow for the construction of a Reduced Order Model based only on the outputs of the high-fidelity model. In detail, we consider here a POD-RBF framework (Proper Orthogonal Decomposition for dimensionality reduction and Radial Basis Functions for manifold approximation), but the tutorial can be easily extended to other methods thanks to the modular nature of EZyRB.
We consider a parametric steady heat conduction problem in a two-dimensional domain $\Omega$. While in this tutorial we focus on the data-driven approach, the same problem can be tackled in an intrusive manner (with the Reduced Basis method) using RBniCS, as demonstrated in this RBniCS tutorial.
The problem is exhaustively discussed in the book Certified reduced basis methods for parametrized partial differential equations, J.S. Hesthaven, G. Rozza, B. Stamm, Springer, 2016. An additional description is also available at https://rbnics.gitlab.io/RBniCS-jupyter/tutorial_thermal_block.html.
Since good documentation is already available for this problem, and since we only take data-driven methodologies into consideration here, we just summarize the model to allow a better understanding.
The domain is depicted below:
[figure: the two-dimensional domain $\Omega$]
where:
  • the first parameter $\mu_0$ controls the conductivity in the circular subdomain $\Omega_0$;
  • the second parameter $\mu_1$ controls the flux over $\Gamma_\text{base}$.
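For completeness, a hedged sketch of the governing equation, following the RBniCS thermal block tutorial referenced above (that page is the authoritative formulation): the temperature field $u$ satisfies $-\nabla \cdot (\kappa_{\mu_0} \nabla u) = 0$ in $\Omega$, with conductivity $\kappa_{\mu_0} = \mu_0$ in $\Omega_0$ and $\kappa_{\mu_0} = 1$ elsewhere, and the Neumann condition $\kappa_{\mu_0} \nabla u \cdot n = \mu_1$ on $\Gamma_\text{base}$.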
Initial setting
First of all, we import the required packages: we need the standard NumPy and Matplotlib, and some classes from EZyRB. In the EZyRB framework, we need three main ingredients to construct a reduced order model:
  • an initial database where the snapshots are stored;
  • a reduction method to reduce the dimensionality of the system; in this tutorial we use the proper orthogonal decomposition (POD) method;
  • an approximation method to extrapolate the parametric solution to new parameters; in this tutorial we use radial basis function (RBF) interpolation.
In [1]:
import numpy as np
import matplotlib.tri as mtri
import matplotlib.pyplot as plt

from ezyrb import POD, RBF, Database
from ezyrb import ReducedOrderModel as ROM
%matplotlib inline
Offline phase
In the offline phase, we need some samples of the parametric high-fidelity model. In this case, we extract 8 snapshots from the numerical model implemented in FEniCS, and we import them together with the related parameters.
In [2]:
snapshots = np.load('data/tut1_snapshots.npy')
param = np.load('data/tut1_mu.npy')
print(snapshots.shape, param.shape)
(8, 304) (8, 2)
Moreover, to visualize the solution (both the high-fidelity and the reduced one), we also import the mesh information needed to create the triangulation. We underline that this additional step serves only for plotting purposes and is not mandatory for the reduced space generation.
In [3]:
tri = np.load('data/tut1_triangles.npy')
coord = np.load('data/tut1_coord.npy')
triang = mtri.Triangulation(coord[0], coord[1], tri)
For the sake of clarity, the snapshots are plotted below.
In [4]:
fig, ax = plt.subplots(nrows=2, ncols=4, figsize=(16, 6), sharey=True, sharex=True)
ax = ax.flatten()
for i in range(8):
    ax[i].triplot(triang, 'b-', lw=0.1)
    cm = ax[i].tripcolor(triang, snapshots[i])
    fig.colorbar(cm, ax=ax[i])
    ax[i].set_title(r'$(\mu_0={:5.2f}, \mu_1={:5.2f})$'.format(*param[i]))
[figure: the eight high-fidelity snapshots]
As the first ingredient, we create a Database object from the parameters and the snapshots.
In [5]:
db = Database(param, snapshots)
Then we need a reduction object. In this case we use the proper orthogonal decomposition, so we create a POD object. We use here all the default parameters, but for the complete list of available arguments we refer to the original documentation of the POD class.
In [6]:
pod = POD('svd')
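If we wanted to control the dimension of the reduced space instead of keeping the defaults, a minimal sketch (using the rank argument that also appears in Tutorial 2; the POD documentation lists the other options) could be:

pod_truncated = POD('svd', rank=3)  # keep only the first 3 POD modes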
Then we instantiate the RBF class for interpolating the solution manifold. Also in this case, the RBF documentation is the perfect starting point to explore this class.
In [7]:
rbf = RBF()
A few lines of code and our reduced model is created! To complete everything, we create the ReducedOrderModel (aliased to ROM in this tutorial) object by passing the objects created above. For clarity, we point out that we need to pass the instances and not the classes: simply changing this line (with different objects) allows testing different frameworks in a very modular way. The fit() function computes the reduced model, meaning that the original snapshots in the database are projected onto the POD space and the RBF interpolator is created.
In [8]:
rom = ROM(db, pod, rbf)
rom.fit();
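To give a taste of this modularity, here is a small sketch (a hypothetical variation, not part of the original notebook) that swaps the RBF interpolator for the Gaussian process regression GPR, which also ships with EZyRB and is used in Tutorial 2:

from ezyrb import GPR

rom_gpr = ROM(db, POD('svd'), GPR())  # same database and reduction, different approximation
rom_gpr.fit();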
Online phase
In the online phase we can query our model to predict the solution for a new parameter $\mu_\text{new}$ that is not in the training set. We just need to pass the new parameters as input to the predict() function.
In [9]:
new_mu = [8, 1]
pred_sol = rom.predict(new_mu)
We can now plot the predicted solution for this fixed parameter...
In [10]:
plt.figure(figsize=(7, 5))
plt.triplot(triang, 'b-', lw=0.1)
plt.tripcolor(triang, pred_sol)
plt.colorbar();
[figure: predicted solution for $\mu = (8, 1)$]
... or interactively tweak the input parameters to visualize the corresponding (approximated) output. For a fancy result, we need a bit of IPython black magic (https://ipywidgets.readthedocs.io/en/latest/).
In [11]:
from ipywidgets import interact

def plot_solution(mu0, mu1):
    new_mu = [mu0, mu1]
    pred_sol = rom.predict(new_mu)
    plt.figure(figsize=(8, 7))
    plt.triplot(triang, 'b-', lw=0.1)
    plt.tripcolor(triang, pred_sol)
    plt.colorbar()

interact(plot_solution, mu0=8, mu1=1);
Error Approximation & Improvement
So far, we have used a database composed of 8 snapshots, and we would like to get an idea of the approximation accuracy we can reach with these high-fidelity solutions. Using the leave-one-out strategy, an error is computed for each parametric point in our database, and these values are returned as an array.
In [12]:
for pt, error in zip(rom.database.parameters, rom.loo_error()):
    print(pt, error)
[ 0.5 -0.2] 1.4659459069614533
[8.6 0.1] 2.3703514514076223
[5.3 0.8] 9.27781173146522
[9.4 0.1] 2.6385920620832573
[ 7.3 -0.8] 9.940025412362525
[0.2 0.8] 1.604624702830305
[ 3.5 -0.5] 1.9574356157818849
[0.3 0.6] 0.9731879770202155
Moreover, we can use the information about the errors to locate the parametric points where new high-fidelity solutions should be computed and added to the database, in order to optimally improve the accuracy.
In [13]:
rom.optimal_mu()
Out[13]:
[array([ 6.07244184, -0.07123822])]
These functions can be used to achieve the desired (estimated) accuracy.
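As a rough sketch of how loo_error() and optimal_mu() could work together in an enrichment loop: we assume a hypothetical user-provided full-order solver, here called high_fidelity_solve, a placeholder for e.g. the FEniCS model used above.

target_accuracy = 1.0  # assumed error threshold, problem dependent
while rom.loo_error().max() > target_accuracy:
    new_point = rom.optimal_mu()[0]                     # suggested parametric point
    new_snapshot = high_fidelity_solve(new_point)       # hypothetical solver call
    param = np.vstack([param, [new_point]])             # enrich the parameters...
    snapshots = np.vstack([snapshots, [new_snapshot]])  # ...and the snapshots
    rom = ROM(Database(param, snapshots), POD('svd'), RBF())
    rom.fit()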
- - - - - - diff --git a/docs/source/_tutorials/tutorial-1/output_20_0.png b/docs/source/_tutorials/tutorial-1/output_20_0.png deleted file mode 100644 index 4f32b62e..00000000 Binary files a/docs/source/_tutorials/tutorial-1/output_20_0.png and /dev/null differ diff --git a/docs/source/_tutorials/tutorial-1/output_8_1.png b/docs/source/_tutorials/tutorial-1/output_8_1.png deleted file mode 100644 index a6c33820..00000000 Binary files a/docs/source/_tutorials/tutorial-1/output_8_1.png and /dev/null differ diff --git a/docs/source/_tutorials/tutorial-1/tutorial-1.rst b/docs/source/_tutorials/tutorial-1/tutorial-1.rst deleted file mode 100644 index 059c5c0f..00000000 --- a/docs/source/_tutorials/tutorial-1/tutorial-1.rst +++ /dev/null @@ -1,306 +0,0 @@ -Build and query a simple reduced order model -============================================ - -In this tutorial we will show the typical workflow for the construcion -of the Reduced Order Model based only on the outputs of the higher-order -model. In detail, we consider here a POD-RBF framework (Proper -Orthogonal Decomposition for dimensionality reduction and Radial Basis -Function for manifold approximation), but the tutorial can be easily -extended to other methods thanks to the modularity nature of **EZyRB**. - -We consider a parametric steady heat conduction problem in a -two-dimensional domain :math:`\Omega`. While in this tutorial we are -going to focus on the data-driven approach, the same problem can be -tackled in an intrusive manner (with the Reduced Basis method) using the -`RBniCS `__, as demonstrated in this -`RBniCS -tutorial `__. -This book is therefore exhaustively discussed in the book *Certified -reduced basis methods for parametrized partial differential equations*, -J.S. Hesthaven, G. Rozza, B. Stamm, 2016, Springer. An additional -description is available also at -`https://rbnics.gitlab.io/RBniCS-jupyter/tutorial_thermal_block.html <>`__. - -Since the good documentation already available for this problem and -since the data-driven methodologies we will take into consideration, we -just summarize the model to allow a better understanding. - -The domain is depicted below: - -where: - the first parameter :math:`\mu_o` controls the conductivity in -the circular subdomain :math:`\Omega_0`; - the second parameter -:math:`\mu_1` controls the flux over :math:`\Gamma_\text{base}`. - -Initial setting -~~~~~~~~~~~~~~~ - -First of all import the required packages: we need the standard Numpy -and Matplotlib, and some classes from EZyRB. In the EZyRB framework, we -need three main ingredients to construct a reduced order model: - an -initial database where the snapshots are stored; - a reduction method to -reduce the dimensionality of the system, in this tutorial we will use -the proper orthogonal decomposition (POD) method; - an approximation -method to extrapolate the parametric solution for new parameters, in -this tutorial we will use a radial basis function (RBF) interpolation. - -.. code:: ipython3 - - !pip install ezyrb datasets - - -.. 
parsed-literal:: - - Requirement already satisfied: ezyrb in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (1.3.2) - Requirement already satisfied: datasets in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (4.4.2) - Requirement already satisfied: future in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (1.0.0) - Requirement already satisfied: numpy in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (2.2.0) - Requirement already satisfied: scipy in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (1.14.1) - Requirement already satisfied: matplotlib in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (3.10.0) - Requirement already satisfied: scikit-learn in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (1.8.0) - Requirement already satisfied: torch in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (2.5.1) - Requirement already satisfied: filelock in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (3.16.1) - Requirement already satisfied: pyarrow>=21.0.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (22.0.0) - Requirement already satisfied: dill<0.4.1,>=0.3.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (0.4.0) - Requirement already satisfied: pandas in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (2.2.3) - Requirement already satisfied: requests>=2.32.2 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (2.32.3) - Requirement already satisfied: httpx<1.0.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (0.28.1) - Requirement already satisfied: tqdm>=4.66.3 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (4.67.1) - Requirement already satisfied: xxhash in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (3.6.0) - Requirement already satisfied: multiprocess<0.70.19 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (0.70.18) - Requirement already satisfied: fsspec<=2025.10.0,>=2023.1.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (2025.10.0) - Requirement already satisfied: huggingface-hub<2.0,>=0.25.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (1.2.3) - Requirement already satisfied: packaging in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (24.2) - Requirement already satisfied: pyyaml>=5.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (6.0.2) - Requirement already satisfied: aiohttp!=4.0.0a0,!=4.0.0a1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (3.11.10) - Requirement already satisfied: anyio in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpx<1.0.0->datasets) (4.8.0) - Requirement already satisfied: certifi in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpx<1.0.0->datasets) (2024.12.14) - Requirement already satisfied: httpcore==1.* in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpx<1.0.0->datasets) (1.0.7) - Requirement already satisfied: idna in 
/Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpx<1.0.0->datasets) (3.10) - Requirement already satisfied: h11<0.15,>=0.13 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpcore==1.*->httpx<1.0.0->datasets) (0.14.0) - Requirement already satisfied: hf-xet<2.0.0,>=1.2.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from huggingface-hub<2.0,>=0.25.0->datasets) (1.2.0) - Requirement already satisfied: shellingham in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from huggingface-hub<2.0,>=0.25.0->datasets) (1.5.4) - Requirement already satisfied: typer-slim in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from huggingface-hub<2.0,>=0.25.0->datasets) (0.20.1) - Requirement already satisfied: typing-extensions>=3.7.4.3 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from huggingface-hub<2.0,>=0.25.0->datasets) (4.12.2) - Requirement already satisfied: charset-normalizer<4,>=2 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from requests>=2.32.2->datasets) (3.4.0) - Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from requests>=2.32.2->datasets) (2.2.3) - Requirement already satisfied: contourpy>=1.0.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (1.3.1) - Requirement already satisfied: cycler>=0.10 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (0.12.1) - Requirement already satisfied: fonttools>=4.22.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (4.55.3) - Requirement already satisfied: kiwisolver>=1.3.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (1.4.7) - Requirement already satisfied: pillow>=8 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (11.0.0) - Requirement already satisfied: pyparsing>=2.3.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (3.2.0) - Requirement already satisfied: python-dateutil>=2.7 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (2.9.0.post0) - Requirement already satisfied: pytz>=2020.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from pandas->datasets) (2025.1) - Requirement already satisfied: tzdata>=2022.7 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from pandas->datasets) (2025.1) - Requirement already satisfied: joblib>=1.3.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from scikit-learn->ezyrb) (1.5.3) - Requirement already satisfied: threadpoolctl>=3.2.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from scikit-learn->ezyrb) (3.6.0) - Requirement already satisfied: networkx in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from torch->ezyrb) (3.4.2) - Requirement already satisfied: jinja2 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from torch->ezyrb) (3.1.4) - Requirement already satisfied: setuptools in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from torch->ezyrb) (75.6.0) - Requirement already satisfied: sympy==1.13.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from torch->ezyrb) (1.13.1) - Requirement already satisfied: mpmath<1.4,>=1.1.0 in 
/Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from sympy==1.13.1->torch->ezyrb) (1.3.0) - Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (2.4.4) - Requirement already satisfied: aiosignal>=1.1.2 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (1.3.2) - Requirement already satisfied: attrs>=17.3.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (24.3.0) - Requirement already satisfied: frozenlist>=1.1.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (1.5.0) - Requirement already satisfied: multidict<7.0,>=4.5 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (6.1.0) - Requirement already satisfied: propcache>=0.2.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (0.2.1) - Requirement already satisfied: yarl<2.0,>=1.17.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (1.18.3) - Requirement already satisfied: six>=1.5 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from python-dateutil>=2.7->matplotlib->ezyrb) (1.17.0) - Requirement already satisfied: sniffio>=1.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from anyio->httpx<1.0.0->datasets) (1.3.1) - Requirement already satisfied: MarkupSafe>=2.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from jinja2->torch->ezyrb) (3.0.2) - Requirement already satisfied: click>=8.0.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from typer-slim->huggingface-hub<2.0,>=0.25.0->datasets) (8.3.1) - - [notice] A new release of pip is available: 24.3.1 -> 25.3 - [notice] To update, run: pip install --upgrade pip - - -.. code:: ipython3 - - import numpy as np - import matplotlib.tri as mtri - import matplotlib.pyplot as plt - from ezyrb import POD, RBF, Database - from ezyrb import ReducedOrderModel as ROM - %matplotlib inline - -Offline phase -------------- - -In the *offline* phase, we need some samples of the parametric -high-fidelity model. In this case, we extract 8 snapshots from the -numerical model implemented in **FEniCS**, and we import them and the -related parameters. - -.. code:: ipython3 - - from datasets import load_dataset - data_path = "kshitij-pandey/termal_dataset" - snapshots_hf = load_dataset(data_path, "snapshots", split="train") - param_hf = load_dataset(data_path, "params", split="train") - triangles_hf = load_dataset(data_path, "triangles", split="train") - coords_hf = load_dataset(data_path, "coords", split="train") - - - # convert the dict files into numpy - - import pandas as pd - - def hf_to_numpy(ds): - return ds.to_pandas().to_numpy() - - - snapshots = hf_to_numpy(snapshots_hf) - param = hf_to_numpy(param_hf) - triangles = hf_to_numpy(triangles_hf) - coords = hf_to_numpy(coords_hf) - print(snapshots.shape, param.shape) - - -.. 
parsed-literal:: - - (8, 304) (8, 2) - - -Moreover, to visualize the solution (both the higher-order one and the -reduced one), we import also the mesh information to be able to create -the triangulation. We underline this additional step is related only to -plotting purpose, and not mandatory for the reduced space generation. - -.. code:: ipython3 - - x, y = coords - from matplotlib.tri import Triangulation - triang = Triangulation(x, y, triangles) - triang = triang - -For the sake of clarity the snapshots are plotted. - -.. code:: ipython3 - - fig, ax = plt.subplots(nrows=2, ncols=4, figsize=(16, 6), sharey=True, sharex=True) - ax = ax.flatten() - for i in range(8): - ax[i].triplot(triang, 'b-', lw=0.1) - cm = ax[i].tripcolor(triang, snapshots[i]) - fig.colorbar(cm, ax=ax[i]) - ax[i].set_title('($\mu_0={:5.2f}, \mu_1={:5.2f})$'.format(*param[i])) - - -.. image:: output_8_1.png - - -First of all, we create a ``Database`` object from the parameters and -the snapshots. - -.. code:: ipython3 - - db = Database(param, snapshots) - -Then we need a reduction object. In this case we use the proper -orthogonal decomposition so we create a ``POD`` object. We use here all -the default parameters, but for the complete list of available arguments -we refer to original documentation of -`POD `__ class. - -.. code:: ipython3 - - pod = POD('svd') - -Then we instantiate the ``RBF`` class for interpolating the solution -manifold. Also in this case, -`RBF `__ documentation is the -perfect starting point to explore such class. - -.. code:: ipython3 - - rbf = RBF() - -Few lines of code and our reduced model is created! To complete -everything, we create the ``ReducedOrderModel`` (aliased to ``ROM`` in -this tutorial) object by passing the already created objects. For -clarity, we puntualize that we need to pass the **instances** and not -the classes. Simply changing such line (with different objects) allows -to test different frameworks in a very modular way. The ``fit()`` -function computes the reduced model, meaning that the original snapshots -in the database are projected onto the POD space and the RBF -interpolator is created. - -.. code:: ipython3 - - rom = ROM(db, pod, rbf) - rom.fit(); - -Online phase ------------- - -In the *online* phase we can query our model in order to predict the -solution for a new parameter :math:`\mu_\text{new}` that is not in the -training set. We just need to pass the new parameters as input of the -``predict()`` function. - -.. code:: ipython3 - - new_mu = [8, 1] - pred_sol = rom.predict(new_mu) - -We can so plot the predicted solution for a fixed parameter… - -.. code:: ipython3 - - plt.figure(figsize=(7, 5)) - plt.triplot(triang, 'b-', lw=0.1) - plt.tripcolor(triang, *pred_sol) - plt.colorbar(); - - - -.. image:: output_20_0.png - - -Error Approximation & Improvement ---------------------------------- - -At the moment, we used a database which is composed by 8 files. we would -have an idea of the approximation accuracy we are able to reach with -these high-fidelity solutions. Using the *leave-one-out* strategy, an -error is computed for each parametric point in our database and these -values are returned as array. - -.. code:: ipython3 - - for pt, error in zip(rom.database.parameters_matrix, rom.loo_error()): - print(pt, error) - - -.. 
parsed-literal:: - - [ 0.5 -0.2] 0.3830555986412087 - [8.6 0.1] 0.5972596749801533 - [5.3 0.8] 0.8082744257222089 - [9.4 0.1] 0.4105803285232253 - [ 7.3 -0.8] 0.5505863544054451 - [0.2 0.8] 0.07567485849711765 - [ 3.5 -0.5] 0.66949247698686 - [0.3 0.6] 0.06478619218562698 - - -Moreover, we can use the information about the errors to locate the -parametric points where we have to compute the new high-fidelity -solutions and add these to the database in order to optimally improve -the accuracy. - -.. code:: ipython3 - - rom.optimal_mu() - - - - -.. parsed-literal:: - - array([[ 5.2487694 , -0.06339911]]) - - - -These function can be used to achieve the wanted (estimated) accuracy. diff --git a/docs/source/_tutorials/tutorial-2.html b/docs/source/_tutorials/tutorial-2.html deleted file mode 100644 index 4dc1d9ff..00000000 --- a/docs/source/_tutorials/tutorial-2.html +++ /dev/null @@ -1,13569 +0,0 @@ - - - - -tutorial-2 - - - - - - - - - - - - - - - - - - - - - - -
EZyRB Tutorial 2
Test several frameworks at once
In this tutorial, we explain step by step how to use the EZyRB library to test different techniques for building a reduced order model. We will compare different methods for dimensionality reduction, interpolation, and accuracy assessment.
We consider here a computational fluid dynamics problem described by the (incompressible) Navier-Stokes equations. We will use the Navier Stokes Dataset, which contains the output data of a full-order flow simulation and can be found on GitHub in the Smithers library. Smithers is developed by SISSA mathLab and contains some useful datasets and a multi-purpose toolbox that inherits functionality from other packages, making it much easier to deal with these datasets using more compact code.
The package can be installed using python -m pip install smithers -U, but for a detailed description of installation and usage we refer to the original GitHub page.
First of all, we just import the package and instantiate the dataset object.
In [1]:
from smithers.dataset import NavierStokesDataset
data = NavierStokesDataset()
The NavierStokesDataset() class contains the following attributes:
  • snapshots: the matrices of snapshots stored by row (one matrix for each output field);
  • params: the matrix of corresponding parameters;
  • pts_coordinates: the coordinates of all nodes of the discretized space;
  • faces: the actual topology of the discretized space;
  • triang: the triangulation, useful especially for rendering purposes.
In detail, data.snapshots is a dictionary with the following outputs of interest:
  • vx: velocity in the X-direction;
  • vy: velocity in the Y-direction;
  • mag(v): velocity magnitude;
  • p: pressure value.
In total, the dataset contains 500 parametric configurations in a space of 1639 degrees of freedom. In this case, we have just one parameter, which is the velocity (along $x$) we impose at the inlet.
In [2]:
for name in ['vx', 'vy', 'p', 'mag(v)']:
    print('Shape of {:7s} snapshots matrix: {}'.format(name, data.snapshots[name].shape))

print('Shape of parameters matrix: {}'.format(data.params.shape))
Shape of vx      snapshots matrix: (500, 1639)
Shape of vy      snapshots matrix: (500, 1639)
Shape of p       snapshots matrix: (500, 1639)
Shape of mag(v)  snapshots matrix: (500, 1639)
Shape of parameters matrix: (500, 1)
Initial setting
First of all, we import the required packages.
From EZyRB we need:
  1. The ROM class, which performs the model order reduction process.
  2. A module such as Database, where the matrices of snapshots and parameters are stored.
  3. A dimensionality reduction method, such as Proper Orthogonal Decomposition POD or the Auto-Encoder network AE.
  4. An interpolation method to obtain an approximation of the parametric solution for a new set of parameters, such as the Radial Basis Function RBF, Gaussian Process Regression GPR, K-Neighbors Regressor KNeighborsRegressor, Radius Neighbors Regressor RadiusNeighborsRegressor, or the Multidimensional Linear Interpolator Linear.
We also need to import:
  • numpy: to handle the arrays and matrices we will be working with;
  • torch: to enable the usage of neural networks;
  • matplotlib.pyplot: to handle the plotting environment;
  • matplotlib.tri: for plotting on the triangular grid.
In [3]:
# Database module
from ezyrb import Database

# Dimensionality reduction methods
from ezyrb import POD, AE

# Approximation/interpolation methods
from ezyrb import RBF, GPR, KNeighborsRegressor, RadiusNeighborsRegressor, Linear, ANN

# Model order reduction class
from ezyrb import ReducedOrderModel as ROM

import numpy as np
import torch
import torch.nn as nn

import matplotlib.tri as mtri
import matplotlib.pyplot as plt

import warnings
warnings.filterwarnings("ignore", message="Ill-conditioned matrix ")
%matplotlib inline
Before starting with the reduced order model, we visualize some of the snapshots in our dataset.
In [4]:
fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(16, 8), sharey=True, sharex=True)
ax = ax.flatten()
for i in range(9):
    ax[i].tricontourf(data.triang, data.snapshots['vx'][i], levels=16)
    ax[i].set_title('Original snapshot at inlet velocity = {}'.format(*data.params[i].round(2)))
[figure: nine original snapshots at different inlet velocities]
In this step, we perform the model order reduction to obtain a reduced space from the full-order space. We refer to Tutorial 1 for the description of the basic workflow; here we just quickly describe the steps implemented in the next cell.
We start by passing the matrices of the parameters and snapshots to the Database() class. Note that for the moment we create the ROM only for the vx field. We also instantiate the POD and RBF objects to have a benchmark ROM.
In [5]:
db = Database(data.params, data.snapshots['vx'])
rom = ROM(db, POD(), RBF())
rom.fit();
Three lines for a data-driven reduced order model, not bad!
Just to have a visual check that everything is going well, we plot the approximation for new parameters in the range $[1, 80]$.
In [6]:
new_params = np.random.uniform(size=(2))*79.+1.

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 3))
for i, param in enumerate(new_params):
    ax[i].tricontourf(data.triang, rom.predict([param]))
    ax[i].set_title('Predicted snapshots at inlet velocity = {}'.format(param))
[figure: predicted snapshots at two random inlet velocities]
We are now calculating the approximation error to see how close our reduced solution is to the full-order solution/simulation, using the k-fold Cross-Validation strategy: we pass the number of splits to the ReducedOrderModel.kfold_cv_error(n_splits) method, which operates as follows:
  1. Split the dataset (parameters/snapshots) into $k$ groups/folds.
  2. Use $k-1$ groups to calculate the reduced space and leave one group for testing.
  3. Use the approximation/interpolation method to predict each snapshot in the testing group.
  4. Calculate the error for each snapshot in the testing group by taking the difference between the predicted and the original snapshot.
  5. Average the errors for predicting the snapshots of the testing group/fold.
  6. Repeat this procedure using different groups for testing and the remaining $k-1$ groups to calculate the reduced space.
  7. In the end, we obtain $k$ errors, one per group/fold, which we can average to get a single error value.
In [7]:
errors = rom.kfold_cv_error(n_splits = 5)
print('Average error for each fold:')
for e in errors:
    print('  ', e)
print('\nAverage error = {}'.format(errors.mean()))
Average error for each fold:
   4.955806193579243e-06
   2.7585123665319248e-05
   6.53585757525929e-05
   8.07049318271821e-05
   1.9926312885873252e-06

Average error = 3.611941374545217e-05
Another strategy for calculating the approximation error is called leave-one-out, using the ReducedOrderModel.loo_error() method. It is similar to setting the number of folds equal to the number of snapshots (e.g. in this case setting n_splits = 500) and it operates as follows:
  1. Combine all the snapshots except one.
  2. Calculate the reduced space.
  3. Use the approximation/interpolation method to predict the removed snapshot.
  4. Calculate the error by taking the difference between the predicted snapshot and the original removed one.
  5. Repeat this procedure for each snapshot in the database to obtain the error vector.
It is worth mentioning that this consumes more time: since we have 500 snapshots, the algorithm performs the space reduction and calculates the approximation error 500 times. For this reason, we commented out the next line of code, in order to limit the computational effort needed to run this tutorial. Uncomment it only if you are a really brave person!
In [8]:
# errors = rom.loo_error()
Comparison between different methods
One of the advantages of data-driven reduced order modeling is the modular nature of the method. Practically speaking, we need:
  • a method for reducing the dimensionality of the input snapshots;
  • a method for approximating the solution manifold;
allowing in principle a large variety of combinations.
The list of implemented reduction methods in EZyRB contains:
  • POD: proper orthogonal decomposition
  • AE: autoencoder
while the list of implemented approximation methods contains:
  • RBF: radial basis function interpolation
  • GPR: Gaussian process regression
  • KNeighborsRegressor: k-neighbors regression
  • RadiusNeighborsRegressor: radius neighbors regression
  • Linear: multidimensional linear interpolation
Moreover, new state-of-the-art methods will arrive, so we invite you to read the documentation for the complete list of all the possibilities!
In the next cell, we create two dictionaries with the objects, such that we can easily test everything with simple for loops. WARNING: since several methods require the solution of an optimization problem (e.g. GPR, ANN, AE), the cell may take some minutes to run.
In [9]:
reductions = {
    'POD': POD('svd', rank=10),
    'AE': AE([200, 100, 10], [10, 100, 200], nn.Tanh(), nn.Tanh(), 10),
}

approximations = {
#    'Linear': Linear(),
    'RBF': RBF(),
    'GPR': GPR(),
    'KNeighbors': KNeighborsRegressor(),
    'RadiusNeighbors': RadiusNeighborsRegressor(),
    'ANN': ANN([20, 20], nn.Tanh(), 10),
}

header = '{:10s}'.format('')
for name in approximations:
    header += ' {:>15s}'.format(name)

print(header)
for redname, redclass in reductions.items():
    row = '{:10s}'.format(redname)
    for approxname, approxclass in approximations.items():
        rom = ROM(db, redclass, approxclass)
        rom.fit()
        row += ' {:15e}'.format(rom.kfold_cv_error(n_splits=5).mean())

    print(row)
                       RBF             GPR      KNeighbors RadiusNeighbors             ANN
POD           4.214594e-05    1.123213e-05    8.032581e-03    1.091257e-02    9.259920e-01
 /u/n/ndemo/.local/lib/python3.6/site-packages/GPy/kern/src/rbf.py:52: RuntimeWarning:overflow encountered in square
 /u/n/ndemo/.local/lib/python3.6/site-packages/GPy/kern/src/stationary.py:168: RuntimeWarning:overflow encountered in true_divide
 /u/n/ndemo/.local/lib/python3.6/site-packages/GPy/kern/src/rbf.py:76: RuntimeWarning:invalid value encountered in multiply
AE            1.982617e+00    1.982617e+00    1.982617e+00    1.982617e+00    1.982611e+00
In a very compact way, we tested several frameworks (POD-RBF, POD-GPR, POD-NN, ...), showing the accuracy reached by each of them.
We can also note that the frameworks involving neural networks (AE and ANN) show very poor precision. This is due to the limited number of epochs we imposed in the learning procedure. You can try to increase the number of epochs, as shown in the next cell, in order to obtain better results at the cost of a longer training phase.
In [10]:
reductions['AE'] = AE([100, 10], [10, 100], nn.ReLU(), nn.ReLU(), 30000)
approximations['ANN'] = ANN([50, 10], nn.ReLU(), 30000)
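After updating the dictionaries, the comparison loop is exactly the one shown above; a minimal sketch of re-running it (be warned: with 30000 epochs the training phase becomes much longer):

# same comparison loop as above, now with the retrained networks
for redname, redclass in reductions.items():
    row = '{:10s}'.format(redname)
    for approxname, approxclass in approximations.items():
        rom = ROM(db, redclass, approxclass)
        rom.fit()
        row += ' {:15e}'.format(rom.kfold_cv_error(n_splits=5).mean())
    print(row)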
-
- - - - - - diff --git a/docs/source/_tutorials/tutorial-2/output_12_0.png b/docs/source/_tutorials/tutorial-2/output_12_0.png deleted file mode 100644 index 26da21e6..00000000 Binary files a/docs/source/_tutorials/tutorial-2/output_12_0.png and /dev/null differ diff --git a/docs/source/_tutorials/tutorial-2/output_8_0.png b/docs/source/_tutorials/tutorial-2/output_8_0.png deleted file mode 100644 index 1a1d6082..00000000 Binary files a/docs/source/_tutorials/tutorial-2/output_8_0.png and /dev/null differ diff --git a/docs/source/_tutorials/tutorial-2/tutorial-2.rst b/docs/source/_tutorials/tutorial-2/tutorial-2.rst deleted file mode 100644 index 1a4f238c..00000000 --- a/docs/source/_tutorials/tutorial-2/tutorial-2.rst +++ /dev/null @@ -1,491 +0,0 @@ -Test several frameworks at once -================================ - -In this tutorial, we will explain step by step how to use the **EZyRB** -library to test different techniques for building the reduced order -model. We will compare different methods of dimensionality reduction, -interpolation and accuracy assessment. - -We consider here a computational fluid dynamics problem described by the -(incompressible) Navier Stokes equations. We will be using the **Navier -Stokes Dataset** that contains the output data from a full order flow -simulation and can be found on **Hugging Face Datasets** - -The package can be installed using ``python -m pip install datasets``, -but for a detailed description about installation and usage we refer to -original `Github page `__. - -First of all, we just import the package and instantiate the dataset -object. - -.. code:: ipython3 - - !pip install datasets ezyrb - - -.. parsed-literal:: - - Requirement already satisfied: datasets in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (4.4.2) - Requirement already satisfied: ezyrb in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (1.3.2) - Requirement already satisfied: filelock in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (3.16.1) - Requirement already satisfied: numpy>=1.17 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (2.2.0) - Requirement already satisfied: pyarrow>=21.0.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (22.0.0) - Requirement already satisfied: dill<0.4.1,>=0.3.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (0.4.0) - Requirement already satisfied: pandas in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (2.2.3) - Requirement already satisfied: requests>=2.32.2 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (2.32.3) - Requirement already satisfied: httpx<1.0.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (0.28.1) - Requirement already satisfied: tqdm>=4.66.3 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (4.67.1) - Requirement already satisfied: xxhash in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (3.6.0) - Requirement already satisfied: multiprocess<0.70.19 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (0.70.18) - Requirement already satisfied: fsspec<=2025.10.0,>=2023.1.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (2025.10.0) - Requirement already satisfied: huggingface-hub<2.0,>=0.25.0 in 
/Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (1.2.3) - Requirement already satisfied: packaging in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (24.2) - Requirement already satisfied: pyyaml>=5.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from datasets) (6.0.2) - Requirement already satisfied: future in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (1.0.0) - Requirement already satisfied: scipy in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (1.14.1) - Requirement already satisfied: matplotlib in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (3.10.0) - Requirement already satisfied: scikit-learn in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (1.8.0) - Requirement already satisfied: torch in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from ezyrb) (2.5.1) - Requirement already satisfied: aiohttp!=4.0.0a0,!=4.0.0a1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (3.11.10) - Requirement already satisfied: anyio in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpx<1.0.0->datasets) (4.8.0) - Requirement already satisfied: certifi in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpx<1.0.0->datasets) (2024.12.14) - Requirement already satisfied: httpcore==1.* in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpx<1.0.0->datasets) (1.0.7) - Requirement already satisfied: idna in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpx<1.0.0->datasets) (3.10) - Requirement already satisfied: h11<0.15,>=0.13 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from httpcore==1.*->httpx<1.0.0->datasets) (0.14.0) - Requirement already satisfied: hf-xet<2.0.0,>=1.2.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from huggingface-hub<2.0,>=0.25.0->datasets) (1.2.0) - Requirement already satisfied: shellingham in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from huggingface-hub<2.0,>=0.25.0->datasets) (1.5.4) - Requirement already satisfied: typer-slim in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from huggingface-hub<2.0,>=0.25.0->datasets) (0.20.1) - Requirement already satisfied: typing-extensions>=3.7.4.3 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from huggingface-hub<2.0,>=0.25.0->datasets) (4.12.2) - Requirement already satisfied: charset-normalizer<4,>=2 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from requests>=2.32.2->datasets) (3.4.0) - Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from requests>=2.32.2->datasets) (2.2.3) - Requirement already satisfied: contourpy>=1.0.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (1.3.1) - Requirement already satisfied: cycler>=0.10 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (0.12.1) - Requirement already satisfied: fonttools>=4.22.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (4.55.3) - Requirement already satisfied: kiwisolver>=1.3.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (1.4.7) - 
Requirement already satisfied: pillow>=8 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (11.0.0) - Requirement already satisfied: pyparsing>=2.3.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (3.2.0) - Requirement already satisfied: python-dateutil>=2.7 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from matplotlib->ezyrb) (2.9.0.post0) - Requirement already satisfied: pytz>=2020.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from pandas->datasets) (2025.1) - Requirement already satisfied: tzdata>=2022.7 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from pandas->datasets) (2025.1) - Requirement already satisfied: joblib>=1.3.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from scikit-learn->ezyrb) (1.5.3) - Requirement already satisfied: threadpoolctl>=3.2.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from scikit-learn->ezyrb) (3.6.0) - Requirement already satisfied: networkx in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from torch->ezyrb) (3.4.2) - Requirement already satisfied: jinja2 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from torch->ezyrb) (3.1.4) - Requirement already satisfied: setuptools in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from torch->ezyrb) (75.6.0) - Requirement already satisfied: sympy==1.13.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from torch->ezyrb) (1.13.1) - Requirement already satisfied: mpmath<1.4,>=1.1.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from sympy==1.13.1->torch->ezyrb) (1.3.0) - Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (2.4.4) - Requirement already satisfied: aiosignal>=1.1.2 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (1.3.2) - Requirement already satisfied: attrs>=17.3.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (24.3.0) - Requirement already satisfied: frozenlist>=1.1.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (1.5.0) - Requirement already satisfied: multidict<7.0,>=4.5 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (6.1.0) - Requirement already satisfied: propcache>=0.2.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (0.2.1) - Requirement already satisfied: yarl<2.0,>=1.17.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.10.0,>=2023.1.0->datasets) (1.18.3) - Requirement already satisfied: six>=1.5 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from python-dateutil>=2.7->matplotlib->ezyrb) (1.17.0) - Requirement already satisfied: sniffio>=1.1 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from anyio->httpx<1.0.0->datasets) (1.3.1) - Requirement already satisfied: 
MarkupSafe>=2.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from jinja2->torch->ezyrb) (3.0.2) - Requirement already satisfied: click>=8.0.0 in /Users/ndemo/miniconda3/envs/pina/lib/python3.12/site-packages (from typer-slim->huggingface-hub<2.0,>=0.25.0->datasets) (8.3.1) - - [notice] A new release of pip is available: 24.3.1 -> 25.3 - [notice] To update, run: pip install --upgrade pip - - -.. code:: ipython3 - - from datasets import load_dataset - data_path = "kshitij-pandey/navier_stokes_datasets" - snapshots_hf = load_dataset(data_path, "snapshots_split", split="train") - param_hf = load_dataset(data_path, "params", split="train") - triangles_hf = load_dataset(data_path, "triangles", split="train") - coords_hf = load_dataset(data_path, "coords", split="train") - import numpy as np - snapshots = {name: np.array(snapshots_hf[name]) for name in ['vx', 'vy', 'mag(v)', 'p']} - # convert the dict files into numpy - - import pandas as pd - - def hf_to_numpy(ds): - return ds.to_pandas().to_numpy() - - - params = hf_to_numpy(param_hf) - triangles = hf_to_numpy(triangles_hf) - coords = hf_to_numpy(coords_hf) - - -The ``NavierStokesDataset()`` class contains the attribute: - -``snapshots``: the matrices of snapshots stored by row (one matrix for -any output field) - ``params``: the matrix of corresponding parameters - -``pts_coordinates``: the coordinates of all nodes of the discretize -space - ``faces``: the actual topology of the discretize space - -``triang``: the triangulation, useful especially for rendering purposes. - -In the details, ``snapshots`` is a dictionary with the following output -of interest: - **vx:** velocity in the X-direction. - **vy:** velocity -in the Y-direction. - **mag(v):** velocity magnitude. - **p:** pressure -value. - -In total, the dataset contains 500 parametric configurations in a space -of 1639 degrees of freedom. In this case, we have just one parameter, -which is the velocity (along :math:`x`) we impose at the inlet. - -.. code:: ipython3 - - for name in ['vx', 'vy', 'p', 'mag(v)']: - print('Shape of {:7s} snapshots matrix: {}'.format(name, snapshots[name].shape)) - - print('Shape of parameters matrix: {}'.format(params.shape)) - - - -.. parsed-literal:: - - Shape of vx snapshots matrix: (500, 1639) - Shape of vy snapshots matrix: (500, 1639) - Shape of p snapshots matrix: (500, 1639) - Shape of mag(v) snapshots matrix: (500, 1639) - Shape of parameters matrix: (500, 1) - - -Initial setting -~~~~~~~~~~~~~~~ - -First of all, we import the required packages. - -From ``EZyRB`` we need: 1. The ``ROM`` class, which performs the model -order reduction process. 2. A module such as ``Database``, where the -matrices of snapshots and parameters are stored. 3. A dimensionality -reduction method such as Proper Orthogonal Decomposition ``POD`` or -Auto-Encoder network ``AE``. 4. An interpolation method to obtain an -approximation for the parametric solution for a new set of parameters -such as the Radial Basis Function ``RBF``, Gaussian Process Regression -``GPR``, K-Neighbors Regressor ``KNeighborsRegressor``, Radius Neighbors -Regressor ``RadiusNeighborsRegressor`` or Multidimensional Linear -Interpolator ``Linear``. - -We also need to import: \* ``numpy:`` to handle arrays and matrices we -will be working with. \* ``torch:`` to enable the usage of Neural -Networks \* ``matplotlib.pyplot:`` to handle the plotting environment. -\* ``matplotlib.tri:`` for plotting of the triangular grid. - -.. 
code:: ipython3 - - # Database module - from ezyrb import Database - - # Dimensionality reduction methods - from ezyrb import POD, AE - - # Approximation/interpolation methods - from ezyrb import RBF, GPR, KNeighborsRegressor, RadiusNeighborsRegressor, Linear, ANN - - # Model order reduction calss - from ezyrb import ReducedOrderModel as ROM - import torch - import torch.nn as nn - - import matplotlib.tri as mtri - import matplotlib.pyplot as plt - - import warnings - warnings.filterwarnings("ignore", message="Ill-conditioned matrix ") - %matplotlib inline - -Before starting with the reduced order model, we visualize some of the -snapshots in our dataset. - -.. code:: ipython3 - - x, y = coords - from matplotlib.tri import Triangulation - triang = Triangulation(x, y, triangles) - fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(16, 8), sharey=True, sharex=True) - ax = ax.flatten() - for i in range(9): - ax[i].tricontourf(triang, snapshots['vx'][i], levels=16) - ax[i].set_title('Original snapshot at inlet velocity = {}'.format(*params[i].round(2))) - - - -.. image:: output_8_0.png - - -In this step, we perform the model order reduction to obtain a reduced -space from the full order space. We refer to `Tutorial -1 `__ -for the description of the basic workflow, here we just quickly describe -the steps implemented in the next cell. - -We start by passing the matrices of the parameters and snapshots to the -``Database()`` class. It must be said that at this time we create the -ROM for the ``vx`` field. We also instantiate the ``POD`` and ``RBF`` -object to have a benchmark ROM. - -.. code:: ipython3 - - db = Database(params, snapshots['vx']) - rom = ROM(db, POD(), RBF()) - rom.fit(); - -Three lines for a data-driven reduced order model, not bad! - -Just to have a visual check that everything is going well, we plot the -approximation for new parameters in the range :math:`[1, 80]`. - -.. code:: ipython3 - - new_params = np.random.uniform(size=(2))*79.+1. - - fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 3)) - for i, param in enumerate(new_params): - ax[i].tricontourf(triang, *rom.predict([param])) - ax[i].set_title('Predicted snapshots at inlet velocity = {}'.format(param)) - - - -.. image:: output_12_0.png - - -We are now calculating the approximation error to see how close is our -reduced solution to the full-order solution/simulation using the -**k-fold Cross-Validation** strategy by passing the number of splits to -the ``ReducedOrderModel.kfold_cv_error(n_splits)`` method, which -operates as follows: - -1. Split the dataset (parameters/snapshots) into :math:`k`-number of - groups/folds. -2. Use :math:`k-1` groups to calculate the reduced space and leave one - group for testing. -3. Use the approximation/interpolation method to predict each snapshot - in the testing group. -4. Calculate the error for each snapshot in the testing group by taking - the difference between the predicted and the original snapshot. -5. Average the errors for predicting snapshots of the testing - group/fold. -6. Repeat this procedure using different groups for testing and the - remaining :math:`k-1` groups to calculate the reduced space. -7. In the end, we will have :math:`k`-number errors for predicting each - group/fold that we can average them to have one value for the error. - -.. code:: ipython3 - - errors = rom.kfold_cv_error(n_splits = 5) - print('Average error for each fold:') - for e in errors: - print(' ',e) - print('\nAverage error = {}'.format(errors.mean())) - - -.. 
.. parsed-literal::

    Average error for each fold:
      4.945136635258633e-07
      9.860761253488605e-07
      3.894778057436833e-06
      5.303642035538002e-06
      1.2984622088905908e-07

    Average error = 2.1617712205477237e-06

Another strategy for calculating the approximation error is the **leave-one-out** one, available through the ``ReducedOrderModel.loo_error()`` method. It is equivalent to setting the number of folds equal to the number of snapshots (e.g. in this case ``n_splits = 500``) and it operates as follows:

1. Combine all the snapshots except one.
2. Calculate the reduced space.
3. Use the approximation/interpolation method to predict the removed snapshot.
4. Calculate the error by taking the difference between the predicted snapshot and the original removed one.
5. The error vector is obtained by repeating this procedure for each snapshot in the database.

It is worth mentioning that this is more time consuming: since we have 500 snapshots, the algorithm performs the space reduction and computes the approximation error 500 times. For this reason, we commented out the next line of code, in order to limit the computational effort needed to run this tutorial. Uncomment it only if you are a really brave person!

.. code:: ipython3

    # errors = rom.loo_error()
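If you want a rough feeling for the leave-one-out error without paying the full cost, one compromise is to run it on a small subset of the database. The following is only a sketch reusing the objects defined above; the subsampled error is of course merely indicative of the full one.

.. code:: ipython3

    # leave-one-out on the first 50 snapshots only: 50 reduced spaces
    # are built instead of 500, so roughly a tenth of the cost
    db_small = Database(params[:50], snapshots['vx'][:50])
    rom_small = ROM(db_small, POD(), RBF()).fit()
    print('Mean LOO error on the subset: {}'.format(rom_small.loo_error().mean()))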
Comparison between different methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One of the advantages of data-driven reduced order modeling is the modular nature of the method. Practically speaking, we need:

- a method for reducing the dimensionality of the input snapshots;
- a method for approximating the solution manifold;

allowing in principle a large variety of combinations.

The list of implemented **reduction methods** in EZyRB contains:

- ``POD``: *proper orthogonal decomposition*
- ``AE``: *autoencoder*

while the list of implemented **approximation methods** contains:

- ``RBF``: *radial basis function interpolation*
- ``GPR``: *Gaussian process regression*
- ``KNeighborsRegressor``: *k-neighbors regression*
- ``RadiusNeighborsRegressor``: *radius neighbors regression*
- ``Linear``: *multidimensional linear interpolation*

Moreover, new state-of-the-art methods will arrive, so we invite you to read the `documentation <https://mathlab.github.io/EZyRB/>`__ for the complete list of all the possibilities!

In the next cell, we create two dictionaries with the objects, such that we can easily test everything with simple ``for`` loops. **WARNING**: since several methods require the solution of an optimization problem (e.g. GPR, ANN, AE), the cell may take several minutes to run.

.. code:: ipython3

    reductions = {
        'POD': POD('svd',rank=10),
        'AE': AE([200, 100, 10], [10, 100, 200], nn.Tanh(), nn.Tanh(), 10, frequency_print=-10),
    }

    approximations = {
        # 'Linear': Linear(),
        'RBF': RBF(),
        'GPR': GPR(),
        'KNeighbors': KNeighborsRegressor(),
        'RadiusNeighbors': RadiusNeighborsRegressor(),
        'ANN': ANN([20, 20], nn.Tanh(), 10, frequency_print=-10),
    }

    s = '\n\n{:10s}'.format('')
    for name in approximations:
        s += ' {:>15s}'.format(name)
    s += '\n'

    for redname, redclass in reductions.items():
        row = '{:10s}'.format(redname)
        for approxname, approxclass in approximations.items():
            rom = ROM(db, redclass, approxclass)
            print(f"Processing {redname}-{approxname}")
            rom.fit()
            row += ' {:15e}'.format(rom.kfold_cv_error(n_splits=5).mean())

        s += f'{row}\n'

    print(s)

.. parsed-literal::

    Processing POD-RBF
    Processing POD-GPR
    Processing POD-KNeighbors
    Processing POD-RadiusNeighbors
    Processing POD-ANN
    [epoch 1] 9.546327e+04
    [epoch 10] 9.538811e+04
    [epoch 1] 9.522560e+04
    [epoch 10] 9.515077e+04
    [epoch 1] 9.766096e+04
    [epoch 10] 9.758415e+04
    [epoch 1] 9.519630e+04
    [epoch 10] 9.512106e+04
    [epoch 1] 9.567339e+04
    [epoch 10] 9.559758e+04
    [epoch 1] 9.314678e+04
    [epoch 10] 9.307255e+04
    Processing AE-RBF
    [epoch 1] 5.823453e+02
    [epoch 10] 5.556604e+02
    [epoch 1] 5.812169e+02
    [epoch 10] 8.230733e+01
    [epoch 1] 5.957941e+02
    [epoch 10] 9.019125e+01
    [epoch 1] 5.806075e+02
    [epoch 10] 6.687416e+01
    [epoch 1] 5.835210e+02
    [epoch 10] 7.294649e+01
    [epoch 1] 5.700542e+02
    [epoch 10] 7.343178e+01
    Processing AE-GPR
    [epoch 1] 5.834352e+02
    [epoch 10] 7.699603e+01
    [epoch 1] 5.847290e+02
    [epoch 10] 1.470968e+02
    [epoch 1] 5.948226e+02
    [epoch 10] 7.184375e+01
    [epoch 1] 5.802390e+02
    [epoch 10] 7.155777e+01
    [epoch 1] 5.853676e+02
    [epoch 10] 1.150479e+02
    [epoch 1] 5.690804e+02
    [epoch 10] 6.931157e+01
    Processing AE-KNeighbors
    [epoch 1] 5.819167e+02
    [epoch 10] 6.814513e+01
    [epoch 1] 5.820450e+02
    [epoch 10] 9.533990e+01
    [epoch 1] 5.980317e+02
    [epoch 10] 1.218049e+02
    [epoch 1] 5.849615e+02
    [epoch 10] 9.724957e+01
    [epoch 1] 5.848712e+02
    [epoch 10] 1.151645e+02
    [epoch 1] 5.692266e+02
    [epoch 10] 7.778555e+01
    Processing AE-RadiusNeighbors
    [epoch 1] 5.845089e+02
    [epoch 10] 1.057290e+02
    [epoch 1] 5.836143e+02
    [epoch 10] 8.220594e+01
    [epoch 1] 5.969666e+02
    [epoch 10] 8.701730e+01
    [epoch 1] 5.823361e+02
    [epoch 10] 9.751357e+01
    [epoch 1] 5.850589e+02
    [epoch 10] 9.528002e+01
    [epoch 1] 5.675153e+02
    [epoch 10] 6.384907e+01
    Processing AE-ANN
    [epoch 1] 5.835621e+02
    [epoch 10] 1.136382e+02
    [epoch 1] 4.710647e+03
    [epoch 10] 4.693913e+03
    [epoch 1] 5.837049e+02
    [epoch 10] 1.006396e+02
    [epoch 1] 6.297388e+03
    [epoch 10] 6.277451e+03
    [epoch 1] 6.003340e+02
    [epoch 10] 9.461213e+01
    [epoch 1] 3.808644e+03
    [epoch 10] 3.790863e+03
    [epoch 1] 5.810663e+02
    [epoch 10] 8.357424e+01
    [epoch 1] 5.692258e+03
    [epoch 10] 5.670917e+03
    [epoch 1] 5.863652e+02
    [epoch 10] 1.553782e+02
    [epoch 1] 4.325479e+03
    [epoch 10] 4.307963e+03
    [epoch 1] 5.707682e+02
    [epoch 10] 9.925204e+01
    [epoch 1] 5.734101e+03
    [epoch 10] 5.716063e+03

                          RBF             GPR      KNeighbors RadiusNeighbors             ANN
    POD          1.204641e-05    2.970147e-05    8.032581e-03    1.091257e-02    9.975237e-01
    AE           3.301131e-01    3.514848e-01    3.619394e-01    3.477732e-01    9.939129e-01

In a very compact way, we tested several frameworks (POD-RBF, POD-GPR, POD-NN, and so on), showing the accuracy reached by each of them.

We can also note that the frameworks involving neural networks (``AE`` and ``ANN``) show a very poor accuracy. This is due to the limited number of epochs we impose in the learning procedure. You can increase the number of epochs as shown in the next cell, in order to obtain better results at the cost of a longer training phase.

.. code:: ipython3

    reductions['AE'] = AE([100, 10], [10, 100], nn.ReLU(), nn.ReLU(), 30000)
    approximations['ANN'] = ANN([50, 10], nn.ReLU(), 30000)
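With the updated dictionaries, the comparison loop above can simply be re-executed. As a quicker spot check, one can also re-fit a single combination and inspect its cross-validation error; this is only a sketch reusing the objects defined above, and with 30000 epochs it will take considerably longer than the previous run:

.. code:: ipython3

    # re-fit one framework with the longer training and check its error
    rom = ROM(db, reductions['AE'], approximations['RBF'])
    rom.fit()
    print('AE-RBF k-fold error: {:e}'.format(rom.kfold_cv_error(n_splits=5).mean()))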
diff --git a/docs/source/_tutorials/tutorial-3/output_10_0.png b/docs/source/_tutorials/tutorial-3/output_10_0.png
deleted file mode 100644
index 29c0b66f..00000000
Binary files a/docs/source/_tutorials/tutorial-3/output_10_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-3/output_12_0.png b/docs/source/_tutorials/tutorial-3/output_12_0.png
deleted file mode 100644
index 55524107..00000000
Binary files a/docs/source/_tutorials/tutorial-3/output_12_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-3/output_13_0.png b/docs/source/_tutorials/tutorial-3/output_13_0.png
deleted file mode 100644
index 91cde0ab..00000000
Binary files a/docs/source/_tutorials/tutorial-3/output_13_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-3/output_14_0.png b/docs/source/_tutorials/tutorial-3/output_14_0.png
deleted file mode 100644
index 415827b5..00000000
Binary files a/docs/source/_tutorials/tutorial-3/output_14_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-3/output_16_0.png b/docs/source/_tutorials/tutorial-3/output_16_0.png
deleted file mode 100644
index 80938e41..00000000
Binary files a/docs/source/_tutorials/tutorial-3/output_16_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-3/output_6_0.png b/docs/source/_tutorials/tutorial-3/output_6_0.png
deleted file mode 100644
index b5bdb305..00000000
Binary files a/docs/source/_tutorials/tutorial-3/output_6_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-3/output_7_0.png b/docs/source/_tutorials/tutorial-3/output_7_0.png
deleted file mode 100644
index 37a7e4c4..00000000
Binary files a/docs/source/_tutorials/tutorial-3/output_7_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-3/tutorial-3.rst b/docs/source/_tutorials/tutorial-3/tutorial-3.rst
deleted file mode 100644
index e3aa3504..00000000
--- a/docs/source/_tutorials/tutorial-3/tutorial-3.rst
+++ /dev/null
@@ -1,434 +0,0 @@
Using Plugin for implementing NNsPOD-ROM
========================================

In this tutorial we will explain how to use the **NNsPOD-ROM** algorithm implemented in the **EZyRB** library.

NNsPOD is a purely data-driven machine learning method that seeks an optimal mapping of the various snapshots to a reference configuration, detected automatically [1], and then seeks the low-rank linear approximation subspace of the solution manifold. The nonlinear transformation of the manifold leads to an accelerated Kolmogorov n-width (KnW) decay, resulting in a low-dimensional linear approximation subspace and enabling the construction of efficient and accurate reduced order models. The complete workflow of the NNsPOD-ROM algorithm, comprising both the offline and online phases, is presented in [2].

References:

[1] Papapicco, D., Demo, N., Girfoglio, M., Stabile, G., & Rozza, G. (2022). The Neural Network shifted-proper orthogonal decomposition: A machine learning approach for non-linear reduction of hyperbolic equations. Computer Methods in Applied Mechanics and Engineering, 392, 114687 - https://doi.org/10.1016/j.cma.2022.114687

[2] Gowrachari, H., Demo, N., Stabile, G., & Rozza, G. (2024). Non-intrusive model reduction of advection-dominated hyperbolic problems using neural network shift augmented manifold transformations. arXiv preprint - https://arxiv.org/abs/2407.18419.
Problem definition
~~~~~~~~~~~~~~~~~~

We consider **1D Gaussian distribution functions**, in which :math:`x` is the random variable, :math:`\mu` is the mean and :math:`\sigma^2` is the variance, :math:`\sigma` being the standard deviation, i.e. the width of the Gaussian:

.. math::

   f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-(x-\mu)^2 /\left(2 \sigma^2\right)}

To mimic travelling waves, we parameterize the mean :math:`\mu`: changing :math:`\mu` shifts the distribution along the x-axis.

Initial setting
~~~~~~~~~~~~~~~

First of all, import the required packages: we need the standard NumPy, Torch and Matplotlib, and some classes from EZyRB.

- ``numpy``: to handle the arrays and matrices we will be working with;
- ``torch``: to enable the usage of neural networks;
- ``matplotlib``: to handle the plotting environment.

From ``EZyRB`` we need:

1. The ``ROM`` class, which performs the model order reduction process.
2. A module such as ``Database``, where the matrices of snapshots and parameters are stored.
3. A dimensionality reduction method, such as Proper Orthogonal Decomposition ``POD``.
4. An interpolation method to approximate the parametric solution for a new set of parameters, such as the Radial Basis Function ``RBF`` or the Multidimensional Linear Interpolator ``Linear``.

.. code:: ipython3

    import numpy as np
    import torch
    from scipy import spatial
    from matplotlib import pyplot as plt

    from ezyrb import POD, RBF, Database, Snapshot, Parameter, Linear, ANN
    from ezyrb import ReducedOrderModel as ROM
    from ezyrb.plugin import AutomaticShiftSnapshots

.. code:: ipython3

    def gaussian(x, mu, sig):
        # note: the normalizing prefactor 1/(sig*sqrt(2*pi)) is dropped on
        # purpose, so that every snapshot has a peak value of 1
        return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))

    def wave(t, res=256):
        x = np.linspace(0, 11, res)
        return x, gaussian(x, t, 0.2).T  # parameterizing the mean value

Offline phase
-------------

In this case, we obtain 20 snapshots from the analytical model.

.. code:: ipython3

    n_params = 20
    params = np.linspace(0.75, 10.25, n_params).reshape(-1, 1)

    pod = POD(rank=1)
    rbf = RBF()
    db = Database()

    for param in params:
        space, values = wave(param)
        snap = Snapshot(values=values.T, space=space)
        db.add(Parameter(param), snap)

    print("Snapshot shape  : ", db.snapshots_matrix.shape)
    print("Parameter shape : ", db.parameters_matrix.shape)

.. parsed-literal::

    Snapshot shape  :  (20, 256)
    Parameter shape :  (20, 1)

.. code:: ipython3

    db_train, db_test = db.split([0.7,0.3])
    print("Length of training data set:", len(db_train))
    print(f"Parameters of training set: \n {db_train.parameters_matrix.flatten()}")

    print("Length of test data set:", len(db_test))
    print(f"Parameters of testing set: \n {db_test.parameters_matrix.flatten()}")

.. parsed-literal::

    Length of training data set: 12
    Parameters of training set: 
     [ 0.75  1.25  2.25  2.75  4.75  5.25  7.25  7.75  8.25  9.25  9.75 10.25]
    Length of test data set: 8
    Parameters of testing set: 
     [1.75 3.25 3.75 4.25 5.75 6.25 6.75 8.75]
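Before introducing the shift, it is instructive to quantify how poorly a plain linear reduction fits this travelling-wave data. The following check is our own diagnostic (plain NumPy on the matrices built above), not a step of the NNsPOD workflow:

.. code:: ipython3

    # singular value spectrum of the raw (unshifted) snapshots
    sv = np.linalg.svd(db.snapshots_matrix, full_matrices=False, compute_uv=False)
    energy_first = sv[0]**2 / np.sum(sv**2)
    print('Energy captured by the first POD mode: {:.1%}'.format(energy_first))

A slow decay of this spectrum (a small energy fraction in the first mode) is exactly the symptom that NNsPOD cures, as the mode comparison plot later in this tutorial shows.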
.. code:: ipython3

    plt.rcParams.update({
        "text.usetex": True,
        "font.family": "serif",
        "font.serif": ["Times New Roman"],
    })

    fig1 = plt.figure(figsize=(5,5))
    ax = fig1.add_subplot(111,projection='3d')

    for param in params:
        space, values = wave(param)
        snap = Snapshot(values=values.T, space=space)
        ax.plot(space, param*np.ones(space.shape), values, label = f"{param}")
    ax.set_xlabel('x')
    ax.set_ylabel('t')
    ax.set_zlabel('$f_{g}$(t)')
    ax.set_xlim(0,11)
    ax.set_ylim(0,11)
    ax.set_zlim(0,1)
    ax.legend(loc="upper center", ncol=7, prop = { "size": 7})
    ax.grid(False)
    ax.view_init(elev=20, azim=-60, roll=0)
    ax.set_title("Snapshots at original position")
    plt.show()

.. image:: output_6_0.png

.. code:: ipython3

    #%% 3D plot: db_train snapshots at original position
    fig2 = plt.figure(figsize=(5,5))
    ax = fig2.add_subplot(111, projection='3d')

    for i in range(len(db_train)):
        ax.plot(space,(db_train.parameters_matrix[i]*np.ones(space.shape)), db_train.snapshots_matrix[i], label = db_train.parameters_matrix[i])
    ax.set_xlabel('x')
    ax.set_ylabel('t')
    ax.set_zlabel('$f_{g}$(t)')
    ax.set_xlim(0,11)
    ax.set_ylim(0,11)
    ax.set_zlim(0,1)
    ax.legend(loc="upper center", ncol=7, prop = { "size": 7})
    ax.grid(False)
    ax.view_init(elev=20, azim=-60, roll=0)
    ax.set_title("Training set snapshots at original position")
    plt.show()

.. image:: output_7_0.png

``InterpNet``: must learn the reference configuration as well as possible w.r.t. its grid point distribution, so that it is able to reconstruct the field values for every shifted centroid distribution.

``ShiftNet``: learns the shift operator for a given problem, which quantifies the optimal shift, resulting in a shifted space that transports all the snapshots to the reference frame.

``Training``: the training of ShiftNet and InterpNet are separated, with the latter being trained first. Once the network has learned the best possible reconstruction of the solution field of the reference configuration, its forward map is used for the training of ShiftNet as well, in a cascaded fashion. For this reason, we must optimize the loss of InterpNet considerably more than ShiftNet's.

.. code:: ipython3

    torch.manual_seed(1)

    interp = ANN([10,10], torch.nn.Softplus(), [1e-6, 200000], frequency_print=1000, lr=0.03)
    shift = ANN([], torch.nn.LeakyReLU(), [1e-4, 10000], optimizer=torch.optim.Adam, frequency_print=500, l2_regularization=0, lr=0.0023)

    rom = ROM(
        database=db_train,
        reduction=pod,
        approximation=rbf,
        plugins=[
            AutomaticShiftSnapshots(
                shift_network=shift,
                interp_network=interp,
                interpolator=Linear(fill_value=0),
                reference_index=4,
                parameter_index=4,
                barycenter_loss=20.)
        ]
    )
    rom.fit()
.. parsed-literal::

    [epoch 1] 9.325559e-02
    [epoch 1000] 1.529361e-03
    [epoch 2000] 4.970222e-04
    [epoch 3000] 4.387998e-04
    [epoch 4000] 4.628130e-04
    [epoch 5000] 3.835012e-04
    [epoch 6000] 3.140604e-04
    [epoch 7000] 3.919086e-04
    [epoch 8000] 4.268886e-04
    [epoch 9000] 9.304365e-05
    [epoch 10000] 4.048840e-04
    [epoch 11000] 3.448661e-04
    [epoch 12000] 1.810824e-04
    [epoch 13000] 1.608639e-04
    [epoch 14000] 1.410103e-04
    [epoch 15000] 1.369342e-04
    [epoch 16000] 3.215274e-04
    [epoch 17000] 1.686200e-05
    [epoch 18000] 1.850619e-04
    [epoch 19000] 5.792178e-05
    [epoch 20000] 1.031569e-05
    [epoch 21000] 5.006416e-04
    [epoch 22000] 7.024280e-06
    [epoch 23000] 3.728175e-06
    [epoch 24000] 2.684203e-06
    [epoch 25000] 2.088043e-05
    [epoch 26000] 2.364460e-05
    [epoch 27000] 5.422693e-05
    [epoch 28000] 8.736612e-06
    [epoch 29000] 1.406125e-03
    [epoch 29941] 9.978612e-07
    [epoch 1] 1.996005e+01
    [epoch 500] 3.647174e+00
    [epoch 1000] 2.648998e+00
    [epoch 1500] 1.878768e+00
    [epoch 2000] 1.296257e+00
    [epoch 2500] 8.012146e-01
    [epoch 3000] 3.615186e-01
    [epoch 3500] 8.892784e-03
    [epoch 4000] 4.733771e-03
    [epoch 4500] 2.296455e-03
    [epoch 5000] 9.881203e-04
    [epoch 5500] 3.655896e-04
    [epoch 6000] 1.145256e-04
    [epoch 6055] 9.997654e-05

.. code:: ipython3

    #%% Snapshots shifted to the reference position after training
    for i in range(len(db_train.parameters_matrix)):
        plt.plot(space, rom.shifted_database.snapshots_matrix[i], label = f"t = {db_train.parameters_matrix[i]}")
    plt.legend(prop={'size': 8})
    plt.ylabel('$f_{g}$(t)')
    plt.xlabel('X')
    plt.title(f'After training : snapshots of db_train shifted to reference snapshot {db_train.parameters_matrix[4]}')
    plt.show()

.. image:: output_10_0.png

Below we show the snapshots of the solution manifold before (left) and after (right) the pre-processing.

.. code:: ipython3

    fig3 = plt.figure(figsize=(10, 5))

    # First subplot
    ax1 = fig3.add_subplot(121, projection='3d')
    for i in range(len(db_train)):
        ax1.plot(space, (db_train.parameters_matrix[i] * np.ones(space.shape)), db_train.snapshots_matrix[i])
    ax1.set_xlabel('x')
    ax1.set_ylabel('t')
    ax1.set_zlabel('$f_{g}$(t)')
    ax1.set_xlim(0,11)
    ax1.set_ylim(0,11)
    ax1.set_zlim(0,1)
    ax1.grid(False)
    ax1.view_init(elev=20, azim=-60, roll=0)

    # Second subplot
    ax2 = fig3.add_subplot(122, projection='3d')
    for i in range(len(rom.shifted_database)):
        ax2.plot(space, (rom.shifted_database.parameters_matrix[i] * np.ones(space.shape)),
                 rom.shifted_database.snapshots_matrix[i], label=rom.shifted_database.parameters_matrix[i])
    ax2.set_xlabel('x')
    ax2.set_ylabel('t')
    ax2.set_zlabel('$f_{g}$(t)')
    ax2.set_xlim(0, 11)
    ax2.set_ylim(0, 11)
    ax2.set_zlim(0, 1)
    ax2.grid(False)
    ax2.view_init(elev=20, azim=-60, roll=0)
    handles, labels = ax2.get_legend_handles_labels()
    fig3.legend(handles, labels, loc='center right', ncol=1, prop={'size': 8})
    plt.show()

.. image:: output_12_0.png
.. code:: ipython3

    #%% Singular values of original snapshots and shifted snapshots
    U, s = np.linalg.svd(db.snapshots_matrix.T, full_matrices=False)[:2]
    N_modes = np.linspace(1, len(s),len(s))

    # Singular values of shifted snapshots
    U_shifted , s_shifted = np.linalg.svd(rom.shifted_database.snapshots_matrix.T, full_matrices=False)[:2]
    N_modes_shifted = np.linspace(1, len(s_shifted),len(s_shifted))

    # Compare singular values
    plt.figure(figsize=(6,4))
    plt.plot(N_modes[:10], s[:10]/np.max(s),"-s",color = "blue", label='POD')
    plt.plot(N_modes_shifted, s_shifted/np.max(s_shifted),"-o", color = "red", label='NNsPOD')
    plt.ylabel(r'$\sigma/\sigma_{1}$', size=15)
    plt.xlabel('Modes', size=15)
    plt.xlim(0, 11)
    plt.legend(fontsize=12)
    plt.xticks(fontsize=15)
    plt.yticks(fontsize=15)
    plt.show()

.. image:: output_13_0.png

.. code:: ipython3

    #%% POD modes
    modes = pod.modes
    plt.figure(figsize=(6,4))
    plt.plot(space, modes*-1)  # flip the sign for plotting (a POD mode is defined up to sign)
    plt.ylabel('$f_{g}$(t)', size=15)
    plt.xlabel('x', size=15)
    plt.title('NNsPOD mode', size=15)
    plt.xticks(fontsize=15)
    plt.yticks(fontsize=15)
    plt.show()

.. image:: output_14_0.png

Online phase
------------

.. code:: ipython3

    #%% Test set predictions using NNsPOD
    pred = rom.predict(db_test.parameters_matrix)  # predicted solutions for the test parameters

    fig5 = plt.figure(figsize=(5,5))
    ax = fig5.add_subplot(111, projection='3d')
    for i in range(len(pred)):
        space, orig = wave(db_test.parameters_matrix[i])
        ax.plot(space,(db_test.parameters_matrix[i]*np.ones(space.shape)), pred.snapshots_matrix[i], label = f'Predict={db_test.parameters_matrix[i]}')
        ax.plot(space,(db_test.parameters_matrix[i]*np.ones(space.shape)), orig, '--', label = f'Truth={db_test.parameters_matrix[i]}')
    ax.set_xlabel('x')
    ax.set_ylabel('t')
    ax.set_zlabel('$f_{g}$(t)')
    ax.set_xlim(0,11)
    ax.set_ylim(0,11)
    ax.set_zlim(0,1)
    ax.legend(loc="upper center", ncol=5, prop = { "size": 7})
    ax.grid(False)
    ax.view_init(elev=20, azim=-60, roll=0)
    ax.set_title('Predicted snapshots for the db_test parameters')
    plt.show()

.. image:: output_16_0.png

.. code:: ipython3

    #%% Reconstruction and prediction error
    train_err = rom.test_error(db_train)
    test_err = rom.test_error(db_test)

    print('Mean Train error: ', train_err)
    print('Mean Test error: ', test_err)
.. parsed-literal::

    Mean Train error:  0.18585298603714098
    Mean Test error:  0.11119321870633797

diff --git a/docs/source/_tutorials/tutorial-4/output_10_0.png b/docs/source/_tutorials/tutorial-4/output_10_0.png
deleted file mode 100644
index c0816ff0..00000000
Binary files a/docs/source/_tutorials/tutorial-4/output_10_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-4/output_10_1.png b/docs/source/_tutorials/tutorial-4/output_10_1.png
deleted file mode 100644
index eba59c18..00000000
Binary files a/docs/source/_tutorials/tutorial-4/output_10_1.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-4/output_19_1.png b/docs/source/_tutorials/tutorial-4/output_19_1.png
deleted file mode 100644
index d90f1f65..00000000
Binary files a/docs/source/_tutorials/tutorial-4/output_19_1.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-4/output_19_2.png b/docs/source/_tutorials/tutorial-4/output_19_2.png
deleted file mode 100644
index 13441332..00000000
Binary files a/docs/source/_tutorials/tutorial-4/output_19_2.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-4/output_27_0.png b/docs/source/_tutorials/tutorial-4/output_27_0.png
deleted file mode 100644
index 3ede3f98..00000000
Binary files a/docs/source/_tutorials/tutorial-4/output_27_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-4/output_27_1.png b/docs/source/_tutorials/tutorial-4/output_27_1.png
deleted file mode 100644
index 1fa35b2f..00000000
Binary files a/docs/source/_tutorials/tutorial-4/output_27_1.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-4/output_29_0.png b/docs/source/_tutorials/tutorial-4/output_29_0.png
deleted file mode 100644
index 359a1b39..00000000
Binary files a/docs/source/_tutorials/tutorial-4/output_29_0.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-4/output_29_1.png b/docs/source/_tutorials/tutorial-4/output_29_1.png
deleted file mode 100644
index 921635c9..00000000
Binary files a/docs/source/_tutorials/tutorial-4/output_29_1.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-4/tutorial-4.rst b/docs/source/_tutorials/tutorial-4/tutorial-4.rst
deleted file mode 100644
index 773c5f62..00000000
--- a/docs/source/_tutorials/tutorial-4/tutorial-4.rst
+++ /dev/null
@@ -1,738 +0,0 @@
Build a Multi Reduced Order Model (MultiROM)
============================================

In this tutorial, we will show how to aggregate the predictions of different ROMs following the method presented in the `paper by Ivagnes et al. `__

Let's call :math:`\boldsymbol{\eta}=(\boldsymbol{x}, \boldsymbol{\mu})` the problem's features, namely the space coordinates and the parameters.

The idea is to build and combine a set of ROMs :math:`\{\mathcal{M}_1, \mathcal{M}_2, \dots, \mathcal{M}_{N}\}`, to approximate a specific high-fidelity field, for instance the parametrized velocity :math:`\boldsymbol{u}(\boldsymbol{\eta})`. The individual ROMs differ in the reduction approach and/or in the approximation technique. The **MultiROM prediction** will then be a convex combination of the predictions of the pre-trained individual ROMs. If the :math:`i`-th ROM prediction is :math:`\tilde{\boldsymbol{u}}^{(i)}(\boldsymbol{\eta})`, then the MultiROM prediction will be:
.. math::

   \tilde{\boldsymbol{u}}(\boldsymbol{\eta}) = \sum_{i=1}^{N} w^{(i)}(\boldsymbol{\eta}) \, \tilde{\boldsymbol{u}}^{(i)}(\boldsymbol{\eta}) ,

where the weights associated with each ROM in the convex combination are space- and parameter-dependent. In this way, the **MultiROM** should effectively and automatically identify the ROM with the optimal performance across the various regions of the spatial and parameter domains.

To build the model, we have to design a method to compute the weights, also in unseen settings. We here consider a dataset from the library **Smithers** (``NavierStokesDataset``), and we divide it into three subsets:

- the **training** dataset (composed of :math:`M_{\text{train}}` instances): used to train the individual ROMs;
- the **evaluation** dataset (composed of :math:`M_{\text{evaluation}}` instances): used to compute the optimal weights;
- the **test** dataset (composed of :math:`M_{\text{test}}` instances): used to test our methodology, where the weights are approximated with a regression technique.

Now the question is: *how to compute the weights?* We here consider two different approaches (a small NumPy sketch of the first one follows this list):

- **XMA** (as in `de Zordo-Banliat et al. `__), where the weights are computed in the evaluation set, using the following expression:

  .. math::

     w^{(i)}(\boldsymbol{\eta})=\dfrac{g^{(i)}(\boldsymbol{\eta})}{\sum_{i=1}^N g^{(i)}(\boldsymbol{\eta})}, \quad g^{(i)}(\boldsymbol{\eta})=\text{exp}\left( - \dfrac{1}{2} \dfrac{(\tilde{\boldsymbol{u}}^{(i)}(\boldsymbol{\eta}) - \boldsymbol{u}(\boldsymbol{\eta}))^2}{\sigma^2} \right), \quad \text{for } \boldsymbol{\eta}=\boldsymbol{\eta}_{\text{evaluation}}.

  In the test set, a regression approach (``KNN``) is used to approximate the weights at unseen :math:`\boldsymbol{\eta}=\boldsymbol{\eta}_{\text{test}}`.

- **ANN**: a neural network takes as input :math:`\boldsymbol{\eta}`, and gives as output directly the weights :math:`w^{(i)}, i=1, \dots, N,` of the convex combination. It is trained to minimize the following loss:

  .. math::

     \mathcal{L}=\frac{1}{M_{\textrm{test}}} \sum_{j=1}^{M_{\textrm{test}}}\left(\sum_{i=1}^N \left(w^{(i)}(\boldsymbol{\eta}_j) \, \tilde{\boldsymbol{u}}^{(i)}(\boldsymbol{\eta}_j)\right) - \boldsymbol{u}(\boldsymbol{\eta}_j) \right)^2
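To make the XMA expression concrete, here is a minimal NumPy sketch of the weight computation at the evaluation points. All the names (``preds``, ``u_true``, ``sigma``) are our own illustrative placeholders, not part of the EZyRB API: the library computes these weights internally through the ``Aggregation`` plugin used later in this tutorial.

.. code:: ipython3

    import numpy as np

    # XMA weights written out by hand for illustration:
    # preds has shape (n_roms, n_points), u_true has shape (n_points,)
    def xma_weights(preds, u_true, sigma):
        g = np.exp(-0.5 * (preds - u_true)**2 / sigma**2)  # Gaussian likelihood of each ROM
        return g / g.sum(axis=0)                           # normalize so the weights sum to 1

    preds = np.array([[1.0, 0.4],   # two toy ROMs evaluated at two points
                      [0.9, 1.1]])
    u_true = np.array([1.0, 1.0])
    print(xma_weights(preds, u_true, sigma=0.1))  # each column sums to 1

Note how, at each point, the ROM whose prediction is closest to the truth receives the largest weight; ``sigma`` controls how sharply the weighting concentrates on the best ROM.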
Let's begin the tutorial with some useful imports.

.. code:: ipython3

    import numpy as np
    import copy
    %pip install -e ../
    from ezyrb import Database
    from ezyrb import POD, AE, PODAE
    from ezyrb import RBF, GPR, ANN, KNeighborsRegressor
    from ezyrb import ReducedOrderModel as ROM
    from ezyrb import MultiReducedOrderModel as MultiROM
    from ezyrb.plugin import Aggregation, DatabaseSplitter
    import matplotlib.pyplot as plt
    import torch
    import torch.nn as nn
    from matplotlib.colors import LogNorm
    import matplotlib.tri as tri
    import matplotlib
    from mpl_toolkits.axes_grid1 import make_axes_locatable

.. parsed-literal::

    Obtaining file:///Users/aivagnes/Desktop/Work/Packages/EZyRB
      Preparing metadata (setup.py) ... done
    Installing collected packages: ezyrb
      Attempting uninstall: ezyrb
        Found existing installation: ezyrb 1.3.0
        Uninstalling ezyrb-1.3.0:
          Successfully uninstalled ezyrb-1.3.0
      Running setup.py develop for ezyrb
    Successfully installed ezyrb-1.3.0
    Note: you may need to restart the kernel to use updated packages.

Before starting with the core part of the tutorial, we define a useful function for plotting the solutions on a 2D mesh.

.. code:: ipython3

    def plot_multiple_internal(db, fields_list, titles_list, figsize=None,
                               logscale=False, lim_x=(-0.5, 2), lim_y=(-1, 1),
                               different_cbar=True, clims=None):
        '''
        Plot multiple internal fields in one figure.

        Parameters
        ----------
        db : AirfoilTransonicDataset
            The dataset providing the triangulation and the airfoil coordinates.
        fields_list : list
            The list of fields to plot.
        titles_list : list
            The list of titles for each field.
        figsize : tuple (optional, default=(16, 16/len(fields_list)))
            The size of the figure.
        logscale : bool (optional, default=False)
            Whether to use a logarithmic color scale.
        lim_x : tuple (optional, default=(-0.5, 2))
            The x-axis limits.
        lim_y : tuple (optional, default=(-1, 1))
            The y-axis limits.
        different_cbar : bool (optional, default=True)
            Whether to use a different colorbar for each field.
        clims : tuple (optional, default=None)
            The color limits; computed from the first field if not given.

        Returns
        -------
        None (shows figures)
        '''
        triang = db.auxiliary_triang

        if figsize is None:
            figsize = (16, 16/len(fields_list))
        fig, axs = plt.subplots(1, len(fields_list), figsize=figsize)
        for e, a in enumerate(axs):
            field = fields_list[e]
            title = titles_list[e]
            if clims is None:
                clims = fields_list[0].min(), fields_list[0].max()
            if logscale:
                lognorm = matplotlib.colors.LogNorm(vmin=clims[0]+1e-12,
                                                    vmax=clims[1])
                c = a.tripcolor(triang, field, cmap='rainbow',
                                shading='gouraud', norm=lognorm)
            else:
                c = a.tripcolor(triang, field, cmap='rainbow',
                                shading='gouraud', vmin=clims[0],
                                vmax=clims[1])
            a.plot(db._coords_airfoil()[0], db._coords_airfoil()[1],
                   color='black', lw=0.5)
            a.plot(db._coords_airfoil(which='neg')[0],
                   db._coords_airfoil(which='neg')[1],
                   color='black', lw=0.5)
            a.set_aspect('equal')
            if lim_x is not None:
                a.set_xlim(lim_x)
            if lim_y is not None:
                a.set_ylim(lim_y)
            if title is not None:
                a.set_title(title)
            if different_cbar:
                divider = make_axes_locatable(a)
                cax = divider.append_axes("right", size="5%", pad=0.1)
                plt.colorbar(c, cax=cax)
            a.set_xticks([])
            a.set_yticks([])
        if not different_cbar:
            divider = make_axes_locatable(axs[0])
            cax = divider.append_axes("left", size="1%", pad=0.1)
            plt.colorbar(c, cax=cax)
        plt.tight_layout()
        plt.show()

Now, we define a simple neural network class, which will be useful in the MultiROM-ANN case. This network takes as input the spatial coordinates and the problem parameters, and gives as output the weights of our MultiROM. The class inherits from ``ANN``, with a newly defined ``fit`` function: here the loss function measures the discrepancy between the MultiROM prediction and the FOM reference. Moreover, the strength of this technique is that it is continuous in space, so we can train the NN on a reduced amount of spatial data, also saving time in the training itself.
.. code:: ipython3

    class ANN_weights(ANN):
        def __init__(self, mrom, layers, function, stop_training, loss=None,
                     optimizer=torch.optim.Adam, lr=0.001, l2_regularization=0,
                     frequency_print=500, last_identity=True):
            super().__init__(layers, function, stop_training, loss=None,
                             optimizer=torch.optim.Adam, lr=0.001, l2_regularization=0,
                             frequency_print=10, last_identity=True)

            # import useful data from the multiROM and the ROM predictions
            self.mrom = mrom
            self.params = list(self.mrom.roms.values())[0].validation_full_database.parameters_matrix

            # re-set the attributes passed with default values to super().__init__
            self.frequency_print = frequency_print
            self.lr = lr
            self.l2_regularization = l2_regularization

            # import the ROMs and the validation predictions of all ROMs
            self.rom_validation_predictions = {}
            for rom in self.mrom.roms:
                rom_pred = self.mrom.roms[rom]
                rom_pred = rom_pred.predict(self.params)
                rom_pred = rom_pred.reshape(rom_pred.shape[0]*rom_pred.shape[1], 1)
                self.rom_validation_predictions[rom] = self._convert_numpy_to_torch(rom_pred)

            # Device configuration
            self.device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
            print(f"Using device: 💻 {self.device}")

        def _build_model_(self, points):
            layers = self.layers.copy()
            layers.insert(0, points.shape[1])
            layers.append(len(self.mrom.roms))
            self.model = self._list_to_sequential(layers, self.function)

            # Move the model to the device
            self.model.to(self.device)

        def fit(self, points, values):  # points=(x, mu) and values=(snapshots)
            self._build_model_(points)
            optimizer = self.optimizer(
                self.model.parameters(),
                lr=self.lr, weight_decay=self.l2_regularization)

            points = self._convert_numpy_to_torch(points)
            values = self._convert_numpy_to_torch(values)

            # Move everything to the device
            points = points.to(self.device)
            values = values.to(self.device)
            self.rom_validation_predictions = {rom: pred.to(self.device) for rom, pred in self.rom_validation_predictions.items()}

            # train the neural network
            n_epoch = 1
            flag = True
            while flag:
                # compute the output of the ANN (the weights)
                y_pred = self.model(points)

                # compute the aggregated solution from the output weights of the ANN
                aggr_pred = torch.zeros(values.shape, device=self.device)
                for i, rom in enumerate(self.mrom.roms):
                    weight = y_pred.clone()[..., i].unsqueeze(-1)
                    aggr_pred += weight*self.rom_validation_predictions[rom]

                # difference between the aggregated solution and the exact solution
                loss = self.loss(aggr_pred, values)

                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

                scalar_loss = loss.item()
                self.loss_trend.append(scalar_loss)

                for criteria in self.stop_training:
                    if isinstance(criteria, int):  # stop criteria is an integer: epoch cap
                        if n_epoch == criteria:
                            flag = False
                    elif isinstance(criteria, float):  # stop criteria is a float: loss threshold
                        if scalar_loss < criteria:
                            flag = False

                if (flag is False or
                        n_epoch == 1 or n_epoch % self.frequency_print == 0):
                    print(f'[epoch {n_epoch:6d}]\t{scalar_loss:e}')
                n_epoch += 1

            return optimizer

        def predict(self, x):
            # move the input to the device and evaluate the trained network
            x = self._convert_numpy_to_torch(np.array(x))
            x = x.to(self.device)
            y_new = self.model(x)
            ynew = y_new.cpu().detach().numpy()
            return ynew
Now we can introduce the dataset, taken from the library `Smithers <https://github.com/mathLab/Smithers>`__.

The test case considered here is ``AirfoilTransonicDataset``, namely the transonic flow over an airfoil (NACA 0012), with the angle of attack varying in the range [:math:`0^{\circ}`, :math:`10^{\circ}`] at Reynolds number :math:`Re=10^7`.

This test case is quite challenging, as it presents shocks, and the shock position varies considerably from one snapshot to the other. The full order implementation has been done in OpenFOAM (using a finite volume discretization) and has been validated against the results in https://ntrs.nasa.gov/citations/19850019511 and in https://doi.org/10.2514/1.J051329.

The ``AirfoilTransonicDataset`` is a dictionary including:

- ``pts_coordinates``: the points' coordinates, divided into:

  - ``pts_coordinates['internal']``: x-y coordinates in the internal mesh;
  - ``pts_coordinates['airfoil']``: x-y coordinates on the airfoil;

- ``params``: the parameters, in our case only the angle of attack;
- ``snapshots``: the snapshots' fields, divided into:

  - ``snapshots['internal']``: the fields evaluated on the 2D internal mesh (we will focus on the velocity magnitude ``mag(v)``);
  - ``snapshots['airfoil']``: the fields on the airfoil (1D fields).

We focus here on the 2D ``mag(v)`` field. Let's try to read the dataset! Note that the snapshots are normalized by their maximum value ``snaps_max``, so all the fields shown below are in nondimensional units.

.. code:: ipython3

    from smithers.dataset import NavierStokesDataset, AirfoilTransonicDataset
    data = AirfoilTransonicDataset()
    field = 'mag(v)'
    coords = data.pts_coordinates["internal"].T
    params = data.params
    snaps = data.snapshots["internal"][field]
    snaps_max = np.max(snaps)
    snaps /= snaps_max  # normalize the snapshots to [0, 1]
    print("Shape of parameters vector: ", params.shape)
    print("Shape of snapshots matrix: ", snaps.shape)

.. parsed-literal::

    Shape of parameters vector:  (100, 1)
    Shape of snapshots matrix:  (100, 45448)

Let's now visualize the 2D spatial coordinates and the velocity magnitude snapshots for the two extreme parameters.

.. code:: ipython3

    idx = 0
    # Plot coordinates
    fig, ax = plt.subplots(1, 2, figsize=(15, 5))
    ax[0].scatter(data.pts_coordinates["internal"][0, :],
                  data.pts_coordinates["internal"][1, :], s=5)
    ax[1].scatter(data.pts_coordinates["internal"][0, :],
                  data.pts_coordinates["internal"][1, :], s=5)
    ax[1].set_xlim(-0.5, 2)
    ax[1].set_ylim(-1, 1)

    for a in ax:
        a.grid()
        a.set_aspect("equal")
    plt.show()
    plot_multiple_internal(data, [snaps[0], snaps[-1]], [f"Snapshot at alpha={params[0]}", f"Snapshot at alpha={params[-1]}"],
                           figsize=None, logscale=False, lim_x=(-0.5, 2), lim_y=(-1, 1), different_cbar=True)

.. image:: output_10_0.png

.. image:: output_10_1.png

Then, we can create the database for the ROMs and initialize the reduction and approximation approaches. Here, we consider POD and PODAE as reduction techniques, and RBF and GPR as approximation strategies; in the end, we consider four ROMs: POD-RBF, POD-GPR, PODAE-RBF, PODAE-GPR. With the settings below, ``PODAE`` first compresses each 45448-dimensional snapshot to 80 POD coefficients, and then, through the autoencoder, to a latent dimension of 3.

.. code:: ipython3

    # Create the database
    db_all = Database(params, snaps, coords)

    # Define some reduction and approximation methods to test
    rank = 3
    pod_for_podae = POD('svd', rank=80)
    ae_for_podae = AE([30, 10, rank], [rank, 10, 30], nn.Softplus(), nn.Softplus(), 50000, lr=1e-3, frequency_print=2000)
    reduction_methods = {
        'POD': POD('svd', rank=rank),
        'PODAE': PODAE(pod_for_podae, ae_for_podae)
    }
    approximation_methods = {
        'RBF': RBF(),
        'GPR': GPR()
    }
We now define the ROMs, stored in a simple dictionary. Note that we use the ``DatabaseSplitter`` plugin to split our database into train, validation, test, and predict sets. Here we will only use the train, validation, and predict sets.

.. code:: ipython3

    # Define a dictionary to store the ROMs
    roms_dict = {}
    db_splitter_plugin = DatabaseSplitter(train=0.6, validation=0.3, test=0.,
                                          predict=0.1, seed=42)
    # Train a ROM for each combination of reduction and approximation
    for redname, redclass in reduction_methods.items():
        for approxname, approxclass in approximation_methods.items():
            rom = ROM(copy.deepcopy(db_all),
                      copy.deepcopy(redclass),
                      copy.deepcopy(approxclass),
                      plugins=[db_splitter_plugin])
            roms_dict[f'{redname}_{approxname}'] = rom

Then, the definition of the ``MultiROM`` follows. We can now fit the MultiROM, which coincides with fitting the individual ROMs separately.

.. code:: ipython3

    # Build a simple multiROM without aggregation and save it
    multirom_noagg = MultiROM(roms_dict)
    # Fit the multiROM (this step may take some time)
    multirom_noagg.fit()

.. parsed-literal::

    [epoch 1] 2.469141e+02
    [epoch 2000] 1.377898e-01
    [epoch 4000] 1.071991e-01
    [epoch 6000] 7.265408e-02
    [epoch 8000] 4.396581e-02
    [epoch 10000] 3.937927e-02
    [epoch 12000] 3.674430e-02
    [epoch 14000] 4.076399e-02
    [epoch 16000] 2.940542e-02
    [epoch 18000] 2.764397e-02
    [epoch 20000] 2.988867e-02
    [epoch 22000] 2.159446e-02
    [epoch 24000] 2.006776e-02
    [epoch 26000] 1.967250e-02
    [epoch 28000] 1.194988e-02
    [epoch 30000] 9.829493e-03
    [epoch 32000] 9.823296e-03
    [epoch 34000] 8.634089e-03
    [epoch 36000] 8.533438e-03
    [epoch 38000] 8.409876e-03
    [epoch 40000] 1.002743e-02
    [epoch 42000] 1.035028e-02
    [epoch 44000] 8.028184e-03
    [epoch 46000] 8.383193e-03
    [epoch 48000] 7.963227e-03
    [epoch 50000] 1.061061e-02
    [epoch 1] 2.433394e+02
    [epoch 2000] 2.430486e-01
    [epoch 4000] 8.800354e-02
    [epoch 6000] 6.279282e-02
    [epoch 8000] 4.753726e-02
    [epoch 10000] 4.444053e-02
    [epoch 12000] 4.448576e-02
    [epoch 14000] 4.391388e-02
    [epoch 16000] 4.341885e-02
    [epoch 18000] 3.927369e-02
    [epoch 20000] 3.098299e-02
    [epoch 22000] 2.612145e-02
    [epoch 24000] 2.182353e-02
    [epoch 26000] 2.140819e-02
    [epoch 28000] 2.090856e-02
    [epoch 30000] 2.038679e-02
    [epoch 32000] 1.961394e-02
    [epoch 34000] 1.646917e-02
    [epoch 36000] 1.557970e-02
    [epoch 38000] 1.507105e-02
    [epoch 40000] 1.453493e-02
    [epoch 42000] 1.511188e-02
    [epoch 44000] 1.419082e-02
    [epoch 46000] 1.334900e-02
    [epoch 48000] 1.282931e-02
    [epoch 50000] 1.201333e-02

After fitting the individual models on the train database, we can now read the validation and test databases and, for example, visualize the ROM predictions for some test parameters.

.. code:: ipython3

    # Get the dictionary of ROMs
    roms_dict = multirom_noagg.roms

    # Extract one ROM from the dictionary, and read the validation and test databases
    rom_one = list(multirom_noagg.roms.values())[0]
    db_validation = rom_one.validation_full_database
    db_test = rom_one.predict_full_database
.. code:: ipython3

    # Visualize the results of each ROM in the multiROM without aggregation on
    # a new parameter
    j = 0  # we choose an index to plot the solution and the weights
    p = db_test.parameters_matrix[j]
    print("Test parameter for plotting: ", p)
    fields = []
    roms_pred = [rom.predict([p]).flatten() for rom in roms_dict.values()]
    roms_pred.append(db_test.snapshots_matrix[j])
    errs = [np.abs(r - db_test.snapshots_matrix[j])+1e-10 for r in roms_pred[:-1]]
    labels = [f'{key}' for key in roms_dict.keys()]
    labels.append("FOM")
    plot_multiple_internal(data, roms_pred, labels, different_cbar=False)
    plot_multiple_internal(data, errs, [f"{l} - abs. error" for l in labels], logscale=True, different_cbar=False)

.. parsed-literal::

    Test parameter for plotting:  [0.2]

.. image:: output_19_1.png

.. image:: output_19_2.png

We can see that the ``POD_*`` solutions are over-diffusive, while the ``PODAE_*`` solutions better capture the discontinuity, even if they still exhibit some inaccuracies.

We now initialize two novel ``MultiROM``\ s using the plugin ``Aggregation``. One model is for the standard XMA aggregation (indicated with ``fit_function=None``) and uses ``KNN`` as regressor. The other model uses the ``ANN_weights`` class to compute the weights starting from the individual ROM predictions. In both cases, the weights are trained on the validation set.

.. code:: ipython3

    print("Fitting multiROM with KNN aggregation...")
    knn = KNeighborsRegressor()
    multirom_KNN = MultiROM(roms_dict, plugins=[Aggregation(fit_function=None, predict_function=knn), db_splitter_plugin])
    multirom_KNN.fit()

.. parsed-literal::

    Fitting multiROM with KNN aggregation...
    Optimal sigma value in weights: [0.009994]

.. code:: ipython3

    print("Fitting multiROM with ANN aggregation...")
    ann = ANN_weights(multirom_noagg, [64, 64, 64],[nn.Softplus(), nn.Softplus(), nn.Softplus(), nn.Softmax(dim=-1)],
                      stop_training=1000, lr=1e-3, frequency_print=100, l2_regularization=0)
    multirom_ANN = MultiROM(roms_dict, plugins=[Aggregation(fit_function=ann), db_splitter_plugin])
    multirom_ANN.fit()

.. parsed-literal::

    Fitting multiROM with ANN aggregation...
    Using device: 💻 mps
    [epoch 1] 1.110127e-04
    [epoch 100] 2.332629e-05
    [epoch 200] 2.292044e-05
    [epoch 300] 2.286620e-05
    [epoch 400] 2.281346e-05
    [epoch 500] 2.274650e-05
    [epoch 600] 2.265942e-05
    [epoch 700] 2.256762e-05
    [epoch 800] 2.250623e-05
    [epoch 900] 2.247204e-05
    [epoch 1000] 2.244576e-05

Let's now quantify the relative error on the test parameters for the individual ROMs and for the MultiROM strategies.

.. code:: ipython3

    multiroms = {}
    multiroms["KNN"] = multirom_KNN
    multiroms["ANN"] = multirom_ANN

    header = '{:10s}'.format('')
    for name in approximation_methods:
        header += ' {:>16s}'.format(name)
    print(header)
    for redname, redclass in reduction_methods.items():
        row = '{:10s}'.format(redname)
        for approxname, approxclass in approximation_methods.items():
            rom = roms_dict[redname+'_'+approxname]
            row += ' {:16e}'.format(rom.test_error(db_test))
        print(row)
        print('-'*len(row))
    for model_name in multiroms:
        row = '{:10s}'.format(model_name)
        multirom_ = multiroms[model_name]
        row += '- MultiROM {:16e}'.format(multirom_.test_error(db_test))
        print(row)
.. parsed-literal::

                            RBF              GPR
    POD            5.263353e-02     5.263348e-02
    --------------------------------------------
    PODAE          9.785834e-03     9.695809e-03
    --------------------------------------------
    KNN       - MultiROM     1.304681e-02
    ANN       - MultiROM     9.233725e-03

We can now visualize the predicted MultiROM solutions for a test parameter, and the errors with respect to the corresponding FOM reference. The MultiROM automatically detects the best method in the different spatial regions.

.. code:: ipython3

    fields = []
    roms_pred = []
    for rom in multiroms.values():
        roms_pred.append(rom.predict(np.array([p]).reshape(-1, 1)).flatten())
    roms_pred.append(db_test.snapshots_matrix[j].flatten())
    errs = [np.abs(r - db_test.snapshots_matrix[j])+1e-10 for r in roms_pred[:-1]]
    labels = list(multiroms.keys())
    labels.append("FOM")
    # visualize fields
    plot_multiple_internal(data, roms_pred, labels,
                           figsize=None, logscale=False, lim_x=(-0.5, 2), lim_y=(-1, 1))
    # visualize errors in log scale
    plot_multiple_internal(data, errs, [f"{l} - abs. error" for l in labels],
                           figsize=None, logscale=True, lim_x=(-0.5, 2), lim_y=(-1, 1))

.. image:: output_27_0.png

.. image:: output_27_1.png

We finally visualize the weights of both strategies, for the same test parameter as before.

.. code:: ipython3

    for mrom in multiroms.values():
        weights_list = []
        for rom in roms_dict.keys():
            weights_list.append(mrom.weights_predict[rom].flatten())
        plot_multiple_internal(data, weights_list, list(roms_dict.keys()),
                               figsize=None, logscale=False, lim_x=(-0.5, 2), lim_y=(-1, 1), different_cbar=False, clims=[0, 1])

.. image:: output_29_0.png

.. image:: output_29_1.png

We can immediately see that the standard aggregation algorithm "activates" the nonlinear reduction approaches (PODAE) in the spatial regions close to the shock and to the wake, while in the rest of the domain the weights are split roughly 50/50 between the POD and PODAE methods. The ANN strategy instead converges to weights with less spatial dependence. This depends strongly on the architecture of the ANN and on all the hyperparameters (activation function, learning rate, weight decay).

What's next?
~~~~~~~~~~~~

There is still a lot to do, for example:

- improving the training of the ANN by adding a negative loss contribution depending on the spatial variability of the weights, in order to enforce more spatial variability (a possible sketch follows below);
- trying to combine FOM and ROM together (multifidelity aggregation).
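As a rough illustration of the first idea, the loss used in ``ANN_weights.fit`` could be extended with a term that rewards spatial variation of the weights. This is only a sketch of a possible direction, not a tested feature of EZyRB; the ``lambda_var`` coefficient and the variance-based proxy for spatial variability are our own assumptions.

.. code:: ipython3

    import torch

    def loss_with_variability_reward(aggr_pred, values, weights, lambda_var=1e-3):
        # data misfit between the aggregated prediction and the FOM reference
        data_loss = torch.nn.functional.mse_loss(aggr_pred, values)
        # variance of each ROM's weight across the points, summed over the ROMs;
        # subtracting it rewards spatially varying weights (hypothetical, untested)
        variability = weights.var(dim=0).sum()
        return data_loss - lambda_var * variability

Tuning ``lambda_var`` would balance accuracy against the desired spatial variability of the weight fields.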
diff --git a/docs/source/conf.py b/docs/source/conf.py
index c4d555b3..48d72618 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -49,6 +49,7 @@
     'sphinx.ext.coverage',
     'sphinx.ext.ifconfig',
     'sphinx.ext.mathjax',
+    'nbsphinx'
 ]

 intersphinx_mapping = {
@@ -316,3 +317,6 @@
 # If true, do not generate a @detailmenu in the "Top" node's menu.
 #texinfo_no_detailmenu = False
+
+# -- nbsphinx configuration -----------------------------------------------
+nbsphinx_execute = "never"
diff --git a/docs/source/tutorials.rst b/docs/source/tutorials.rst
index 068f8a64..52e67d90 100644
--- a/docs/source/tutorials.rst
+++ b/docs/source/tutorials.rst
@@ -4,19 +4,19 @@ Tutorials
 .. toctree::
     :maxdepth: 1

-    _tutorials/tutorial-1/tutorial-1
+    _tutorials/tutorial-1

 .. toctree::
     :maxdepth: 1

-    _tutorials/tutorial-2/tutorial-2
+    _tutorials/tutorial-2

 .. toctree::
     :maxdepth: 1

-    _tutorials/tutorial-3/tutorial-3
+    _tutorials/tutorial-3

 .. toctree::
     :maxdepth: 1

-    _tutorials/tutorial-4/tutorial-4
+    _tutorials/tutorial-4
diff --git a/tests/test_reducedordermodel.py b/tests/test_reducedordermodel.py
index ea62b694..415fa1eb 100644
--- a/tests/test_reducedordermodel.py
+++ b/tests/test_reducedordermodel.py
@@ -202,6 +202,30 @@
     def test_multi_db(self):
         assert isinstance(pred, dict)
         assert len(pred) == 2

+def test_invariant_pod():
+    # the POD modal coefficients must not depend on the chosen approximation
+    pod = POD()
+
+    rbf = RBF()
+    gpr = GPR()
+    rnr = RadiusNeighborsRegressor()
+    knr = KNeighborsRegressor(n_neighbors=1)
+    lin = Linear(fill_value=0)
+    db = Database(param, snapshots.T)
+
+    modal_coeffs = []
+    for approx in [rbf, gpr, knr, rnr, lin]:
+        rom = ROM(db, pod, approx).fit()
+        coeff = rom.reduction.transform(db.snapshots_matrix.T)
+        modal_coeffs.append(coeff)
+
+    for i in range(1, len(modal_coeffs)):
+        np.testing.assert_allclose(
+            modal_coeffs[0],
+            modal_coeffs[i],
+            rtol=1e-5,
+            atol=1e-8
+        )
+
 """
     def test_optimal_mu(self):
diff --git a/tutorials/logo_ezrb.png b/tutorials/logo_ezrb.png
deleted file mode 100644
index 3dd9ccca..00000000
Binary files a/tutorials/logo_ezrb.png and /dev/null differ
diff --git a/tutorials/logo_mathlab.png b/tutorials/logo_mathlab.png
deleted file mode 100644
index 25724be1..00000000
Binary files a/tutorials/logo_mathlab.png and /dev/null differ
diff --git a/tutorials/tutorial-1.ipynb b/tutorials/tutorial-1.ipynb
index 5a8a4303..31ca9f7f 100644
--- a/tutorials/tutorial-1.ipynb
+++ b/tutorials/tutorial-1.ipynb
@@ -6,9 +6,9 @@
    "id": "OYB8FZHSE6ef"
   },
   "source": [
-    "# EZyRB Tutorial 1\n",
     "## Build and query a simple reduced order model\n",
     "\n",
+    "\n",
     "In this tutorial we will show the typical workflow for the construction of the Reduced Order Model based only on the outputs of the higher-order model. In detail, we consider here a POD-RBF framework (Proper Orthogonal Decomposition for dimensionality reduction and Radial Basis Function for manifold approximation), but the tutorial can be easily extended to other methods thanks to the modular nature of **EZyRB**.\n",
     "\n",
     "We consider a parametric steady heat conduction problem in a two-dimensional domain $\Omega$. While in this tutorial we are going to focus on the data-driven approach, the same problem can be tackled in an intrusive manner (with the Reduced Basis method) using the [RBniCS](https://gitlab.com/RBniCS/RBniCS), as demonstrated in this [RBniCS tutorial](https://gitlab.com/RBniCS/RBniCS/tree/master/tutorials/01_thermal_block).
\n", @@ -160,7 +160,7 @@ "id": "DNW_CtXDE6eh" }, "source": [ - "## Offline phase\n", + "### Offline phase\n", "\n", "In the *offline* phase, we need some samples of the parametric high-fidelity model. In this case, we extract 8 snapshots from the numerical model implemented in **FEniCS**, and we import them and the related parameters." ] @@ -544,7 +544,7 @@ "id": "Xv30_9_jE6ei" }, "source": [ - "## Online phase\n", + "### Online phase\n", "In the *online* phase we can query our model in order to predict the solution for a new parameter $\\mu_\\text{new}$ that is not in the training set. We just need to pass the new parameters as input of the `predict()` function." ] }, @@ -626,7 +626,7 @@ "id": "r2cP-_ikE6ei" }, "source": [ - "## Error Approximation & Improvement\n", + "### Error Approximation & Improvement\n", "\n", "At the moment, we used a database which is composed by 8 files. we would have an idea of the approximation accuracy we are able to reach with these high-fidelity solutions. Using the *leave-one-out* strategy, an error is computed for each parametric point in our database and these values are returned as array." ] diff --git a/tutorials/tutorial-1.py b/tutorials/tutorial-1.py deleted file mode 100644 index 966fd478..00000000 --- a/tutorials/tutorial-1.py +++ /dev/null @@ -1,177 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# # EZyRB Tutorial 1 -# ## Build and query a simple reduced order model -# -# In this tutorial we will show the typical workflow for the construcion of the Reduced Order Model based only on the outputs of the higher-order model. In detail, we consider here a POD-RBF framework (Proper Orthogonal Decomposition for dimensionality reduction and Radial Basis Function for manifold approximation), but the tutorial can be easily extended to other methods thanks to the modularity nature of **EZyRB**. -# -# We consider a parametric steady heat conduction problem in a two-dimensional domain $\Omega$. While in this tutorial we are going to focus on the data-driven approach, the same problem can be tackled in an intrusive manner (with the Reduced Basis method) using the [RBniCS](https://gitlab.com/RBniCS/RBniCS), as demonstrated in this [RBniCS tutorial](https://gitlab.com/RBniCS/RBniCS/tree/master/tutorials/01_thermal_block).
-# The problem is exhaustively discussed in the book *Certified reduced basis methods for parametrized partial differential equations*, J.S. Hesthaven, G. Rozza, B. Stamm, 2016, Springer. An additional description is also available at https://rbnics.gitlab.io/RBniCS-jupyter/tutorial_thermal_block.html. -# -# Since good documentation is already available for this problem, and since we focus here on the data-driven methodologies, we just summarize the model to allow a better understanding. -# -# The domain is depicted below: -# -# [image: sketch of the domain] -# -# where: -# - the first parameter $\mu_0$ controls the conductivity in the circular subdomain $\Omega_0$; -# - the second parameter $\mu_1$ controls the flux over $\Gamma_\text{base}$. -# -# -# ### Initial setting -# -# First of all, import the required packages: we need the standard NumPy and Matplotlib, and some classes from EZyRB. In the EZyRB framework, we need three main ingredients to construct a reduced order model: -# - an initial database where the snapshots are stored; -# - a reduction method to reduce the dimensionality of the system; in this tutorial we will use the proper orthogonal decomposition (POD) method; -# - an approximation method to extrapolate the parametric solution for new parameters; in this tutorial we will use a radial basis function (RBF) interpolation. - -# In[1]: - - -import numpy as np -import matplotlib.tri as mtri -import matplotlib.pyplot as plt -from datasets import load_dataset -from ezyrb import POD, RBF, Database -from ezyrb import ReducedOrderModel as ROM - -# get_ipython().run_line_magic('matplotlib', 'inline') - - -# ## Offline phase -# -# In the *offline* phase, we need some samples of the parametric high-fidelity model. In this case, we extract 8 snapshots from the numerical model implemented in **FEniCS**, and we import them together with the related parameters. - -# In[2]: - -data_path = "kshitij-pandey/termal_dataset" -snapshots_hf = load_dataset(data_path, "snapshots", split="train") -param_hf = load_dataset(data_path, "params", split="train") -triangles_hf = load_dataset(data_path, "triangles", split="train") -coords_hf = load_dataset(data_path, "coords", split="train") - - -# convert the Hugging Face datasets into numpy arrays - -def hf_to_numpy(ds): - return np.stack([np.array(ds[col]) for col in ds.column_names], axis=1) - -snapshots = hf_to_numpy(snapshots_hf) -param = hf_to_numpy(param_hf) -triangles = hf_to_numpy(triangles_hf) -coords = hf_to_numpy(coords_hf) -# Moreover, to visualize the solution (both the higher-order one and the reduced one), we also import the mesh information needed to create the triangulation. We stress that this additional step is only for plotting purposes, and is not mandatory for the reduced space generation. - -# In[3]: - - -x, y = coords -from matplotlib.tri import Triangulation -triang = Triangulation(x, y, triangles) - -# For the sake of clarity, the snapshots are plotted below. - -# In[4]: - - -fig, ax = plt.subplots(nrows=2, ncols=4, figsize=(16, 6), sharey=True, sharex=True) -ax = ax.flatten() -for i in range(8): - ax[i].triplot(triang, 'b-', lw=0.1) - cm = ax[i].tripcolor(triang, snapshots[i]) - fig.colorbar(cm, ax=ax[i]) - ax[i].set_title(r'($\mu_0={:5.2f}, \mu_1={:5.2f})$'.format(*param[i])) - - -# First of all, we create a `Database` object from the parameters and the snapshots. - -# In[5]: - - -db = Database(param, snapshots) - - -# Then we need a reduction object. In this case we use the proper orthogonal decomposition, so we create a `POD` object.
We use here all the default parameters, but for the complete list of available arguments we refer to the original documentation of the [POD](https://mathlab.github.io/EZyRB/pod.html) class. - -# In[6]: - - -pod = POD('svd') - - -# Then we instantiate the `RBF` class for interpolating the solution manifold. Also in this case, the [RBF](https://mathlab.github.io/EZyRB/rbf.html) documentation is the perfect starting point to explore this class. - -# In[7]: - - -rbf = RBF() - - -# A few lines of code and our reduced model is created! -# To complete everything, we create the `ReducedOrderModel` (aliased to `ROM` in this tutorial) object by passing the already created objects. For clarity, we point out that we need to pass the **instances** and not the classes. Simply changing this line (with different objects) allows us to test different frameworks in a very modular way. -# The `fit()` function computes the reduced model, meaning that the original snapshots in the database are projected onto the POD space and the RBF interpolator is created. - -# In[8]: - - -rom = ROM(db, pod, rbf) -rom.fit(); - - -# ## Online phase -# In the *online* phase we can query our model in order to predict the solution for a new parameter $\mu_\text{new}$ that is not in the training set. We just need to pass the new parameters as input to the `predict()` function. - -# In[9]: - - -new_mu = [8, 1] -pred_sol = rom.predict(new_mu).snapshots_matrix - -# We can now plot the predicted solution for a fixed parameter... - -# In[10]: - - -plt.figure(figsize=(7, 5)) -plt.triplot(triang, 'b-', lw=0.1) -plt.tripcolor(triang, *pred_sol) -plt.colorbar(); - - -# ... or interactively tweak the input parameters to visualize the corresponding (approximated) output. For a fancy result, we need a bit of IPython black magic (https://ipywidgets.readthedocs.io/en/latest/). - -# In[11]: - - -from ipywidgets import interact - -def plot_solution(mu0, mu1): - new_mu = [mu0, mu1] - pred_sol = rom.predict(new_mu).snapshots_matrix - plt.figure(figsize=(8, 7)) - plt.triplot(triang, 'b-', lw=0.1) - plt.tripcolor(triang, *pred_sol) - plt.colorbar() - -interact(plot_solution, mu0=8, mu1=1); - - -# ## Error Approximation & Improvement -# -# At the moment, our database is composed of 8 snapshots. We would like to get an idea of the approximation accuracy we can reach with these high-fidelity solutions. Using the *leave-one-out* strategy, an error is computed for each parametric point in our database and these values are returned as an array. - -# In[12]: - - -for pt, error in zip(rom.database.parameters_matrix, rom.loo_error()): - print(pt, error) - - -# Moreover, we can use the information about the errors to locate the parametric points where new high-fidelity solutions should be computed, and add these to the database in order to optimally improve the accuracy. - -# In[13]: - - -rom.optimal_mu() \ No newline at end of file diff --git a/tutorials/tutorial-2.ipynb b/tutorials/tutorial-2.ipynb index d90edbd8..30a6f862 100644 --- a/tutorials/tutorial-2.ipynb +++ b/tutorials/tutorial-2.ipynb @@ -7,7 +7,6 @@ "id": "b0c39fb4" }, "source": [ - "# EZyRB Tutorial 2\n", "## Test several frameworks at once\n", "\n", "In this tutorial, we will explain step by step how to use the **EZyRB** library to test different techniques for building the reduced order model.
We will compare different methods of dimensionality reduction, interpolation and accuracy assessment.\n", diff --git a/tutorials/tutorial-2.py b/tutorials/tutorial-2.py deleted file mode 100644 index 093a199e..00000000 --- a/tutorials/tutorial-2.py +++ /dev/null @@ -1,241 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# # EZyRB Tutorial 2 -# ## Test several frameworks at once -# -# In this tutorial, we will explain step by step how to use the **EZyRB** library to test different techniques for building the reduced order model. We will compare different methods of dimensionality reduction, interpolation and accuracy assessment. -# -# We consider here a computational fluid dynamics problem described by the (incompressible) Navier-Stokes equations. -# We will be using the **Navier Stokes Dataset**, which contains the output data from a full order flow simulation and can be found on **GitHub** under the [Smithers library](https://github.com/mathLab/Smithers). -# **Smithers** is developed by **SISSA mathLab** and contains some useful datasets and a multi-purpose toolbox that inherits functionality from other packages to make dealing with these datasets easier and the code more compact. -# -# The package can be installed using `python -m pip install smithers -U`, but for a detailed description of installation and usage we refer to the original [GitHub page](https://github.com/mathLab/Smithers/blob/master/README.md). -# -# First of all, we just import the data and build the dataset arrays. - -# In[1]: -from datasets import load_dataset -data_path = "kshitij-pandey/navier_stokes_datasets" -snapshots_hf = load_dataset(data_path, "snapshots_split", split="train") -param_hf = load_dataset(data_path, "params", split="train") -triangles_hf = load_dataset(data_path, "triangles", split="train") -coords_hf = load_dataset(data_path, "coords", split="train") -import numpy as np -snapshots = {name: np.array(snapshots_hf[name]) for name in ['vx', 'vy', 'mag(v)', 'p']} -# convert the Hugging Face datasets into numpy arrays - -def hf_to_numpy(ds): - return np.stack([np.array(ds[col]) for col in ds.column_names], axis=1) - -params = hf_to_numpy(param_hf) -triangles = hf_to_numpy(triangles_hf) -coords = hf_to_numpy(coords_hf) - -# The `NavierStokesDataset()` class contains the attributes: -# - `snapshots`: the matrices of snapshots stored by row (one matrix for each output field) -# - `params`: the matrix of corresponding parameters -# - `pts_coordinates`: the coordinates of all nodes of the discretized space -# - `faces`: the actual topology of the discretized space -# - `triang`: the triangulation, useful especially for rendering purposes. -# -# In detail, `snapshots` is a dictionary with the following outputs of interest: -# - **vx:** velocity in the X-direction. -# - **vy:** velocity in the Y-direction. -# - **mag(v):** velocity magnitude. -# - **p:** pressure value. -# -# In total, the dataset contains 500 parametric configurations in a space of 1639 degrees of freedom. In this case, we have just one parameter, which is the velocity (along $x$) we impose at the inlet. - -# In[2]: - - -for name in ['vx', 'vy', 'p', 'mag(v)']: - print('Shape of {:7s} snapshots matrix: {}'.format(name, snapshots[name].shape)) - -print('Shape of parameters matrix: {}'.format(params.shape)) - - -# ### Initial setting -# -# First of all, we import the required packages. -# -# From `EZyRB` we need: -# 1. The `ROM` class, which performs the model order reduction process. -# 2.
A module such as `Database`, where the matrices of snapshots and parameters are stored. -# 3. A dimensionality reduction method such as Proper Orthogonal Decomposition `POD` or the Auto-Encoder network `AE`. -# 4. An interpolation method to obtain an approximation of the parametric solution for a new set of parameters, such as the Radial Basis Function `RBF`, Gaussian Process Regression `GPR`, K-Neighbors Regressor `KNeighborsRegressor`, Radius Neighbors Regressor `RadiusNeighborsRegressor` or Multidimensional Linear Interpolator `Linear`. -# -# We also need to import: -# * `numpy:` to handle the arrays and matrices we will be working with. -# * `torch:` to enable the usage of neural networks. -# * `matplotlib.pyplot:` to handle the plotting environment. -# * `matplotlib.tri:` for plotting on the triangular grid. - -# In[3]: - - -# Database module -from ezyrb import Database - -# Dimensionality reduction methods -from ezyrb import POD, AE - -# Approximation/interpolation methods -from ezyrb import RBF, GPR, KNeighborsRegressor, RadiusNeighborsRegressor, Linear, ANN - -# Model order reduction class -from ezyrb import ReducedOrderModel as ROM - - -import torch -import torch.nn as nn - -import matplotlib.tri as mtri -import matplotlib.pyplot as plt - -import warnings -warnings.filterwarnings("ignore", message="Ill-conditioned matrix ") -# get_ipython().run_line_magic('matplotlib', 'inline') - - -# Before starting with the reduced order model, we visualize some of the snapshots in our dataset. - -# In[4]: -x, y = coords -from matplotlib.tri import Triangulation -triang = Triangulation(x, y, triangles) - -fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(16, 8), sharey=True, sharex=True) -ax = ax.flatten() -for i in range(9): - ax[i].tricontourf(triang, snapshots['vx'][i], levels=16) - ax[i].set_title('Original snapshot at inlet velocity = {}'.format(*params[i].round(2))) - - -# In this step, we perform the model order reduction to obtain a reduced space from the full order space. We refer to [Tutorial 1](https://github.com/mathLab/EZyRB/blob/master/tutorials/tutorial-1.ipynb) for the description of the basic workflow; here we just quickly describe the steps implemented in the next cell. -# -# We start by passing the matrices of the parameters and snapshots to the `Database()` class. Note that for now we create the ROM for the `vx` field only. We also instantiate the `POD` and `RBF` objects to have a benchmark ROM. - -# In[5]: - - -db = Database(params, snapshots['vx']) -rom = ROM(db, POD(), RBF()) -rom.fit(); - - -# Three lines for a data-driven reduced order model, not bad! -# -# Just to have a visual check that everything is going well, we plot the approximation for new parameters in the range $[1, 80]$. - -# In[6]: - - -new_params = np.random.uniform(size=(2))*79.+1. - -fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 3)) -for i, param in enumerate(new_params): - ax[i].tricontourf(triang, *rom.predict([param]).snapshots_matrix) - ax[i].set_title('Predicted snapshot at inlet velocity = {}'.format(param)) - - -# We now calculate the approximation error, to see how close our reduced solution is to the full-order solution/simulation, using the **k-fold Cross-Validation** strategy: we pass the number of splits to the `ReducedOrderModel.kfold_cv_error(n_splits)` method, which operates as follows: -# -# 1. Split the dataset (parameters/snapshots) into $k$ groups/folds. -# 2. Use $k-1$ groups to calculate the reduced space and leave one group for testing. -# 3.
Use the approximation/interpolation method to predict each snapshot in the testing group. -# 4. Calculate the error for each snapshot in the testing group by taking the difference between the predicted and the original snapshot. -# 5. Average the errors for predicting snapshots of the testing group/fold. -# 6. Repeat this procedure using different groups for testing and the remaining $k-1$ groups to calculate the reduced space. -# 7. In the end, we have $k$ errors (one for each group/fold), which we can average to obtain a single error value. - -# In[7]: - - -errors = rom.kfold_cv_error(n_splits = 5) -print('Average error for each fold:') -for e in errors: - print(' ',e) -print('\nAverage error = {}'.format(errors.mean())) - - -# Another strategy for calculating the approximation error is **leave-one-out**, available through the `ReducedOrderModel.loo_error()` method. It is equivalent to setting the number of folds equal to the number of snapshots (e.g. in this case setting `n_splits` = 500) and it operates as follows: -# 1. Combine all the snapshots except one. -# 2. Calculate the reduced space. -# 3. Use the approximation/interpolation method to predict the removed snapshot. -# 4. Calculate the error by taking the difference between the predicted snapshot and the original removed one. -# 5. The error vector is obtained by repeating this procedure for each snapshot in the database. -# -# It is worth mentioning that this consumes more time: since we have 500 snapshots, the algorithm will perform the space reduction and calculate the approximation error 500 times. For this reason, we commented out the next line of code, in order to limit the computational effort needed to run this tutorial. Uncomment it only if you are a really brave person! - -# In[8]: - - -# errors = rom.loo_error() - - -# ### Comparison between different methods -# -# One of the advantages of data-driven reduced order modeling is the modular nature of the method. Practically speaking, we need -# - a method for reducing the dimensionality of the input snapshots; -# - a method for approximating the solution manifold; -# -# allowing in principle a large variety of combinations. -# -# The list of implemented **reduction methods** in EZyRB contains: -# - `POD`: *proper orthogonal decomposition* -# - `AE`: *autoencoder* -# -# while the list of implemented **approximation methods** contains: -# - `RBF`: *radial basis function interpolation* -# - `GPR`: *Gaussian process regression* -# - `KNeighborsRegressor`: *k-neighbors regression* -# - `RadiusNeighborsRegressor`: *radius neighbors regression* -# - `Linear`: *multidimensional linear interpolation* -# -# Moreover, new state-of-the-art methods will arrive, so we invite you to read the [documentation](https://mathlab.github.io/EZyRB/) for the complete list of all the possibilities! -# -# In the next cell (after the brief aside below), we create two dictionaries with the objects, such that we can easily test everything with simple `for` cycles. **WARNING**: since several methods require the solution of an optimization problem (e.g. GPR, ANN, AE), the cell may require some minutes to run.
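The aside promised above concerns the two error strategies just described: leave-one-out is the limiting case of k-fold cross-validation with one snapshot per fold, so the two calls below are expected to agree up to fold ordering. Both are left commented out, like in the tutorial, since 500 folds are expensive; the equivalence is the claim of the text above, sketched here rather than verified.

# Aside: leave-one-out seen as k-fold cross-validation with one snapshot
# per fold. Both calls build 500 reduced spaces, hence they stay commented.
# errors_loo = rom.loo_error()
# errors_kfold = rom.kfold_cv_error(n_splits=len(db))  # one fold per snapshot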
- -# In[9]: - - -reductions = { - 'POD': POD('svd', rank=10), - 'AE': AE([200, 100, 10], [10, 100, 200], nn.Tanh(), nn.Tanh(), 10), -} - -approximations = { -# 'Linear': Linear(), - 'RBF': RBF(), - 'GPR': GPR(), - 'KNeighbors': KNeighborsRegressor(), - 'RadiusNeighbors': RadiusNeighborsRegressor(), - 'ANN': ANN([20, 20], nn.Tanh(), 10), -} - -header = '{:10s}'.format('') -for name in approximations: - header += ' {:>15s}'.format(name) - -print(header) -for redname, redclass in reductions.items(): - row = '{:10s}'.format(redname) - for approxname, approxclass in approximations.items(): - rom = ROM(db, redclass, approxclass) - rom.fit() - row += ' {:15e}'.format(rom.kfold_cv_error(n_splits=5).mean()) - - print(row) - - -# In a very compact way, we tested several frameworks (POD-RBF, POD-GPR, POD-NN, and so on), showing the accuracy reached by each of them. -# -# We can also note that the frameworks involving neural networks (`AE` and `ANN`) show very poor accuracy. This is due to the limited number of epochs we impose in the learning procedure. You can try to increase the number of epochs, as shown in the next cell, in order to obtain better results, at the cost of a longer training phase. - -# In[10]: - - -reductions['AE'] = AE([100, 10], [10, 100], nn.ReLU(), nn.ReLU(), 30000) -approximations['ANN'] = ANN([50, 10], nn.ReLU(), 30000) diff --git a/tutorials/tutorial-3.ipynb b/tutorials/tutorial-3.ipynb index e0664f6d..4ac39780 100644 --- a/tutorials/tutorial-3.ipynb +++ b/tutorials/tutorial-3.ipynb @@ -7,9 +7,7 @@ "tags": [] }, "source": [ - "# EzyRB Tutorial 3\n", - "\n", - "## Tutorial for developing ROM using Neural-Network shift Proper Orthogonal Decomposition : NNsPOD-ROM\n", + "## Using `Plugin` for implementing Neural-Network shift POD\n", "\n", "In this tutorial we will explain how to use the **NNsPOD-ROM** algorithm implemented in the **EZyRB** library.\n", "\n", @@ -84,7 +82,7 @@ "tags": [] }, "source": [ - "## Offline phase \n", + "### Offline phase \n", "\n", "In this case, we obtain 15 snapshots from the analytical model. " ] @@ -496,7 +494,7 @@ "id": "0eda175f-9bc2-45d5-be47-d8c0a92465c8", "metadata": {}, "source": [ - "## Online phase " + "### Online phase " ] }, { diff --git a/tutorials/tutorial-3.py b/tutorials/tutorial-3.py deleted file mode 100644 index 01fa2bd7..00000000 --- a/tutorials/tutorial-3.py +++ /dev/null @@ -1,252 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# # EzyRB Tutorial 3 -# -# ## Tutorial for developing ROM using Neural-Network shift Proper Orthogonal Decomposition: NNsPOD-ROM -# -# In this tutorial we will explain how to use the **NNsPOD-ROM** algorithm implemented in the **EZyRB** library. -# -# The NNsPOD algorithm is a purely data-driven machine learning method that seeks an optimal mapping of the various snapshots to a reference configuration via automatic detection [1], and then seeks the low-rank linear approximation subspace of the solution manifold. The nonlinear transformation of the manifold leads to an accelerated Kolmogorov n-width (KnW) decay, resulting in a low-dimensional linear approximation subspace and enabling the construction of efficient and accurate reduced order models. The complete workflow of the NNsPOD-ROM algorithm, comprising both the offline and online phases, is presented in [2]. -# -# References: -# -# [1] Papapicco, D., Demo, N., Girfoglio, M., Stabile, G., & Rozza, G. (2022).
The Neural Network shifted-proper orthogonal decomposition: A machine learning approach for non-linear reduction of hyperbolic equations. Computer Methods in Applied Mechanics and Engineering, 392, 114687 - https://doi.org/10.1016/j.cma.2022.114687 -# -# [2] Gowrachari, H., Demo, N., Stabile, G., & Rozza, G. (2024). Non-intrusive model reduction of advection-dominated hyperbolic problems using neural network shift augmented manifold transformations. arXiv preprint - https://arxiv.org/abs/2407.18419. -# -# ### Problem definition -# -# We consider **1D Gaussian distribution functions**, in which $x$ is the random variable, $\mu$ is the mean and $\sigma^2$ is the variance, where $\sigma$ is the standard deviation, i.e. the width of the Gaussian. -# $$ -# f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-(x-\mu)^2 /\left(2 \sigma^2\right)} -# $$ -# -# To mimic travelling waves, we parameterize the mean $\mu$: changing $\mu$ shifts the distribution along the $x$-axis. (Note that the code below drops the normalization factor, so each wave has unit peak height.) -# -# ### Initial setting -# -# First of all, import the required packages: we need the standard NumPy, Torch and Matplotlib, and some classes from EZyRB. -# -# * `numpy:` to handle the arrays and matrices we will be working with. -# * `torch:` to enable the usage of neural networks. -# * `matplotlib:` to handle the plotting environment. -# -# From `EZyRB` we need: -# 1. The `ROM` class, which performs the model order reduction process. -# 2. A module such as `Database`, where the matrices of snapshots and parameters are stored. -# 3. A dimensionality reduction method such as Proper Orthogonal Decomposition `POD`. -# 4. An interpolation method to obtain an approximation of the parametric solution for a new set of parameters, such as the Radial Basis Function `RBF` or the Multidimensional Linear Interpolator `Linear`. - -#%% Dependencies -import numpy as np -import torch -from matplotlib import pyplot as plt - -from ezyrb import POD, RBF, Database, Snapshot, Parameter, Linear, ANN -from ezyrb import ReducedOrderModel as ROM -from ezyrb.plugin import AutomaticShiftSnapshots -#%% 1D travelling wave function definition - -def gaussian(x, mu, sig): - return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.))) - -def wave(t, res=256): - x = np.linspace(0, 11, res) - return x, gaussian(x, t, 0.2).T # parameterizing the mean value -#%% Offline phase -# In this case, we obtain 20 snapshots from the analytical model (the EZyRB code follows the brief NumPy aside below).
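The brief aside announced above: the intuition behind NNsPOD can be checked in a few lines of plain NumPy before running the full pipeline. Once each travelling wave is shifted back so that all peaks sit at a common reference position, the snapshots nearly coincide and a single POD mode suffices. This sketch is independent of EZyRB and uses a plain `np.interp` shift in place of the learned ShiftNet/InterpNet pair; the choice of `mus[4]` as reference mirrors `reference_index=4` used later.

# NumPy-only check of the shifting intuition behind NNsPOD: aligned
# travelling waves collapse onto one profile, so the singular values of the
# shifted snapshot matrix decay almost immediately (near rank 1).
import numpy as np

x = np.linspace(0, 11, 256)
mus = np.linspace(0.75, 10.25, 20)
snaps = np.stack([np.exp(-(x - m)**2 / (2 * 0.2**2)) for m in mus])

# Shift every snapshot so that its peak sits at the reference position mus[4];
# np.interp plays the role of the trained shift/interpolation networks.
ref = mus[4]
shifted = np.stack([np.interp(x, x - m + ref, s) for m, s in zip(mus, snaps)])

s_orig = np.linalg.svd(snaps, compute_uv=False)
s_shift = np.linalg.svd(shifted, compute_uv=False)
print(s_orig[:3] / s_orig[0])    # slow decay: travelling waves resist plain POD
print(s_shift[:3] / s_shift[0])  # near rank-1 decay after alignment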
- -n_params = 20 -params = np.linspace(0.75, 10.25, n_params).reshape(-1, 1) - -pod = POD(rank=1) -rbf = RBF() -db = Database() - -for param in params: - space, values = wave(param) - snap = Snapshot(values=values.T, space=space) - db.add(Parameter(param), snap) - -print("Snapshot shape : ", db.snapshots_matrix.shape) -print("Parameter shape : ", db.parameters_matrix.shape) -# In[4]: -db_train, db_test = db.split([10,10]) -print("Length of training data set:", len(db_train)) -print(f"Parameters of training set: \n {db_train.parameters_matrix.flatten()}") - -print("Length of test data set:", len(db_test)) -print(f"Parameters of testing set: \n {db_test.parameters_matrix.flatten()}") -#%% -plt.rcParams.update({ - "text.usetex": True, - "font.family": "serif", - "font.serif": ["Times New Roman"], -}) - -fig1 = plt.figure(figsize=(5,5)) -ax = fig1.add_subplot(111,projection='3d') - -for param in params: - space, values = wave(param) - snap = Snapshot(values=values.T, space=space) - ax.plot(space, param*np.ones(space.shape), values, label = f"{param}") -ax.set_xlabel('x') -ax.set_ylabel('t') -ax.set_zlabel('$f_{g}$(t)') -ax.set_xlim(0,11) -ax.set_ylim(0,11) -ax.set_zlim(0,1) -ax.legend(loc="upper center", ncol=7, prop = { "size": 7}) -ax.grid(False) -ax.view_init(elev=20, azim=-60, roll=0) -ax.set_title("Snapshots at original position") -plt.show() -#%% 3D plot: db_train snapshots at original position -fig2 = plt.figure(figsize=(5,5)) -ax = fig2.add_subplot(111, projection='3d') - -for i in range(len(db_train)): - ax.plot(space,(db_train.parameters_matrix[i]*np.ones(space.shape)), db_train.snapshots_matrix[i], label = db_train.parameters_matrix[i]) -ax.set_xlabel('x') -ax.set_ylabel('t') -ax.set_zlabel('$f_{g}$(t)') -ax.set_xlim(0,11) -ax.set_ylim(0,11) -ax.set_zlim(0,1) -ax.legend(loc="upper center", ncol=7, prop = { "size": 7}) -ax.grid(False) -ax.view_init(elev=20, azim=-60, roll=0) -ax.set_title("Training set snapshots at original position") -plt.show() -#%% - -# `InterpNet:` must learn the reference configuration in the best possible way w.r.t. its grid-point distribution, so that it is able to reconstruct field values for every shifted centroid distribution. -# -# `ShiftNet:` learns the shift operator for a given problem, which quantifies the optimal shift, resulting in a shifted space that transports all the snapshots to the reference frame. -# -# `Training:` the trainings of ShiftNet and InterpNet are separate, with the latter trained first. Once the network has learned the best possible reconstruction of the solution field of the reference configuration, its forward map is used for the training of ShiftNet as well, in a cascaded fashion. For this reason, we must optimize the InterpNet loss considerably more tightly than ShiftNet's. - -torch.manual_seed(1) - -interp = ANN([10,10], torch.nn.Softplus(), [1e-6, 200000], frequency_print=1000, lr=0.03) -shift = ANN([], torch.nn.LeakyReLU(), [1e-3, 10000], optimizer=torch.optim.Adam, frequency_print=500, l2_regularization=0, lr=0.0023) - -rom = ROM( - database=db_train, - reduction=pod, - approximation=rbf, - plugins=[ - AutomaticShiftSnapshots( - shift_network=shift, - interp_network=interp, - interpolator=Linear(fill_value=0), - reference_index=4, - parameter_index=4, - barycenter_loss=20.)
- ] - ) -rom.fit() - -#%% Snapshots shifted reference position after training -for i in range(len(db_train.parameters_matrix)): - plt.plot(space, rom.shifted_database.snapshots_matrix[i], label = f"t = {db_train.parameters_matrix[i]}") #rom._shifted_reference_database.parameters_matrix - plt.legend(prop={'size': 8}) - plt.ylabel('$f_{g}$(t)') - plt.xlabel('X') - plt.title(f'After training : Snapshot of db_train set shifted to reference snapshot {db_train.parameters_matrix[5]}') -plt.show() -#%% Showing the snapshots before (left) and after pre-processing (right) of solution manifold - -fig3 = plt.figure(figsize=(10, 5)) - -# First subplot -ax1 = fig3.add_subplot(121, projection='3d') -for i in range(len(db_train)): - ax1.plot(space, (db_train.parameters_matrix[i] * np.ones(space.shape)), db_train.snapshots_matrix[i]) -ax1.set_xlabel('x') -ax1.set_ylabel('t') -ax1.set_zlabel('$f_{g}$(t)') -ax1.set_xlim(0,11) -ax1.set_ylim(0,11) -ax1.set_zlim(0,1) -ax1.grid(False) -ax1.view_init(elev=20, azim=-60, roll=0) - -# Second subplot -ax2 = fig3.add_subplot(122, projection='3d') -for i in range(len(rom.shifted_database)): - ax2.plot(space, (rom.shifted_database.parameters_matrix[i] * np.ones(space.shape)), - rom.shifted_database.snapshots_matrix[i], label=rom.shifted_database.parameters_matrix[i]) -ax2.set_xlabel('x') -ax2.set_ylabel('t') -ax2.set_zlabel('$f_{g}$(t)') -ax2.set_xlim(0, 11) -ax2.set_ylim(0, 11) -ax2.set_zlim(0, 1) -ax2.grid(False) -ax2.view_init(elev=20, azim=-60, roll=0) -handles, labels = ax2.get_legend_handles_labels() -fig3.legend(handles, labels, loc='center right', ncol=1, prop={'size': 8}) -plt.show() - -#%% Singular values of original snapshots and shifted snapshots -U, s = np.linalg.svd(db.snapshots_matrix.T, full_matrices=False)[:2] -N_modes = np.linspace(1, len(s),len(s)) - -# Singular values of shifted snapshots -U_shifted , s_shifted = np.linalg.svd(rom.shifted_database.snapshots_matrix.T, full_matrices=False)[:2] -N_modes_shifted = np.linspace(1, len(s_shifted),len(s_shifted)) - -# Compare singular values -plt.figure(figsize=(6,4)) -plt.plot(N_modes[:10], s[:10]/np.max(s),"-s",color = "blue", label='POD') -plt.plot(N_modes_shifted, s_shifted/np.max(s_shifted),"-o", color = "red", label='NNsPOD') -plt.ylabel(r'$\sigma/\sigma_{1}$', size=15) -plt.xlabel('Modes', size=15) -plt.xlim(0, 11) -plt.legend(fontsize=12) -plt.xticks(fontsize=15) -plt.yticks(fontsize=15) -plt.show() - -#%% POD MODES -modes = pod.modes -plt.figure(figsize=(6,4)) -plt.plot(space, modes*-1) -plt.ylabel('$f_{g}$(t)', size=15) -plt.xlabel('x', size=15) -plt.title('NNsPOD mode', size=15) -plt.xticks(fontsize=15) -plt.yticks(fontsize=15) -plt.show() - -#%% Online phase -# Test set predictions using NNsPOD-ROM -pred = rom.predict(db_test.parameters_matrix) # Calculate predicted solution for given mu - -fig5 = plt.figure(figsize=(5,5)) -ax = fig5.add_subplot(111, projection='3d') -for i in range(len(pred)): - space, orig = wave(db_test.parameters_matrix[i]) - ax.plot(space,(db_test.parameters_matrix[i]*np.ones(space.shape)), pred.snapshots_matrix[i], label = f'Predict={db_test.parameters_matrix[i]}') - ax.plot(space,(db_test.parameters_matrix[i]*np.ones(space.shape)), orig, '--', label = f'Truth={db_test.parameters_matrix[i]}') -ax.set_xlabel('x') -ax.set_ylabel('t') -ax.set_zlabel('$f_{g}$(t)') -ax.set_xlim(0,11) -ax.set_ylim(0,11) -ax.set_zlim(0,1) -ax.legend(loc="upper center", ncol=5, prop = { "size": 7}) -ax.grid(False) -ax.view_init(elev=20, azim=-60, roll=0) -ax.set_title('Predicted Snapshots for 
db_test set parameters') -plt.show() -#%% Reconstruction and prediction error - -train_err = rom.test_error(db_train) -test_err = rom.test_error(db_test) - -print('Mean Train error: ', train_err) -print('Mean Test error: ', test_err) diff --git a/tutorials/tutorial-4.ipynb b/tutorials/tutorial-4.ipynb index 9b1ab3bf..490a7d2b 100644 --- a/tutorials/tutorial-4.ipynb +++ b/tutorials/tutorial-4.ipynb @@ -5,8 +5,7 @@ "id": "7ef4304f-b19a-4064-aa49-2b1b71c9474b", "metadata": {}, "source": [ - "# EZyRB Tutorial 4\n", - "## Build a Multi Reduced Order Model (MultiROM)\n", + "## Build a Multi Reduced Order Model\n", "\n", "In this tutorial, we will show how to aggregate the predictions of different ROMs following the method presented in the [paper by Ivagnes et al.](https://link.springer.com/content/pdf/10.1007/s00707-024-04007-9.pdf)\n", "\n",
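Finally, to anticipate the idea of the tutorial-4 notebook touched above: several independently trained ROMs predict the same new parameter, and their outputs are then aggregated. The sketch below uses a plain uniform average as the aggregation rule; it is only a schematic stand-in, not the weighting scheme of Ivagnes et al., and it assumes the `param`/`snapshots` arrays loaded in Tutorial 1 are available.

# Schematic sketch of aggregating several ROM predictions. The uniform mean is
# purely illustrative; the MultiROM of Ivagnes et al. computes proper
# aggregation weights instead.
import numpy as np
from ezyrb import POD, RBF, GPR, Database
from ezyrb import ReducedOrderModel as ROM

db = Database(param, snapshots)            # arrays from Tutorial 1 (assumption)
roms = [ROM(db, POD(), RBF()).fit(),       # two frameworks, same database
        ROM(db, POD(), GPR()).fit()]

new_mu = [8, 1]
preds = [r.predict(new_mu).snapshots_matrix for r in roms]
aggregated = np.mean(preds, axis=0)        # uniform convex combination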