diff --git a/docs/source/_tutorials b/docs/source/_tutorials
new file mode 120000
index 00000000..8b8f5e24
--- /dev/null
+++ b/docs/source/_tutorials
@@ -0,0 +1 @@
+../../tutorials
\ No newline at end of file
diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_enrich.png b/docs/source/_tutorials/pictures/EZyRB_gui_enrich.png
deleted file mode 100644
index fec609ea..00000000
Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_enrich.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_new_value.png b/docs/source/_tutorials/pictures/EZyRB_gui_new_value.png
deleted file mode 100644
index 9125c368..00000000
Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_new_value.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_online.png b/docs/source/_tutorials/pictures/EZyRB_gui_online.png
deleted file mode 100644
index 5064c35a..00000000
Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_online.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_set_params.png b/docs/source/_tutorials/pictures/EZyRB_gui_set_params.png
deleted file mode 100644
index 28b5866c..00000000
Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_set_params.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/EZyRB_gui_start.png b/docs/source/_tutorials/pictures/EZyRB_gui_start.png
deleted file mode 100644
index 924d5de5..00000000
Binary files a/docs/source/_tutorials/pictures/EZyRB_gui_start.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/field_04.png b/docs/source/_tutorials/pictures/field_04.png
deleted file mode 100644
index 2febc009..00000000
Binary files a/docs/source/_tutorials/pictures/field_04.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/high_fidelity_solution.png b/docs/source/_tutorials/pictures/high_fidelity_solution.png
deleted file mode 100644
index 79ff5839..00000000
Binary files a/docs/source/_tutorials/pictures/high_fidelity_solution.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/mapped_solution.png b/docs/source/_tutorials/pictures/mapped_solution.png
deleted file mode 100644
index f501a783..00000000
Binary files a/docs/source/_tutorials/pictures/mapped_solution.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/online_evaluation.png b/docs/source/_tutorials/pictures/online_evaluation.png
deleted file mode 100644
index ca968fb8..00000000
Binary files a/docs/source/_tutorials/pictures/online_evaluation.png and /dev/null differ
diff --git a/docs/source/_tutorials/pictures/pressure_in_corners.png b/docs/source/_tutorials/pictures/pressure_in_corners.png
deleted file mode 100644
index 7a48c821..00000000
Binary files a/docs/source/_tutorials/pictures/pressure_in_corners.png and /dev/null differ
diff --git a/docs/source/_tutorials/tutorial-1.html b/docs/source/_tutorials/tutorial-1.html
deleted file mode 100644
index 7f42afc3..00000000
--- a/docs/source/_tutorials/tutorial-1.html
+++ /dev/null
@@ -1,13551 +0,0 @@
In this tutorial we will show the typical workflow for the construction of a Reduced Order Model based only on the outputs of the higher-order model. In detail, we consider here a POD-RBF framework (Proper Orthogonal Decomposition for dimensionality reduction and Radial Basis Function for manifold approximation), but the tutorial can be easily extended to other methods thanks to the modular nature of EZyRB.
We consider a parametric steady heat conduction problem in a two-dimensional domain $\Omega$. While in this tutorial we are going to focus on the data-driven approach, the same problem can be tackled in an intrusive manner (with the Reduced Basis method) using RBniCS, as demonstrated in this RBniCS tutorial.
This problem is exhaustively discussed in the book Certified Reduced Basis Methods for Parametrized Partial Differential Equations, J.S. Hesthaven, G. Rozza, B. Stamm, 2016, Springer. An additional description is also available at https://rbnics.gitlab.io/RBniCS-jupyter/tutorial_thermal_block.html.
Given the good documentation already available for this problem, and since we focus here on data-driven methodologies, we just summarize the model to allow a better understanding.
-The domain is depicted below:
-
where:
- the first parameter $\mu_0$ controls the conductivity in the circular subdomain $\Omega_0$;
- the second parameter $\mu_1$ controls the flux over $\Gamma_\text{base}$.
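For reference, here is a sketch of the strong formulation we are sampling from, following the RBniCS thermal-block tutorial linked above (the boundary labels $\Gamma_\text{top}$ and $\Gamma_\text{side}$ are assumed from that reference): find the temperature field $u_\mu$ such that
$$
\begin{cases}
-\nabla \cdot (\kappa_\mu \nabla u_\mu) = 0 & \text{in } \Omega,\\
u_\mu = 0 & \text{on } \Gamma_\text{top},\\
\kappa_\mu \nabla u_\mu \cdot \mathbf{n} = 0 & \text{on } \Gamma_\text{side},\\
\kappa_\mu \nabla u_\mu \cdot \mathbf{n} = \mu_1 & \text{on } \Gamma_\text{base},
\end{cases}
\qquad
\kappa_\mu =
\begin{cases}
\mu_0 & \text{in } \Omega_0,\\
1 & \text{in } \Omega \setminus \Omega_0.
\end{cases}
$$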
First of all import the required packages: we need the standard Numpy and Matplotlib, and some classes from EZyRB. In the EZyRB framework, we need three main ingredients to construct a reduced order model:
- an initial database where the snapshots are stored;
- a reduction method to reduce the dimensionality of the system, in this tutorial we will use the proper orthogonal decomposition (POD) method;
- an approximation method to extrapolate the parametric solution for new parameters, in this tutorial we will use a radial basis function (RBF) interpolation.
-import numpy as np
-import matplotlib.tri as mtri
-import matplotlib.pyplot as plt
-
-from ezyrb import POD, RBF, Database
-from ezyrb import ReducedOrderModel as ROM
-%matplotlib inline
-In the offline phase, we need some samples of the parametric high-fidelity model. In this case, we extract 8 snapshots from the numerical model implemented in FEniCS, and we import them and the related parameters.
- -snapshots = np.load('data/tut1_snapshots.npy')
-param = np.load('data/tut1_mu.npy')
-print(snapshots.shape, param.shape)
Moreover, to visualize the solution (both the higher-order one and the reduced one), we also import the mesh information to be able to create the triangulation. We underline that this additional step is related only to plotting purposes, and is not mandatory for the reduced space generation.
- -tri = np.load('data/tut1_triangles.npy')
-coord = np.load('data/tut1_coord.npy')
-triang = mtri.Triangulation(coord[0],coord[1],tri)
-For the sake of clarity the snapshots are plotted.
- -fig, ax = plt.subplots(nrows=2, ncols=4, figsize=(16, 6), sharey=True, sharex=True)
-ax = ax.flatten()
-for i in range(8):
- ax[i].triplot(triang, 'b-', lw=0.1)
- cm = ax[i].tripcolor(triang, snapshots[i])
- fig.colorbar(cm, ax=ax[i])
- ax[i].set_title(r'($\mu_0={:5.2f}, \mu_1={:5.2f})$'.format(*param[i]))
-First of all, we create a Database object from the parameters and the snapshots.
db = Database(param, snapshots)
Then we need a reduction object. In this case we use the proper orthogonal decomposition, so we create a POD object. We use here all the default parameters, but for the complete list of available arguments we refer to the original documentation of the POD class.
pod = POD('svd')
Then we instantiate the RBF class for interpolating the solution manifold. Also in this case, the RBF documentation is the perfect starting point to explore this class.
rbf = RBF()
-Few lines of code and our reduced model is created!
To complete everything, we create the ReducedOrderModel (aliased to ROM in this tutorial) object by passing the already created objects. For clarity, we point out that we need to pass the instances and not the classes. Simply changing this line (with different objects) allows testing different frameworks in a very modular way.
-The fit() function computes the reduced model, meaning that the original snapshots in the database are projected onto the POD space and the RBF interpolator is created.
rom = ROM(db, pod, rbf)
-rom.fit();
-In the online phase we can query our model in order to predict the solution for a new parameter $\mu_\text{new}$ that is not in the training set. We just need to pass the new parameters as input of the predict() function.
new_mu = [8, 1]
-pred_sol = rom.predict(new_mu)
We can now plot the predicted solution for a fixed parameter...
- -plt.figure(figsize=(7, 5))
-plt.triplot(triang, 'b-', lw=0.1)
-plt.tripcolor(triang, pred_sol)
-plt.colorbar();
-... or interactively touch the input parameters to visualize the corresponding (approximated) output. For a fancy result, we need a bit of IPython black magic (https://ipywidgets.readthedocs.io/en/latest/).
- -from ipywidgets import interact
-
-def plot_solution(mu0, mu1):
- new_mu = [mu0, mu1]
- pred_sol = rom.predict(new_mu)
- plt.figure(figsize=(8, 7))
- plt.triplot(triang, 'b-', lw=0.1)
- plt.tripcolor(triang, pred_sol)
- plt.colorbar()
-
-interact(plot_solution, mu0=8, mu1=1);
So far, we have used a database composed of 8 snapshots. We would like to have an idea of the approximation accuracy we are able to reach with these high-fidelity solutions. Using the leave-one-out strategy, an error is computed for each parametric point in our database and these values are returned as an array.
- -for pt, error in zip(rom.database.parameters, rom.loo_error()):
- print(pt, error)
-Moreover, we can use the information about the errors to locate the parametric points where we have to compute the new high-fidelity solutions and add these to the database in order to optimally improve the accuracy.
- -rom.optimal_mu()
These functions can be used to achieve the desired (estimated) accuracy.
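For example, an adaptive enrichment loop could combine these methods; the following is only a minimal sketch, where high_fidelity_solve is a hypothetical stand-in for a new FEniCS run returning the snapshot for a given parameter, and Parameter/Snapshot are the helpers used in tutorial 3:

from ezyrb import Parameter, Snapshot

def enrich(rom, high_fidelity_solve, tolerance=1e-3, max_iterations=5):
    # Add snapshots at the suggested parametric points until the
    # estimated leave-one-out error falls below the tolerance.
    for _ in range(max_iterations):
        if rom.loo_error().max() < tolerance:
            break
        for mu in np.atleast_2d(rom.optimal_mu()):
            values = high_fidelity_solve(mu)  # hypothetical full-order solver
            rom.database.add(Parameter(mu), Snapshot(values))
        rom.fit()
    return rom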
- -In this tutorial, we will explain step by step how to use the EZyRB library to test different techniques for building the reduced order model. We will compare different methods of dimensionality reduction, interpolation and accuracy assessment.
We consider here a computational fluid dynamics problem described by the (incompressible) Navier-Stokes equations. We will be using the Navier Stokes Dataset, which contains the output data from a full order flow simulation and can be found on GitHub under the Smithers library. Smithers is developed by SISSA mathLab and contains some useful datasets and a multi-purpose toolbox that inherits functionality from other packages, making it much easier to deal with these datasets in compact code.
The package can be installed using python -m pip install smithers -U, but for a detailed description about installation and usage we refer to the original GitHub page.
First of all, we just import the package and instantiate the dataset object.
- -from smithers.dataset import NavierStokesDataset
-data = NavierStokesDataset()
The NavierStokesDataset() class contains the attributes:
- snapshots: the matrices of snapshots stored by row (one matrix for any output field)
- params: the matrix of corresponding parameters
- pts_coordinates: the coordinates of all nodes of the discretized space
- faces: the actual topology of the discretized space
- triang: the triangulation, useful especially for rendering purposes.

In detail, data.snapshots is a dictionary with the following outputs of interest:
- vx: velocity in the X-direction.
- vy: velocity in the Y-direction.
- mag(v): velocity magnitude.
- p: pressure value.
In total, the dataset contains 500 parametric configurations in a space of 1639 degrees of freedom. In this case, we have just one parameter, which is the velocity (along $x$) we impose at the inlet.
- -for name in ['vx', 'vy', 'p', 'mag(v)']:
- print('Shape of {:7s} snapshots matrix: {}'.format(name, data.snapshots[name].shape))
-
-print('Shape of parameters matrix: {}'.format(data.params.shape))
-First of all, we import the required packages.
From EZyRB we need:
1. The ROM class, which performs the model order reduction process.
2. A module such as Database, where the matrices of snapshots and parameters are stored.
3. A dimensionality reduction method such as Proper Orthogonal Decomposition POD or Auto-Encoder network AE.
4. An interpolation method to obtain an approximation for the parametric solution for a new set of parameters, such as the Radial Basis Function RBF, Gaussian Process Regression GPR, K-Neighbors Regressor KNeighborsRegressor, Radius Neighbors Regressor RadiusNeighborsRegressor, or Multidimensional Linear Interpolator Linear.

We also need to import:
- numpy: to handle the arrays and matrices we will be working with.
- torch: to enable the usage of neural networks.
- matplotlib.pyplot: to handle the plotting environment.
- matplotlib.tri: for plotting on the triangular grid.

# Database module
-from ezyrb import Database
-
-# Dimensionality reduction methods
-from ezyrb import POD, AE
-
-# Approximation/interpolation methods
-from ezyrb import RBF, GPR, KNeighborsRegressor, RadiusNeighborsRegressor, Linear, ANN
-
-# Model order reduction class
-from ezyrb import ReducedOrderModel as ROM
-
-import numpy as np
-import torch
-import torch.nn as nn
-
-import matplotlib.tri as mtri
-import matplotlib.pyplot as plt
-
-import warnings
-warnings.filterwarnings("ignore", message="Ill-conditioned matrix ")
-%matplotlib inline
-Before starting with the reduced order model, we visualize some of the snapshots in our dataset.
- -fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(16, 8), sharey=True, sharex=True)
-ax = ax.flatten()
-for i in range(9):
- ax[i].tricontourf(data.triang, data.snapshots['vx'][i], levels=16)
- ax[i].set_title('Original snapshot at inlet velocity = {}'.format(*data.params[i].round(2)))
In this step, we perform the model order reduction to obtain a reduced space from the full order space. We refer to Tutorial 1 for the description of the basic workflow; here we just quickly describe the steps implemented in the next cell.

We start by passing the matrices of the parameters and snapshots to the Database() class. Note that for now we create the ROM only for the vx field. We also instantiate the POD and RBF objects to have a benchmark ROM.
db = Database(data.params, data.snapshots['vx'])
-rom = ROM(db, POD(), RBF())
-rom.fit();
-Three lines for a data-driven reduced order model, not bad!
-Just to have a visual check that everything is going well, we plot the approximation for new parameters in the range $[1, 80]$.
- -new_params = np.random.uniform(size=(2))*79.+1.
-
-fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 3))
-for i, param in enumerate(new_params):
- ax[i].tricontourf(data.triang, rom.predict([param]))
- ax[i].set_title('Predicted snapshots at inlet velocity = {}'.format(param))
We are now calculating the approximation error to see how close our reduced solution is to the full-order solution/simulation, using the k-fold Cross-Validation strategy: we pass the number of splits to the ReducedOrderModel.kfold_cv_error(n_splits) method, which operates as follows:
1. Split the dataset (parameters/snapshots) into $k$ groups/folds.
2. Use $k-1$ groups to calculate the reduced space and leave one group for testing.
3. Use the approximation/interpolation method to predict each snapshot in the testing group.
4. Calculate the error for each snapshot in the testing group by taking the difference between the predicted and the original snapshot.
5. Average the errors for predicting snapshots of the testing group/fold.
6. Repeat this procedure using different groups for testing and the remaining $k-1$ groups to calculate the reduced space.
7. In the end, we will have $k$ errors, one per group/fold, which we can average to obtain a single error value.
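For intuition, here is a minimal hand-rolled sketch of the same procedure (only an illustration of the idea, not the library's internal implementation; ROM.test_error is the helper used in tutorial 3):

from sklearn.model_selection import KFold

P, S = data.params, data.snapshots['vx']
fold_errors = []
for train_index, test_index in KFold(n_splits=5).split(P):
    # build the ROM on k-1 folds, then measure the error on the held-out fold
    fold_rom = ROM(Database(P[train_index], S[train_index]), POD(), RBF())
    fold_rom.fit()
    fold_errors.append(fold_rom.test_error(Database(P[test_index], S[test_index])))
print(np.mean(fold_errors))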
errors = rom.kfold_cv_error(n_splits = 5)
-print('Average error for each fold:')
-for e in errors:
- print(' ',e)
-print('\nAverage error = {}'.format(errors.mean()))
Another strategy for calculating the approximation error is called leave-one-out and uses the ReducedOrderModel.loo_error() method; it is similar to setting the number of folds equal to the number of snapshots (e.g. in this case setting n_splits = 500) and operates as follows:
1. Combine all the snapshots except one.
2. Calculate the reduced space.
3. Use the approximation/interpolation method to predict the removed snapshot.
4. Calculate the error by taking the difference between the predicted snapshot and the original removed one.
5. The error vector is obtained by repeating this procedure for each snapshot in the database.

It is worth mentioning that this consumes more time because we have 500 snapshots, so the algorithm will perform the space order reduction and calculate the approximation error 500 times. For this reason, we commented out the next line of code, in order to limit the computational effort needed to run this tutorial. Uncomment it only if you are a really brave person!
- -# errors = rom.loo_error()
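If the full leave-one-out analysis is too expensive, one rougher option is to run it on a small subset of the database; a sketch, assuming the Database.split method (used in tutorial 3) accepts a list of subset sizes:

small_db, _ = db.split([50, 450])  # keep 50 of the 500 snapshots
small_rom = ROM(small_db, POD(), RBF())
small_rom.fit()
print(small_rom.loo_error().mean())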
One of the advantages of data-driven reduced order modeling is the modular nature of the method. Practically speaking, we need
- a method for reducing the dimensionality of the input snapshots;
- a method for approximating the solution manifold;

allowing in principle a large variety of combinations.
The list of implemented reduction methods in EZyRB contains:
- POD: proper orthogonal decomposition
- AE: autoencoder

while the list of implemented approximation methods contains:
- RBF: radial basis function interpolation
- GPR: Gaussian process regression
- KNeighborsRegressor: k-neighbors regression
- RadiusNeighborsRegressor: radius neighbors regression
- Linear: multidimensional linear interpolation

Moreover, new state-of-the-art methods will arrive, so we invite you to read the documentation for the complete list of all the possibilities!
In the next cell, we create two dictionaries with the objects, such that we can easily test everything with simple for loops. WARNING: since several methods require the solution of an optimization problem (e.g. GPR, ANN, AE), the cell may take some minutes to run.
reductions = {
- 'POD': POD('svd',rank=10),
- 'AE': AE([200, 100, 10], [10, 100, 200], nn.Tanh(), nn.Tanh(), 10),
-}
-
-approximations = {
-# 'Linear': Linear(),
- 'RBF': RBF(),
- 'GPR': GPR(),
- 'KNeighbors': KNeighborsRegressor(),
- 'RadiusNeighbors': RadiusNeighborsRegressor(),
- 'ANN': ANN([20, 20], nn.Tanh(), 10),
-}
-
-header = '{:10s}'.format('')
-for name in approximations:
- header += ' {:>15s}'.format(name)
-
-print(header)
-for redname, redclass in reductions.items():
- row = '{:10s}'.format(redname)
- for approxname, approxclass in approximations.items():
- rom = ROM(db, redclass, approxclass)
- rom.fit()
- row += ' {:15e}'.format(rom.kfold_cv_error(n_splits=5).mean())
-
- print(row)
In a very compact way, we tested several frameworks (POD-RBF, POD-GPR, POD-NN, ...), showing the accuracy reached by each of them.

We can also note that the frameworks involving neural networks (AE and ANN) show very poor precision. This is due to the limited number of epochs we imposed in the learning procedure. You can try to increase the number of epochs, as shown in the next cell, in order to obtain better results, at the cost of a longer training phase.
reductions['AE'] = AE([100, 10], [10, 100], nn.ReLU(), nn.ReLU(), 30000)
-approximations['ANN'] = ANN([50, 10], nn.ReLU(), 30000)
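These lines only redefine the objects: to actually see the improvement, the benchmark loop above can be re-executed with the longer-trained networks. A minimal sketch for a single combination (expect a much longer runtime):

rom = ROM(db, reductions['AE'], approximations['ANN'])
rom.fit()
print(rom.kfold_cv_error(n_splits=5).mean())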
-
-#
-# where:
-# - the first parameter $\mu_0$ controls the conductivity in the circular subdomain $\Omega_0$;
-# - the second parameter $\mu_1$ controls the flux over $\Gamma_\text{base}$.
-#
-#
-# ### Initial setting
-#
-# First of all import the required packages: we need the standard Numpy and Matplotlib, and some classes from EZyRB. In the EZyRB framework, we need three main ingredients to construct a reduced order model:
-# - an initial database where the snapshots are stored;
-# - a reduction method to reduce the dimensionality of the system, in this tutorial we will use the proper orthogonal decomposition (POD) method;
-# - an approximation method to extrapolate the parametric solution for new parameters, in this tutorial we will use a radial basis function (RBF) interpolation.
-
-# In[1]:
-
-
-import numpy as np
-import matplotlib.tri as mtri
-import matplotlib.pyplot as plt
-from datasets import load_dataset
-from ezyrb import POD, RBF, Database
-from ezyrb import ReducedOrderModel as ROM
-
-# get_ipython().run_line_magic('matplotlib', 'inline')
-
-
-# ## Offline phase
-#
-# In the *offline* phase, we need some samples of the parametric high-fidelity model. In this case, we extract 8 snapshots from the numerical model implemented in **FEniCS**, and we import them and the related parameters.
-
-# In[2]:
-
-data_path = "kshitij-pandey/termal_dataset"
-snapshots_hf = load_dataset(data_path, "snapshots", split="train")
-param_hf = load_dataset(data_path, "params", split="train")
-triangles_hf = load_dataset(data_path, "triangles", split="train")
-coords_hf = load_dataset(data_path, "coords", split="train")
-
-
-# convert the dict files into numpy
-
-def hf_to_numpy(ds):
- return np.stack([np.array(ds[col]) for col in ds.column_names], axis=1)
-
-snapshots = hf_to_numpy(snapshots_hf)
-param = hf_to_numpy(param_hf)
-triangles = hf_to_numpy(triangles_hf)
-coords = hf_to_numpy(coords_hf)
-# Moreover, to visualize the solution (both the higher-order one and the reduced one), we also import the mesh information to be able to create the triangulation. We underline that this additional step is related only to plotting purposes, and is not mandatory for the reduced space generation.
-
-# In[3]:
-
-
-x, y = coords
-from matplotlib.tri import Triangulation
-triang = Triangulation(x, y, triangles)
-
-# For the sake of clarity the snapshots are plotted.
-
-# In[4]:
-
-
-fig, ax = plt.subplots(nrows=2, ncols=4, figsize=(16, 6), sharey=True, sharex=True)
-ax = ax.flatten()
-for i in range(8):
- ax[i].triplot(triang, 'b-', lw=0.1)
- cm = ax[i].tripcolor(triang, snapshots[i])
- fig.colorbar(cm, ax=ax[i])
- ax[i].set_title(r'($\mu_0={:5.2f}, \mu_1={:5.2f})$'.format(*param[i]))
-
-
-# First of all, we create a `Database` object from the parameters and the snapshots.
-
-# In[5]:
-
-
-db = Database(param, snapshots)
-
-
-# Then we need a reduction object. In this case we use the proper orthogonal decomposition, so we create a `POD` object. We use here all the default parameters, but for the complete list of available arguments we refer to the original documentation of the [POD](https://mathlab.github.io/EZyRB/pod.html) class.
-
-# In[6]:
-
-
-pod = POD('svd')
-
-
-# Then we instantiate the `RBF` class for interpolating the solution manifold. Also in this case, the [RBF](https://mathlab.github.io/EZyRB/rbf.html) documentation is the perfect starting point to explore this class.
-
-# In[7]:
-
-
-rbf = RBF()
-
-
-# Few lines of code and our reduced model is created!
-# To complete everything, we create the `ReducedOrderModel` (aliased to `ROM` in this tutorial) object by passing the already created objects. For clarity, we point out that we need to pass the **instances** and not the classes. Simply changing this line (with different objects) allows testing different frameworks in a very modular way.
-# The `fit()` function computes the reduced model, meaning that the original snapshots in the database are projected onto the POD space and the RBF interpolator is created.
-
-# In[8]:
-
-
-rom = ROM(db, pod, rbf)
-rom.fit();
-
-
-# ## Online phase
-# In the *online* phase we can query our model in order to predict the solution for a new parameter $\mu_\text{new}$ that is not in the training set. We just need to pass the new parameters as input of the `predict()` function.
-
-# In[9]:
-
-
-new_mu = [8, 1]
-pred_sol = rom.predict(new_mu).snapshots_matrix
-
-# We can now plot the predicted solution for a fixed parameter...
-
-# In[10]:
-
-
-plt.figure(figsize=(7, 5))
-plt.triplot(triang, 'b-', lw=0.1)
-plt.tripcolor(triang, *pred_sol)
-plt.colorbar();
-
-
-# ... or interactively touch the input parameters to visualize the corresponding (approximated) output. For a fancy result, we need a bit of IPython black magic (https://ipywidgets.readthedocs.io/en/latest/).
-
-# In[11]:
-
-
-from ipywidgets import interact
-
-def plot_solution(mu0, mu1):
- new_mu = [mu0, mu1]
- pred_sol = rom.predict(new_mu).snapshots_matrix
- plt.figure(figsize=(8, 7))
- plt.triplot(triang, 'b-', lw=0.1)
- plt.tripcolor(triang, *pred_sol)
- plt.colorbar()
-
-interact(plot_solution, mu0=8, mu1=1);
-
-
-# ## Error Approximation & Improvement
-#
-# So far, we have used a database composed of 8 snapshots. We would like to have an idea of the approximation accuracy we are able to reach with these high-fidelity solutions. Using the *leave-one-out* strategy, an error is computed for each parametric point in our database and these values are returned as an array.
-
-# In[12]:
-
-
-for pt, error in zip(rom.database.parameters_matrix, rom.loo_error()):
- print(pt, error)
-
-
-# Moreover, we can use the information about the errors to locate the parametric points where we have to compute the new high-fidelity solutions and add these to the database in order to optimally improve the accuracy.
-
-# In[13]:
-
-
-rom.optimal_mu()
\ No newline at end of file
diff --git a/tutorials/tutorial-2.ipynb b/tutorials/tutorial-2.ipynb
index d90edbd8..30a6f862 100644
--- a/tutorials/tutorial-2.ipynb
+++ b/tutorials/tutorial-2.ipynb
@@ -7,7 +7,6 @@
"id": "b0c39fb4"
},
"source": [
- "# EZyRB Tutorial 2\n",
"## Test several frameworks at once\n",
"\n",
"In this tutorial, we will explain step by step how to use the **EZyRB** library to test different techniques for building the reduced order model. We will compare different methods of dimensionality reduction, interpolation and accuracy assessment.\n",
diff --git a/tutorials/tutorial-2.py b/tutorials/tutorial-2.py
deleted file mode 100644
index 093a199e..00000000
--- a/tutorials/tutorial-2.py
+++ /dev/null
@@ -1,241 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-# # EZyRB Tutorial 2
-# ## Test several frameworks at once
-#
-# In this tutorial, we will explain step by step how to use the **EZyRB** library to test different techniques for building the reduced order model. We will compare different methods of dimensionality reduction, interpolation and accuracy assessment.
-#
-# We consider here a computational fluid dynamics problem described by the (incompressible) Navier Stokes equations.
-# We will be using the **Navier Stokes Dataset** that contains the output data from a full order flow simulation and can be found on **GitHub** under [Smithers library](https://github.com/mathLab/Smithers).
-# **Smithers** is developed by **SISSA mathLab** and contains some useful datasets and a multi-purpose toolbox that inherits functionality from other packages, making it much easier to deal with these datasets in compact code.
-#
-# The package can be installed using `python -m pip install smithers -U`, but for a detailed description about installation and usage we refer to the original [GitHub page](https://github.com/mathLab/Smithers/blob/master/README.md).
-#
-# First of all, we just import the package and instantiate the dataset object.
-
-# In[1]:
-from datasets import load_dataset
-data_path = "kshitij-pandey/navier_stokes_datasets"
-snapshots_hf = load_dataset(data_path, "snapshots_split", split="train")
-param_hf = load_dataset(data_path, "params", split="train")
-triangles_hf = load_dataset(data_path, "triangles", split="train")
-coords_hf = load_dataset(data_path, "coords", split="train")
-import numpy as np
-snapshots = {name: np.array(snapshots_hf[name]) for name in ['vx', 'vy', 'mag(v)', 'p']}
-# convert the dict files into numpy
-
-def hf_to_numpy(ds):
- return np.stack([np.array(ds[col]) for col in ds.column_names], axis=1)
-
-params = hf_to_numpy(param_hf)
-triangles = hf_to_numpy(triangles_hf)
-coords = hf_to_numpy(coords_hf)
-
-# The `NavierStokesDataset()` class contains the attributes:
-# - `snapshots`: the matrices of snapshots stored by row (one matrix for any output field)
-# - `params`: the matrix of corresponding parameters
-# - `pts_coordinates`: the coordinates of all nodes of the discretized space
-# - `faces`: the actual topology of the discretized space
-# - `triang`: the triangulation, useful especially for rendering purposes.
-#
-# In detail, `snapshots` is a dictionary with the following outputs of interest:
-# - **vx:** velocity in the X-direction.
-# - **vy:** velocity in the Y-direction.
-# - **mag(v):** velocity magnitude.
-# - **p:** pressure value.
-#
-# In total, the dataset contains 500 parametric configurations in a space of 1639 degrees of freedom. In this case, we have just one parameter, which is the velocity (along $x$) we impose at the inlet.
-
-# In[2]:
-
-
-for name in ['vx', 'vy', 'p', 'mag(v)']:
- print('Shape of {:7s} snapshots matrix: {}'.format(name, snapshots[name].shape))
-
-print('Shape of parameters matrix: {}'.format(params.shape))
-
-
-# ### Initial setting
-#
-# First of all, we import the required packages.
-#
-# From `EZyRB` we need:
-# 1. The `ROM` class, which performs the model order reduction process.
-# 2. A module such as `Database`, where the matrices of snapshots and parameters are stored.
-# 3. A dimensionality reduction method such as Proper Orthogonal Decomposition `POD` or Auto-Encoder network `AE`.
-# 4. An interpolation method to obtain an approximation for the parametric solution for a new set of parameters such as the Radial Basis Function `RBF`, Gaussian Process Regression `GPR`, K-Neighbors Regressor `KNeighborsRegressor`, Radius Neighbors Regressor `RadiusNeighborsRegressor` or Multidimensional Linear Interpolator `Linear`.
-#
-# We also need to import:
-# * `numpy:` to handle arrays and matrices we will be working with.
-# * `torch:` to enable the usage of Neural Networks
-# * `matplotlib.pyplot:` to handle the plotting environment.
-# * `matplotlib.tri:` for plotting of the triangular grid.
-
-# In[3]:
-
-
-# Database module
-from ezyrb import Database
-
-# Dimensionality reduction methods
-from ezyrb import POD, AE
-
-# Approximation/interpolation methods
-from ezyrb import RBF, GPR, KNeighborsRegressor, RadiusNeighborsRegressor, Linear, ANN
-
-# Model order reduction class
-from ezyrb import ReducedOrderModel as ROM
-
-
-import torch
-import torch.nn as nn
-
-import matplotlib.tri as mtri
-import matplotlib.pyplot as plt
-
-import warnings
-warnings.filterwarnings("ignore", message="Ill-conditioned matrix ")
-# get_ipython().run_line_magic('matplotlib', 'inline')
-
-
-# Before starting with the reduced order model, we visualize some of the snapshots in our dataset.
-
-# In[4]:
-x, y = coords
-from matplotlib.tri import Triangulation
-triang = Triangulation(x, y, triangles)
-
-fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(16, 8), sharey=True, sharex=True)
-ax = ax.flatten()
-for i in range(9):
- ax[i].tricontourf(triang, snapshots['vx'][i], levels=16)
- ax[i].set_title('Original snapshot at inlet velocity = {}'.format(*params[i].round(2)))
-
-
-# In this step, we perform the model order reduction to obtain a reduced space from the full order space. We refer to [Tutorial 1](https://github.com/mathLab/EZyRB/blob/master/tutorials/tutorial-1.ipynb) for the description of the basic workflow; here we just quickly describe the steps implemented in the next cell.
-#
-# We start by passing the matrices of the parameters and snapshots to the `Database()` class. Note that for now we create the ROM only for the `vx` field. We also instantiate the `POD` and `RBF` objects to have a benchmark ROM.
-
-# In[5]:
-
-
-db = Database(params, snapshots['vx'])
-rom = ROM(db, POD(), RBF())
-rom.fit();
-
-
-# Three lines for a data-driven reduced order model, not bad!
-#
-# Just to have a visual check that everything is going well, we plot the approximation for new parameters in the range $[1, 80]$.
-
-# In[6]:
-
-
-new_params = np.random.uniform(size=(2))*79.+1.
-
-fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 3))
-for i, param in enumerate(new_params):
- ax[i].tricontourf(triang, *rom.predict([param]).snapshots_matrix)
- ax[i].set_title('Predicted snapshots at inlet velocity = {}'.format(param))
-
-
-# We are now calculating the approximation error to see how close our reduced solution is to the full-order solution/simulation, using the **k-fold Cross-Validation** strategy: we pass the number of splits to the `ReducedOrderModel.kfold_cv_error(n_splits)` method, which operates as follows:
-#
-# 1. Split the dataset (parameters/snapshots) into $k$-number of groups/folds.
-# 2. Use $k-1$ groups to calculate the reduced space and leave one group for testing.
-# 3. Use the approximation/interpolation method to predict each snapshot in the testing group.
-# 4. Calculate the error for each snapshot in the testing group by taking the difference between the predicted and the original snapshot.
-# 5. Average the errors for predicting snapshots of the testing group/fold.
-# 6. Repeat this procedure using different groups for testing and the remaining $k-1$ groups to calculate the reduced space.
-# 7. In the end, we will have $k$-number errors for predicting each group/fold that we can average them to have one value for the error.
-
-# In[7]:
-
-
-errors = rom.kfold_cv_error(n_splits = 5)
-print('Average error for each fold:')
-for e in errors:
- print(' ',e)
-print('\nAverage error = {}'.format(errors.mean()))
-
-
-# Another strategy for calculating the approximation error is called **leave-one-out** and uses the `ReducedOrderModel.loo_error()` method; it is similar to setting the number of folds equal to the number of snapshots (e.g. in this case setting `n_splits = 500`) and operates as follows:
-# 1. Combine all the snapshots except one.
-# 2. Calculate the reduced space.
-# 3. Use the approximation/interpolation method to predict the removed snapshot.
-# 4. Calculate the error by taking the difference between the predicted snapshot and the original removed one.
-# 5. The error vector is obtained by repeating this procedure for each snapshot in the database.
-#
-# It is worth mentioning that this consumes more time because we have 500 snapshots, so the algorithm will perform the space order reduction and calculate the approximation error 500 times. For this reason, we commented out the next line of code, in order to limit the computational effort needed to run this tutorial. Uncomment it only if you are a really brave person!
-
-# In[8]:
-
-
-# errors = rom.loo_error()
-
-
-# ### Comparison between different methods
-#
-# One of the advantages of data-driven reduced order modeling is the modular nature of the method. Practically speaking, we need
-# - a method for reducing the dimensionality of the input snapshots;
-# - a method for approximating the solution manifold;
-#
-# allowing in principle a large variety of combinations.
-#
-# The list of implemented **reduction methods** in EZyRB contains:
-# - `POD`: *proper orthogonal decomposition*
-# - `AE`: *autoencoder*
-#
-# while the list of implemented **approximation methods** contains:
-# - `RBF`: *radial basis function interpolation*
-# - `GPR`: *Gaussian process regression*
-# - `KNeighborsRegressor`: *k-neighbors regression*
-# - `RadiusNeighborsRegressor`: *radius neighbors regression*
-# - `Linear`: *multidimensional linear interpolation*
-#
-# Moreover, new state-of-the-art methods will arrive, so we invite you to read the [documentation](https://mathlab.github.io/EZyRB/) for the complete list of all the possibilities!
-#
-# In the next cell, we create two dictionaries with the objects, such that we can easily test everything with simple `for` loops. **WARNING**: since several methods require the solution of an optimization problem (e.g. GPR, ANN, AE), the cell may take some minutes to run.
-
-# In[9]:
-
-
-reductions = {
- 'POD': POD('svd',rank=10),
- 'AE': AE([200, 100, 10], [10, 100, 200], nn.Tanh(), nn.Tanh(), 10),
-}
-
-approximations = {
-# 'Linear': Linear(),
- 'RBF': RBF(),
- 'GPR': GPR(),
- 'KNeighbors': KNeighborsRegressor(),
- 'RadiusNeighbors': RadiusNeighborsRegressor(),
- 'ANN': ANN([20, 20], nn.Tanh(), 10),
-}
-
-header = '{:10s}'.format('')
-for name in approximations:
- header += ' {:>15s}'.format(name)
-
-print(header)
-for redname, redclass in reductions.items():
- row = '{:10s}'.format(redname)
- for approxname, approxclass in approximations.items():
- rom = ROM(db, redclass, approxclass)
- rom.fit()
- row += ' {:15e}'.format(rom.kfold_cv_error(n_splits=5).mean())
-
- print(row)
-
-
-# In a very compact way, we tested several frameworks (POD-RBF, POD-GPR, POD-NN, ...), showing the accuracy reached by each of them.
-#
-# We can also note that the frameworks involving neural networks (`AE` and `ANN`) show very poor precision. This is due to the limited number of epochs we imposed in the learning procedure. You can try to increase the number of epochs, as shown in the next cell, in order to obtain better results, at the cost of a longer training phase.
-
-# In[10]:
-
-
-reductions['AE'] = AE([100, 10], [10, 100], nn.ReLU(), nn.ReLU(), 30000)
-approximations['ANN'] = ANN([50, 10], nn.ReLU(), 30000)
diff --git a/tutorials/tutorial-3.ipynb b/tutorials/tutorial-3.ipynb
index e0664f6d..4ac39780 100644
--- a/tutorials/tutorial-3.ipynb
+++ b/tutorials/tutorial-3.ipynb
@@ -7,9 +7,7 @@
"tags": []
},
"source": [
- "# EzyRB Tutorial 3\n",
- "\n",
- "## Tutorial for developing ROM using Neural-Network shift Proper Orthogonal Decomposition : NNsPOD-ROM\n",
+    "## Using `Plugin` for implementing Neural-Network shift POD\n",
"\n",
"In this tutorial we will explain how to use the **NNsPOD-ROM** algorithm implemented in **EZyRB** library.\n",
"\n",
@@ -84,7 +82,7 @@
"tags": []
},
"source": [
- "## Offline phase \n",
+ "### Offline phase \n",
"\n",
"In this case, we obtain 15 snapshots from the analytical model. "
]
@@ -496,7 +494,7 @@
"id": "0eda175f-9bc2-45d5-be47-d8c0a92465c8",
"metadata": {},
"source": [
- "## Online phase "
+ "### Online phase "
]
},
{
diff --git a/tutorials/tutorial-3.py b/tutorials/tutorial-3.py
deleted file mode 100644
index 01fa2bd7..00000000
--- a/tutorials/tutorial-3.py
+++ /dev/null
@@ -1,252 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-# # EzyRB Tutorial 3
-#
-# ## Tutorial for developing a ROM using Neural-Network shift Proper Orthogonal Decomposition: NNsPOD-ROM
-#
-# In this tutorial we will explain how to use the **NNsPOD-ROM** algorithm implemented in **EZyRB** library.
-#
-# The NNsPOD algorithm is a purely data-driven machine learning method that seeks an optimal mapping of the various snapshots to a reference configuration via automatic detection [1], together with a low-rank linear approximation subspace of the solution manifold. The nonlinear transformation of the manifold leads to an accelerated Kolmogorov n-width (KnW) decay, resulting in a low-dimensional linear approximation subspace and enabling the construction of efficient and accurate reduced order models. The complete workflow of the NNsPOD-ROM algorithm, comprising both the offline and online phases, is presented in [2].
-#
-# References:
-#
-# [1] Papapicco, D., Demo, N., Girfoglio, M., Stabile, G., & Rozza, G.(2022). The Neural Network shifted-proper orthogonal decomposition: A machine learning approach for non-linear reduction of hyperbolic equations.Computer Methods in Applied Mechanics and Engineering, 392, 114687 - https://doi.org/10.1016/j.cma.2022.114687
-#
-# [2] Gowrachari, H., Demo, N., Stabile, G., & Rozza, G. (2024). Non-intrusive model reduction of advection-dominated hyperbolic problems using neural network shift augmented manifold transformations. arXiv preprint - https://arxiv.org/abs/2407.18419.
-#
-# ### Problem definition
-#
-# We consider **1D Gaussian distribution functions**, in which $x$ is the random variable, $\mu$ is the mean and $\sigma^2$ is the variance, where $\sigma$ is the standard deviation, i.e. the width of the Gaussian:
-# $$
-# f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-(x-\mu)^2 /\left(2 \sigma^2\right)}
-# $$
-#
-# To mimic travelling waves, here we parameterize the mean $\mu$: changing $\mu$ shifts the distribution along the x-axis.
-#
-# ### Initial setting
-#
-# First of all import the required packages: we need the standard Numpy, Torch, Matplotlib, and some classes from EZyRB.
-#
-# * `numpy:` to handle arrays and matrices we will be working with.
-# * `torch:` to enable the usage of Neural Networks
-# * `matplotlib:` to handle the plotting environment.
-#
-# From `EZyRB` we need:
-# 1. The `ROM` class, which performs the model order reduction process.
-# 2. A module such as `Database`, where the matrices of snapshots and parameters are stored.
-# 3. A dimensionality reduction method such as Proper Orthogonal Decomposition `POD`
-# 4. An interpolation method to obtain an approximation for the parametric solution for a new set of parameters such as the Radial Basis Function `RBF`, or Multidimensional Linear Interpolator `Linear`.
-
-#%% Dependencies
-import numpy as np
-import torch
-from matplotlib import pyplot as plt
-
-from ezyrb import POD, RBF, Database, Snapshot, Parameter, Linear, ANN
-from ezyrb import ReducedOrderModel as ROM
-from ezyrb.plugin import AutomaticShiftSnapshots
-#%% 1D Travelling wave function definition
-
-def gaussian(x, mu, sig):
- return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))
-
-def wave(t, res=256):
- x = np.linspace(0, 11, res)
- return x, gaussian(x, t, 0.2).T # parameterizing mean value
-#%% Offline phase
-# In this case, we obtain 20 snapshots from the analytical model.
-
-n_params = 20
-params = np.linspace(0.75, 10.25, n_params).reshape(-1, 1)
-
-pod = POD(rank=1)
-rbf = RBF()
-db = Database()
-
-for param in params:
- space, values = wave(param)
- snap = Snapshot(values=values.T, space=space)
- db.add(Parameter(param), snap)
-
-print("Snapshot shape : ", db.snapshots_matrix.shape)
-print("Parameter shape : ", db.parameters_matrix.shape)
-# In[4]:
-db_train, db_test = db.split([10,10])
-print("Lenght of training data set:", len(db_train))
-print(f"Parameters of training set: \n {db_train.parameters_matrix.flatten()}")
-
-print("Lenght of test data set:", len(db_test))
-print(f"Parameters of testing set: \n {db_test.parameters_matrix.flatten()}")
-#%%
-plt.rcParams.update({
- "text.usetex": True,
- "font.family": "serif",
- "font.serif": ["Times New Roman"],
-})
-
-fig1 = plt.figure(figsize=(5,5))
-ax = fig1.add_subplot(111,projection='3d')
-
-for param in params:
- space, values = wave(param)
- snap = Snapshot(values=values.T, space=space)
- ax.plot(space, param*np.ones(space.shape), values, label = f"{param}")
-ax.set_xlabel('x')
-ax.set_ylabel('t')
-ax.set_zlabel('$f_{g}$(t)')
-ax.set_xlim(0,11)
-ax.set_ylim(0,11)
-ax.set_zlim(0,1)
-ax.legend(loc="upper center", ncol=7, prop = { "size": 7})
-ax.grid(False)
-ax.view_init(elev=20, azim=-60, roll=0)
-ax.set_title("Snapshots at original position")
-plt.show()
-#%% 3D PLOT: db_train snapshots at original position
-fig2 = plt.figure(figsize=(5,5))
-ax = fig2.add_subplot(111, projection='3d')
-
-for i in range(len(db_train)):
- ax.plot(space,(db_train.parameters_matrix[i]*np.ones(space.shape)), db_train.snapshots_matrix[i], label = db_train.parameters_matrix[i])
-ax.set_xlabel('x')
-ax.set_ylabel('t')
-ax.set_zlabel('$f_{g}$(t)')
-ax.set_xlim(0,11)
-ax.set_ylim(0,11)
-ax.set_zlim(0,1)
-ax.legend(loc="upper center", ncol=7, prop = { "size": 7})
-ax.grid(False)
-ax.view_init(elev=20, azim=-60, roll=0)
-ax.set_title("Training set snapshots at original position")
-plt.show()
-#%%
-
-# `InterpNet:` must learn the reference configuration in the best possible way w.r.t. its grid point distribution, such that it will be able to reconstruct field values for every shifted centroid distribution.
-#
-# `ShiftNet:` will learn the shift operator for a given problem, which quantifies the optimal shift, resulting in a shifted space that transports all the snapshots to the reference frame.
-#
-# `Training:` The training of ShiftNet and InterpNet are separated, with the latter being trained first. Once the network has learned the best possible reconstruction of the solution field of the reference configuration, its forward map will be used for the training of ShiftNet as well, in a cascaded fashion. For this reason, we must optimise the loss of InterpNet considerably more than ShiftNet's.
-
-torch.manual_seed(1)
-
-interp = ANN([10,10], torch.nn.Softplus(), [1e-6, 200000], frequency_print=1000, lr=0.03)
-shift = ANN([], torch.nn.LeakyReLU(), [1e-3, 10000], optimizer=torch.optim.Adam, frequency_print=500, l2_regularization=0, lr=0.0023)
-
-rom = ROM(
- database=db_train,
- reduction=pod,
- approximation=rbf,
- plugins=[
- AutomaticShiftSnapshots(
- shift_network= shift,
- interp_network=interp,
- interpolator=Linear(fill_value=0),
- reference_index=4,
- parameter_index=4,
- barycenter_loss=20.)
- ]
- )
-rom.fit()
-
-#%% Snapshots shifted reference position after training
-for i in range(len(db_train.parameters_matrix)):
- plt.plot(space, rom.shifted_database.snapshots_matrix[i], label = f"t = {db_train.parameters_matrix[i]}") #rom._shifted_reference_database.parameters_matrix
- plt.legend(prop={'size': 8})
- plt.ylabel('$f_{g}$(t)')
- plt.xlabel('X')
- plt.title(f'After training: Snapshot of db_train set shifted to reference snapshot {db_train.parameters_matrix[4]}')
-plt.show()
-#%% Showing the snapshots before (left) and after pre-processing (right) of solution manifold
-
-fig3 = plt.figure(figsize=(10, 5))
-
-# First subplot
-ax1 = fig3.add_subplot(121, projection='3d')
-for i in range(len(db_train)):
- ax1.plot(space, (db_train.parameters_matrix[i] * np.ones(space.shape)), db_train.snapshots_matrix[i])
-ax1.set_xlabel('x')
-ax1.set_ylabel('t')
-ax1.set_zlabel('$f_{g}$(t)')
-ax1.set_xlim(0,11)
-ax1.set_ylim(0,11)
-ax1.set_zlim(0,1)
-ax1.grid(False)
-ax1.view_init(elev=20, azim=-60, roll=0)
-
-# Second subplot
-ax2 = fig3.add_subplot(122, projection='3d')
-for i in range(len(rom.shifted_database)):
- ax2.plot(space, (rom.shifted_database.parameters_matrix[i] * np.ones(space.shape)),
- rom.shifted_database.snapshots_matrix[i], label=rom.shifted_database.parameters_matrix[i])
-ax2.set_xlabel('x')
-ax2.set_ylabel('t')
-ax2.set_zlabel('$f_{g}$(t)')
-ax2.set_xlim(0, 11)
-ax2.set_ylim(0, 11)
-ax2.set_zlim(0, 1)
-ax2.grid(False)
-ax2.view_init(elev=20, azim=-60, roll=0)
-handles, labels = ax2.get_legend_handles_labels()
-fig3.legend(handles, labels, loc='center right', ncol=1, prop={'size': 8})
-plt.show()
-
-#%% Singular values of original snapshots and shifted snapshots
-U, s = np.linalg.svd(db.snapshots_matrix.T, full_matrices=False)[:2]
-N_modes = np.linspace(1, len(s),len(s))
-
-# Singular values of shifted snapshots
-U_shifted , s_shifted = np.linalg.svd(rom.shifted_database.snapshots_matrix.T, full_matrices=False)[:2]
-N_modes_shifted = np.linspace(1, len(s_shifted),len(s_shifted))
-
-# Compare singular values
-plt.figure(figsize=(6,4))
-plt.plot(N_modes[:10], s[:10]/np.max(s),"-s",color = "blue", label='POD')
-plt.plot(N_modes_shifted, s_shifted/np.max(s_shifted),"-o", color = "red", label='NNsPOD')
-plt.ylabel(r'$\sigma/\sigma_{1}$', size=15)
-plt.xlabel('Modes', size=15)
-plt.xlim(0, 11)
-plt.legend(fontsize=12)
-plt.xticks(fontsize=15)
-plt.yticks(fontsize=15)
-plt.show()
-
-#%% POD MODES
-modes = pod.modes
-plt.figure(figsize=(6,4))
-plt.plot(space, modes*-1)
-plt.ylabel('$f_{g}$(t)', size=15)
-plt.xlabel('x', size=15)
-plt.title('NNsPOD mode', size=15)
-plt.xticks(fontsize=15)
-plt.yticks(fontsize=15)
-plt.show()
-
-#%% Online phase
-# Test set predictions using NNsPOD-ROM
-pred = rom.predict(db_test.parameters_matrix) # Calculate predicted solution for given mu
-
-fig5 = plt.figure(figsize=(5,5))
-ax = fig5.add_subplot(111, projection='3d')
-for i in range(len(pred)):
- space, orig = wave(db_test.parameters_matrix[i])
- ax.plot(space,(db_test.parameters_matrix[i]*np.ones(space.shape)), pred.snapshots_matrix[i], label = f'Predict={db_test.parameters_matrix[i]}')
- ax.plot(space,(db_test.parameters_matrix[i]*np.ones(space.shape)), orig, '--', label = f'Truth={db_test.parameters_matrix[i]}')
-ax.set_xlabel('x')
-ax.set_ylabel('t')
-ax.set_zlabel('$f_{g}$(t)')
-ax.set_xlim(0,11)
-ax.set_ylim(0,11)
-ax.set_zlim(0,1)
-ax.legend(loc="upper center", ncol=5, prop = { "size": 7})
-ax.grid(False)
-ax.view_init(elev=20, azim=-60, roll=0)
-ax.set_title('Predicted Snapshots for db_test set parameters')
-plt.show()
-#%% Reconstruction and prediction error
-
-train_err = rom.test_error(db_train)
-test_err = rom.test_error(db_test)
-
-print('Mean Train error: ', train_err)
-print('Mean Test error: ', test_err)
diff --git a/tutorials/tutorial-4.ipynb b/tutorials/tutorial-4.ipynb
index 9b1ab3bf..490a7d2b 100644
--- a/tutorials/tutorial-4.ipynb
+++ b/tutorials/tutorial-4.ipynb
@@ -5,8 +5,7 @@
"id": "7ef4304f-b19a-4064-aa49-2b1b71c9474b",
"metadata": {},
"source": [
- "# EZyRB Tutorial 4\n",
- "## Build a Multi Reduced Order Model (MultiROM)\n",
+ "## Build a Multi Reduced Order Model\n",
"\n",
"In this tutorial, we will show how to aggregate the predictions of different ROMs following the method presented in the [paper by Ivagnes et al.](https://link.springer.com/content/pdf/10.1007/s00707-024-04007-9.pdf)\n",
"\n",