@@ -29,7 +30,7 @@ CUDA is a parallel computing platform and programming model created by NVIDIA. F
{{< imagegallery 5 "images/tutorials/visualization/pathtracer" "PathTracer1" "PathTracer2" "PathTracer3" "PathTracer4" "PathTracer5" >}}
-The `SoPathTracer` module implements the main renderer (like the `SoGVRVolumeRenderer`). It collects all `SoPathTracer*` extensions (on its left side) in the scene and renders them. Picking is also supported, but currently only the first hit position. It supports an arbitrary number of objects with different orientation and bounding boxes.
+The `SoPathTracer` module implements the main renderer (like the `SoGVRVolumeRenderer`). It collects all `SoPathTracer*` extensions (on its left side) in the scene and renders them. Picking is also supported, but it currently returns only the first hit position instead of a full hit profile. It supports an arbitrary number of objects with different orientations and bounding boxes.
## Path Tracing
Path Tracing allows interactive, photorealistic 3D environments with dynamic light and shadow, reflections, and refractions.
diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample7.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample7.md
index 5a15b671d..b161bdaad 100644
--- a/mevislab.github.io/content/tutorials/visualization/visualizationexample7.md
+++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample7.md
@@ -1,5 +1,5 @@
---
-title: "Example 7: Add 3D viewer to OrthoView2D"
+title: "Example 7: Add 3D Viewer to OrthoView2D"
date: 2023-11-21
status: "OK"
draft: false
@@ -8,19 +8,21 @@ tags: ["Beginner", "Tutorial", "Visualization", "3D", "OrthoView2D"]
menu:
main:
identifier: "visualization_example7"
- title: "Add 3D viewer to OrthoView2D."
+ title: "Add 3D Viewer to OrthoView2D"
weight: 590
parent: "visualization"
---
-# Example 7: Add 3D viewer to OrthoView2D {#TutorialVisualizationExample7}
+
+# Example 7: Add 3D Viewer to OrthoView2D {#TutorialVisualizationExample7}
{{< youtube "vRtFcaPBAko" >}}
## Introduction
In this example we will use the `OrthoView2D` module and add a 3D viewer to the layout *Cube*.
-## Steps to do
-### Develop your network
+## Steps to Do
+
+### Develop Your Network
Add the modules `LocalImage` and `OrthoView2D` to your workspace and connect them.

@@ -29,7 +31,7 @@ The `OrthoView2D` module allows you to select multiple layouts. Select layout *C

-We now want to use a 3D rendering in the top left segment, whenever the layout *Cube Equal* is chosen. Add a `View3D` and a `SoViewportRegion` module to your workspace. Connect the `LocalImage` with your `View3D`. The image is rendered in 3D. Hit {{< keyboard "SPACE" >}} on your keyboard to make the hidden output of the `View3D` module visible. Connect it with your `SoViewportRegion` and connect the `SoViewportRegion` with the *inInvPreLUT* input of the `OrthoView2D`.
+We now want to use a 3D rendering in the top left segment whenever the layout *Cube Equal* is chosen. Add a `View3D` and a `SoViewportRegion` module to your workspace. Connect the `LocalImage` with your `View3D`. The image is rendered in 3D. Hit {{< keyboard "SPACE" >}} on your keyboard to make the hidden output of the `View3D` module visible. Connect it with your `SoViewportRegion` and connect the `SoViewportRegion` with the *inInvPreLUT* input of the `OrthoView2D`.

@@ -41,7 +43,7 @@ You can see your `View3D` being visible in the bottom right segment of the layou

-The `View3D` image is now rendered to the top left segment of the `OrthoView2D`, because the module `SoViewportRegion` renders a sub graph into a specified viewport region (VPR). The problem is: We cannot rotate and pan the 3D object, because there is no camera interaction available after adding the `SoViewportRegion`. The camera interaction is consumed by the `View3D` module before it can be used by the viewport.
+The `View3D` image is now rendered to the top left segment of the `OrthoView2D`, because the module `SoViewportRegion` renders a subgraph into a specified viewport region (VPR). The problem is that we cannot rotate or pan the 3D object, because no camera interaction is available after adding the `SoViewportRegion`. The camera interaction is consumed by the `View3D` module before it can be used by the viewport.
-Add a `SoCameraInteraction` module between the `View3D` and the `SoViewportRegion`. You can now interact with your 3D scene but the rotation is not executed on the center of the object. Trigger *ViewAll* on your `SoCameraInteraction` module.
+Add a `SoCameraInteraction` module between the `View3D` and the `SoViewportRegion`. You can now interact with your 3D scene, but the rotation is not executed around the center of the object. Trigger *ViewAll* on your `SoCameraInteraction` module.
@@ -58,6 +60,6 @@ If the selected layout in `OrthoView2D` now matches the string *LAYOUT_CUBE_EQUA

## Summary
-* The module `SoViewportRegion` renders a sub graph into a specified viewport region (VPR)
+* The module `SoViewportRegion` renders a subgraph into a specified viewport region (VPR).
{{< networkfile "examples/visualization/example5/VisualizationExample7.mlab" >}}
diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md
index 9484e1021..8888ef04c 100644
--- a/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md
+++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md
@@ -1,5 +1,5 @@
---
-title: "Example 8: Vessel Segmentation using SoVascularSystem"
+title: "Example 8: Vessel Segmentation Using SoVascularSystem"
date: 2023-12-08
status: "OK"
draft: false
@@ -8,10 +8,11 @@ tags: ["Advanced", "Tutorial", "Visualization", "3D", "Vessel Segmentation"]
menu:
main:
identifier: "visualization_example8"
- title: "Vessel Segmentation using SoVascularSystem."
+ title: "Vessel Segmentation Using SoVascularSystem"
weight: 592
parent: "visualization"
---
+
-# Example 8: Vessel Segmentation using SoVascularSystem {#TutorialVisualizationExample8}
+# Example 8: Vessel Segmentation Using SoVascularSystem {#TutorialVisualizationExample8}
{{< youtube "tEwEgI_3ZGM" >}}
@@ -20,8 +21,8 @@ menu:
In this tutorial, we are using an input mask to create a vessel centerline using the `DtfSkeletonization` module and visualize the vascular structures in 3D using the `SoVascularSystem` module. The second part uses the distance between centerline and surface of the vessel structures to color thin vessels red and thick vessels green.
## Steps to Do
-### Develop Your Network
+### Develop Your Network
Load the example [tree mask](examples/visualization/example8/EditedImage.mlimage) by using the `LocalImage` module. Connect the output to a `DtfSkeletonization` module as seen below. The initial output of the `DtfSkeletonization` module is empty. Press the *Update* button to calculate the skeleton and the erosion distances.

@@ -48,7 +49,7 @@ Use the `SoLUTEditor` for the `View2D`, too.

- Open the output of the `GraphToVolume` module and inspect the images in Output Inspector. You will see that the HU value of the black background is defined as *-1* while the vessel tree is defined as *0*.
+ Open the output of the `GraphToVolume` module and inspect the images in Output Inspector. You will see that the voxel value of the black background is defined as *-1* while the vessel tree is defined as *0*.

@@ -64,7 +65,7 @@ The viewers now show your vessel graph.

-### Store Edge IDs in Skeletons with RunPythonScript
+### Store Edge IDs in Skeletons With RunPythonScript
Each edge of the calculated skeleton gets a unique ID defined by the `DtfSkeletonization` module. We now want to use this ID to define a different color for each edge of the skeleton. You can use the **Label** property of each skeleton to store the ID of the edge.
Add a `RunPythonScript` module to your network, open the panel of the module and enter the following Python code:
@@ -82,7 +83,7 @@ ctx.field("GraphToVolume.update").touch()
```
{{}}
-First, we always want a fresh skeleton. We touch the *update* trigger of the module `DtfSkeletonization`. Then we get the graph from the *DtfSkeletonization.outBase1* output. If a valid graph is available, we walk through all edges of the graph and print the ID of each edge. In the end, we update the GraphToVolume module to get the calculated values of the Python script in the viewers. Click *Execute*.
+First, we always want a fresh skeleton. We touch the *update* trigger of the module `DtfSkeletonization`. Then, we get the graph from the *DtfSkeletonization.outBase1* output. If a valid graph is available, we walk through all edges of the graph and print the ID of each edge. Finally, we update the `GraphToVolume` module so the viewers pick up the values calculated by the Python script. Click *Execute*.
The Debug Output of the MeVisLab IDE shows a numbered list of edge IDs from 1 to 153.
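The per-edge logic described above can be sketched in plain Python, independent of MeVisLab's `ctx` API (the `Graph` and `Edge` classes here are illustrative stand-ins; in MeVisLab, the graph comes from the *DtfSkeletonization.outBase1* output instead):

```python
# Stand-in classes illustrating the edge walk described in the text.
class Edge:
    def __init__(self, edge_id):
        self.edge_id = edge_id
        self.label = 0  # the skeleton "Label" property used to store the ID

class Graph:
    def __init__(self, edges):
        self.edges = edges

def store_edge_ids(graph):
    """Copy each edge's unique ID into its Label property, as the script does."""
    for edge in graph.edges:
        edge.label = edge.edge_id
    return [e.label for e in graph.edges]

graph = Graph([Edge(i) for i in range(1, 6)])
print(store_edge_ids(graph))  # → [1, 2, 3, 4, 5]
```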
@@ -120,7 +121,7 @@ Your viewers now show a different color for each skeleton, based on our LUT.

-### Render Vascular System Using SoVascularSystem
+### Render the Vascular System Using SoVascularSystem
-The `SoVascularSystem` module is optimized for rendering vascular structures. In comparison to the `SoGVRVolumeRenderer` module, it allows to render the surface, the skeleton or points of the structure in an open inventor scene graph. Interactions with edges of the graph are also already implemented.
+The `SoVascularSystem` module is optimized for rendering vascular structures. In comparison to the `SoGVRVolumeRenderer` module, it allows you to render the surface, the skeleton, or points of the structure in an Open Inventor scene graph. Interactions with edges of the graph are also already implemented.
-Add a `SoVascularSystem` module to your workspace. Connect it to your `DtfSkeletonization` module and to the `SoLUTEditor` as seen below. Add another `SoExaminerViewer` for comparing the two visualization. The same `SoBackground` can be added to your new scene.
+Add a `SoVascularSystem` module to your workspace. Connect it to your `DtfSkeletonization` module and to the `SoLUTEditor` as seen below. Add another `SoExaminerViewer` for comparing the two visualizations. The same `SoBackground` can be added to your new scene.
@@ -147,7 +148,7 @@ To establish connections between fields with the type *Float*, you can use the *
Camera interactions are now synchronized between both `SoExaminerViewer` modules.
-Now you can notice the difference between the two modules. We use `SoVascularSystem` for a smoother visualization of the vascular structures by using the graph as reference. The `SoGVRVolumeRenderer` renders the volume from the `GraphToVolume` module, including the visible stairs from voxel representations in the volume.
+Now you can see the difference between the two modules. We use `SoVascularSystem` for a smoother visualization of the vascular structures by using the graph as a reference. The `SoGVRVolumeRenderer` renders the volume from the `GraphToVolume` module, including the visible staircase artifacts from the voxel representation of the volume.

@@ -166,7 +167,7 @@ For volume calculations, use the original image mask instead of the result from
### Enhance Vessel Visualization Based on Distance Information
Now that you've successfully obtained the vessel skeleton graph using `DtfSkeletonization`, let's take the next step to enhance the vessel visualization based on the radius information of the vessels. We will modify the existing code to use the minimum distance between centerline and surface of the vessels for defining the color.
-The values for the provided vascular tree vary between 0 and 10mm. Therefore define the range of the `SoLUTEditor` to *New Range Min* as *1* and *New Range Max* as *10*. On *Editor* tab, define the following LUT:
+The values for the provided vascular tree vary between 0mm and 10mm. Therefore, set *New Range Min* to *1* and *New Range Max* to *10* in the `SoLUTEditor`. On the *Editor* tab, define the following LUT:

@@ -193,10 +194,10 @@ ctx.field("SoVascularSystem.apply").touch()
{{
}}
-Be aware that the *MinDistance* and *MaxDistance* values are algorithm-specific and don't precisely represent vessel diameters. The result of `DTFSkeletonization` is a vascular graph with an idealized, circular profile while in reality, the vessels have more complicated profiles. It is an idealized graph where all vessels have a circular cross-section. This cross-section only has one radius, described by *MinDistance* and *MaxDistance*. Those are not the two radii of an elliptical cross-section, but the results of two different algorithms to measure the one, idealized radius at Skeletons.
+Be aware that the *MinDistance* and *MaxDistance* values are algorithm-specific and don't precisely represent vessel diameters. The result of `DtfSkeletonization` is a vascular graph with an idealized, circular profile, while in reality the vessels have more complicated profiles. It is an idealized graph where all vessels have a circular cross section. This cross section has only one radius, described by *MinDistance* and *MaxDistance*. Those are not the two radii of an elliptical cross section, but the results of two different algorithms measuring the same idealized radius at the skeletons.
{{}}
-Instead of using the ID of each edge for the label property, we are now using the *MinDistance* property of the skeleton. The result is a color coded 3D visualization depending on the radius of the vessels. Small vessels are red, large vessels are green.
+Instead of using the ID of each edge for the label property, we are now using the *MinDistance* property of the skeleton. The result is a color-coded 3D visualization depending on the radius of the vessels. Small vessels are red, large vessels are green.
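The red-to-green mapping over the 1–10 mm range can be sketched as a simple linear blend (a plain-Python illustration of the idea, not the `SoLUTEditor`'s exact interpolation):

```python
def vessel_color(min_distance_mm, lo=1.0, hi=10.0):
    """Blend from red (thin vessels) to green (thick vessels)."""
    t = (min_distance_mm - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))  # clamp to the LUT range
    return (1.0 - t, t, 0.0)   # (R, G, B)

print(vessel_color(1.0))   # → (1.0, 0.0, 0.0)  thin vessel: red
print(vessel_color(10.0))  # → (0.0, 1.0, 0.0)  thick vessel: green
```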

diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample9.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample9.md
index 3e8024b59..184d52212 100644
--- a/mevislab.github.io/content/tutorials/visualization/visualizationexample9.md
+++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample9.md
@@ -1,5 +1,5 @@
---
-title: "Example 9: Creating Dynamic 3D Animations using AnimationRecorder"
+title: "Example 9: Creating Dynamic 3D Animations Using AnimationRecorder"
date: 2024-01-08
status: "OK"
draft: false
@@ -8,10 +8,11 @@ tags: ["Advanced", "Tutorial", "Visualization", "3D", "Animation Recorder", "Mov
menu:
main:
identifier: "visualization_example9"
- title: "Creating Dynamic 3D Animations using AnimationRecorder"
+ title: "Creating Dynamic 3D Animations Using AnimationRecorder"
weight: 593
parent: "visualization"
---
+
-# Example 9: Creating Dynamic 3D Animations using AnimationRecorder {#TutorialVisualizationExample9}
+# Example 9: Creating Dynamic 3D Animations Using AnimationRecorder {#TutorialVisualizationExample9}
{{< youtube "Sxfwwm6BGnA" >}}
@@ -19,41 +20,40 @@ menu:
## Introduction
In this tutorial, we are using the `AnimationRecorder` module to generate dynamic and visually appealing animations of our 3D scenes. We will be recording a video of the results of our previous project, particularly the detailed visualizations of the muscles, bones, and blood vessels created using `PathTracer`.
-## Steps to do
-
+## Steps to Do
Open the network and files of [Example 6.2](tutorials/visualization/pathtracer/pathtracerexample2/), add a `SoSeparator` module and an `AnimationRecorder` module to your workspace and connect them as shown below.
The `SoSeparator` module collects all components of our scene and provides one output to be used for the `AnimationRecorder`.
-The `AnimationRecorder` module allows to create animations and record them as video streams. It provides an editor to create key frames for animating field values.
+The `AnimationRecorder` module allows you to create animations and record them as video streams. It provides an editor to create keyframes for animating field values.

-Define the following LUTs in `SoLUTEditor` of the knee or load this [XML file](examples/visualization/example6/LUT_AnimationRecorder.xml) with `LUTLoad1` to use a pre-defined LUT.
+Define the following LUTs in `SoLUTEditor` of the knee or load this [XML file](examples/visualization/example6/LUT_AnimationRecorder.xml) with `LUTLoad1` to use a predefined LUT.

-Open the `AnimationRecorder` module and click on *New* to initiate a new animation, selecting a filename for the recorded key frames (*.mlmov*).
+Open the `AnimationRecorder` module and click on *New* to initiate a new animation, selecting a filename for the recorded keyframes (*.mlmov*).
-At the bottom of the `AnimationRecorder` panel, you'll find the key frame editor, which is initially enabled. It contains the camera track with a key frame at position *0*. The key frame editor at the bottom serves as a control hub for playback and recording.
+At the bottom of the `AnimationRecorder` panel, you'll find the keyframe editor, which is initially enabled. It contains the camera track with a keyframe at position *0*. The keyframe editor at the bottom serves as a control hub for playback and recording.
{{
}}
-Close the SoExaminerViewer while using the AnimationRecorder to prevent duplicate renderings and save resources.
+Close the `SoExaminerViewer` while using the `AnimationRecorder` to prevent duplicate renderings and to save resources.
{{}}

-Key frames in the `AnimationRecorder` mark specific field values at defined timepoints. You can add key frames on the timeline by double-clicking at the chosen timepoint or right-clicking and selecting *Insert Key Frame*. Between these key frames, values of the field are interpolated (linear or spline) or not. Selecting a key frame, a dialog *Edit Camera Key Frame* will open.
+Keyframes in the `AnimationRecorder` mark specific field values at defined timepoints. You can add keyframes on the timeline by double-clicking at the chosen timepoint or by right-clicking and selecting *Insert Key Frame*. Between these keyframes, the field values are either interpolated (linearly or with splines) or left uninterpolated. When you select a keyframe, the *Edit Camera Key Frame* dialog opens.
-When adding a key frame at a specific timepoint, you can change the camera dynamically in the viewer. This involves actions such as rotating to left or right, zooming in and out, and changing the camera's location. Within the *Edit Camera Key Frame* dialog save each key frame by clicking on the *Store Current Camera State* button. Preview the video to observe the camera's movement.
+When adding a keyframe at a specific timepoint, you can change the camera dynamically in the viewer. This involves actions such as rotating left or right, zooming in and out, and changing the camera's location. Within the *Edit Camera Key Frame* dialog, save each keyframe by clicking the *Store Current Camera State* button. Preview the video to observe the camera's movement.
-The video settings in the `AnimationRecorder` provide essential parameters for configuring the resulting animation. You can control the *Framerate*, determining the number of frames per second in the video stream. It's important to note that altering the framerate may lead to the removal of key frames, impacting the animation's smoothness.
+The video settings in the `AnimationRecorder` provide essential parameters for configuring the resulting animation. You can control the *Framerate*, determining the number of frames per second in the video stream. It's important to note that altering the framerate may lead to the removal of keyframes, impacting the animation's smoothness.
Additionally, the *Duration* of the animation, specified as *videoLength*, defines how long the animation lasts in seconds. The *Video Size* determines the resolution of the resulting video.
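As a quick sanity check, the number of frames the recorder must render follows directly from framerate and duration (the values below are illustrative, not module defaults):

```python
def total_frames(framerate_fps, video_length_s):
    """Number of frames rendered for the video stream."""
    return int(framerate_fps * video_length_s)

print(total_frames(25, 10))  # → 250 frames for a 10 s video at 25 fps
```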
-Repeat this process for each timepoint where adjustments to the camera position are needed, thus creating a sequence of key frames.
+Repeat this process for each timepoint where adjustments to the camera position are needed, thus creating a sequence of keyframes.
-Before proceeding further, use the playback options situated at the base of the key frame editor. This allows for a quick preview of the initial camera sequence, ensuring the adjustments align seamlessly for a polished transition between key frames.
+Before proceeding further, use the playback options situated at the base of the keyframe editor. This allows for a quick preview of the initial camera sequence, ensuring the adjustments align seamlessly for a polished transition between keyframes.
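The linear case of the interpolation between keyframes can be sketched as follows (a generic illustration under the assumption of simple linear blending; the `AnimationRecorder` also supports spline interpolation):

```python
import bisect

def interpolate(keyframes, t):
    """Linearly interpolate a field value at time t from (time, value) keyframes."""
    times = [k[0] for k in keyframes]
    if t <= times[0]:
        return keyframes[0][1]   # before the first keyframe: hold first value
    if t >= times[-1]:
        return keyframes[-1][1]  # after the last keyframe: hold last value
    i = bisect.bisect_right(times, t)
    (t0, v0), (t1, v1) = keyframes[i - 1], keyframes[i]
    a = (t - t0) / (t1 - t0)     # fractional position between the keyframes
    return v0 + a * (v1 - v0)

# e.g. a field value of 1 at t=0 s rising to 10 at t=4 s
keys = [(0.0, 1.0), (4.0, 10.0)]
print(interpolate(keys, 2.0))  # → 5.5
```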
{{
}}
-Decrease the number of iterations in the SoPathTracer module for a quicker preview if you like. Make sure to increase again before recording the final video.
+Decrease the number of iterations in the `SoPathTracer` module for a quicker preview if you like. Make sure to increase it again before recording the final video.
@@ -62,44 +62,42 @@ Decrease the number of iterations in the SoPathTracer module for a quicker previ

## Modulating Knee Visibility with LUTRescale in Animation
-
-We want to show and hide the single segmentations during camera movements. Add two `LUTRescale` modules to your workspace and connect them as illustrated down below. The rationale behind using `LUTRescale` is to control the transparency and by that the visibility of elements in the scene at different timepoints.
+We want to show and hide the individual segmentations during camera movements. Add two `LUTRescale` modules to your workspace and connect them as illustrated below. The rationale behind using `LUTRescale` is to control the transparency, and thereby the visibility, of elements in the scene at different timepoints.

## Animate Bones and Vessels
-
Now, let's shift our focus to highlighting bones and vessels within the animation. Right-click on the `LUTRescale` module, navigate to *Show Window*, and select *Automatic Panel*. This will bring up the control window for the `LUTRescale` module. Search for the field named *targetMax*. You can either drag and drop it directly from the *Automatic Panel*, or alternatively, locate the *Max* field in the *Output Index Range* box within the module panel and then drag and drop it onto the fields section in the `AnimationRecorder` module, specifically under the *Perspective Camera* field.
By linking the *targetMax* field of the `LUTRescale` module to the `AnimationRecorder`, you establish a connection that allows you to define different values of the field for specific timepoints. The values between these timepoints can be interpolated as described above.

-To initiate the animation sequence, start by adding a key frame at position *0* for the *targetMax* field. Set the *Target Max* value in the *Edit Key Frame – [LUTRescale.targetMax]* window to *1*, and click on the *Store Current Field Value* button to save it.
+To initiate the animation sequence, start by adding a keyframe at position *0* for the *targetMax* field. Set the *Target Max* value in the *Edit Key Frame – [LUTRescale.targetMax]* window to *1*, and click on the *Store Current Field Value* button to save it.
-Next, proceed to add key frames at the same timepoints as the desired key frames of the *Perspective Camera* field's first sequence. For each selected key frame, progressively set values for the *Target Max* field, gradually increasing to *10*. This ensures specific synchronization between the visibility adjustments controlled by the `LUTRescale` module and the camera movements in the animation, creating a seamless transition. This gradual shift visually reveals the bones and vessels while concealing the knee structures and muscles.
+Next, proceed to add keyframes at the same timepoints as the desired keyframes of the *Perspective Camera* field's first sequence. For each selected keyframe, progressively set values for the *Target Max* field, gradually increasing to *10*. This ensures precise synchronization between the visibility adjustments controlled by the `LUTRescale` module and the camera movements in the animation, creating a seamless transition. This gradual shift visually reveals the bones and vessels while concealing the knee structures and muscles.
-To seamlessly incorporate the new key frame at the same timepoints as the *Perspective Camera* field, you have two efficient options. Simply click on the key frame of the first sequence, and the line will automatically appear in the middle of the key frame. A double-click will effortlessly insert a key frame at precisely the same position. If you prefer more accurate adjustments, you can also set your frame manually using the *Edit Key Frame - [LUTRescale.targetMax]* window. This flexibility allows for precise control over the animation timeline, ensuring key frames align precisely with your intended moments.
+To seamlessly incorporate the new keyframe at the same timepoints as the *Perspective Camera* field, you have two efficient options. Simply click on the keyframe of the first sequence, and the line will automatically appear in the middle of the keyframe. A double-click will effortlessly insert a keyframe at precisely the same position. If you prefer more accurate adjustments, you can also set your frame manually using the *Edit Key Frame - [LUTRescale.targetMax]* window. This flexibility allows for precise control over the animation timeline, ensuring keyframes align precisely with your intended moments.

-## Showcasing only Bones
+## Showcasing Only Bones
-To control the visibility of the vessels, right-click on the ` LUTRescale1` module connected to the vessels. Open the *Show Window* and select *Automatic Panel*. Drag and drop the *targetMax* field into the `AnimationRecorder` module's fields section.
+To control the visibility of the vessels, right-click on the `LUTRescale1` module connected to the vessels. Navigate to *Show Window* and select *Automatic Panel*. Drag and drop the *targetMax* field into the `AnimationRecorder` module's fields section.

-Add key frames for both the *Perspective Camera* field and the *targetMax* in `LUTRescale1` at the same timepoints. Access the *Edit Camera Key Frame* window for the added key frame in the *Perspective Camera* field and save the *current camera state*. To exclusively highlight only bones, adjust the *Target Max* values from *1* to *10000* in *Edit Key Frame - [LUTRescale1.targetMax]*.
+Add keyframes for both the *Perspective Camera* field and the *targetMax* in `LUTRescale1` at the same timepoints. Access the *Edit Camera Key Frame* window for the added keyframe in the *Perspective Camera* field and save the *current camera state*. To exclusively highlight only bones, adjust the *Target Max* values from *1* to *10000* in *Edit Key Frame - [LUTRescale1.targetMax]*.

-To feature everything again at the end, copy the initial key frame of each field and paste it at the end of the timeline. This ensures a comprehensive display of all elements in the closing frames of your animation.
+To feature everything again at the end, copy the initial keyframe of each field and paste it at the end of the timeline. This ensures a comprehensive display of all elements in the closing frames of your animation.

-Finally, use the playback and recording buttons at the bottom of the key frame editor to preview and record your animation.
+Finally, use the playback and recording buttons at the bottom of the keyframe editor to preview and record your animation.
## Summary
-* Animations are created by strategically placing key frames at different timepoints in the timeline using the `AnimationRecorder` module.
+* Animations are created by strategically placing keyframes at different timepoints in the timeline using the `AnimationRecorder` module.
* It is possible to add any field of your network to your animation via drag-and-drop.
* The visibility of elements can be controlled using the `LUTRescale` module.
* Video settings in the `AnimationRecorder` can be adjusted to specify resolution, framerate, and duration of the resulting animation.