Lamb's problem

Part 3 - Visualization and extensions

In this part, we'll take a look at other output options, visualize the entire wavefield and start exploring other types of sources and receivers.
%matplotlib inline
# This notebook will use this variable to determine which
# remote site to run on.
import os

SALVUS_FLOW_SITE_NAME = os.environ.get("SITE_NAME", "local")
PROJECT_DIR = "project"
import matplotlib.pyplot as plt
import numpy as np
import pathlib

import salvus.namespace as sn
from salvus.project.tools.processing import block_processing
from salvus.toolbox.helpers.wavefield_output import (
    WavefieldOutput,
    wavefield_output_to_xarray,
)

print("Opening existing project.")
p = sn.Project(path=PROJECT_DIR)
Opening existing project.
We want to rerun simulation_3, but with volumetric output. In addition to displacement, let's also output the spatial gradient of the displacement field, so we can compute derived quantities like the divergence or curl afterwards.
p.simulations.launch(
    ranks_per_job=2,
    site_name=SALVUS_FLOW_SITE_NAME,
    events=p.events.list(),
    simulation_configuration="simulation_3",
    extra_output_configuration={
        "volume_data": {
            "sampling_interval_in_time_steps": 10,
            "fields": ["displacement", "gradient-of-displacement"],
        },
    },
    # We have previously simulated the same event but without
    # extra output. We have to thus overwrite the existing
    # simulation.
    delete_conflicting_previous_results=True,
)
[2024-11-20 13:03:47,646] INFO: delete_conflicting_previous_results is set to True. Deleting results for simulation configuration simulation_3 and event event_0.
[2024-11-20 13:03:47,653] INFO: Removing contents of `project/EVENTS/event_0/WAVEFORM_DATA/INTERNAL/55/11/abcfd69ba132`.
[2024-11-20 13:03:47,694] INFO: Submitting job ...
Uploading 1 files...

🚀  Submitted job_2411201303867328_c392b7c6bc@local
1
And that's it! The simulation is off and running. SalvusFlow will take care of abstracting the machine architecture, and SalvusProject will take care of saving all the output data into the correct location, copying it from any remote machines as necessary. We can get the current status of the simulations by calling query() as below.
p.simulations.query(block=True)
True
The Jupyter widgets are great for getting a quick glimpse of many entities. However, more advanced visualization requires external tools such as Paraview.
Meshes and wavefield data are read and written as HDF5 files. The HDF5 library enables performant parallel access to single files and makes them portable to share. To see a visual representation of what is in these files, we recommend using Paraview.
Note that we strongly recommend downloading Paraview from the official website -- the versions installed through Linux package managers often do not ship with the required libraries.
Files are visualized in Paraview using the XDMF standard. This is why every mesh, surface, or volume output file used by Salvus comes with an associated file of the same name but with a .xdmf suffix.
You can query the internal folder where the output is stored with the following command.
p.simulations.get_simulation_output_directory(
    simulation_configuration="simulation_3", event=p.events.list()[0]
)
PosixPath('project/EVENTS/event_0/WAVEFORM_DATA/INTERNAL/55/11/abcfd69ba132')
# Uncomment the next line and paste the path from above
# !ls _paste_the_path_above_here
We see some log files, meta information, receiver data and volumetric output. The file volume_data_output_elastic_volume.xdmf is what we are after; it is this .xdmf file that should be opened in Paraview. When you open such a file, you should see a dialog box to select the correct reader. Make sure to select the XDMF Reader and not one of the Xdmf3 readers, or Paraview will crash immediately :-/
If there is no time step explicitly provided in the WaveformSimulationConfiguration, Salvus will automatically compute a suitable time step for the simulation based on the CFL condition. This will typically heavily oversample the signal. For post-processing, however, it might be convenient to work with resampled data.

Receiver output

ed = p.waveforms.get("simulation_3", p.events.list())
Currently, the event data object contains the raw simulation output using the sampling rate / time step of the simulation. We can print the receiver data to verify the number of samples and sampling rate for all traces.
print(
    ed[0].get_receiver_data(
        receiver_name="XX.REC1.", receiver_field="displacement"
    )
)
2 Trace(s) in Stream:
XX.REC1..X | 1969-12-31T23:59:59.920000Z - 1970-01-01T00:00:00.600000Z | 1038.2 Hz, 707 samples
XX.REC1..Y | 1969-12-31T23:59:59.920000Z - 1970-01-01T00:00:00.600000Z | 1038.2 Hz, 707 samples
The event data object allows us to define a custom time interval and sampling rate to resample all traces on-the-fly. To this end, we first specify the desired time discretization. Any subsequent calls querying data from the EventData object will then work with resampled data. For example, we can extract 250 samples at a sampling rate of 500 Hz starting from time zero as follows:
ed[0].set_temporal_interpolation(
    start_time_in_seconds=0.0, sampling_rate_in_hertz=500, npts=250
)
Retrieving the receiver data again now shows the desired sampling rate.
print(
    ed[0].get_receiver_data(
        receiver_name="XX.REC1.", receiver_field="displacement"
    )
)
ed[0].get_receiver_data(
    receiver_name="XX.REC1.", receiver_field="displacement"
).plot()
2 Trace(s) in Stream:
XX.REC1..X | 1970-01-01T00:00:00.000000Z - 1970-01-01T00:00:00.498000Z | 500.0 Hz, 250 samples
XX.REC1..Y | 1970-01-01T00:00:00.000000Z - 1970-01-01T00:00:00.498000Z | 500.0 Hz, 250 samples
<Figure size 800x500 with 2 Axes>
get_receiver_data returns an obspy Stream object containing a list of traces. You can access individual traces by index and the raw data through the .data attribute, which is a numpy array. Here, we query the Y component at index 1.
trace_data = (
    ed[0]
    .get_receiver_data(
        receiver_name="XX.REC1.", receiver_field="displacement"
    )[1]
    .data
)
print(type(trace_data), trace_data.shape)
<class 'numpy.ndarray'> (250,)
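Since the trace is just a numpy array, standard numpy operations apply directly. As a sketch, using a synthetic trace standing in for the trace_data array above (same sample count and rate as the resampled output), the peak amplitude and the time at which it occurs can be extracted like this:

```python
import numpy as np

# Synthetic stand-in for `trace_data`: 250 samples at 500 Hz,
# a 25 Hz wavelet under a Gaussian envelope centered at 0.2 s.
sampling_rate = 500.0
t = np.arange(250) / sampling_rate
trace = np.cos(2.0 * np.pi * 25.0 * t) * np.exp(-((t - 0.2) ** 2) / 0.002)

# Peak absolute amplitude and its arrival time.
peak_amplitude = np.abs(trace).max()
peak_time = t[np.abs(trace).argmax()]
print(peak_amplitude, peak_time)
```

The same two lines work unchanged on the real trace_data array together with the times returned by get_data_cube.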
Alternatively, you can retrieve the resampled data of all receivers for a single component. The function returns a tuple of the time axis as a 1D array and the receiver data as a 2D array (nrec x nt). Note that you can pass an additional sorting routine to get_data_cube in case you want the receivers in a certain order.
times, values = ed[0].get_data_cube(
    receiver_field="displacement", component="Y"
)
Both functions return the same data.
plt.plot(times, values[0, :], label="get_data_cube")
plt.plot(times, trace_data, "--", label="get_receiver_data")
plt.legend()
plt.show()
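Because get_data_cube returns all receivers as one (nrec x nt) array, it also lends itself to a record-section style image of the whole gather. Below is a minimal sketch with synthetic data standing in for times and values; the plotting calls are the same for the real arrays:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the (times, values) pair returned by
# get_data_cube: 5 receivers, 250 samples at 500 Hz, with a
# moveout of 0.05 s per receiver.
times = np.arange(250) / 500.0
values = np.array(
    [np.exp(-((times - 0.1 - 0.05 * i) ** 2) / 0.001) for i in range(5)]
)

# One row per receiver, time along the horizontal axis.
fig, ax = plt.subplots()
im = ax.imshow(
    values,
    aspect="auto",
    extent=(times[0], times[-1], values.shape[0] - 0.5, -0.5),
    cmap="seismic",
)
ax.set_xlabel("Time [s]")
ax.set_ylabel("Receiver index")
fig.colorbar(im, ax=ax, label="Displacement")
plt.show()
```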
For volumetric wavefield output, we additionally have to deal with the complexity of the spatial discretization. Salvus uses unstructured meshes, but it might be desirable to obtain the wavefield output on a regular grid or at a predefined set of points. This kind of post-processing is possible with the WavefieldOutput object.
# First, retrieve the volumetric output file
output_file = pathlib.Path(
    p.simulations.get_simulation_output_directory("simulation_3", "event_0"),
    "volume_data_output.h5",
)

# Create a WavefieldOutput object for resampling
volume_output = WavefieldOutput.from_file(
    output_file, "displacement", "volume"
)
Now, we resample the volumetric wavefield output. The result is an xarray.DataArray with dimensions for time, components, and the points provided. The second argument can be an arbitrary array of points. Here, we just use the coordinates of the points in the mesh.
sampled_output_1 = wavefield_output_to_xarray(
    volume_output, p.simulations.get_mesh("simulation_3").points
).T
For output on a regular grid, we can alternatively provide the sampling along each coordinate axis as a list. For instance, we can sample on a regular grid in x and y with a spacing of roughly 10 m.
sampled_output_2 = wavefield_output_to_xarray(
    volume_output,
    [
        np.linspace(0.0, 2000.0, 200),
        np.linspace(0.0, 1000.0, 100),
    ],
).T
We can now easily plot a snapshot of the wavefield using the built-in routines of xarray.
ax = sampled_output_2.sel(c=1, t=sampled_output_2.t[20]).plot(
    shading="gouraud", infer_intervals=False
)
ax.axes.set_aspect("equal")
As a final step of post-processing, we want to resample the volumetric output in time. There is currently no unified interface to do that, but you could manually achieve it with the following snippet.
resampled_output = block_processing.resample(
    sampled_output_1.data, sampled_output_1.t.data, np.linspace(0.0, 0.5, 251)
)
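Recall that we also wrote gradient-of-displacement to the volume output, precisely so that derived quantities become available: the divergence of the displacement field (sensitive to P-waves) and its curl (sensitive to S-waves) are simple combinations of the gradient components. The sketch below uses a synthetic numpy array in place of a sampled gradient output; the component ordering of the 2D Jacobian assumed here (du_x/dx, du_x/dy, du_y/dx, du_y/dy) is an assumption, so verify it against the metadata of your output file:

```python
import numpy as np

# Synthetic stand-in for a gradient-of-displacement field sampled on a
# regular grid, shape (nt, 4, ny, nx). The component ordering
# (du_x/dx, du_x/dy, du_y/dx, du_y/dy) is ASSUMED -- check your
# output file's metadata for the actual convention.
nt, ny, nx = 3, 50, 100
gradient = np.random.default_rng(0).normal(size=(nt, 4, ny, nx))

# Divergence: trace of the Jacobian (P-wave sensitive).
divergence = gradient[:, 0] + gradient[:, 3]

# Scalar curl in 2D: du_y/dx - du_x/dy (S-wave sensitive).
curl = gradient[:, 2] - gradient[:, 1]

print(divergence.shape, curl.shape)
```

The same component arithmetic applies to an xarray.DataArray produced by wavefield_output_to_xarray from a gradient-of-displacement output, selecting components with .sel(c=...).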
Now, it's time to play around with Salvus.
How about adding a new event with a moment tensor point source and strain receivers? Have a look at the beginning of part one and try to create event_1 with newly defined source(s) and receivers.
You could also ramp up the frequency and send some simulations to a bigger machine. Once you have registered a site with SalvusFlow, you only need to adjust the site_name and the ranks_per_job in the launch function and Salvus will take care of the rest.