"do_not_decouple_child_processes"
flag for local sites has been introduced to
support that behavior.salvus-cli init-site
step.with sn.functions.add_new_executor( site_name="my_other_site", executor_type="local", salvus_binary=salvus_binary, run_directory=run_directory_2, tmp_directory=tmp_directory_2, use_cuda_capable_gpus=False, default_ranks=1, max_ranks=2, ): # Site is available here. sn.functions.get_site("my_other_site")
```python
import numpy as np
import sympy

import salvus.namespace as sn
from salvus.mesh import layered_meshing as lm

n = 4  # polynomial order

domain = sn.domain.dim2.BoxDomain(x0=0, x1=1, y0=0, y1=1)

# Create a simple mesh to use as a test. Let parameters vary only in a
# single direction so that we know their gradients in advance.
mesh = lm.mesh_from_domain(
    domain=domain,
    model=lm.material.from_params(
        vp=1 + sympy.symbols("x"), rho=sympy.symbols("y")
    ),
    mesh_resolution=sn.MeshResolution(
        reference_frequency=10.0,
        elements_per_wavelength=1.0,
        model_order=n,
    ),
)

# Compute the spatial derivatives of the model.
mesh.compute_spatial_gradients(fields=["VP", "RHO"])

# VP varies only in the x direction.
np.testing.assert_allclose(mesh.element_nodal_fields["dVP/dx"], 1.0)
np.testing.assert_allclose(
    mesh.element_nodal_fields["dVP/dy"], 0.0, atol=1e-10
)

# RHO varies only in the y direction.
np.testing.assert_allclose(
    mesh.element_nodal_fields["dRHO/dx"], 0.0, atol=1e-10
)
np.testing.assert_allclose(mesh.element_nodal_fields["dRHO/dy"], 1.0)
```
In `get_interpolator(...)` from `xarray_tools`, faster interpolations are now provided via the `fast_interp` package. This package will be used automatically if the interpolation source is regularly spaced, 2- or 3-D, and the "linear" interpolation method is selected. Note that if the source grid has a variable grid spacing in any dimension, Salvus falls back on scipy's regular-grid interpolation routines.

Added an `enclosing_elements_method` option to the `simple_post_refinements` generator. The default value, `"inverse_coordinate_transform"`, retains the legacy behavior. The value `"bounding_box"` is also supported; it trades the accuracy of the enclosing-element search for performance, as only a simple bounding-box check is done to determine point ownership.

```python
# To attach the block ids to the mesh.
m = sn.UnstructuredMesh.from_exodus(
    filename, attach_element_block_indices=True
)

# To only read given blocks.
m = sn.UnstructuredMesh.from_exodus(
    filename,
    select_element_block_indices=[2, 4],
)
```
Added `mesh.transform_parameter`, which will automatically try to convert the parameters defined on a mesh to your chosen material, if they are compatible, and returns a new mesh. This is ideal when a model in a different parametrization is desired.

```python
import salvus.namespace as sn
from salvus import material

d = sn.domain.dim2.BoxDomain(x0=0, x1=10, y0=0, y1=10)
mr = sn.MeshResolution(reference_frequency=1)

mesh_material_iso = sn.layered_meshing.mesh_from_domain(
    domain=d,
    model=material.acoustic.Velocity.from_params(vp=1.0, rho=1.0),
    mesh_resolution=mr,
)

mesh_material_tho = sn.layered_meshing.mesh_from_domain(
    domain=d,
    model=material.acoustic.elliptical_hexagonal.Velocity.from_params(
        vpv=1.0, vph=1.0, rho=1.0
    ),
    mesh_resolution=mr,
)

converted_mesh = mesh_material_iso.transform_material(
    material.acoustic.elliptical_hexagonal.Velocity
)
assert converted_mesh == mesh_material_tho
```
A new material (`material.elastic.hexagonal.Thomsen`) has been added. It provides `from_acoustic`, which needs injection of just two extra parameters, and it can convert back using `to_acoustic`.

```python
from salvus.flow.simple_config.source.srf_file_reader import _read_srf_file

srf_data = _read_srf_file(
    "example.srf",
    plot=True,
)
```
```python
def fixed_vp_vs_ratio(
    proposed_model: sn.UnstructuredMesh,
    prior_model: sn.UnstructuredMesh,
) -> sn.UnstructuredMesh:
    vp_vs_ratio = ...  # some value
    model_update = proposed_model.copy()
    model_update.elemental_fields["VP"] = (
        vp_vs_ratio * model_update.elemental_fields["VS"]
    )
    return model_update


mapping = sn.Mapping(
    inversion_parameters=["VS"],
    scaling="relative_deviation_from_prior",
    postprocess_model_update=fixed_vp_vs_ratio,
)
```

The `mapping` can then be passed to an `InverseProblemConfiguration` to enforce a fixed VP/VS ratio without inverting for VP.

```
%load_ext salvus
%salvus_snippets
```
`salvus.data.external_data_to_hdf5` takes an `xarray.Dataset`, validates it, and converts it to a Salvus-compatible HDF5 file, which can be added to a project.

```python
from salvus.data import external_data_to_hdf5

for event in project.events.list():
    external_data_to_hdf5(
        data=data_per_event,
        receivers=event.receivers,
        receiver_field="displacement",
        output_filename=external_data_path,
    )
    project.waveforms.add_external(
        data_name="observed_data",
        event=event.event_name,
        data_filename=external_data_path,
    )
```
Strings returned by `get_bm_string` should now contain at most duplicated pairs. This issue may have affected those generating background models in SalvusProject from an n-dimensional xarray dataset.

…(`salvus.mesh.layered_mesher.material`); accessing it this way is not encouraged.

```python
# Acoustic isotropic material.
material_sandstone = material.acoustic.Velocity.from_params(
    rho=1500, vp=1875
)

# Or something more exotic.
material_weird_sandstone = material.elastic.triclinic.TensorComponents(
    ...
)

mc_sandstone = sn.ModelConfiguration(
    background_model=sn.model.background.homogeneous.FromMaterial(
        material_sandstone
    )
)

sc_sandstone = sn.SimulationConfiguration(
    name="sandstone",
    model_configuration=mc_sandstone,
    ...
)
```
Added `find_side_sets_enclosing` as a method on meshes to find all side sets that enclose the mesh, by first finding all planar surfaces and constructing the remainder surface from all unclaimed facets.

```python
def forward_norm(data_synthetic, data_observed, sampling_rate_in_hertz):
    ...


def jacobian_norm(
    adjoint_source, data_synthetic, data_observed, sampling_rate_in_hertz
):
    ...


norm = sn.TraceNormalization(forward=forward_norm, jacobian=jacobian_norm)

p += sn.MisfitConfiguration(
    ...,
    normalization=norm,
)
```
Added `events` as a setting to `p.viz.nb`'s `misfits`, to visualize misfits only for selected events. If not passed, the previous default of all events is used. Also allows a silent query via `p.simulations.query()`, which is useful when calling query often while keeping output down.

Added `fast_unsafe` as a setting to `p.viz.nb`'s `shotgather` and `custom_gather`, to visualize data not on the densest but on the sparsest time axis of the selected datasets. This might alias data.

Changed the behavior of the `receiver_sampling_rate_in_time_steps` of a simulation configuration: by default, Salvus will now drop the final data point if it does not lie on the regular time grid of the subsampled simulation output, e.g. with 100 time steps but a receiver sampling interval of 11. These are the sampling rates that can be set via:

```python
sn.WaveformSimulationConfiguration(
    receiver_sampling_rate_in_time_steps=11, ...
)
```
…`EventData.get_waveform_data(..., enforce_regular_time_grid=False)`, but note that the resulting data will only be regularly spaced for all but the last sample.

`get_time_axis_from_meta_json` now optionally allows passing the output type for which the time axis needs to be retrieved, defaulting to the simulation's time axis (no subsampling).

…`extra_output_configuration`.

```python
p.simulations.get_simulation_output_directories(
    simulation_configuration="my_simulation", events=p.events.list()
)
```
Added `salvus.material.from_params()` and `salvus.material.from_dataset()` methods that will automatically find and initialize the appropriate material class.
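As a quick sketch of the intended use (the exact parameter combination accepted here is an assumption; `from_params` is expected to dispatch on the parameter names it is given):

```python
from salvus import material

# With only vp and rho given, this should resolve to an isotropic
# acoustic velocity material (assumption).
mat = material.from_params(vp=1500.0, rho=1000.0)
```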
Added `p.simulations.run()`. This function waits for all simulations to finish before returning and enables a new way of parallelism. In combination with an `sn.SiteConfig`, this new function can also run several simulations in parallel on a local (or ssh) system, which can give a substantial speed-up for use cases with many small simulations.

```python
task_chain_config = TaskChainSiteConfig(
    site_name="test_site",
    number_of_parallel_workers=5,
    ranks_per_salvus_simulation=2,
    max_threads_per_worker=2,
    shared_file_system=True,
)

p.simulations.run(
    events=p.events.list(),
    simulation_configuration="my_simulation",
    site_config=task_chain_config,
)
```
…use `p.simulations.launch()` for those instead.

The dependency on `libmpicxx` has been removed. Salvus runs with any ABI-compatible MPI implementation such as MPICH; only the C interface and library are required at runtime.

…`libmpicxx`.

Fixed a bug in `salvus-cli status` that would throw an index error when trying to query empty job arrays from the database.

```
%load_ext salvus
%salvus_snippets
```
`sn.simple_mesh.linear_solid.LinearSolid` has been removed and superseded. The class `sn.LinearSolids` can be used directly for a constant-Q approximation. Alternatively, and for more fine-grained control, the coefficients of the standard linear solids (SLS) can be obtained using `salvus.material.attenuation.lsqr_fit_q_factor_model`.
New functions in `salvus.mesh.algorithms.unstructured_mesh_utils`: `retain_unique_facets`, `uniquefy_side_sets`, `get_internal_side_set_facets`, and `disconnect_along_side_set`.
Added `p.inversions.get_misfits()` and `p.entities.get_inverse_problem_configurations()`.

…ObsPy. If the `wurlitzer` library is installed, it will now be used to suppress these warnings.
New `EventData` convenience methods:

- `EventData.get_wavefield_output()`: parses outputs to `WavefieldOutput` objects.
- `EventData.get_associated_salvus_job()`: returns a potentially still existing Salvus job for the `EventData` object.
- `EventData.get_remote_extra_output_filenames()`: gets the paths of remote output files.
- `EventData.download_extra_outputs()`: downloads remote wavefield output data.
- `EventData.delete_associated_job_and_data()`: deletes potentially still existing remote wavefield data.

New layered-meshing utilities, all found in the `salvus.mesh.layered_meshing.utils` module (see the import sketch after this list):

- `split_layered_model`: allows a layered model to be split into two based on conditions targeting the model's interfaces or materials.
- `flood`: allows a layer's material parameters to be "flooded", or extruded, in the vertical direction.
- `blend`: supports the blending of two materials; it currently supports the averaging of constant parameters, as well as the linear or cosine-taper-based blending of discrete models along their vertical coordinate.
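A minimal import sketch for these utilities; the calls in the comments are hypothetical and only indicate the intended workflow:

```python
from salvus.mesh.layered_meshing.utils import (
    blend,
    flood,
    split_layered_model,
)

# Hypothetical usage:
# upper, lower = split_layered_model(model, ...)  # split on a condition
# flooded = flood(upper, ...)                     # extrude parameters vertically
# blended = blend(upper, lower, ...)              # blend two materials
```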
Added compatibility with `numpy>=1.26.0` and `xarray>=2023.10`.

…the `extra_output_configuration` argument passed to `p.simulations.launch()`. It will detect if the simulations have previously been run with different output settings and will optionally overwrite existing results.

…a `ValueError`, which was hard to recover from without manually removing the delinquent stations from the observed data.

Fixed `get_misfit_comparison_table()`, which would trigger an exception when passing the same data for comparison multiple times.
With `final_time_data` it is possible to output the primary fields (`displacement`, `velocity`, `phi`, `phi_t`) at final time on an unstructured mesh. The output file will thus be independent of the partition and the number of ranks used in the simulation. It can be enabled through `extra_output_configuration` in `p.simulations.launch`:

```python
extra_output_configuration={
    "final_time_data": {"fields": ["displacement"]},
},
```

…or on a `simple_config` simulation object:

```python
w.output.final_time_data.format = "hdf5-minimal"
w.output.final_time_data.fields = ["displacement"]
w.output.final_time_data.filename = "final_time.h5"
```
Changed the plotting interface of `shotgather`; it will no longer accept width, height and DPI, but instead accepts a dataclass with those fields, or an existing axes, passed as `plot_using`.
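For example, to draw into an existing matplotlib axes (a hedged sketch; the other arguments follow the `shotgather` calls shown elsewhere in these notes):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 4))

# Pass an existing axes via plot_using; a dataclass holding width,
# height and DPI is accepted as well.
p.viz.shotgather(
    data="simulation",
    event="event",
    receiver_field="phi",
    component="A",
    plot_using=ax,
)
```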
Added a `compute_max_distance_in_m()` function that computes the maximum distance between any two points in a given set in Cartesian or spherical coordinates. A new `.estimate_max_travel_distance_in_m()` method is available for all domain objects utilizing the new function. This is useful, for example, to judge the approximate time waves will take to travel through a given domain.

```python
from salvus.mesh.tools.transforms import interpolate_mesh_to_mesh

int_mesh = interpolate_mesh_to_mesh(
    ...,
    fields_to_interpolate=["custom"],
)
```
…an `EventBlockData` object in a custom axes frame.

Added compatibility with `pandas>=2.1.0`.

Fixed a `ValueError` raised in some cases when adding events through `p.actions.seismology.add_asdf_file()`.

…the `layer` id, which increments the layer id at the surface by one.

Fixed a `ZeroDivisionError` if there were one or more events without windows in the data selection.

Fixed a `TypeError` for constant traces.
The `salvus-cli upgrade` command now works in cases where the upgrade changes the initialization logic of Salvus.

The `salvus.mesh.chunked_interface.create_mesh_chunkwise()` function is now deprecated.
`start_time_in_seconds` and `end_time_in_seconds` can now be passed to the respective output groups in a simulation `Waveform(...)` object. Times passed are always truncated (floored) to the preceding time step.

```python
w = sn.simple_config.simulation.Waveform(mesh=m)

# If all settings for the same type of boundary condition are the same,
# the side sets will be merged.
w.add_boundary_conditions([
    sn.simple_config.boundary.HomogeneousDirichlet(side_sets=["x0"]),
    sn.simple_config.boundary.HomogeneousDirichlet(side_sets=["x1"]),
])

# However, using different settings for a boundary type will raise an error.
w.add_boundary_conditions([
    sn.simple_config.boundary.Absorbing(
        side_sets=["x0"], taper_amplitude=0.0, width_in_meters=0.0
    ),
    sn.simple_config.boundary.Absorbing(
        side_sets=["x1"], taper_amplitude=1.0, width_in_meters=1.0
    ),
])
```
…the `event_origin_time` keyword argument. We believe this should not affect any users; if it does, the error message is very clear and fixing it is straightforward.

…convert a `WavefieldOutput` object to an `xarray`.

Wavelet inversion can be performed via the `salvus.modules.source_inversion` submodule by using one of the following functions (see the sketch after this list):

- `invert_wavelet`: accepts the observed and synthetically generated data as NumPy arrays and returns the inverted wavelet as a NumPy array.
- `invert_wavelet_from_event_data`: performs the same inversion as `invert_wavelet`, but accepts `EventData` objects instead of NumPy arrays. This is generally the more convenient approach when used together with SalvusProject.
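A hedged sketch of the NumPy-array variant (the argument names and array shapes are assumptions; consult the function's docstring for the exact signature):

```python
import numpy as np

from salvus.modules.source_inversion import invert_wavelet

# Hypothetical (n_receivers, n_samples) arrays.
observed = np.zeros((10, 1000))
synthetic = np.zeros((10, 1000))

# Returns the inverted wavelet as a NumPy array.
wavelet = invert_wavelet(data_observed=observed, data_synthetic=synthetic)
```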
Added a `custom_commands` setting for Slurm sites to add custom bash commands to a job script.
Added `mesh.adjust_side_sets.adjust_side_set`, which allows for the re-interpolation of a mesh's vertical side set to a new digital elevation model, potentially at a higher order than originally used. This can be useful, for instance, when changing frequency bands and taking advantage of the smaller mesh size to better resolve topography, when changing a mesh's model order, or when re-interpolating after adding local refinements. This re-interpolation also now happens by default when using the layered meshing interface.

```python
p.simulations.launch(
    simulation_configuration="my_simulation",
    events=p.events.list(),
    ranks_per_job=2,
    site_name="local",
    extra_output_configuration={
        "frequency_domain": {
            "frequencies": [10.0],
            "fields": ["phi"],
            "start_time_in_seconds": 1.0,
            "end_time_in_seconds": 1.1,
        }
    },
)
```
Added a `salvus.utils.logging.log_timing()` context manager that logs the time the context takes to execute.
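A minimal sketch (passing a message string is an assumption):

```python
from salvus.utils.logging import log_timing

# Logs how long the body of the context took to execute.
with log_timing("expensive step"):
    result = run_expensive_step()  # placeholder for any work
```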
Added the ability to extract surface output via the `salvus.toolbox.helpers.wavefield_output` module. To use this, one simply needs to pass "surface" as an output type when loading the wavefield output. Also added the ability to drop dimensions from outputs, so 3-D surface outputs can be visualized in 2-D planar plots (such as in matplotlib). This can be accessed by calling the `.drop(...)` function on a `WavefieldOutput` instance.

Added a function `name_free_side_set` to `salvus_mesh_utils`. This function finds all mesh surfaces not currently assigned to a side set and either a) creates a new side set with the name passed, assigning the found surfaces, or b) appends to an existing side set of the same name if it already exists in the mesh.

Added a new module `salvus.toolbox.helpers.wavefield_output` to help encapsulate and manipulate raw volumetric wavefield output from SalvusCompute. Additional functions allow for the extraction of time-dependent wavefield data from said files to regular grids in space and time.

Added `frequency-domain` output for (time-dependent) volume and surface data. In combination with the static frequency-domain output, this enables storing the time-evolution of the discrete Fourier transform with the polynomial degree of the SEM shape functions.

Added a `job_script_shebang` setting for Slurm sites to allow overwriting the job script shebang.

Optimized `apply_element_mask`, which can lead to significant speed-ups for meshes that neither have nodal parameters nor layered topography.

`max_concurrent_chains` can be passed to the `TaskChainSiteConfig` to limit the maximum number of concurrent chains. Note that when using more than one site, this parameter has to be consistent for all sites in the current implementation.

```python
p.simulations.cancel(
    simulation_configuration="my_simulation", events=p.events.list()
)
```
Additionally, `p.simulations.cancel_all()` will cancel all simulations from the simulations store.

…the `AbsorbingBoundary` constructor.

Salvus now uses `shutil.copyfile()` instead of `shutil.copy()` and `shutil.copy2()` to avoid problems copying between file systems with and without permission bits.

Added `use_event_dependent_gradients` for the trust-region method. When set to `True`, only the accumulated gradient for all events is used, and tasks of type `TaskMisfitsAndSummedGradient` are issued instead of `TaskMisfitsAndGradients`.

Added the option `discontinuous_model_blocks` to the mapping function. When used in combination with a `homogeneous` scaling, this function allows for the parameterization of piecewise constant models.

Added support for `ipywidgets>=8.0.0`. The changes are backward compatible with older versions of `ipywidgets`.

Added the option to specify `control_group_events` in the constructor of an iteration.

Added `max_events_per_job_submission` for the inversion action component to limit the maximum number of events that are simultaneously submitted per job, to reduce the memory overhead.

Added `event_batch_selection` to the `InverseProblemConfiguration`, which allows defining an iteration-dependent selection of events and/or control groups to check the trial model only for a subset of events.

The `job_submission` settings can now optionally receive a `TaskChainSiteConfig` to compute misfits in parallel and to chain forward and adjoint simulations.

Added an `omit_tasks_per_node` setting for Slurm sites. This causes `ntasks-per-node` to be omitted both for the `#SBATCH` command and the call to `srun`, for the rare site where this is necessary.

```
"EXTERNAL_DATA:raw_data | bandpass(1.0e3, 2.0e3) | normalize"
```
Added `strain`, `gradient-of-displacement` and `gradient-of-phi` as valid receiver fields in the misfit configuration and related event data and event misfit objects. The corresponding components are:

- 2-D: `XXX`, `XXY`, `XYX`, `XYY` and `XG0`, `XG1`, `XG2`, `XG3`
- 3-D: `XXX`, `XXY`, `XXZ`, `XYX`, `XYY`, `XYZ`, `XZX`, `XZY`, `XZZ` and `XG0`, `XG1`, `XG2`, `XG3`, `XG4`, `XG5`, `XG6`, `XG7`, `XG8`
Fixed a bug in `p.viz.waveforms()`, which caused the function to fail with a cryptic error message when `data` was passed as a string.

Added `QKAPPA` as a material parameter.

…`SegyEvent`, but it should be straightforward to upgrade.

Added the option to pass extra arguments to `paramiko.SSHClient.connect()` to allow more fine-tuning for some SSH connections.

```toml
[sites.my_site.ssh_settings]
hostname = "some_host"
username = "some_user"

[sites.my_site.ssh_settings.extra_paramiko_connect_arguments.disabled_algorithms]
pubkeys = ["rsa-sha2-512", "rsa2-sha2-256"]
```
The `init-site` command now has a `--verbose` flag to facilitate debugging tricky connections:

```bash
salvus-cli init-site my_site --verbose
```
Added an `mpirun_template` parameter for `ssh` and `local` site types that allows full customization of the actual call to `mpirun` in case it is necessary.

```toml
[sites.my_site.site_specific]
mpirun_template = "/custom/mpirun -machinefile ~/mf -n {RANKS}"
```

This will use the specified `mpirun` executable with a non-standard argument for all Salvus runs on that site. The `{RANKS}` argument will be filled in by Salvus with the number of ranks for each simulation.
`UnstructuredMeshSimulationConfiguration` objects are now properly recognized when trying to overwrite an existing configuration of the same name.
Added `StructuredModel` to SalvusOpt to invert models parameterized on a regular grid using `xarray.Dataset`s.

…`WaveformSimulationConfiguration`.

…the `Stats` tab of the iteration widget.

…can be specified in the `WaveformSimulationConfiguration`. This is useful for Dirichlet-type boundaries or absorbing boundaries of an `UnstructuredMeshSimulationConfiguration`. Boundaries specified here will be applied in addition to ocean load and/or absorbing boundaries specified as `AbsorbingBoundaryParameters` in the `SimulationConfiguration`. A `ValueError` is raised for duplicated conditions on a side set.
If simulations fail during `compute_misfits`, the simulation results are considered corrupted and will be automatically deleted. This means that those simulations will be resubmitted when calling `compute_misfits` again.

…`TaskChain`.

Removed the `StructuredGrid2D`, `StructuredGrid3D` and `Skeleton` classes and replaced them with new `MeshBlock` and `MeshBlockCollection` classes.
…`iterate()` and `resume()`: specifying `site_name`, `ranks_per_job` and `wall_time_in_seconds_per_job` is no longer supported. Instead, the `job_submission_settings` either need to be passed to the constructor of the `InverseProblemConfiguration` or by using `p.inversions.set_job_submission_configuration()`.

```python
p.simulations.query(
    simulation_configuration="my_simulation",
    misfit_configuration="my_misfit",
    wavefield_compression=sn.WavefieldCompression(
        forward_wavefield_sampling_interval=15
    ),
    events=p.events.list(),
)
```

```bash
wget https://mondaic.com/environment.yml -O ~/environment.yml
conda env update -n salvus -f ~/environment.yml
```
Added a `TaskChain` workflow primitive that can be used to run multiple Salvus jobs and/or Python scripts in a linear chain within a single local/ssh/HPC job.

```python
import pathlib
import typing

import salvus.namespace as sn

# Construct a forward simulation object.
w_forward = sn.simple_config.simulation.Waveform(
    ..., store_adjoint_checkpoints=True
)

# Construct an adjoint simulation object.
w_adjoint = sn.simple_config.simulation.Waveform(...)
...

# There is a new `PROMISE_FILE:` prefix to tell SalvusFlow that a file does
# not exist yet but it will exist when the simulation is run.
w_adjoint.adjoint.point_source_block = {
    "filename": "PROMISE_FILE:task_2_of_2/input/adjoint_source.h5",
    "groups": ["adjoint_sources"],
}


# Define a Python function that is run between forward and adjoint simulations
# to generate the adjoint sources.
def compute_adjoint_source(
    task_index: int, task_index_folder_map: typing.Dict[int, pathlib.Path]
):
    folder_forward = task_index_folder_map[task_index - 1]
    folder_adjoint = task_index_folder_map[task_index + 1]

    output_folder = folder_forward / "output"

    event = sn.EventData.from_output_folder(output_folder=output_folder)

    event_misfit = sn.EventMisfit(
        synthetic_event=event,
        misfit_function="L2_energy_no_observed_data",
        receiver_field="displacement",
    )

    input_folder_adjoint = folder_adjoint / "input"
    event_misfit.write(input_folder_adjoint / "adjoint_source.h5")


# Launch the task chain. It will serialize the Python function and launch
# everything either locally or in Slurm/other supported systems.
tc = sn.api.run_task_chain_async(
    site_name="local",
    tasks=[w_forward, compute_adjoint_source, w_adjoint],
    ranks=4,
)
# Wait until it finishes.
tc.wait()
```

```python
mesh = p.simulations.get_mesh("sim")

# Modify scaling parameters for all fields.
mesh.elemental_fields["VP"] = ...
mesh.elemental_fields["RHO"] = ...

m = Mapping(
    scaling=mesh,
    inversion_parameters=["M"],
    map_to_physical_parameters={"VP": "M", "RHO": "M"},
)
```
…`model.background.one_dimensional.FromBm`.

```python
sn.WavefieldCompression(
    forward_wavefield_sampling_interval=N
)
```

…the forward wavefield is subsampled with an interval of `N` during the adjoint run. `forward_wavefield_sampling_interval=1` corresponds to no compression and is equivalent to what was done prior to this release.

The default is `forward_wavefield_sampling_interval=1`, which is consistent with Salvus `<= 0.11.47`.

```python
# Inversion actions:
p.action.inversion.compute_misfits(
    ...,
    derived_job_config=sn.WavefieldCompression(
        forward_wavefield_sampling_interval=5
    ),
    ...
)

p.action.inversion.compute_gradients(
    ...,
    wavefield_compression=sn.WavefieldCompression(
        forward_wavefield_sampling_interval=5
    ),
    ...
)

p.action.inversion.sum_gradients(
    ...,
    wavefield_compression=sn.WavefieldCompression(
        forward_wavefield_sampling_interval=5
    ),
    ...
)

# InverseProblemConfiguration
sn.InverseProblemConfiguration(
    ...,
    wavefield_compression=sn.WavefieldCompression(
        forward_wavefield_sampling_interval=5
    ),
    ...
)
```
`store_adjoint_checkpoints` in `p.simulations.launch(...)` and `store_checkpoints` in `p.actions.inversion.compute_misfits(...)` are deprecated. Instead, use:

```python
derived_job_config=WavefieldCompression(
    forward_wavefield_sampling_interval=N
),
```

`derived_job_config=None` is the new default, which corresponds to the deprecated `store_adjoint_checkpoints=False` or `store_checkpoints=False`, respectively.

`compute_misfits` and `compute_gradients` in `p.action.inversion` are now keyword-only functions. Additionally, we made the wavefield compression settings mandatory arguments of a few lower-level functions, such as:

```python
p.misfits.get_gradient_filenames()
p.simulations.get_adjoint_input_files()
p.action.validation.validate_model_gradients()
```

You may encounter errors like:

```
TypeError: compute_gradients() needs keyword-only argument wavefield_compression
```
…`UnstructuredMeshSimulationConfiguration`.

```toml
[[sites.site_name.site_specific.additional_qsub_arguments]]
name = "pe"
value = "mpi {NODES * TASKS_PER_NODE}"

[[sites.site_name.site_specific.additional_qsub_arguments]]
name = "l"
value = "ngpus={int(ceil(RANKS / 12))}"
```
Added a `lat_lng_to_utm_crs()` helper routine to directly get a pyproj UTM CRS object from a point given in latitude and longitude.

Added a `salvus-cli` alias for the `salvus-flow` command line call. This will become the default at some point.

…`p.misfits.compute_adjoint_source()`.

…an `EventWindowAndWeightSet` in case the window selection routine did not pick a single window for a chosen event.

```python
p.actions.seismology.get_events_with_windows("DSC_NAME")
```
`UTMDomain` objects no longer require the ellipsoid to be passed.

Added support in `salvus-flow upgrade` and `salvus-flow upgrade-site` for double-precision versions using the `salvus_f64` binary.

The `init-site` script will wait a bit longer for the stderr in case it is not yet available due to some synchronization delays on shared and parallel file systems.

The `UnstructuredMesh.extrude_side_set_2D()` method now also works as expected for higher-order meshes. Additionally fixed an issue with inverted elements in certain scenarios.

…an `InverseProblemConfiguration` after the project has been transferred to another Python environment with a different site configuration.

```python
Event(
    event_name="event",
    sources=[point_src1, point_src2],
    receivers=receivers,
)
...
EventConfiguration(
    waveform_simulation_configuration=...,
    wavelet=[
        simple_config.stf.Ricker(center_frequency=1.0),
        simple_config.stf.Ricker(center_frequency=2.0),
    ],
)
```
…`int` instead of `float` types.

```python
p.viz.shotgather(
    data="simulation", event="event", receiver_field="phi", component="A"
)
```

…`extra_output_configuration` to the launch function.

```python
p.simulations.launch(
    ranks_per_job=4,
    site_name="local",
    events=p.events.list(),
    simulation_configuration="simulation",
    extra_output_configuration={
        "frequency_domain": {
            "fields": ["displacement"],
            "frequencies": [1.0, 2.0, 3.0],
        }
    },
)
```

```python
event_misfit = EventMisfit(
    ...,
    # Optionally downsample to the given number of npts.
    max_samples_for_misfit_computation=max_samples_for_misfit_computation,
)
```

```python
mc = MisfitConfiguration(
    ...,
    # Optionally downsample to the given number of npts.
    max_samples_for_misfit_computation=max_samples_for_misfit_computation,
)
```
```python
from salvus.modules.near_surface.processing import (
    convert_point_to_line_source,
)

new_st = convert_point_to_line_source(
    st=st,
    source_coordinates=[22.0, 1.0],
    receiver_coordinates=[132.0, 0.0],
    transform_type="single_velocity_exact",
    velocity_m_s=550.0,
)
```
```python
import numpy as np

from salvus.modules.near_surface.processing import geophone_response

frequencies = np.logspace(0.1, 2, 2000)
response = geophone_response.compute_geophone_response(
    frequencies=frequencies,
    geophone_frequency=4.5,
    damping_ratio=0.4,
    calibration_factor=11.2,
)
geophone_response.plot_response(frequencies=frequencies, response=response)
```
L. Métivier, R. Brossier, Q. Mérigot, and E. Oudet (2019). "A graph space optimal transport distance as a generalization of Lp distances: application to a seismic imaging inverse problem." Inverse Problems, Volume 35, Number 8. https://doi.org/10.1088/1361-6420/ab206f
"graph_space_optimal_tranport"
everywhere Salvus
accepts misfit functionals. The one tuning parameters is the
"max_expected_time_shift"
which, as the name implies, should be set to the
maximum expected time shift in seconds for individual wiggles between observed
and synthetic data."EXTERNAL_DATA:raw_data | bandpass(1.0, 2.0) | normalize"
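A hedged sketch of selecting this misfit in a project (the mechanism for forwarding the tuning parameter is an assumption; only the two quoted names above are given by this entry):

```python
mc = sn.MisfitConfiguration(
    name="gsot_misfit",
    observed_data="observed_data",
    misfit_function="graph_space_optimal_tranport",
    # Assumed keyword for forwarding tuning parameters:
    extra_kwargs_misfit_function={"max_expected_time_shift": 0.5},
    receiver_field="displacement",
)
```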
…`"EXTERNAL_DATA:raw_data | bandpass(1.0, 2.0) | normalize"` as the data name. The available processing functions are:

- `time_shift({SHIFT})`
- `normalize`
- `scale({FACTOR})`
- `flip`
- `bandpass({FREQ_MIN}, {FREQ_MAX}[, zerophase][, corners={CORNERS}])`

```python
p.viz.shotgather(
    data=[
        "PROCESSED_DATA:corrected_data_10_15_hz | normalize",
        "SYNTHETIC_DATA:starting_model_10_15_hz | normalize",
    ],
    event="shot_049",
    receiver_field="velocity",
    component="Y",
    sort_by=lambda r: r.location[0],
)
```
…the `mesh_from_xarray` function in the Salvus Toolbox. This allows one to override the default spline interpolation routine, which may be useful in the presence of strong velocity contrasts.

…`simple_config.Waveform` objects by lazy evaluation of `UnstructuredMesh` objects.

```python
p.inversions.delete_disposable_files(
    inverse_problem_configuration="my_inversion",
    data_to_remove=["auxiliary", "waveforms", "gradients"],
)
```

…`auxiliary`, `waveforms`, `gradients`, because gradients are the most expensive to recompute.

```python
# Delete waveforms.
p.simulations.delete_results(
    simulation_configuration="my_sim_config", events=["event"]
)

# Delete gradients.
p.simulations.delete_results(
    simulation_configuration="my_sim_config",
    misfit_configuration="my_misfit_config",
    events=p.events.list(),
)
```

```python
mesh = basic_mesh.CartesianHomogeneousAcoustic2D(
    vp=1500.0,
    rho=1000.0,
    x_max=1.0,
    y_max=0.5,
    max_frequency=1e3,
).create_mesh()

mesh = mesh.extrude_side_set_2D(
    side_set="x1",
    offsets=np.array([0.0, 2.0, 4.0]),
    direction="x",
)
```
For example, pass `y0=-np.infty` to deform all elements in the y direction with negative y coordinates.

```python
mesh = basic_mesh.CartesianHomogeneousAcoustic2D(
    vp=1500.0, rho=1000.0, x_max=1.0, y_max=2.0, max_frequency=10e3
).create_mesh()
mesh.points[:, 1] -= 0.5

mesh.add_dem_2D(
    x=np.array([0.0, 1.0]),
    dem=np.array([-0.25, 0.25]),
    y0=-np.infty,
    y1=np.infty,
    kx=1,
    ky=1,
)
mesh.apply_dem()
```

Added two new gradient output formats: `hdf5-full` and `hdf5-minimal`. `hdf5-full` contains additional fields stored in the input mesh, e.g., the index of layers / regions. `hdf5-minimal` will only write the data fields of the gradient, but neither coordinates, connectivity, nor an XDMF file. The default format `hdf5` remains unchanged.
Added a `.to_list()` function to all receiver collections so that multiple collections can more easily be appended to each other.

Specific side sets can now be declared free surfaces via the `free_surface` argument of `AbsorbingBoundaryParameters`. These side sets will in turn not be extended with absorbing layers (if they are requested), and no absorbing conditions will be placed on them. This is useful, for instance, to model a thin plate (`free_surface=["z0", "z1"]`).
```python
mesh = p.simulations.get_mesh("initial_model")

lb = mesh.copy()
lb.elemental_fields["VP"] *= 0.8
lb.elemental_fields["RHO"] *= 0.8

ub = mesh.copy()
ub.elemental_fields["VP"] *= 1.2
ub.elemental_fields["RHO"] *= 1.2

p.inversions.set_constraints(
    inverse_problem_configuration="my_third_inversion",
    constraints={
        "lower_bounds": lb,
        "upper_bounds": ub,
    },
)
```
Compatibility with newer versions of `xarray` and `SQLAlchemy`. A consequence of this is that the Python `netCDF4` package is now a hard dependency.

Compatibility with newer versions of `xarray` and `SQLAlchemy`. The `netCDF4` package is now a hard dependency, which should improve compatibility when reading and writing files with `xarray`.

The `salvus-flow add-site` wizard will now also ask if it should run Salvus using GPUs on a newly configured site.

Fixed a bug in the `SiteNameValidator` that caused `add-site` to crash when no other site has been initialized yet.

Fixed an issue in `salvus_mesh_utils._match_layers` that could crop up if a mesh was generated outside of SalvusProject.

Fixed an issue where `get_enclosing_elements` would not return the proper reference coordinates.
Added new interpolation routines in `salvus.mesh.tools.transforms`. Along with full-mesh interpolations, layer-to-layer discontinuity-preserving interpolations and "flattened" interpolations handling topographic deformations are supported. Additional helper functions for re-meshing applications have also been added to `salvus.mesh.salvus_mesh_utils`. One of these is a function to extract a conservative 1-D model from an unstructured mesh, which helps to ensure that a new mesh can be generated that respects both the 3-D minimum velocities and existing discontinuities. Another is a function to match up layers between two unstructured meshes in the case that their IDs are different. This is helpful in the case where the interpolation should respect pre-defined discontinuities in the model.
…a `simple_config.simulation.Waveform` object using `output.frequency_domain`.

…a `SphericalChunkDomain` together with a spherical polygon to further constrain the spatial extent. Most importantly, conversions between WGS84 and fully spherical coordinates are now consistent with the rest of Salvus. The in-notebook domain plot now also works and shows the original spherical chunk together with the domain polygon.

Setting `max_simultaneous_jobs_in_job_array = 4` for a site results, for example, in the `%4` suffix in `--array=0-15%4`.
…`WaveformSimulationConfiguration`.

…pass `raise_on_deletion_failure=True` when using the low-level SalvusFlow API calls:

```python
sn.api.run(..., raise_on_deletion_failure=True)
sn.api.run_many(..., raise_on_deletion_failure=True)
```
The spectral element order can now be passed to the `WaveformSimulationConfiguration`:

```python
w = WaveformSimulationConfiguration(spectral_element_order=6)
```

The default order remains `4`, as in all previous releases.

Added `extract_model_to_regular_grid` to the `salvus.mesh.salvus_mesh_utils` module, which allows users to regularly sample a given mesh at locations specified by an `xarray` dataset. Additional improvements include a more flexible API for surface topography in UTM domains, a memory-use mitigation when running many shots with the same mesh, and the automatic up-casting of related types in certain schemas (for example: "2 elements per wavelength" is now upcast to "2.0 elements per wavelength").

```python
w_copy = w.copy(copy_mesh=True)
```
A new function `extract_model_to_regular_grid` has been added to the `salvus_mesh_utils` module. This function performs an interpolation from a general, unstructured mesh (spherical or Cartesian) to a set of points defined on an `xarray` dataset. This can be useful, for instance, to create slices through a model for efficient visualization and analysis. See the function's docstring for additional examples, information, and tips.
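A hedged usage sketch (the dataset layout is an assumption; the docstring is authoritative):

```python
import numpy as np
import xarray as xr

from salvus.mesh.salvus_mesh_utils import extract_model_to_regular_grid

# A horizontal slice through a Cartesian mesh at z = 500 m.
ds = xr.Dataset(
    coords={
        "x": np.linspace(0.0, 1000.0, 101),
        "y": np.linspace(0.0, 1000.0, 101),
        "z": [500.0],
    }
)

ds_slice = extract_model_to_regular_grid(mesh, ds, ["VP", "RHO"])
```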
…a parameter to the `SurfaceTopography` object that restricts the mesh deformation above a given z-value. This is useful, for instance, when placing a layer (i.e. the atmosphere) above a free surface with topography. Related constructors (such as `SurfaceTopography.from_gmrt_file()`) optionally take this parameter as well.

```bash
wget https://mondaic.com/environment.yml -O salvus-0.11.26.yml
conda env update -n salvus -f salvus-0.11.26.yml
```
`SalvusJobArray.copy_output(..., copy_partial_results=True)`: `copy_partial_results` defaults to `True`. The same option has been added to the `salvus.flow.functions.run_many()` command and also defaults to `True` there.

`extra_output_configuration` passed to `launch` overwrites additional output values defined in the `WaveformSimulationConfiguration`. The output options in the `WaveformSimulationConfiguration` will be deprecated in a future release.
The `p.waveforms.has_data()` method now also works for synthetic data.

A new function, `salvus.flow.utils.simulation_src_rec_to_vtk`, has been added to facilitate this. As arguments it takes a `simulation.Waveform` object, along with a string specifying what type of entity one wants to generate a `.vtk` file for (i.e. source or receiver). Check the function's docstring for advice on how to open the resultant file in Paraview.

```python
sn.Mapping(
    scaling="relative_deviation_from_prior",
    inversion_parameters=["VSV", "VSH", "VP", "RHO"],
    map_to_physical_parameters={
        "VPV": "VP",
        "VPH": "VP",
    },
)
```
Here, both `VPV` and `VPH` are tied to the same inversion variable `VP`.

Added `memory_per_rank_in_MB` as an optional parameter to `SiteConfig`. This allows adjusting the memory settings for different machines during an inversion.

Added `store_checkpoints` and `cleanup_checkpoints` to the `job_submission` settings.

```python
job_submission={
    "forward": ...,
    "preconditioner": ...,
    "store_checkpoints": False,
    "cleanup_checkpoints": False,
}
```

If the former is `False`, checkpoints will only be stored when gradients are requested. If the latter is `False`, checkpoints will never be deleted automatically.
The `_plot` method of an `EventData` object now takes an additional optional parameter: `_exclude_cbar`. If set to `True`, a colorbar won't be added to the axis if a shotgather plot is requested. This makes it easier to produce shotgathers for multi-component data (i.e. for strain output).

…`p.viz.nb.gradients`.

…the `GenericModel` constructor.

The `~/.ssh/config` file is now respected.

```toml
[sites.site_name.ssh_settings]
hostname = "host"
username = "user"
interactive_login = true
```
New functionality in `salvus.mesh.tools.transforms`: `uniformly_refine_chunkwise()` wraps `uniformly_refine()`, working with a subset of elements at a time, writing each of the readily processed chunks to a temporary file and finally merging everything together. This can both be used to refine each element by subdivision into an arbitrary number of elements in each dimension and/or to increase the order of the shape mapping.

Similarly, `salvus.mesh.chunked_interface.create_mesh_chunkwise()` can be used instead of `create_mesh()` of `SphericalChunk3D`, `Cartesian3D` and `Cartesian2D`, using the same approach as described above to directly create large meshes that would otherwise reach the memory limitations of the machine.

```python
p.inversions.set_stopping_criteria(
    inverse_problem_configuration="...",
    criteria={
        "max_iterations_global": 10,
        "max_iterations": 5,
    },
)
```

The `global` settings apply to the entire inversion tree; the other settings apply to individual branches.
…(`teleseismic_2d`) has been added, fleshing out this feature.

See the `GenericModel` constructor for more details. Crustal and mantle models will now ignore fluid elements by default, to ensure that elastic models are not interpolated into oceans. If this behavior is undesirable, one has more control with the `GenericModel` interface.

With `force_trust_region_scaling`, badly scaled adjoint sources can be mitigated by enforcing scaling of the proposed model update to the trust-region radius.

Added a `python_binary` key to choose which Python is used to execute the minimal wrapper script used to launch and control SalvusCompute. This is only important for sites which do not have a Python on their default paths.

```toml
[sites.laptop]
...
[sites.laptop.site_specific]
python_executable = "/usr/local/bin/python"
```
Added support for `ProxyJump user@hostname` in `~/.ssh/config` files.

`salvus-flow init-site SITE_NAME` now always prints the disk usage of the Salvus-controlled run and temp directories. As `init-site` is called each time Salvus is upgraded, this serves as a periodic reminder to monitor the size of these directories.

What was previously:

```python
m.refine_locally(mask, spherical=True, r0_spherical=1.0, r1_spherical=2.0)
```
m.refine_locally( mask, interpolation_mode="spherical", interpolation_kwargs={"r0_spherical": 1.0, "r1_spherical": 2.0} )
m.refine_locally( mask, interpolation_mode="cylindrical", interpolation_kwargs={"r0_cylindrical": 1.0, "r1_cylindrical": 2.0} )
The `p.simulations.launch()` method can now take extra output configurations. These can be any setting that does not modify the receiver output, which currently are surface and volume data settings as well as the memory-per-rank buffer settings.

```python
p.simulations.launch(
    simulation_configuration="basic",
    events=p.events.list(),
    site_name="my_computer",
    ranks_per_job=12,
    extra_output_configuration={
        "volume_data": {
            "sampling_interval_in_time_steps": 50,
            "fields": ["displacement"],
        },
        "surface_data": {
            "sampling_interval_in_time_steps": 20,
            "fields": ["acceleration", "velocity"],
            "side_sets": ["x0", "x1"],
        },
        "memory_per_rank_in_MB": 2000.0,
    },
)
```

The `Project.from_domain()` constructor now takes an optional argument `load_if_exists`, which defaults to `False`. If this is set to `True`, and if the domains are identical, future runs of this constructor will simply load the existing project from disk rather than raising a "project already exists" error. This helps remove some boilerplate code which the user previously had to write.
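For example (assuming a domain object `d` defined earlier):

```python
import salvus.namespace as sn

# Subsequent runs load the existing project instead of raising a
# "project already exists" error.
p = sn.Project.from_domain(path="my_project", domain=d, load_if_exists=True)
```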
Fixed a bug in `p.viz.nb.domain` that did not display some source types correctly.

Attenuation can now be specified in a `ModelConfiguration` via the `LinearSolids` class (which can be found in `salvus.namespace`). Additional validations and checks have also been added to help the user when viscous simulations are either requested or thought to be desired.

…`xarray > 0.16.0`.

…`velocity` and `phi_t`. Additionally, the (time-frequency) phase misfits have seen some significant improvements and added flexibility.

…the `"z1"` side-set is assumed, and the receivers will be buried exactly `depth_in_meters` meters below this side-set.

…the `EventMisfit` object. This should not have any (or only a very minimal) effect on existing inversions but is more stable in certain edge cases.

The `EventMisfit` object can now compute fully consistent adjoint sources for measurements on receivers recording `velocity` or `phi_t` with any misfit functional. This functionality is also available from within SalvusProject.

```python
from salvus.opt.misfits import get_misfit_function

misfit, adjoint_source = get_misfit_function("phase_misfit")(
    data_observed=data_observed,
    data_synthetic=data_synthetic,
    sampling_rate_in_hertz=sampling_rate_in_hertz,
    frequency_limits=(0.1, 0.5),
    # These two are optional.
    absolute_value_threshold=1e-3,
    taper_type="hanning",
)
```
get_misfit_function("phase_misfit", version=1)
MisfitConfiguration
has been created.from salvus.opt.misfits import get_misfit_function misfit, adjoint_source = get_misfit_function("time_frequency_phase_misfit")( data_observed=data_observed, data_synthetic=data_synthetic, sampling_rate_in_hertz=sampling_rate_in_hertz, # These two are mandatory. segment_length_in_seconds=15.0, frequency_limits=(0.1, 0.5), # The others are optional. segment_overlap_fraction=0.75, absolute_value_threshold=1e-3, taper_type="hanning", stft_window="hann", )
Added a function `derive_bm_file()` to derive a 1-D background model from a 2- or 3-D Cartesian volume model. This is useful, for instance, when the 3-D model has a significant increase in velocity with depth, and when no appropriate 1-D model exists a priori. If the velocity profile allows, using the 1-D model output by this function may result in the insertion of doubling layers when a simulation mesh is created, and as a consequence may allow for a reduction of computational cost.

Updated `create_events_from_segy_file` to handle 2-D events.

The `DataSelectionConfiguration` allows window- and station-dependent weights. Furthermore, there are several new options for visualization and quality control of misfits. See the Seismological Gradient tutorial in the Experimental section for several examples.

…`SalvusMesh`.

Added an `EventWindowAndWeightSet` class. It is a per-event data structure that can be serialized to disk and is able to store interval windows (e.g. windows that have a start as well as an end time). Additionally, it can store weights at the receiver, component, as well as the individual window level.

…the existing `Event` and `EventMisfit` infrastructure.

```python
sm = sn.simple_mesh.SmoothieSEM()
sm.basic.model = "prem_iso_one_crust"
sm.basic.min_period_in_seconds = 200.0
sm.basic.elements_per_wavelength = 2.0
sm.basic.number_of_lateral_elements = 2
sm.advanced.tensor_order = 4
sm.source.latitude = 38.82
sm.source.longitude = 40.14

# Ellipticity.
sm.spherical.ellipticity = 0.0033528106647474805

# Surface topography.
sm.topography.topography_file = "topography_earth2014_egm2008_lmax_256.nc"
sm.topography.topography_varname = (
    "topography_earth2014_egm2008_lmax_256_lmax_16"
)

# Moho topography.
sm.topography.moho_topography_file = "moho_topography_crust_1_0_egm2008.nc"
sm.topography.moho_topography_varname = (
    "moho_topography_crust_1_0_egm2008_lmax_16"
)

# Ocean loading.
sm.ocean.bathymetry_file = "bathymetry_earth2014_lmax_256.nc"
sm.ocean.bathymetry_varname = "bathymetry_earth2014_lmax_256_lmax_16"
```
```python
from salvus.mesh.tools.transforms import uniformly_refine

new_mesh = uniformly_refine(mesh, tensor_order=2)
```
Fixed the `p.viz.nb.domain()` method for 2-D box domains.

Passing `site_name`, `ranks_per_job` and `wall_time_per_job_in_seconds` through the `iterate` and `resume` functions still works, but will be deprecated in the future.

…`toolbox.interpolate_spherical`, which will print a progress bar if the verbosity is 1 or greater. The verbose behavior is the default when generating spherical meshes with a project.

…`UnstructuredMeshSimulationConfiguration` objects.

…the `windows` component cannot be imported anymore. The `windows` component has been removed in favor of a more generic `DataSelectionConfiguration`. This is, for one, more in line with the rest of SalvusProject, but it is also more powerful and flexible.

…the `window_set_name` argument now takes a `data_selection_configuration` argument.

A `DataSelectionConfiguration` contains either a function that is applied on the fly to select pieces of data (think generalized windows and weights), or one `EventWindowAndWeightSet` per event (or a combination of both).

…a `DataSelectionConfiguration` with an attached `data_selection_function`. The function itself does not have to be changed.

```python
p.add_to_project(
    sn.DataSelectionConfiguration(
        name="mute_with_direct_phase",
        receiver_field="phi",
        data_selection_function=mute_with_direct_phase,
    )
)
```
The `pick_windows()` seismological action also requires changing `window_set_name` to `data_selection_configuration` (if that configuration does not yet exist, it will be created). In addition, the `window_taper_width_in_seconds` argument is now exposed and required. It controls the width of the taper for the windows.

```python
p.actions.seismology.pick_windows(
    data_selection_configuration="custom_windows",
    ...
    window_taper_width_in_seconds=10.0,
)
```

```python
# Get the data selection configuration.
dsc = p.entities.get(
    entity_type="data_selection_configuration", entity_name="custom_windows"
)

# If interval windows have been picked before for an event, its
# `EventWindowAndWeightSet` can be retrieved.
ewws = dsc.get_event_window_and_weight_set(event=p.events.get("event_0000"))
```
…the `DataSelectionConfiguration`.

```python
In [1]: ewws.receivers
Out [1]:
{'XX.A0000.': {'receiver_weight': 1.5462720296177797,
  'components': {'Z': {'component_weight': 1.0,
    'windows': [{'window_start': 229.67554,
      'window_end': 1389.219377,
      'window_weight': 1.0}]},
  ...

In [2]: ewws.receivers["XX.A0000."]["receiver_weight"] = 0.7

In [3]: ewws.write(overwrite=True)
```
```python
def data_selection_function(st, receiver, sources):
    # Receiver and component weights.
    weights = {
        # This is optional but allows to specify additional weights
        # important for the misfit and adjoint source computations.
        "receiver_and_component_weights": {
            # Pass the receiver weight here.
            "receiver_weight": 0.75,
            # It is also possible to give weights to individual
            # components.
            "component_weights": {"X": 1.0, "Y": 0.8, "Z": 0.5},
        }
    }
    for tr in st:
        component = tr.stats.channel[-1]
        temporal_weights = compute_window(...)
        # Individual window weights.
        weights[component] = [
            {"values": temporal_weights, "misfit_weight": 0.8},
            ...
        ]
    return weights


# Add to the project.
p.add_to_project(
    sn.DataSelectionConfiguration(
        name="custom_data_selection",
        receiver_field="phi",
        data_selection_function=data_selection_function,
    ),
    overwrite=True,
)
```
```python
# Custom receiver weighting function.
def my_favorite_station(event, receivers, receiver_name):
    weights = {}
    for name in receivers:
        if name == receiver_name:
            weight = 2.0
        else:
            weight = 1.0
        weights[name] = weight
    return weights


p.actions.seismology.add_receiver_weights_to_windows(
    # Add the weights to existing windows in this data selection
    # configuration.
    data_selection_configuration="custom_windows",
    events=p.events.list(),
    # The weights for each item in the chain will be multiplied together.
    weighting_chain=[
        {
            "weighting_scheme": "ruan_et_al_2019",
            "function_kwargs": {
                "ref_distance_condition_fraction": 1.0 / 3.0
            },
        },
        {
            "weighting_scheme": "mute_near_source_receivers",
            "function_kwargs": {
                # All receivers closer than this to the sources
                # will have zero weight.
                "minimum_receiver_distance_in_m": 200e3,
                # All receivers further away than this will not be muted.
                "maximum_receiver_mute_distance_in_m": 1000e3,
                # How to go from weight 0 to weight 1 for receivers in the
                # transition region. Currently only "hanning" is supported.
                "taper_type": "hanning",
            },
        },
        # Custom weighting function.
        {
            "weighting_scheme": my_favorite_station,
            "function_kwargs": {"receiver_name": "XX.A0044."},
        },
    ],
    # Normalize the sum of all receiver weights to the receiver count.
    normalize=True,
)
```
```python
p.viz.seismology.receiver_weights_map(
    data_selection_configuration="custom_windows"
)
```
The `p.viz.seismology.misfit_map()` method is now a widget and the event can be selected interactively. Thus, passing the `event` argument is no longer necessary.

```python
p.viz.misfit_histogram(
    simulation_configuration_a="initial_model_70_120_seconds",
    simulation_configuration_b=p.inversions.get_simulation_name(
        inverse_problem_configuration="inversion", iteration_id=25
    ),
    misfit_configuration="phase_misfit_70_120_seconds",
    events=p.events.list(),
    merge_all_components=False,
)
```
…a `pandas.DataFrame` object containing statistics about all windows for a list of events. This is very useful to do some custom analysis and figure out which events to use in the final inversion.

```python
df = p.entities.get(
    entity_type="data_selection_configuration", entity_name="selection_a"
).get_interval_window_statistics_table(events=p.events.get_all())
```

```python
p.viz.interval_window_statistics("selection_a")
```
…the `EventMisfit` object. Last but not least, it includes a bugfix for gradients in meshes employing the region-of-interest feature while using nontrivial checkpoints.

Added an `EventMisfit.misfit_per_receiver_and_component_and_weight_set` attribute to yield a more detailed view of how the misfit for a single event is computed.

```python
In [1]: ed.misfit_per_receiver_and_component_and_weight_set
Out [1]:
{'XX.A0000.': {'Z': [0.00088682823102153761],
  'N': [0.000256782248385296],
  'E': [0.00012302960702887129, 0.0004303495400]},
 'XX.A0001.': {'Z': [0.0010641578304181843],
  'N': [0.00025058292454050639],
  'E': [0.00012369945631215902]},
 ...
}
```
Added a `simple_config.receiver.seismology.collections.SideSetGridPoint3D` receiver collection.

```python
lat_c, lon_c = 45.0, 10.0
lat_e, lon_e = 15.0, 18.0

receivers = sn.simple_config.receiver.seismology.collections.SideSetGridPoint3D(
    lat_center=lat_c,
    lat_extent=lat_e * 0.9,
    lon_center=lon_c,
    lon_extent=lon_e * 0.9,
    fields=["displacement"],
    n_lat=10,
    n_lon=10,
)
```
Added `UnstructuredMesh.get_side_set_nodes()` to get all nodes in a side set.

New mask generators (`SurfaceMaskGenerator`, `RayMaskGenerator`) replacing the existing JSON-based masking interface.

Added a `processing_function` argument to the central `run_mesher()` function.

…`SmoothieSEM`.

```python
sm = sn.simple_mesh.SmoothieSEM()
sm.basic.model = "prem_iso_one_crust"
sm.basic.min_period_in_seconds = 200.0
sm.basic.elements_per_wavelength = 2.0
sm.basic.number_of_lateral_elements = 4
sm.advanced.tensor_order = 2
sm.source.latitude = 35.0
sm.source.longitude = 12.0
```
"displacement"
or "phi"
are used in
a MisfitConfiguration
object. Otherwise the adjoint sources are currently
not fully consistent.p.misfits.get_misfit_comparison_table()
method yielding a
pandas.DataFrame
containing detailed per-receiver misfit information that
can be used for further custom analysis.In [1]: p.misfits.get_misfit_comparison_table( reference_data="initial_model", other_data=[ p.inversions.get_simulation_name( inverse_problem_configuration="inv", iteration_id=4 ) ], misfit_configuration="L2-misfit-to-target-model", event="event_0000", ) Out [1]: initial_model (ref) inv_it_3__trial Reduction inv_it_3__trial XX.A0000. 1.266640e-03 0.000224 0.001043 XX.A0001. 1.438440e-03 0.000361 0.001078 XX.A0002. 1.450685e-03 0.000369 0.001081 XX.A0003. 1.607995e-03 0.000430 0.001178 XX.A0004. 9.473526e-04 0.000238 0.000709 ... ... ... ... XX.A0095. 1.182061e-05 0.000065 -0.000053 XX.A0096. 1.360653e-06 0.000529 -0.000527 XX.A0097. 9.762485e-07 0.000102 -0.000101 XX.A0098. 4.663780e-06 0.000306 -0.000301 XX.A0099. 1.784453e-06 0.000165 -0.000163 [100 rows x 3 columns]
Added the `p.viz.nb.misfit_comparison()` method to display the output from the `p.misfits.get_misfit_comparison_table()` method in a pretty table.

```python
p.viz.nb.misfit_comparison(
    reference_data="initial_model",
    other_data=[
        p.inversions.get_simulation_name(
            inverse_problem_configuration="inv", iteration_id=4
        )
    ],
    misfit_configuration="L2-misfit-to-target-model",
    event=p.events.list()[0],
)
```
Added the `p.viz.seismology.misfit_map()` method to display the output from the `p.misfits.get_misfit_comparison_table()` method on a geographical map.

```python
p.viz.seismology.misfit_map(
    reference_data="initial_model",
    compare_data=p.inversions.get_simulation_name(
        inverse_problem_configuration="inv", iteration_id=5
    ),
    misfit_configuration="L2-misfit-to-target-model",
    event=p.events.list()[0],
)
```
Fixed the `is_point_in_domain()` check for spherical chunk domains.

```python
sn.domain.dim3.SphericalChunkDomain(
    lat_center=52.0,
    lat_extent=16.0,
    lon_center=170.0,
    lon_extent=66.0,
    radius_in_meter=6371000.0,
).plot()
```
…a `reference_time_in_seconds` attribute. Thus receiver output in the HDF5 block format, and not just ASDF files, can now be located absolutely in time.

The `EventData` object will now raise an exception if a temporal weight function returns components that the data cannot have.

```python
p.actions.inversion.smooth_model(
    model=gradient,
    smoothing_configuration=sn.ConstantSmoothing(
        smoothing_lengths_in_meters={
            "VP": 0.01,
            "RHO": 0.01,
        },
    ),
    ranks_per_job=4,
    site_name="local",
)
```
- SalvusMesh: Fix side-sets for masked global domains.
- SalvusCompute: Minor bug fix for safely writing the meta json.
- SalvusCompute: Auto-time-step detection for the diffusion equation.
- SalvusFlow: Removed the `cpu_count` argument from all functions performing receiver/source placement. It was not actually any faster and we have to re-evaluate the chosen approach.
- SalvusFlow: More control over the end time when using a `FilteredHeaviside` source time function.
- SalvusFlow: The `FilteredHeaviside` STF now defaults to 3 lowpass filter corners and additionally allows explicitly setting the sampling rate. [changed defaults]
- SalvusOpt: New implementation of time-frequency phase misfits.
- SalvusOpt: Added new phase misfit.
- SalvusOpt: Bug fix for scaling of cross-correlation adjoint sources.
- SalvusProject: Correctly plot the time axis for seismological data.
- SalvusProject: More stable and informative window picking process.
- SalvusProject: Statistical window visualizations.
- SalvusProject: The window picking action can now work with external window picking functions. [API change]
- SalvusProject: Function serialization now works with imports as well as closure variables, making it a lot more flexible.
- SalvusProject: The window picking action can now optionally also only act on a subset of receivers and not store the results in the project. Useful for debugging and tuning of the window picking process.
- SalvusProject: All specialized processing configurations have been moved to functions. The only remaining processing configurations are `ProcessingConfiguration` and `SeismologyProcessingConfiguration`. The old ones are deprecated but will stay around for the rest of the lifecycle of Salvus 0.11.x. Please move to the new way of doing things. [API change]
- SalvusProject: The `compute_window()` function has been moved to `salvus.project.tools.windows`. [API change]
- SalvusProject: Added an optional `time_step_in_seconds` argument to the `WaveformSimulationConfiguration` constructor.
- SalvusMesh: Still attach absorbing side sets if the number of wavelengths is zero for Cartesian meshes.
- SalvusFlow: Various optimizations to improve the speed of job arrays.
- SalvusFlow: Various optimizations for simple config objects.
- SalvusProblem: Tags in ASDF output will now always correspond to the receiver field names. This is thus consistent with the input as well as the HDF5 output. [API change]
- SalvusCompute: Compatibility with the ppc64le CPU architecture.
- SalvusProject: Make event names in EventCollections customizable.
- SalvusOpt: Improve trust-region scaling.
- SalvusProject: Consistent coordinate order for all models (volume, topography, bathymetry).
- SalvusCompute: Add a check for NaNs in the time loop of CUDA runs.
- SalvusMesh: More control over mesh file size, improved documentation.
- SalvusCompute: Speed up gradient postprocessing, especially for anisotropic models.
- SalvusCompute: Compliance with CWE/SANS Top 25.
- SalvusOpt: Auto-detect gradient parameterization from the model. [API change]
- SalvusProject: Fix tmp file handling on foreign file systems.
- SalvusProject: Fix bug in bathymetry dataset loading and improved test coverage.
- SalvusProject: Iteration widget for notebook visualization.
- SalvusOpt: Simplified interfaces for diffusion-based preconditioning.
- SalvusCompute: Fix allocation bug for adjoint absorbing boundaries on GPUs.
- SalvusFlow: If files (e.g. meshes, source time functions, ...) are shared between multiple jobs in a job array, they will only be uploaded once to the remote site and shared between jobs.
- SalvusFlow: Optimized usage of SalvusFlow's internal job database.
- SalvusFlow: Can now deal with `ProxyJump` settings in SSH config files.
- SalvusProject: Fix domain visualization for spherical chunk domains, and improve it for UTM domains.
- SalvusProject: Add events from custom ASDF files.
- SalvusProject: Inversion action component for misfit and gradient computations.
- SalvusOpt: Support for fully asynchronous inversions.
- SalvusMesh: Fix ocean load for orders > 1.
- SalvusFlow: Improved validation of diffusion input.
- Docs: Render doc strings for methods inherited from abstract methods.
- SalvusProject: Fix side set handling for masked spherical chunks.
- SalvusOpt: More robust trace interpolation for misfit computations.
- SalvusFlow: Increase default verbosity of `simulations.launch()` to 2.
- SalvusCompute: Small bug fixes in receiver handling.
- SalvusCompute: Compile for additional CUDA device architectures.
- Meta: Refined parameter validation in JSON schema.
- SalvusProject: Propagate NaNs as a mask from spherical to Cartesian models.
- SalvusProject: Make the Courant number configurable in the WaveformSimulationConfiguration.
- SalvusCompute: Remove spurious device sync.
- SalvusCompute: Adjoint simulations and gradients on GPUs.
- SalvusCompute: Implement a `MeshPartitioner` class to encapsulate partitioning (including deterministic re-partitioning).
- SalvusCompute: Temporarily disable multiple checkpoints.
- SalvusCompute: Fix some edge cases in handling checkpoints.
- SalvusCompute: Ensure unique boundary conditions on side sets.
- SalvusFlow: Fix upgrading remote sites for incompatible versions.
- SalvusFlow: Validate side-sets for output and boundary conditions.
- SalvusFlow: Enable manual SSH password entry.
- SalvusFlow: Optionally use a login shell to execute commands at a site.
- SalvusFlow: More flexible in regards to any extra output added by a job management system.
- SalvusFlow: New features for LSF sites: `{RANKS}`, `{NODES}`, and `{TASKS_PER_NODE}` variables in the `mpirun` replacement as well as all bsub arguments.
- SalvusMesh: Ensure that additional ABC elements are strictly outside of the domain.
- SalvusToolbox: `dimensionless` is now an acceptable unit for parameters that do not have units in the IRIS EMC model reader.
- SalvusToolbox: Read attenuation models with the IRIS EMC model reader.
- SalvusProject: Better support for oceans and ocean loading.
- SalvusProject: Notebook visualization of spherical domains.
- SalvusCompute: Better estimate of the sampling interval required to buffer fields during a checkpoint run. Relevant for simulations with attenuation.
- SalvusFlow: Increase wall-time of the `init-site` job.
- SalvusFlow: Correctly distinguish and upload files if they have the same name but different local folders.
bash -c "$(curl -sSL https://get.mondaic.com)"
0.11.3
or higher. Make sure to follow all the
installation steps, especially the one regarding pip install salvus*.whl
.
Once the installation process is complete, the acquisition issue should be
fixed. salvus-flow upgrade
should work for any future changes.Meta
- Meta: Adapted license check to server-side changes.
- Meta: New `salvus` CLI command that is a strict alias to the existing `salvus-flow` CLI command. We plan on keeping both for the foreseeable future.
- Meta: New `salvus --version` / `salvus-flow --version` CLI command to retrieve the current Salvus version number at a glance.
- SalvusFlow: Enforce that the local Python Salvus version is identical to the remote SalvusCompute version.
- SalvusFlow: Automatically reinitialize all matching local sites upon `salvus-flow upgrade`.
- SalvusFlow: Recommend a list of suitable remote sites to update after `salvus-flow upgrade` has finished.
- SalvusFlow: Make sure the run and temp directories are not in the folder managed by the Mondaic downloader.
- SalvusCompute: More robust estimation of wavefield buffer sizes for checkpointing.