Installation

We currently offer the Salvus packages for Linux and OSX, with Windows support planned. SalvusCompute and SalvusOpt are distributed as a single statically compiled binary, while SalvusMesh and SalvusFlow are platform-dependent binary Python packages. All packages require a valid license, and you should have received a username and password from us. Before continuing, make sure this is the case.

Installing SalvusCompute and SalvusOpt

No compilation is required to install these packages - simply download the version which is appropriate for your architecture and operating system and you are good to go. To do this, copy:

bash -c "$(curl -sSL https://get.mondaic.com)"

into a terminal prompt.

Make sure to answer "Yes" when it asks you to create a license file in your home directory. If you already have a license file, you will not be asked.

The downloader will check your operating system, processor architecture, and license file, and then ask you a few questions. One of these questions asks which Python version you would like to use -- we recommend Python 3.7. At the end of this process the appropriate files will be downloaded. Congratulations! SalvusCompute and SalvusOpt are now installed.


On sites without internet access, run the downloader on another machine that is online, download the version of Salvus appropriate for the offline machine, and copy the resulting folder over.
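
For example, assuming the downloader placed everything in a folder named Salvus in your home directory and the offline machine is reachable over ssh (the folder name and hostname here are placeholders), the folder could be copied over with:

# Copy the downloaded Salvus folder to the machine without internet access.
# "offline-host" and the paths are placeholders -- adjust them to your setup.
rsync -av ~/Salvus/ user@offline-host:~/Salvus/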

Installing SalvusMesh and SalvusFlow

We recommend using Anaconda to manage the Salvus Python packages, although you are free to use any Python distribution you like. If you do choose Anaconda, visit www.anaconda.com and download the correct package for your system, keeping in mind the Python version you chose in the previous step. Once Anaconda is installed, it is good practice to set up a dedicated salvus conda environment as follows (assuming Python 3.7):

conda update --yes conda
conda config --prepend channels conda-forge
conda create -n salvus python=3.7
conda activate salvus
conda install --yes pip psutil

# Optional dependencies for Jupyter notebook support
conda install -c conda-forge --yes obspy matplotlib ipython ipywidgets \
    jupyter jupyter_contrib_nbextensions pythreejs nbconvert nbval \
    pyasdf pytest xarray numba jupytext
pip install 'arrow==0.14.7'
jupyter nbextension enable skip-traceback/main
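
Before moving on, it can save trouble to confirm that the environment is active and uses the Python version you chose earlier:

# Confirm the salvus environment is active and uses the chosen interpreter.
conda activate salvus
python --version   # should report Python 3.7.x
which python       # should point into the salvus environment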

Following this, the environment is ready for SalvusMesh and SalvusFlow. To install them, simply run

cd SALVUS_INSTALLATION_DIRECTORY/python_packages
pip install salvus_*.whl
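
As a quick sanity check, you can try importing the freshly installed packages. The module names salvus_flow and salvus_mesh are assumptions based on the wheel file names; adjust them if your wheels are named differently:

# Import check (module names assumed from the wheel file names).
python -c "import salvus_flow, salvus_mesh"

# The SalvusFlow command-line tool should now be on your PATH as well.
which salvus-flow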

SalvusToolbox (optional)

Mondaic maintains an open-source repository with various tools for model handling and conversion, which you might find useful when getting started with your own simulations. Some of the tutorials make use of it, so feel free to download and install it:

git clone https://gitlab.com/MondaicSupport/salvus_toolbox.git
cd salvus_toolbox
pip install -v -e .
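
The -e flag installs the toolbox in editable mode, so changes in the cloned repository take effect without reinstalling. To confirm the installation worked, you can try importing the package; the module name salvus_toolbox below is an assumption based on the repository name:

# Import check (module name assumed from the repository name).
python -c "import salvus_toolbox"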

Keep in mind that, if you are using Anaconda, you must be in the salvus conda environment whenever you are working with the Salvus Python packages. You can enter the environment from the command prompt with

conda activate salvus

Please note that the use of Anaconda is optional, and that the user is responsible for complying with the licenses of any Python packages used.

Setting up SalvusFlow

To use the workflow management features of SalvusFlow, you first need to initialize a "site". To begin this process, simply run

salvus-flow add-site

on the command line, and follow the prompts. There are currently four different types of sites:

  • local - for simulations on your local machine

  • slurm - for clusters with the slurm job submission system

  • pbs - for clusters with the PbsPro job submission system

  • ssh - for simulations on remote computers connected via ssh

For each of these types you need to specify a name, the number of available compute cores, the path to the Salvus binary, and the directories where Salvus will run simulations and store temporary data.

Here is an example of a local site on a PC with four cores:

? What type of site do you want to initialize?  local
? Name of the site: local
? default_ranks:  2
? max_ranks:  4
? salvus_binary:  /path/to/Salvus/bin/salvus
? run_directory:  /some/path/run
? tmp_directory:  /some/path/tmp

For a remote site connected via ssh, we additionally need to provide a hostname and username:

? hostname -- (Hostname of the site.):  remote_machine
? username -- (SSH username of the site.):  user
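
Since SalvusFlow will connect to this machine over ssh, it is worth checking beforehand that the connection works with exactly these values. A minimal check, using the placeholder hostname and username from the prompts above:

# Quick connectivity check with the values entered in the wizard.
ssh user@remote_machine 'echo connection ok'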

Finally, when configuring a site that uses slurm for submitting jobs, we need to provide a few more details, such as the number of tasks per node, the name of the partition, and the path to the slurm binaries:

? tasks_per_node:  12
? partition:  normal
? path_to_slurm_binaries:  /path/to/srun/bin
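
If you are unsure about the partition name or the location of the slurm binaries on your cluster, the standard slurm client tools can tell you, assuming they are already on your PATH:

# Directory containing srun -- use this for path_to_slurm_binaries.
dirname "$(which srun)"

# Summary of the available partitions -- pick one for the partition setting.
sinfo -s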

Depending on the system, you might also have to configure a few environment variables or modules, which you can do manually by running salvus-flow edit-config.


Once the wizard has completed, you can test whether everything is working properly by running

salvus-flow init-site site_name

where you should substitute your chosen site name for site_name. If the site initialization is successful then you are ready to move forward with the tutorials!

More information

Paraview

To visualize meshes and wavefield output, we recommend installing the latest version of Paraview. We strongly recommend downloading Paraview from the official website -- the versions installed through Linux package managers often do not ship with the correct libraries.

MPI

SalvusCompute and SalvusOpt require MPI. Our distributions come with the required MPI binaries and shared libraries, and we recommend using these on small single-node workstations.

Large HPC clusters tend to have their own custom MPI distributions. Our packages will dynamically link against any MPI implementation that follows the MPI ABI Compatibility Initiative. This covers most implementations, and we have successfully tested it on a large number of HPC clusters around the world.

The only widely used MPI implementation that cannot be used with Salvus is OpenMPI, as it is not ABI compatible. Most clusters should offer an alternative.

For this to work, two manual steps might be required:

  1. Loading an ABI-compatible MPI module in the site's SalvusFlow config, e.g. modules_to_load = ["xxx-mpich-abi"].
  2. The loaded module should already set the correct $LD_LIBRARY_PATH. If it does not, set it manually, again in the SalvusFlow site config:
[[sites.some_site.environment_variable]]
    name = "LD_LIBRARY_PATH"
    value = "/path/to/some/lib/dir"

The value/path has to be the library folder containing the libmpicxx.so.12 and libmpi.so shared libraries.
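
If you want to verify that a loaded module actually provides these libraries, you can inspect the directories on $LD_LIBRARY_PATH after loading it. This is only a sketch: the module name is the placeholder from step 1 above, and the module command assumes your cluster uses environment modules:

# Load the ABI-compatible MPI module (placeholder name from the example above).
module load xxx-mpich-abi

# Look for the required shared libraries in every directory on LD_LIBRARY_PATH.
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | while read -r d; do
    ls "$d"/libmpicxx.so.12 "$d"/libmpi.so* 2>/dev/null
done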

Instruction sets

When you downloaded Salvus, you may have noticed that you had several choices with regards to which specific binary you downloaded. We currently compile several different versions, each with instruction sets optimized for a certain processor architecture.

  • Linux (works with all distributions):

    • For generic x86-64 CPU architectures. Slowest but works everywhere. (required CPU features: MODE64, CMOV, SSE1, SSE2)
    • For Sandybridge architectures (required CPU features: x86-64 + AVX)
    • For Haswell architectures (required CPU features: Sandybridge + AVX2, BMI, BMI2, FMA)
    • For Skylake architectures (required CPU features: Haswell + ADX)
  • OSX:

    • For generic x86-64 CPU architectures. Slowest but works everywhere. (required CPU features: MODE64, CMOV, SSE1, SSE2)
    • For Sandybridge architectures (required CPU features: x86-64 + AVX)
    • For Haswell architectures (required CPU features: Sandybridge + AVX2, BMI, BMI2, FMA)

Choose the most suitable version for your machine. Older variants will also run on newer CPU architectures, but might not run as efficiently.
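
To check which of these features your CPU supports, you can inspect the advertised CPU flags. The commands below are generic examples; the sysctl keys apply to Intel Macs:

# Linux: list the relevant CPU feature flags (avx, avx2, bmi1, bmi2, fma, adx).
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -iE '^(avx|avx2|bmi1|bmi2|fma|adx)$'

# OSX (Intel Macs): print the advertised CPU features.
sysctl machdep.cpu.features machdep.cpu.leaf7_features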

Open source licenses

Salvus contains open source software packages from third parties. A list of these packages, along with their licenses, is available at www.mondaic.com/credits.
