Create a simple parallel model

Unsupported Processes

LIE-related processes do not support the parallel FEM scheme with PETSc.

Software Modules and Compilation on EVE

  • Check out the OGS-6 source code
    • Source the environment script scripts/env/eve/cli.sh
  • Configure with utilities enabled to:
    • Create the binary for generating the structured mesh (generateStructuredMesh)
    • Create the binary for the mesh partitioner tool partmesh
    • cmake <path_to_source> -DOGS_BUILD_UTILS=ON
  • Configure OGS for parallel usage with PETSc:
    • Source the environment script scripts/env/eve/petsc.sh
    • cmake <path_to_source> -DOGS_USE_PETSC=ON
  • Build the source code with make (a combined sketch of these steps follows below)
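Put together, the steps above amount to roughly the following command sequence. This is a sketch only: the repository URL and the build directory names are examples, and the module setup on EVE may differ.

    # Check out the OGS-6 source code (public OGS GitLab; URL given as an example).
    git clone https://gitlab.opengeosys.org/ogs/ogs.git

    # Build with utilities enabled (generateStructuredMesh, partmesh).
    source ogs/scripts/env/eve/cli.sh
    mkdir build-utils && cd build-utils
    cmake ../ogs -DOGS_BUILD_UTILS=ON
    make
    cd ..

    # Parallel build with PETSc enabled.
    source ogs/scripts/env/eve/petsc.sh
    mkdir build-petsc && cd build-petsc
    cmake ../ogs -DOGS_USE_PETSC=ON
    make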

Create a structured mesh

To discretize the domain of a unit cube, run bin/generateStructuredMesh -o cube_1x1x1_hex_axbxc.vtu -e hex --lx 1 --ly 1 --lz 1 --nx a --ny b --nz c, where a, b, and c are chosen according to the desired problem size. The table below lists configurations that have been tried; a concrete example follows the table.

   a     b     c   #cells in 10^6   compute cores   success
  150   150   150     ~   3.38            20          yes
  175   175   175     ~   5.36            20          yes
  190   190   190     ~   6.86            20          yes
  196   196   196     ~   7.59            20          no
  216   216   216     ~  10.08            40          yes
  236   236   236     ~  13.14            40          yes
  292   292   292     ~  24.90            80          yes
  368   368   368     ~  49.84           160          yes
  422   422   422     ~  75.15           240          yes
  465   465   465     ~ 100.54           320
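For example, the first row of the table (about 3.4 million cells on 20 cores) corresponds to the following call; the output file name simply encodes the cell counts and can be chosen freely:

    bin/generateStructuredMesh -o cube_1x1x1_hex_150x150x150.vtu -e hex \
        --lx 1 --ly 1 --lz 1 --nx 150 --ny 150 --nz 150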

If the boundary conditions are simple (i.e. homogeneous), a simple .gml file is sufficient in the serial case. For heterogeneous boundary conditions and for parallel runs, however, boundary mesh files are needed. There are two ways to create such files:

  • If a .gml file is available, use the constructMeshesFromGeometry tool, which takes the mesh file and the geometry and creates all boundary meshes named in the .gml file, together with the required bulk_node_ids and bulk_element_ids mappings.
  • If a boundary mesh already exists (generated by Gmsh or Salome, or extracted in ParaView, for example), use the identifySubdomains tool to create or verify the needed bulk_node_ids and bulk_element_ids mappings (a sketch of both invocations follows this list).
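As an illustration, the two invocations could look like the following sketch. The file names (cube_geometry.gml, top_boundary.vtu, the identified_ output prefix) are hypothetical, and the flag names are assumptions based on common OGS utility conventions; consult each tool's --help output for the authoritative interface.

    # Derive boundary meshes from a .gml geometry (file names are placeholders).
    bin/constructMeshesFromGeometry -m cube_1x1x1_hex_150x150x150.vtu -g cube_geometry.gml

    # Add or verify bulk_node_ids / bulk_element_ids on an existing boundary mesh.
    bin/identifySubdomains -m cube_1x1x1_hex_150x150x150.vtu -o identified_ -- top_boundary.vtu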

Partition the mesh with partmesh

  1. Convert the VTU mesh into a METIS input mesh:
    bin/partmesh -i cube_1x1x1_hex_axbxc.vtu --ogs2metis
    This results in the file cube_1x1x1_hex_axbxc.mesh.
  2. Partition the mesh and the corresponding boundaries:
    bin/partmesh -n number_of_partitions -m -i cube_1x1x1_hex_axbxc.vtu -- boundary_meshes*.vtu
    This results in a set of .bin files containing the partitioned meshes (see the combined sketch below).
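Put together, partitioning the 150 x 150 x 150 mesh from above could look as follows; the partition count of 20 is just an example matching the first table row:

    # Step 1: convert the VTU mesh into a METIS input mesh.
    bin/partmesh -i cube_1x1x1_hex_150x150x150.vtu --ogs2metis

    # Step 2: partition the bulk mesh and the corresponding boundary meshes
    # into 20 parts (one per compute core).
    bin/partmesh -n 20 -m -i cube_1x1x1_hex_150x150x150.vtu -- boundary_meshes*.vtu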

Deciding the part to benchmark

  • Usually, for elliptic problems (for instance, steady-state groundwater flow), the time spent in the linear solver dominates the simulation.
  • The hydro-thermal process can be used to test the I/O capabilities.
  • In the mechanics process, a large part of the computational resources is spent on the assembly.

ToDo

  • script to create models and partitions
  • submit script to cluster (a sketch follows below)
  • Metrics to take into account could be
    • weak scaling (problem size increases as the number of processors increases)
    • strong scaling (fixed problem size, increasing number of processors => speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p)
    • fixed number of processors, increasing problem size
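Regarding the submit script: a minimal sketch for a Slurm-based cluster such as EVE could look as follows. The resource requests, the sourced environment script, and the project file cube_1e6.prj are placeholders/assumptions and need to be adapted to the actual model and cluster setup.

    #!/bin/bash
    #SBATCH --job-name=ogs-parallel
    #SBATCH --ntasks=20
    #SBATCH --time=01:00:00
    #SBATCH --mem-per-cpu=4G

    # Load the same environment that was used for the PETSc build (assumption).
    source ogs/scripts/env/eve/petsc.sh

    # Run the partitioned model; cube_1e6.prj is a hypothetical project file,
    # -o sets the output directory.
    srun bin/ogs cube_1e6.prj -o results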

This article was written by Thomas Fischer. If you are missing something or you find an error, please let us know.