Set up a rectangular regional CESM-MOM6 run#
A typical CrocoDash workflow consists of four main steps:
Generate a regional MOM6 domain.
Create the CESM case.
Prepare ocean forcing data.
Build and run the case.
Section 1: Generate a regional MOM6 domain#
We begin by defining a regional MOM6 domain using CrocoDash. To do so, we first generate a horizontal grid. We then generate the topography by remapping an existing bathymetric dataset to our horizontal grid. Finally, we define a vertical grid.
Step 1.1: Horizontal Grid#
from CrocoDash.grid import Grid

grid = Grid(
    resolution = 0.05,  # in degrees
    xstart = 278.0,     # min longitude in [0, 360]
    lenx = 3.0,         # longitude extent in degrees
    ystart = 7.0,       # min latitude in [-90, 90]
    leny = 3.0,         # latitude extent in degrees
    name = "panama1",
)
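With a 3° by 3° extent at 0.05° resolution, the horizontal grid will contain on the order of 60 x 60 cells. A quick back-of-the-envelope check, using only the numbers passed above (not the Grid API):

# Rough estimate of the grid size from the extent and resolution above.
nx_expected = round(3.0 / 0.05)  # lenx / resolution
ny_expected = round(3.0 / 0.05)  # leny / resolution
print(f"~{nx_expected} x {ny_expected} cells")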
Step 1.2: Topography#
from CrocoDash.topo import Topo

topo = Topo(
    grid = grid,
    min_depth = 9.5,  # in meters
)
from pathlib import Path

# The GEBCO bathymetry path on Derecho is:
# /glade/campaign/cgd/oce/projects/CROCODILE/workshops/2025/CrocoDash/data/gebco/GEBCO_2024.nc
bathymetry_path = Path("/glade/campaign/cgd/oce/projects/CROCODILE/workshops/2025/CrocoDash/data/gebco/GEBCO_2024.nc")

if not bathymetry_path.exists():
    raise FileNotFoundError("Bathymetry file not found, please replace with path to bathymetry file")
topo.interpolate_from_file(
    file_path = bathymetry_path,
    longitude_coordinate_name = "lon",
    latitude_coordinate_name = "lat",
    vertical_coordinate_name = "elevation",
)
topo.depth.plot()
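It can also be useful to print the depth range of the remapped field. The sketch below assumes topo.depth behaves like an xarray DataArray, consistent with the .plot() call above:

# Quick summary of the interpolated depth field
# (assumes topo.depth is an xarray DataArray).
print("min depth:", float(topo.depth.min()), "m")
print("max depth:", float(topo.depth.max()), "m")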
%matplotlib ipympl
from CrocoDash.topo_editor import TopoEditor

# Open the interactive topography editor to inspect the depth field
# and manually edit individual cells if needed.
TopoEditor(topo)
Step 1.3: Vertical Grid#
from CrocoDash.vgrid import VGrid

vgrid = VGrid.hyperbolic(
    nk = 75,                # number of vertical levels
    depth = topo.max_depth,
    ratio = 20.0,           # target ratio of top to bottom layer thicknesses
)
import matplotlib.pyplot as plt

plt.close()

# Plot each vertical level as a horizontal line
for depth in vgrid.z:
    plt.axhline(y=depth, linestyle='-')

plt.ylim(max(vgrid.z) + 10, min(vgrid.z) - 10)  # Invert y-axis so deeper values go down
plt.ylabel("Depth")
plt.title("Vertical Grid")
plt.show()
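As a quick sanity check on the stretching, you can also compare the shallowest and deepest layer thicknesses against the requested ratio. This sketch assumes vgrid.z holds the level depths used in the plot above, so consecutive differences approximate the layer thicknesses:

import numpy as np

z = np.asarray(vgrid.z)
dz = np.abs(np.diff(z))  # approximate layer thicknesses between successive levels
print(f"shallowest layer: {dz[0]:.2f} m")
print(f"deepest layer:    {dz[-1]:.2f} m")
print(f"thickness ratio:  {dz.max() / dz.min():.1f} (target ratio was 20.0)")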
Section 2: Create the CESM case#
After generating the MOM6 domain, the next step is to create a CESM case using CrocoDash. This process is straightforward and involves instantiating the CrocoDash Case object. The Case object requires the following inputs:
CESM Source Directory: A local path to a compatible CESM source copy.
Case Name: A unique name for the CESM case.
Input Directory: The directory where all necessary input files will be written.
MOM6 Domain Objects: The horizontal grid, topography, and vertical grid created in the previous section.
Project ID: (Optional) A project ID, if required by the machine.
Compset: The set of model components to be used in the case (e.g., standalone ocean, ocean-BGC, ocean-sea ice, or ocean-runoff).
Step 2.1: Specify case name and directories#
Begin by specifying the case name and the necessary directory paths. Ensure the CESM root directory points to your own local copy of CESM. Below is an example setup:
from pathlib import Path

# CESM case (experiment) name
casename = "panama-not"

# CESM source root (Update this path accordingly!!!)
cesmroot = "/glade/work/<YOURUSERNAME>/CROCESM"

# Place where all your input files go
inputdir = Path("/glade/work/<YOURUSERNAME>/crocodile_2025") / "croc_input" / casename

# CESM case directory
caseroot = Path("/glade/work/<YOURUSERNAME>/crocodile_2025") / "croc_cases" / casename
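Optionally, you can make sure the parent directories exist before creating the case. This is a convenience sketch only; CrocoDash may create them for you:

# Optional: create the directories up front (CrocoDash may also handle this).
inputdir.mkdir(parents=True, exist_ok=True)
caseroot.parent.mkdir(parents=True, exist_ok=True)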
Step 2.2: Create the Case#
To create the CESM case, instantiate the Case object as shown below. This will automatically set up the CESM case based on the provided inputs: the cesmroot argument specifies the path to your local CESM source directory, and the caseroot argument defines the directory where the case will be created. CrocoDash will handle all necessary namelist modifications and XML changes to align with the MOM6 domain objects generated earlier.
from CrocoDash.case import Case

case = Case(
    cesmroot = cesmroot,
    caseroot = caseroot,
    inputdir = inputdir,
    ocn_grid = grid,
    ocn_vgrid = vgrid,
    ocn_topo = topo,
    project = 'CESM0030',
    override = True,
    machine = "derecho",
    # "GR_JRA" is the compset alias; the corresponding longname (printed when you
    # run this command) is 1850_DATM%JRA_SLND_SICE_MOM6%REGIONAL_SROF_SGLC_SWAV.
    # Feel free to use either one.
    compset = "GR_JRA",
)
Section 3: Prepare ocean forcing data#
Next, we need to cut out our ocean forcing data. The package expects an initial condition and one time-dependent segment per non-land boundary. The naming convention is "east_unprocessed" (and likewise for the other boundaries) for the segments and "ic_unprocessed" for the initial condition, as sketched below.
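For the domain in this notebook, which has open boundaries to the south, east, and west (see Step 3.1 below), the raw forcing files would therefore follow a pattern like this illustrative listing; the exact file extension depends on how the data are downloaded:

# Illustrative listing only: one file per open boundary plus the initial condition.
raw_forcing_files = [
    "ic_unprocessed.nc",     # initial condition
    "south_unprocessed.nc",  # southern boundary segment
    "east_unprocessed.nc",   # eastern boundary segment
    "west_unprocessed.nc",   # western boundary segment
]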
In this notebook, we are forcing with the Copernicus Marine "GLORYS" reanalysis dataset. The CrocoDash package provides a function called configure_forcings that generates a bash script to download the correct boundary forcing files for your experiment. To use that download script, you first need to create an account with Copernicus and call copernicusmarine login to set up your login details on your machine; then you can run the get_glorys_data.sh bash script. Note that this notebook instead passes function_name="get_glorys_data_from_rda" (see Step 3.1), which, as the name suggests, pulls the GLORYS data from an archived (RDA) copy, so the Copernicus download step should not be needed here.
Step 3.1: Configure Initial Conditions and Forcings#
case.configure_forcings(
    date_range = ["2020-01-01 00:00:00", "2020-01-09 00:00:00"],
    boundaries = ["south", "east", "west"],
    function_name = "get_glorys_data_from_rda",
)
Step 3.2: Process forcing data#
In this final step, we call the process_forcings method of CrocoDash to cut out and interpolate the initial condition as well as all boundaries. CrocoDash also updates MOM6 runtime parameters and CESM XML variables accordingly.
case.process_forcings()
print("You can now build and run your case at", caseroot)
Section 4: Build and run the case#
After completing the previous steps, you are ready to build and run your CESM case. Begin by navigating to the case root directory specified during case creation. Before proceeding, review the user_nl_mom file located in the case directory. This file contains MOM6 parameter settings that were automatically generated by CrocoDash. Carefully examine these parameters and make any necessary adjustments to fine-tune the model for your specific requirements. While CrocoDash aims to provide a solid starting point, further tuning and adjustments are typically necessary to improve the model for your use case.
Once you have reviewed and modified the parameters as needed, you can build and execute the case using the following commands:
qcmd -- ./case.build   # on Derecho, qcmd runs the build on a compute node
./case.submit