Set up a rectangular regional CESM-MOM6-CICE coupled run#

We’ll go through these steps:

  1. Generate a regional MOM6 domain.

  2. Create the CESM case.

  3. Prepare ocean forcing data.

  4. Generate the regional CICE grid.

  5. Build and run the case.

Remember to switch your environment to CrocoDash first!

Section 1: Generate a regional MOM6 domain#

We begin by defining a regional MOM6 domain using CrocoDash. To do so, we first generate a horizontal grid. We then generate the topography by remapping an existing bathymetric dataset to our horizontal grid. Finally, we define a vertical grid.

Step 1.1: Horizontal Grid#

The horizontal grid described below is from the Antarctic domain (https://maps.app.goo.gl/WLLzioPgahQpLMSK8 - near the Mendel station) at 60°W, 64°S.

from CrocoDash.grid import Grid

grid = Grid(
  resolution = 0.01,
  xstart = 300.0,
  lenx = 4.0,
  ystart = -64.0,
  leny = 4.0,
  name = "antarctica2",
)
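As a quick sanity check (plain arithmetic, independent of the CrocoDash API), you can estimate how many tracer cells a uniform 0.01° grid of this extent should contain:

```python
# Estimate of the grid size, assuming uniform spacing in both directions.
resolution = 0.01        # degrees, as passed to Grid above
lenx, leny = 4.0, 4.0    # domain extent in degrees

nx = round(lenx / resolution)
ny = round(leny / resolution)
print(nx, ny)  # 400 400
```

A 400 x 400 cell domain is small enough for a regional tutorial run; the exact counts CrocoDash produces may differ slightly depending on how it constructs the supergrid.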

Step 1.2: Topography#

from CrocoDash.topo import Topo

topo = Topo(
    grid = grid,
    min_depth = 9.5,
)

Update bathymetry_path below to point to your own copy of the data. You can download it from: https://www.gebco.net/data-products/gridded-bathymetry-data

bathymetry_path='s3://crocodile-cesm/CrocoDash/data/gebco/GEBCO_2024.zarr/' # Swap this!

topo.interpolate_from_file(
    file_path = bathymetry_path,
    longitude_coordinate_name="lon",
    latitude_coordinate_name="lat",
    vertical_coordinate_name="elevation",
    write_to_file = True
)

TODO: when the above topo.interpolate_from_file() method is called with the original GEBCO dataset, the interpolation fails due to the large computational demand when run on login nodes. Check if there is a failsafe approach or, at least, a way to caution the user.
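Until that is resolved, one possible workaround is to pre-crop the bathymetry to the model domain before interpolating. The helper below is hypothetical (not part of CrocoDash); it only computes a padded bounding box for our domain, converting from the grid's 0–360 longitude convention to GEBCO's -180..180:

```python
# Hypothetical pre-cropping helper (not a CrocoDash function): compute a
# lon/lat bounding box, padded by `margin` degrees, that could be used to
# subset the GEBCO file (e.g. with xarray's .sel) before interpolation.

def domain_bbox(xstart, lenx, ystart, leny, margin=0.5):
    """Return (lon_min, lon_max, lat_min, lat_max) in GEBCO's -180..180
    longitude convention. Domains crossing the dateline would need
    extra handling, which this sketch does not attempt."""
    lon_min = xstart - margin
    lon_max = xstart + lenx + margin
    # Shift from the grid's 0..360 convention to GEBCO's -180..180.
    if lon_min > 180.0:
        lon_min -= 360.0
        lon_max -= 360.0
    return lon_min, lon_max, ystart - margin, ystart + leny + margin

bbox = domain_bbox(300.0, 4.0, -64.0, 4.0)
print(bbox)  # (-60.5, -55.5, -64.5, -59.5)
# e.g. ds = ds.sel(lon=slice(bbox[0], bbox[1]), lat=slice(bbox[2], bbox[3]))
```

The cropped dataset could then be written to a local file and passed to topo.interpolate_from_file in place of the full GEBCO archive.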

topo.depth.plot()

Run the topo-editor below to configure your bathymetry!

%matplotlib ipympl
from CrocoDash.topo_editor import TopoEditor
topo.depth["units"] = "m"
TopoEditor(topo)

Step 1.3: Vertical Grid#

from CrocoDash.vgrid import VGrid

vgrid = VGrid.hyperbolic(
    nk = 75,
    depth = topo.max_depth,
    ratio = 20.0
)
print(vgrid.dz)
import matplotlib.pyplot as plt
plt.close()
# Plot the vertical grid: one horizontal line per layer interface
for depth in vgrid.z:
    plt.axhline(y=depth, linestyle='-')

plt.ylim(max(vgrid.z) + 10, min(vgrid.z) - 10)  # Invert y-axis so deeper values go down
plt.ylabel("Depth (m)")
plt.title("Vertical Grid")
plt.show()
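For intuition, a hyperbolic stretching can be sketched in plain Python as a tanh profile that keeps layers thin near the surface and rescales them to the total depth. This is an illustrative approximation, not CrocoDash's actual VGrid.hyperbolic implementation, and the 6000 m depth is a stand-in for topo.max_depth:

```python
import math

def hyperbolic_dz(nk, depth, ratio):
    """Illustrative tanh-stretched layer thicknesses: thin at the surface,
    roughly `ratio` times thicker at the bottom, rescaled to sum to `depth`."""
    ks = [-3.0 + 6.0 * i / (nk - 1) for i in range(nk)]
    profile = [1.0 + (ratio - 1.0) * (math.tanh(k) + 1.0) / 2.0 for k in ks]
    total = sum(profile)
    return [p * depth / total for p in profile]

dz = hyperbolic_dz(75, 6000.0, 20.0)
print(sum(dz))         # total equals the requested depth
print(dz[-1] / dz[0])  # close to the requested 20:1 ratio
```

Increasing ratio concentrates resolution near the surface, which is usually where regional simulations need it most.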

Section 2: Create the CESM case#

After generating the MOM6 domain, the next step is to create a CESM case using CrocoDash. This process is straightforward and involves instantiating the CrocoDash Case object. The Case object requires the following inputs:

  • CESM Source Directory: A local path to a compatible CESM source copy.

  • Case Name: A unique name for the CESM case.

  • Input Directory: The directory where all necessary input files will be written.

  • MOM6 Domain Objects: The horizontal grid, topography, and vertical grid created in the previous section.

  • Project ID: (Optional) A project ID, if required by the machine.

Step 2.1: Specify case name and directories#

Begin by specifying the case name and the necessary directory paths. Ensure the CESM root directory points to your own local copy of CESM. Note that if the case was already created (for example, when rerunning this notebook), the /run directory must be removed for the case to be created anew. Below is an example setup:

from pathlib import Path
# CESM case (experiment) name
casename = "antarctica-tutorial"

# CESM source root (Update this path accordingly!!!)
cesmroot = "/home/runner/work/CrocoGallery/CrocoGallery/CESM/"

# Place where all your input files go
inputdir = Path.home() / "input" / casename

# CESM case directory
caseroot = Path.home() / "cases" / casename
print(cesmroot, inputdir, caseroot) # View your directory and change as you need!
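Whether the Case object creates missing directories for you is not something this tutorial verifies, so creating the input directory and the parent of the case directory up front with plain pathlib is a harmless precaution:

```python
from pathlib import Path

casename = "antarctica-tutorial"
inputdir = Path.home() / "input" / casename
caseroot = Path.home() / "cases" / casename

# Create the input directory and the case directory's parent if they do
# not exist yet. We deliberately do not create caseroot itself, since
# case-creation tools typically expect to create that directory.
for d in (inputdir, caseroot.parent):
    d.mkdir(parents=True, exist_ok=True)
```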

Step 2.2: Create the Case#

To create the CESM case, instantiate the Case object as shown below. This will automatically set up the CESM case based on the provided inputs: The cesmroot argument specifies the path to your local CESM source directory. The caseroot argument defines the directory where the case will be created. CrocoDash will handle all necessary namelist modifications and XML changes to align with the MOM6 domain objects generated earlier.

Remember to change/remove your project ID (with parameter project) and machine (with parameter machine, likely derecho) as needed.

from CrocoDash.case import Case

case = Case(
    cesmroot = cesmroot,
    caseroot = caseroot,
    inputdir = inputdir,
    ocn_grid = grid,
    ocn_vgrid = vgrid,
    ocn_topo = topo,
    project = 'P93300012', # Switch this
    override = True,
    machine = "derecho", # And this
    compset = "1850_DATM%JRA_SLND_CICE_MOM6_SROF_SGLC_SWAV", 
)

Section 3: Prepare ocean forcing data#

We need to cut out our ocean forcing data. The package expects an initial condition and one time-dependent segment per non-land boundary. The naming convention is "east_unprocessed" for segments and "ic_unprocessed" for the initial condition.

In this notebook, we are forcing with the Copernicus Marine “Glorys” reanalysis dataset. The CrocoDash function configure_forcings generates a bash script that downloads the correct boundary forcing files for your experiment. First, create an account with Copernicus Marine, then run copernicusmarine login to store your credentials on your machine. After that, you can run the get_glorys_data.sh bash script.

Step 3.1 Configure Initial Conditions and Forcings#

from CrocoDash.case import Case
case.configure_forcings(
    date_range = ["2020-01-01 00:00:00", "2020-01-09 00:00:00"],
)
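For orientation, the date_range above covers nine daily snapshots. The sketch below only counts the days in the range; the actual file layout and download frequency are decided by CrocoDash and the Glorys dataset, so treat this purely as a sizing estimate:

```python
from datetime import datetime, timedelta

# Enumerate the calendar days covered by the date_range above.
start = datetime.fromisoformat("2020-01-01 00:00:00")
end = datetime.fromisoformat("2020-01-09 00:00:00")

days = []
d = start
while d <= end:
    days.append(d.strftime("%Y-%m-%d"))
    d += timedelta(days=1)

print(len(days))  # 9
```

Longer date ranges mean proportionally more download time and disk space, which matters when planning where to run the download script.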

Step 3.2 Run get_glorys_data.sh#

Follow the instructions printed by the configure_forcings method above to navigate to your glorys folder. Once you’re in your /glorys folder, run chmod +x get_glorys_data.sh to make the bash script executable. Also activate your conda environment with:

module load conda && conda activate CrocoDash # or the name of the environment in which you installed CrocoDash

if you haven’t done so. Finally, run ./get_glorys_data.sh.

Note: You’ll have to enter your Copernicus Marine login multiple times. Do not exit the script until it has finished.

TODO: use the copernicusmarine Python API within CrocoDash instead of directing users to run it via the CLI. Also, on a derecho login node, both the CLI and the API fail to run due to the computational demand. We also need to address that.

Step 3.3: Process forcing data#

In this final step, we call the process_forcings method of CrocoDash to cut out and interpolate the initial condition as well as all boundaries. CrocoDash also updates MOM6 runtime parameters and CESM xml variables accordingly.

case.process_forcings()

Optional: Use restart files as initial condition for your run#

In case you want to start the model from a restart file instead of the generated initial condition, follow the steps below. Note that you must have completed a run beforehand for the restart files to exist.

  1. Locate your restart file: restart files are usually in your previous case’s /archive/rest/<year> folder with a .r infix, e.g., cice.test.cice.r.1994-01-01-00000.nc. If you do not know your /archive folder location, run ./xmlquery DOUT_S_ROOT in your (previous) case folder, which will return a path similar to DOUT_S_ROOT: <PATH TO ARCHIVE>.

  2. Use the cp command to copy the file to your current case /run directory, e.g., cp cice.test.cice.r.1994-01-01-00000.nc <your_run_dir>.

  3. Open user_nl_cice in your case directory again and change the ice_ic variable from "UNSET" to your file name, e.g., ice_ic = "cice.test.cice.r.1994-01-01-00000.nc"

Your file will now be used automatically as the ice initial condition for your next run. Note that .h and .h1 files (i.e. history files) currently cannot be used as initial conditions for CICE.

Section 5: Build and run the case#

After completing the previous steps, you are ready to build and run your CESM case. Begin by navigating to the case root directory specified during the case creation. Before proceeding, review the user_nl_mom file located in the case directory. This file contains MOM6 parameter settings that were automatically generated by CrocoDash. Carefully examine these parameters and make any necessary adjustments to fine-tune the model for your specific requirements. While CrocoDash aims to provide a solid starting point, further tuning and adjustments are typically necessary to improve the model for your use case.

Once you have reviewed and modified the parameters as needed, you can build and execute the case using the following commands:

./case.build
./case.submit

Optional: Write full grid#

The code below writes out the full MOM6 grid in case you need it!

import xarray as xr
from CrocoDash.grid import Grid

# List of explicitly defined grid metric attributes
varnames = [
    "tlon", "tlat", "ulon", "ulat", "vlon", "vlat", "qlon", "qlat",
    "dxt", "dyt", "dxCv", "dyCu", "dxCu", "dyCv", "angle", "tarea"
]

data_vars = {name: getattr(grid, name) for name in varnames}

# Create Dataset
ds = xr.Dataset(data_vars)
ds.attrs["name"] = getattr(grid, "_name", "unnamed_grid")

# Write to NetCDF
ds.to_netcdf("antarctica_fullvar_grid.nc") # Change as needed!
# Check your variables!
for name in varnames:
    var = getattr(grid, name)
    print(f"{name}: shape={var.shape}, dims={getattr(var, 'dims', 'unknown')}")