Iterative OBC generation (for long CESM-MOM6 runs)

Generating OBC datasets can often be done in one shot, but for longer and larger cases (such as running the Northwest Atlantic for a full year) the generation needs to be broken into pieces. Here is an example; it only changes the configure-forcings and process-forcings steps!

Step 1: Trigger Large Data Workflow

This is done by setting the too_much_data argument to True in configure_forcings.

# `case` is the CrocoDash Case object created earlier in the setup workflow
case.configure_forcings(
    date_range=["2020-01-01 00:00:00", "2020-01-09 00:00:00"],
    too_much_data=True,
)

Step 2: Run the iterative OBC processor

In a terminal session, locate the large_data_workflow folder, which is created in the case input directory under the forcing folder (by default "glorys/large_data_workflow"). Then execute driver.py to generate the boundary conditions. The driver reads config.json and generates the OBCs in piecewise segments before merging them into the final files. Modify the code as you see fit! One way to launch the driver is sketched below.
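
A minimal sketch of launching the driver from Python rather than a shell, assuming the default folder name; the case input path here is hypothetical, so substitute your own:

# Hypothetical sketch: run the iterative OBC driver. The path is illustrative.
import subprocess
from pathlib import Path

workflow_dir = Path("~/my_case_inputs/glorys/large_data_workflow").expanduser()
subprocess.run(["python", "driver.py"], cwd=workflow_dir, check=True)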

In particular, consider adjusting which function config.json uses to download the data. On Derecho? Use the RDA reader. On a local computer? Use the Python GLORYS API. You can change the function by editing the corresponding line in config.json, as in the sketch below.
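
As a rough illustration, the swap could also be scripted. The "function" key and the function name below are assumptions, not the documented schema; inspect the config.json generated in your large_data_workflow folder for the real keys and available options:

# Hypothetical sketch of pointing config.json at a different download function.
# Key and function names are assumptions -- check your generated config.json.
import json
from pathlib import Path

config_path = Path("glorys/large_data_workflow/config.json")
config = json.loads(config_path.read_text())
config["function"] = "get_glorys_data_from_rda"  # e.g., the RDA reader on Derecho
config_path.write_text(json.dumps(config, indent=2))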

Step 3: Process forcing data

In this final step, we call the process_forcings method of CrocoDash to interpolate the initial condition as well as all boundaries. CrocoDash also updates the MOM6 runtime parameters and CESM XML variables accordingly. Because the large data workflow has already generated the OBCs, this step automatically skips them.

# OBC segments are skipped here; the large data workflow already produced them
case.process_forcings()