CrocoDash.extract_forcings.case_setup package#
Submodules#
CrocoDash.extract_forcings.case_setup.driver module#
CrocoDash Forcing Extraction Driver
This module orchestrates the forcing extraction workflow for a CrocoDash case. It coordinates multiple forcing data sources (tides, runoff, BGC, etc.) and processes them into MOM6-compatible file formats.
The script can be run from the command line with various component flags to control which forcings are processed. It loads configuration from config.json and coordinates all extraction, regridding, and formatting operations.
- Typical usage:
  python driver.py --all                # Process all configured components
  python driver.py --tides --bgcic      # Process only tides and BGC initial conditions
  python driver.py --all --skip runoff  # Process all except runoff
  python driver.py --ic --no-get        # Process IC but skip data download step
- CrocoDash.extract_forcings.case_setup.driver.parse_args()#
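The module does not document the exact flag set beyond the usage examples above. A minimal sketch of how such component flags might be parsed with argparse (the flag names mirror the usage examples; the component list here is an assumption, not the module's definitive set):

```python
import argparse

def parse_args(argv=None):
    """Sketch: parse component flags matching the usage examples above."""
    parser = argparse.ArgumentParser(description="CrocoDash forcing extraction driver")
    parser.add_argument("--all", action="store_true", help="process every configured component")
    parser.add_argument("--skip", nargs="*", default=[], help="component names to disable")
    parser.add_argument("--no-get", action="store_true", help="skip the raw-data download step")
    # one on/off flag per forcing component (assumed list)
    for name in ("tides", "runoff", "ic", "bgcic", "bgcironforcing",
                 "bgcrivernutrients", "chl"):
        parser.add_argument(f"--{name}", action="store_true", help=f"process {name}")
    return parser.parse_args(argv)

args = parse_args(["--tides", "--bgcic", "--skip", "runoff"])
```

Passing `argv=None` makes argparse read `sys.argv`, so the same function works both from the command line and in tests.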
- CrocoDash.extract_forcings.case_setup.driver.process_bgcic()#
Extract and copy BGC initial conditions from CESM MARBL inputdata.
- CrocoDash.extract_forcings.case_setup.driver.process_bgcironforcing()#
- CrocoDash.extract_forcings.case_setup.driver.process_bgcrivernutrients()#
Process river nutrient inputs for BGC.
- CrocoDash.extract_forcings.case_setup.driver.process_chl()#
Process satellite-derived chlorophyll data.
- CrocoDash.extract_forcings.case_setup.driver.process_conditions(get_dataset_piecewise=True, regrid_dataset_piecewise=True, merge_piecewise_dataset=True, run_initial_condition=True, run_boundary_conditions=True)#
Process initial and/or boundary conditions through the three-step pipeline.
This function orchestrates the data extraction workflow:

1. get_dataset_piecewise: Download/retrieve raw data from source datasets
2. regrid_dataset_piecewise: Regrid data to your custom regional grid
3. merge_piecewise_dataset: Merge regridded data into final forcing files
- Parameters:
get_dataset_piecewise – Whether to download raw data (can skip if already cached)
regrid_dataset_piecewise – Whether to regrid data to regional grid
merge_piecewise_dataset – Whether to merge data into final files
run_initial_condition – Whether to process initial conditions (t=0)
run_boundary_conditions – Whether to process boundary conditions (open boundaries)
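The flags above toggle individual stages of the pipeline. A simplified sketch of the control flow (the step bodies are placeholders standing in for the real download, regrid, and merge calls):

```python
def process_conditions(get_dataset_piecewise=True,
                       regrid_dataset_piecewise=True,
                       merge_piecewise_dataset=True,
                       run_initial_condition=True,
                       run_boundary_conditions=True):
    """Sketch: run the three-step pipeline for the selected condition types."""
    targets = []
    if run_initial_condition:
        targets.append("initial")    # initial conditions at t=0
    if run_boundary_conditions:
        targets.append("boundary")   # open-boundary conditions
    log = []
    for target in targets:
        if get_dataset_piecewise:    # 1. download/retrieve raw source data
            log.append(f"get:{target}")
        if regrid_dataset_piecewise: # 2. regrid onto the regional grid
            log.append(f"regrid:{target}")
        if merge_piecewise_dataset:  # 3. merge pieces into final forcing files
            log.append(f"merge:{target}")
    return log

# e.g. data already cached, boundary conditions not needed:
steps = process_conditions(get_dataset_piecewise=False,
                           run_boundary_conditions=False)
```

Because each stage is independently skippable, a cached download can be reused by disabling only the first step, as the parameter description above notes.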
- CrocoDash.extract_forcings.case_setup.driver.process_runoff()#
Generate runoff mapping files and interpolation weights.
- CrocoDash.extract_forcings.case_setup.driver.process_tides()#
Extract and process tidal forcing from TPXO database.
- CrocoDash.extract_forcings.case_setup.driver.resolve_components(args, cfg)#
Resolve which components should run based on CLI flags and config availability.
This function takes the parsed command-line arguments and the configuration, then determines which forcing components should actually execute. It handles:

- --all: Enable all components that exist in config
- --skip: Disable specific components by name (case-insensitive)
- Individual flags: Enable only specified components
- Config validation: Skip components requested but not in config
The function modifies args in-place, setting each component flag to True/False based on the resolution logic.
- Parameters:
args – Parsed command-line arguments (from parse_args())
cfg – Config object with .config dict of available components
- Returns:
Modified args object with all component flags resolved
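The resolution logic described above can be sketched as follows (the component list and the shape of `cfg` are assumptions; the real function reads `cfg.config`):

```python
import argparse

def resolve_components(args, cfg):
    """Sketch: set each component flag on args from CLI flags and config."""
    available = {name.lower() for name in cfg}                    # components in config
    skipped = {name.lower() for name in getattr(args, "skip", [])}
    for name in ("tides", "runoff", "ic", "bgcic", "chl"):
        requested = getattr(args, "all", False) or getattr(args, name, False)
        # run only if requested, present in config, and not skipped
        setattr(args, name, requested and name in available and name not in skipped)
    return args

ns = argparse.Namespace(all=True, skip=["runoff"],
                        tides=False, runoff=False, ic=False, bgcic=False, chl=False)
resolved = resolve_components(ns, {"tides": {}, "runoff": {}, "ic": {}})
```

Note how `bgcic` stays disabled even under `--all`, because it is absent from the config: requesting a component is necessary but not sufficient.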
- CrocoDash.extract_forcings.case_setup.driver.run_from_cli(args, cfg)#
Execute the forcing extraction workflow based on CLI arguments.
This is the main entry point that coordinates the entire workflow:

1. Resolves which components to run
2. Executes the appropriate process_* functions
3. Maintains component dependencies (e.g., runoff before bgcrivernutrients)
- Parameters:
args – Parsed and resolved command-line arguments
cfg – Config object from utils.Config(CONFIG_PATH)
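A sketch of the dependency-ordered dispatch, with a simplified signature (a `processors` dict stands in for the module's cfg-driven lookup of `process_*` functions; the ordering shown is an assumption beyond the documented runoff-before-bgcrivernutrients constraint):

```python
import argparse

def run_from_cli(args, processors):
    """Sketch: execute enabled process_* functions in dependency order."""
    # fixed order keeps dependencies satisfied (runoff before bgcrivernutrients)
    order = ["tides", "runoff", "bgcrivernutrients", "bgcic", "chl"]
    ran = []
    for name in order:
        if getattr(args, name, False) and name in processors:
            processors[name]()   # dispatch to the component's processor
            ran.append(name)
    return ran

calls = []
processors = {name: (lambda n=name: calls.append(n))
              for name in ("runoff", "bgcrivernutrients")}
ns = argparse.Namespace(runoff=True, bgcrivernutrients=True)
ran = run_from_cli(ns, processors)
```

Dispatching over a fixed order list, rather than iterating the user's flags, is what guarantees runoff mapping files exist before the river-nutrient step consumes them.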
- CrocoDash.extract_forcings.case_setup.driver.should_run(name, args, cfg)#
- CrocoDash.extract_forcings.case_setup.driver.test_driver()#
Test that all the imports work.