CrocoDash package#
Subpackages#
- CrocoDash.extract_forcings package
- CrocoDash.raw_data_access package
- Subpackages
- CrocoDash.raw_data_access.datasets package
- Submodules
- CrocoDash.raw_data_access.datasets.empty_products module
- CrocoDash.raw_data_access.datasets.gebco module
- CrocoDash.raw_data_access.datasets.glofas module
- CrocoDash.raw_data_access.datasets.glorys module
- CrocoDash.raw_data_access.datasets.mom6_output module
- CrocoDash.raw_data_access.datasets.seawifs module
- CrocoDash.raw_data_access.datasets.utils module
- Module contents
- Submodules
- CrocoDash.raw_data_access.base module
- CrocoDash.raw_data_access.registry module
- ProductRegistry
  - ProductRegistry.call()
  - ProductRegistry.get_access_function()
  - ProductRegistry.get_product()
  - ProductRegistry.list_access_methods()
  - ProductRegistry.list_products()
  - ProductRegistry.load()
  - ProductRegistry.loaded
  - ProductRegistry.product_exists()
  - ProductRegistry.product_is_of_type()
  - ProductRegistry.products
  - ProductRegistry.register()
  - ProductRegistry.validate_function()
- CrocoDash.raw_data_access.utils module
- Module contents
Submodules#
CrocoDash.case module#
- class CrocoDash.case.Case(*, cesmroot: str | Path, caseroot: str | Path, inputdir: str | Path, compset: str, ocn_grid: Grid, ocn_topo: Topo, ocn_vgrid: VGrid, atm_grid_name: str = 'TL319', rof_grid_name: str | None = None, ninst: int = 1, machine: str | None = None, project: str | None = None, override: bool = False, ntasks_ocn: int | None = None, job_queue: str | None = None, job_wallclock_time: str | None = None)#
Bases: object
This class represents a regional MOM6 case within the CESM framework. It is similar to the Experiment class in the regional_mom6 package, adapted to run under CESM.
- property bgc_in_compset#
Check if BGC is included in the compset.
- property cice_in_compset#
Check if CICE is included in the compset.
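The compset-inspection properties above can be pictured as simple substring checks on the CESM compset long name. The sketch below is a hypothetical simplification: the component tokens ("CICE", "MARBL") and the example compset string are illustrative assumptions, not taken from the CrocoDash source.

```python
# Hypothetical sketch of the compset-inspection properties.
# The token names ("CICE", "MARBL") are illustrative assumptions;
# CrocoDash's actual checks may differ.

class CompsetFlags:
    def __init__(self, compset: str):
        self.compset = compset

    @property
    def cice_in_compset(self) -> bool:
        # True if the CICE sea-ice component appears in the compset long name
        return "CICE" in self.compset

    @property
    def bgc_in_compset(self) -> bool:
        # True if an ocean BGC option (e.g. MARBL) appears in the compset
        return "MARBL" in self.compset


flags = CompsetFlags("1850_DATM%JRA_SLND_CICE_MOM6%MARBL_SROF_SGLC_SWAV")
print(flags.cice_in_compset)  # True
print(flags.bgc_in_compset)   # True
```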
- configure_forcings(date_range: list[str], boundaries: list[str] = ['south', 'north', 'west', 'east'], product_name: str = 'GLORYS', function_name: str = 'get_glorys_data_script_for_cli', **kwargs)#
Configure the boundary conditions and tides for the MOM6 case.
Sets up initial and boundary condition forcing data for MOM6 using a specified product and download function. Optionally configures tidal constituents if specified. Supports a large data workflow mode that defers data download and processing to an external script.
- Parameters:
date_range (list of str) – Start and end dates for the forcing data, formatted as strings. Must contain exactly two elements.
boundaries (list of str, optional) – List of open boundaries to process (e.g., [“south”, “north”]). Default is [“south”, “north”, “west”, “east”].
product_name (str, optional) – Name of the forcing data product to use. Default is “GLORYS”.
function_name (str, optional) – Name of the function to call for downloading the forcing data. Default is “get_glorys_data_script_for_cli”.
product_info (str | Path | dict, optional) – Mapping from MOM6 variable names to the product’s variable names (e.g., xh -> lat, time -> valid_time, salinity -> salt), plus any other information required for parsing the product. If None, the information is assumed to be in raw_data_access/config under {product_name}.json; any other value is copied there.
kwargs – Additional configuration options; see the accepted arguments in the configuration classes.
- Raises:
TypeError – If inputs such as date_range, boundaries, or tidal_constituents are not lists of strings.
ValueError – If date_range does not have exactly two elements, or if tidal arguments are inconsistently specified. Also raised if an invalid product or function is provided.
AssertionError – If the selected data product is not categorized as a forcing product.
Notes
Downloads forcing data (or creates a script) for each boundary and the initial condition unless the large data workflow is used.
In large data workflow mode, creates a folder structure and config.json file for later manual processing.
This method must be called before process_forcings().
See also
process_forcings – Executes the actual boundary, initial condition, and tide setup based on the configuration.
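The parameter contract above implies validation along the lines of the sketch below. This is a simplified re-implementation of the documented checks (list-of-strings tests and the exactly-two-elements rule), not CrocoDash's actual code:

```python
def check_forcing_args(date_range, boundaries=("south", "north", "west", "east")):
    """Sketch of the documented argument checks for configure_forcings().

    Raises TypeError/ValueError per the contract described above;
    the real implementation performs additional validation.
    """
    if not isinstance(date_range, list) or not all(
        isinstance(d, str) for d in date_range
    ):
        raise TypeError("date_range must be a list of strings")
    if len(date_range) != 2:
        raise ValueError("date_range must contain exactly two elements")
    if not all(isinstance(b, str) for b in boundaries):
        raise TypeError("boundaries must be a list of strings")


# A valid call passes silently:
check_forcing_args(["2000-01-01", "2000-02-01"], ["south", "north"])
```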
- configure_initial_and_boundary_conditions(date_range: list[str], boundaries: list[str] = ['south', 'north', 'west', 'east'], product_name: str = 'GLORYS', function_name: str = 'get_glorys_data_script_for_cli')#
- property expt: experiment#
- find_MOM6_rectangular_orientation(input)#
Convert between MOM6 boundary and the specific segment number needed, or the inverse.
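The conversion above can be pictured as a small two-way lookup. The segment numbering in this sketch (south=1, north=2, west=3, east=4) is a hypothetical example only; the actual numbering follows regional_mom6 conventions and may differ:

```python
# Hypothetical two-way lookup between MOM6 boundary names and segment
# numbers. The specific numbering here is illustrative, not CrocoDash's.
_SEGMENTS = {"south": 1, "north": 2, "west": 3, "east": 4}
_INVERSE = {v: k for k, v in _SEGMENTS.items()}


def rectangular_orientation(value):
    """Convert a boundary name to its segment number, or the inverse."""
    if isinstance(value, str):
        return _SEGMENTS[value.lower()]
    return _INVERSE[value]


print(rectangular_orientation("south"))  # 1
print(rectangular_orientation(2))        # north
```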
- classmethod init_args_check(*, cime, caseroot: str | Path, inputdir: str | Path, ocn_grid: Grid, ocn_topo: Topo, ocn_vgrid: VGrid, compset_lname: str, atm_grid_name: str, rof_grid_name: str | None, ninst: int, machine: str | None, project: str | None, override: bool, ntasks_ocn: int | None = None, job_queue: str | None = None, job_wallclock_time: str | None = None)#
Perform sanity checks on the input arguments to ensure they are valid and consistent.
- property name: str#
- process_forcings(process_initial_condition=True, process_velocity_tracers=True, **kwargs)#
Process boundary conditions, initial conditions, and other forcings for a MOM6 case. This method is a wrapper around extract_forcings/case_setup/driver.py.
It configures a regional MOM6 case’s ocean-state boundaries and initial conditions using the data previously downloaded in configure_forcings(), which must be called beforehand.
- Parameters:
process_initial_condition (bool, optional) – Whether to process the initial condition file. Default is True.
process_velocity_tracers (bool, optional) – Whether to process velocity and tracer boundary conditions. Default is True. This will be overridden and set to False if the large data workflow in configure_forcings is enabled.
kwargs (bool, optional) – Flags controlling whether to process the other forcings, of the form process_{configurator.name}=False.
- Raises:
RuntimeError – If configure_forcings() was not called before this method.
FileNotFoundError – If required unprocessed files are missing in the expected directories.
Notes
This method uses variable name mappings specified in the forcing product configuration.
If the large data workflow has been enabled, velocity and tracer OBCs are not processed within this method and must be handled externally.
Applies forcing-related namelist and XML updates at the end of the method.
See also
configure_forcings – Must be called before this method to set up the environment.
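The required call order (configure_forcings() before process_forcings()) can be sketched as a minimal state check. This mirrors the documented RuntimeError contract only; it is not CrocoDash's actual implementation:

```python
class ForcingWorkflow:
    """Minimal sketch of the configure-then-process contract."""

    def __init__(self):
        self._configured = False

    def configure_forcings(self, date_range):
        # In CrocoDash this step also downloads forcing data or writes
        # a download script; here it only records that setup happened.
        self._configured = True

    def process_forcings(self):
        # Mirrors the documented behavior: RuntimeError if configuration
        # was skipped.
        if not self._configured:
            raise RuntimeError("configure_forcings() must be called first")
        return "processed"


wf = ForcingWorkflow()
wf.configure_forcings(["2000-01-01", "2000-02-01"])
print(wf.process_forcings())  # processed
```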
- property runoff_in_compset#
Check if runoff is included in the compset.
CrocoDash.forcing_configurations module#
CrocoDash.grid module#
CrocoDash.logging module#
This module (logging) contains logging functions that are used across the CrocoDash package.
- CrocoDash.logging.setup_logger(name)#
This function sets up a logger for the package. It attaches the logger’s output to stdout (if a handler does not already exist) and applies a consistent, readable format.
- Parameters:
name (str) – The name of the logger.
- Returns:
The logger
- Return type:
logging.Logger
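A logger built this way behaves roughly like the stdlib sketch below. Only the stdout handler and the no-duplicate-handler guard are taken from the description above; the exact format string and log level are assumptions:

```python
import logging
import sys


def setup_logger(name: str) -> logging.Logger:
    """Sketch of a stdout logger factory matching the description above.

    The format string and INFO level are illustrative assumptions.
    """
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid attaching duplicate handlers
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(
            logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
        )
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger


log = setup_logger("CrocoDash.demo")
log.info("logger ready")
```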