CESM Models

CCSM Research Tools: CSM1.4 User's Guide


The Climate System Model (CSM) is a fully-coupled, global climate model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate states.


1. Introduction

The Climate System Model (CSM) is a coupled climate model for simulating the Earth's climate system. Composed of four separate model components simultaneously simulating the Earth's atmosphere, ocean, land surface and sea ice, plus one central coupler component, the CSM allows researchers to conduct fundamental research into the Earth's past, present and future climate states.

To achieve the high execution rates necessary to take full advantage of this model's capabilities, the CSM is typically run on Cray-class supercomputers, attached to a high-capacity automated tape archive device, in a UNIX environment. A port of the complete CSM to a high-end workstation (SGI/Sun/HP, RS-6000) environment is expected in the near future.

Both high- and low-resolution versions of the CSM are supported. The high-resolution version is best suited for simulating near-past, present-day and future climate scenarios, while the low-resolution option is used for paleoclimate research and debugging runs.

CSM output data is available for a baseline control run (b003), the IPCC increasing-CO2 scenario (b006), an 1870s pre-industrial baseline (b018) and an 1870-1990 climate-of-the-20th-century run (b019).

2. The CSM component models

The Climate System Model consists of four dynamical geophysical models linked by a central flux coupler.

During the course of a CSM integration, each of the four component models integrates forward in time simultaneously, periodically stopping to exchange information with the coupler. The coupler (cpl) receives state variables from the component models, computes the interfacial fluxes from this information and returns the flux information to the component models. By brokering this sequence of communication interchanges, the cpl manages the overall time progression of the coupled model.

All of the component models exist in two forms: a "real" dynamical model and a data-cycling testing stub. The "real" models are the complete, fully interactive dynamical models, such as the CCM3 global atmospheric general circulation model. These dynamical models consume a large amount of memory and CPU time. The data-cycling versions of the components simply read existing datasets, which were written by the dynamical models, and pass this data to the flux coupler. These data-cycling components are very inexpensive to run and are used for both test and model spin-up runs.

The CSM1.4 Components

Name     Description                           Version/Documentation
csm1.4   -                                     The CSM1.4 download page
cpl      CSM1.4 Flux Coupler                   cpl4.0.5 documentation
atm      CSM1.4 Atmospheric model              ccm3.6.6 documentation
ocn      CSM1.4 Ocean model                    ncom_1.5.0.beta4 documentation
ice      CSM1.4 Sea-ice model                  csim2.2.9 documentation
lnd      CSM1.4 Land-surface model             ccm3.6.6 documentation
datm     CSM1.4 Data Atmospheric Component     datm4.0.1 documentation
docn     CSM1.4 Data Ocean Component           docn4.0.1 documentation
dice     CSM1.4 Data Sea-ice Component         dice4.0.1 documentation
dlnd     CSM1.4 Data Land-surface Component    dlnd4.0.1 documentation

3. The CSM run environment

The general CSM run environment has four primary units:

1. The scientist's workstation. From here, the scientist edits the CSM run scripts and any model source code that needs to be changed; once the integration has begun, the scientist also monitors the run from this point.

2. A large shared file system, visible from both the scientist's workstation and the main compute engine, which holds the scripts and model source code. The directory where the run scripts and any modified source code reside is referred to as $NQSDIR. The large body of unmodified model source code is also kept in a common portion of this file system.

3. A large compute engine, such as a Cray, on which the model itself is run. All components run simultaneously under a large directory referred to as $EXEDIR.

4. A large-volume tape storage device, on which the output data stream from all components is archived.

4. Preparing the model for execution

Directions for downloading and building CSM1.4 can be found on the CSM1.4 Download page.

5. CSM Conventions

The CSM design philosophy treats each of the component models as distinct modules, allowing different models to easily plug into the system. Accordingly, there are no specific rules for how any component model should be built. However, the basic CSM distribution follows some general conventions and uses some common environment variables to simplify matters. Following these conventions makes it much easier for others who are familiar with running the CSM to help you diagnose any problems that might be encountered.

5.1 Naming Conventions

Only one piece of information is required to identify a unique CSM run: a "CASE" name for the experiment. Each CSM run or experiment is given a name, referred to as its CASE, which is set as an environment variable in the main CSM script (e.g. setenv CASE test.01). Once specified, $CASE is used throughout the CSM environment to identify the experiment's run scripts and data products: it fixes the location of the run scripts, is attached to all of the output data products and identifies the location of the model while it is running.

The $CASE environment variable is used throughout the scripts to generate unique file and directory names. A CASE name is limited to eight characters and must be a valid Unix filename. In these examples, the CASE name is "test.01".
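As a sketch of how $CASE seeds the other names, the following uses POSIX sh for illustration (the CSM scripts themselves are c-shell, where the first line would be `setenv CASE test.01`); the derived names follow the conventions described in this guide:

```shell
# Illustrative only: the CSM scripts are c-shell (setenv CASE test.01);
# a POSIX-sh equivalent is shown here for clarity.
CASE=test.01                    # the experiment name, eight characters or less
NQS_SCRIPT=$CASE.nqs            # master NQS script name (see section 5.2)
NQSDIR=$HOME/csm/nqs/$CASE      # run-script directory derived from $CASE
echo "$NQS_SCRIPT"              # prints: test.01.nqs
```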

5.2 Script Conventions

Two levels of c-shell scripts are used to build and run the model: a master script, referred to as the NQS script, coordinates building and running the complete system, while the component setup scripts each build one CSM component model. The NQS script has three main tasks:

First, the NQS script sets the common build and run environment variables, which are automatically propagated to each of the component model setup scripts. These variables define such things as the machine architecture, the number of CPUs to run on, and common MSS retention periods and passwords. The complete list of common environment variables is found in section 6.

Second, each of the component models is built using the component setup scripts, which are described in the next section.

Finally, the NQS script executes all components simultaneously.
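Putting the three tasks together, a minimal skeleton of a master NQS script might look like the following. This is a sketch only: POSIX sh is shown for illustration (the real scripts are c-shell and use setenv), and the setup-script names are hypothetical.

```shell
# Task 1: set the common environment variables (propagated to setup scripts)
CASE=test.01
LOGNAME=${LOGNAME:-user}          # fallback for illustration only
NQSDIR=$HOME/csm/nqs/$CASE
EXEDIR=/ptmp/$LOGNAME/$CASE
export CASE NQSDIR EXEDIR

# Task 2: build each component with its setup script (names illustrative)
# for comp in cpl atm ocn ice lnd; do $NQSDIR/$comp.setup; done

# Task 3: launch all components simultaneously (illustrative)
# (cd $EXEDIR/cpl && ./cpl) & (cd $EXEDIR/atm && ./atm) & ... ; wait

echo "EXEDIR=$EXEDIR"
```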

By convention, the name of the script is the CASE name with a suffix of ".nqs". For example, if the CASE name were "test.01", the NQS script would be named "test.01.nqs", located on the shared file system in $NQSDIR, with the pathname $HOME/csm/nqs/test.01/test.01.nqs.

5.3 Script conventions: The Component Setup Scripts

Each CSM component (cpl, atm, ocn, ice, lnd) is built with an individual setup script, which builds and configures that component.

5.4 Output Conventions: log files

The printed output from each CSM component is saved in a "log file" in the respective component subdirectory under $EXEDIR. Each time the CSM is run, a single coordinated timestamp is attached to the filenames of all the output log files associated with that run. This common timestamp is generated by the NQS script and is of the form YYMMDD-hhmmss, where YYMMDD is the year, month and day, and hhmmss is the hour, minute and second that the run began (e.g. $EXEDIR/ocn/ocn.log.000626-082714).
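The coordinated timestamp can be produced with `date`; the following is a sketch of the mechanism, not necessarily the exact code in the NQS script:

```shell
# Generate one YYMMDD-hhmmss timestamp, then attach it to every log file
# so all components of a run share the same stamp.
LID=$(date +%y%m%d-%H%M%S)
CPL_LOG=cpl.log.$LID            # e.g. cpl.log.000626-082714
OCN_LOG=ocn.log.$LID            # same timestamp across all components
echo "$OCN_LOG"
```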

5.5 Output Conventions: MSS filenames

The binary data output from each CSM component is coordinated for each CASE and is specified via the MSS_ROOT environment variable in the main NQS script. The convention used by the default CSM distribution is /USERNAME/csm/$CASE/$model/… Write passwords and retention times are also coordinated at the NQS script level.

5.6 CSM Conventions: Summary

On the NCAR system, the CSM run scripts and any individual source code files modified for this experiment are kept on an NFS-mounted shared file system (winterpark) visible from both the CGD Suns and each of the Crays. This directory is called $NQSDIR. Typically, the $NQSDIR name is of the form ~username/csm/nqs/case.01/. The files in $NQSDIR define and archive the precise configuration of the case.

The base CSM source code repository is also located on the shared file system. For a typical run, the base model source code for each component is kept under the directory /fs/cgd/csm/models/…

The model executables are built and run on the various Crays under a /tmp directory called $EXEDIR (/tmp/username/case.01/…).

6. CSM Environment variables

The master NQS script coordinates the model components by setting a number of environment variables. These environment variables are used by each of the component build scripts to configure each component in a similar manner.

The CSM Environment Variables

Name       Example value            Description
$CASE      csm1.4a                  Name of the current experiment
$NQSDIR    $HOME/csm/nqs/$CASE      Permanent directory containing model run scripts and modified code
$EXEDIR    /ptmp/$LOGNAME/$CASE     Large temporary disk where the model is run

6.1 Directory Conventions: $NQSDIR

The $NQSDIR directory will contain the scripts which build and run the CSM. This directory should be on a file system which is both visible to the main computing platform and easily accessed from your workstation for ease of editing.

6.2 Directory Conventions: $EXEDIR

The model is executed in the $EXEDIR directory. This directory is usually located on the large /tmp file system of the high-speed compute engine. During the course of a run, the input and output data files and any printed output from stdout are stored in (or in a subdirectory below) $EXEDIR.

Each component "runs" in a separate directory named for that component. The executable and all input, output and stdout log files for that component reside in this directory.
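The layout described above can be sketched as follows; the mkdir loop is illustrative only (in practice the setup scripts create these directories), and the /tmp path stands in for the compute engine's $EXEDIR:

```shell
# One run subdirectory per component under $EXEDIR (sketch only; a /tmp
# demo path stands in for the real run directory).
EXEDIR=/tmp/${USER:-tester}/test.01
for comp in cpl atm ocn ice lnd; do
  mkdir -p $EXEDIR/$comp    # executable, inputs, outputs and logs live here
done
ls $EXEDIR
```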

A number of objects are required to build and run the CSM; these objects are grouped in directories.

6.3 Directory Conventions: $CODDIR

The basic repository of frozen CSM source code resides under the directory $CODDIR. At NCAR, this is /fs/cgd/csm/models; for remote sites, $CODDIR points to the directory in which the csim1.4.tar file is untarred. "Frozen" code should never be touched: if you wish to modify frozen code, copy the file to the appropriate src.* directory under $NQSDIR.
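The copy-before-modify convention might look like this in practice. Everything here is hypothetical: the file name radiation.F and the src.atm subdirectory are illustrative, and demo directories under /tmp stand in for the real $CODDIR and $NQSDIR:

```shell
# Demo stand-ins for /fs/cgd/csm/models and $HOME/csm/nqs/test.01.
CODDIR=/tmp/coddir-demo
NQSDIR=/tmp/nqsdir-demo
mkdir -p $CODDIR/atm $NQSDIR/src.atm
touch $CODDIR/atm/radiation.F                  # pretend frozen source file
cp $CODDIR/atm/radiation.F $NQSDIR/src.atm/    # edit the copy, never the frozen original
```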

7. The model resolution

At high resolution, the atmosphere and land model grids are approximately 2.8 degrees in latitude and longitude (T42 spectral truncation) with 18 vertical levels in the atmosphere, while the ocean and sea-ice grids are approximately 2 by 2 degrees. At low resolution, the atmosphere/land grids are roughly 3.75 by 3.75 degrees (T31), again with 18 levels, and the ocean/ice grids are roughly 3 by 3 degrees.

8. Monitoring a CSM run

A CSM run goes through three phases upon startup, and each phase has distinct properties which must be monitored.

The component build phase:

In the first phase, each CSM component is built in its respective subdirectory below $EXEDIR.

(Under construction 6/26/00)

9. Debugging a CSM run

(Under construction 6/26/00)
