This section briefly describes how to download the CCSM2.0 source code and input datasets, how to configure the model, how to build and run the model, and what output is created. The process can be summarized as follows: obtain the source code and input data, configure a case, build the component models, run the coupled system, and examine the output.
The effort required to get CCSM running depends largely on how similar your site is to the NCAR environment. This quick start guide is most applicable to users running the model at NCAR. Running the model at other sites may require a number of small changes to the CCSM scripts and build procedure.
Two target architectures are supported: IBM and SGI.
CCSM2.0 is available via the web from: www.cesm.ucar.edu/models/ccsm2.0
The CCSM2.0 distribution consists of the following files:
To uncompress and untar the file ccsm2.0.tar.gz, use the Unix gunzip and tar commands:

gunzip -c ccsm2.0.tar.gz | tar -xf -

To untar the data files:
tar -xf ccsm2.0.inputdata.T42_gx1v3.tar
By default, the CCSM2.0 distribution is configured to run on the NCAR IBM SP blackforest, under the directory /ptmp/$LOGNAME/$CASE.
$HOME
  |
  |  [Directories and files below this point are created by
  |   untarring the ccsm2.0 files]
  |
  +-- ccsm2.0/  ($CSMROOT)
        |
        +-- scripts/   [Build/Run Scripts]
        |     |
        |     +-- test.a1/  ($SCRIPTS)
        |     |     |
        |     |     +-- test.a1.run       [Initial run script]
        |     |     +-- cpl.setup.csh     [cpl setup script]
        |     |     +-- atm.setup.csh     [atm setup script]
        |     |     +-- ocn.setup.csh     [ocn setup script]
        |     |     +-- ice.setup.csh     [ice setup script]
        |     |     +-- lnd.setup.csh     [lnd setup script]
        |     |     +-- datm.setup.csh    [data atm setup script]
        |     |     +-- latm.setup.csh    [alternative data atm setup script]
        |     |     +-- docn.setup.csh    [data ocn setup script]
        |     |     +-- dice.setup.csh    [data ice setup script]
        |     |     +-- dlnd.setup.csh    [data lnd setup script]
        |     |
        |     +-- gui_run/
        |     +-- tools/  ($TOOLS)
        |
        +-- doc/       [Documentation]
        |
        +-- models/    [Model Code]
              |
              +-- cpl/   cpl5/
              +-- atm/   cam/    datm5/   latm5/
              +-- ocn/   pop/    docn5/
              +-- ice/   csim4/  dice5/
              +-- lnd/   clm2/   dlnd5/
              +-- bld/
The directory tree for the tarfile containing the CCSM2.0 input data looks like
inputdata/   [Input Data]
  |
  +-- cpl/   clm2/
  +-- atm/   cam2/   datm5/   latm5/
  +-- ocn/   pop/    docn5/
  +-- ice/   csim4/  datm5/
  +-- lnd/   clm2/   datm5/
The initial and boundary datasets for each component are found in their respective subdirectories. The input data should be untarred in a relatively large disk area; it can take up several gigabytes of disk space, depending on which input datasets are downloaded.
Two levels of c-shell scripts are used to build and run the model. A single run script (e.g. test.a1.run) coordinates building and running the complete system, while the component setup scripts (e.g. atm.setup.csh or ocn.setup.csh) are responsible for building each CCSM component model.
A CCSM run is controlled by a master c-shell script, referred to as the main run script. By convention, the name of the script is the case name ($CASE) with a suffix of ".run". For example, if the case name was "test.a1", the run script would be named "test.a1.run". In the CCSM2.0 distribution, the main run script is scripts/test.a1/test.a1.run.
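The naming convention above can be sketched in shell; the case name used here is the one from the distribution:

```shell
# The main run script name is the case name plus a ".run" suffix.
CASE=test.a1
RUNSCRIPT=${CASE}.run
echo $RUNSCRIPT
```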
The run script has three main tasks:
The common build and run environment variables are set in the run script and are automatically propagated to each of the component model setup scripts. These variables define such things as the machine architecture, the number of CPUs to run on and common file-naming conventions.
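The propagation relies on ordinary environment inheritance: variables exported by the parent run script are visible in every child setup script it invokes. A minimal sh sketch of that mechanism (the variable names here are hypothetical; the actual run scripts are csh and use setenv):

```shell
# Parent (the run script) exports common settings...
export CASE=newcase
export NTHRDS=16
# ...and any child process (a component setup script) inherits them.
sh -c 'echo "child sees CASE=$CASE NTHRDS=$NTHRDS"'
```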
Once the master run script defines the common environment, each of the component models are built using the component setup scripts. For each component, the setup script will:
Finally, once all the component setup scripts have successfully been run, the run script executes all components simultaneously.
Two model resolutions are supported. At the higher resolution, the atmosphere and land model grids are approximately 2.8 degrees latitude by longitude (T42 spectral truncation) with 26 vertical levels in the atmosphere, and the ocean and ice grids are approximately 1 by 1 degrees. At the lower resolution, the atmosphere/land grids are roughly 3.75 by 3.75 degrees (T31) with 26 levels, and the ocean/ice grids are roughly 3 by 3 degrees.
To get the CCSM up and running, a new CASE should be created and some script variables need to be modified.
cd /home/$LOGNAME/ccsm2.0/scripts
mkdir newcase
cp test.a1/* newcase/
cd /home/$LOGNAME/ccsm2.0/scripts/newcase
mv test.a1.run newcase.run
mv test.a1.har newcase.har

edit newcase.run:
  change "job_name"        to newcase
  change "setenv CASE"     to "newcase"
  change "setenv CASESTR"  to a useful string
  change "setenv CSMROOT"  to /home/$LOGNAME/ccsm2.0
  change "setenv CSMDATA"  to the local path to the inputdata directory
  change "setenv EXEROOT"  to the local directory where the model will run
  change "setenv ARCROOT"  to the local directory for archiving model output
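After editing, the relevant block of newcase.run might look like the following csh excerpt. All paths below are placeholders for your local setup, not values from the distribution:

```shell
# Excerpt of newcase.run after editing (csh syntax; paths are examples)
setenv CASE     newcase
setenv CASESTR  "newcase: first local CCSM2.0 test"
setenv CSMROOT  /home/$LOGNAME/ccsm2.0
setenv CSMDATA  /data/inputdata
setenv EXEROOT  /ptmp/$LOGNAME/newcase
setenv ARCROOT  /archive/$LOGNAME/newcase
```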
Operationally, a new CCSM case is started as either a startup, hybrid or branch run, depending on the science requirements. Wall-clock limits in the batch queues restrict all runs to a finite length, usually 1-3 model years. At specified intervals during the run (usually annually), restart and initial files will be created by all component models in a coordinated fashion.
After the first startup, hybrid or branch run successfully completes, the run is extended forward in time by resubmitting the run script to the batch queues as a continue run. This is done simply by changing the RUNTYPE setting to "continue" in the $SCRIPTS/case.run file and resubmitting the job. The continuation run reads in the restart files created by the previous run and steps forward in time. The continuation run is then resubmitted as many times as necessary to extend the case to the desired length. Scientific integrity requires that continuation runs produce exactly the same answer as if the model had been run continuously without stopping.
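A continuation is prepared with a one-line edit of the run script. The sketch below assumes the setting is spelled "setenv RUNTYPE" as in the distributed scripts, and demonstrates the edit on a throwaway copy rather than a real run script:

```shell
# Create a stand-in run script, then switch RUNTYPE to "continue",
# exactly the edit you would make by hand in $SCRIPTS/case.run.
cat > case.run <<'EOF'
setenv RUNTYPE   startup
EOF
sed 's/^setenv RUNTYPE .*/setenv RUNTYPE   continue/' case.run > tmp.run
mv tmp.run case.run
grep '^setenv RUNTYPE' case.run
```

The edited script is then resubmitted to the batch queue (e.g. with llsubmit under LoadLeveler on the IBM SP, or qsub under NQS on the SGI).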
The CCSM must be viewed as a collection of distinct models optimized for very high-speed, parallel multi-processor computing. As a result, each component produces its own raw output data stream, which is not organized in the most user-friendly manner.
The printed output from each CCSM component is saved in a "log file" in the respective component subdirectories under $EXEROOT. Each time the CCSM is run, a single coordinated timestamp is incorporated in the filenames of all the output log files associated with that run. This common timestamp is generated by the run script and is of the form YYMMDD-hhmmss, where YYMMDD are the year, month, day and hhmmss are the hour, minute and second that the run began (e.g. $EXEROOT/ocn/ocn.log.000626-082714).
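A timestamp of that form can be produced with date(1); this is a sketch of how such an identifier might be generated, not the actual run-script code:

```shell
# Build a YYMMDD-hhmmss identifier and a matching log-file name
LID=`date '+%y%m%d-%H%M%S'`
echo ocn.log.$LID
```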
The binary output data are written from each CCSM component independently.
By default, CCSM2.0 writes monthly averaged history files for all components in netCDF format. CCSM2.0 also writes out binary restart files from all components at regular intervals. The total volume of model output can vary greatly depending upon the output frequencies and options selected.
The raw history data can be analyzed directly, but it does not lend itself well to time-series analysis. For example, the atmosphere writes all the requested variables into one large file at each requested output period. While this allows for very fast model execution, it makes it impossible to analyze the time-series of an individual variable without accessing the entire data volume. Thus, the raw data from major CCSM integrations is usually postprocessed into user-friendly configurations, such as single files containing long time-series of each output field, and made available to the community.