CCSM1.4 datm4.0 User's Guide
Authored by: Brian Kauffman, kauff@ucar.edu
The Climatological Data Atmosphere Model (datm) functions as the atmosphere component in a CSM configuration. Recall that a configuration consists of several independent component models (e.g. atmosphere, land, ocean, sea ice), each connected to a flux coupler. The datm component interacts with the flux coupler just as any atmosphere model would, but it is not an active model: it reads atmosphere data from an input data file and sends it to the coupler, ignoring any forcing data received from the coupler. Typically the input data file contains mean daily data generated by an active CSM atmosphere model (it is unlikely that real-world observations exist for all the required fields). Such a "dummy" atmosphere model is useful for ocean + sea-ice spinup runs.
Important Note: When assembling a CSM configuration, carefully consider the limitations and requirements of all components and be sure that the complete set of component models will interact in a meaningful way. In particular, consider whether the data provided by this model is adequate for your application.
The model cycles through netCDF data files containing all the data the coupler expects from an atmosphere model. Each file contains one month of data: daily mean fields for every day of that month. This input data must have the same domain (resolution and coordinate arrays) as the datm model. The model can cycle through a multi-year sequence of data. Suitable data files exist on NCAR's Mass Storage System (MSS). These files can be replaced by other data files in the same format. Because netCDF files are self-describing, one can query a file itself for specifics about its format.
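To make the file sequence concrete, the Python sketch below lists the monthly file names for a multi-year sequence, assuming files are named "yyyy-mm.nc" (four-digit year, two-digit month) as described for the input data directory; the helper name month_files is made up for illustration:

```python
# Sketch: enumerate the monthly input file names for a multi-year
# data sequence, assuming the "yyyy-mm.nc" naming convention
# (four-digit year, two-digit month).  month_files is hypothetical.
def month_files(year0, nyears):
    """Return file names for nyears of monthly data starting at year0."""
    return ["%04d-%02d.nc" % (year, month)
            for year in range(year0, year0 + nyears)
            for month in range(1, 13)]

files = month_files(5, 3)  # three years of data starting at data year 5
# 36 files: "0005-01.nc", "0005-02.nc", ..., "0007-12.nc"
```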
On startup, the model reads in domain data from a netCDF file. Data exchanged with the coupler will be on the model domain. This file contains x & y coordinate arrays. The model uses a rectilinear latitude/longitude grid only, with 1d coordinate arrays x(i) & y(j). Because the input atmosphere data (above) is on the same domain as the datm model, and because the input atmosphere data contains domain information, any input atmosphere data file can also serve as a domain data file. In fact, the first file in the input data file sequence is used as the domain data file.
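As a small illustration of the rectilinear convention, the full 2d set of grid points is implied by the 1d coordinate arrays alone; the coordinate values below are made up:

```python
# Sketch: a rectilinear grid is fully described by 1d coordinate
# arrays x(i) and y(j); grid point (i,j) sits at (x[i], y[j]).
# The coordinate values here are illustrative only.
x = [0.0, 90.0, 180.0, 270.0]  # longitudes, x(i)
y = [-45.0, 0.0, 45.0]         # latitudes,  y(j)

grid = [[(lon, lat) for lon in x] for lat in y]
# grid[j][i] is the (longitude, latitude) of grid point (i, j)
```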
Input parameter namelist
On startup, the model reads an input namelist parameter file from stdin. The model has very few input namelist parameters. Typically the only input parameters that are required are those that specify how often the model will communicate with the coupler, and those that specify the sequence of input atmosphere data files. See the section on input namelist variables for a complete list and description of namelist parameters.
Data received from the flux coupler
The model receives the following fields from the flux coupler (via message passing):
- zonal surface stress (N/m²)
- meridional surface stress (N/m²)
- latent heat (W/m²)
- sensible heat (W/m²)
- upward longwave heat (W/m²)
- evaporation (kg/s/m²)
- albedo: visible, direct
- albedo: near-infrared, direct
- albedo: visible, diffuse
- albedo: near-infrared, diffuse
- surface temperature (K)
- snow height (m)
- ice fraction
- ocean fraction
- land fraction
Atmosphere/surface diagnostic quantities
- 2 meter reference air temperature (K)
In default mode, none of these fields are used by this model. These fields are received because the coupler/atmosphere interface specification requires that this set of fields be received. The coupler has no way of knowing whether the components it is connected to are active models or data models.
The model does not create history files. The only data associated with this model is the data that is already contained in the input atmosphere data file.
The model does not need or create restart files.
The model generates some diagnostic messages which are written to stdout. This output consists mostly of brief messages that indicate how the simulation is progressing and whether any error conditions have been detected. Stdout also contains a record of the values of all model input parameters.
Data sent to the flux coupler
The model sends the following fields to the flux coupler (via message passing):
Atmosphere model states
- layer height (m)
- u: zonal velocity (m/s)
- v: meridional velocity (m/s)
- potential temperature (K)
- specific humidity (kg/kg)
- pressure (Pa)
- temperature (K)
- longwave, downward (W/m²)
- precipitation: liquid, convective (kg/s/m²)
- precipitation: liquid, large-scale (kg/s/m²)
- precipitation: frozen, convective (kg/s/m²)
- precipitation: frozen, large-scale (kg/s/m²)
- net shortwave radiation (W/m²)
- shortwave radiation: downward, visible, direct (W/m²)
- shortwave radiation: downward, near-infrared, direct (W/m²)
- shortwave radiation: downward, visible, diffuse (W/m²)
- shortwave radiation: downward, near-infrared, diffuse (W/m²)
The fields sent to the coupler are based on the input atmosphere data (which is assumed to be daily mean data) linearly interpolated in time. This set of fields is sent to the coupler because this is part of the coupler/atmosphere interface requirement. The coupler has no way of knowing whether the components it is connected to are active models or data models.
How the output fields are derived
Data from the input data sequence is assumed to be daily average data. It is linearly interpolated in time to produce instantaneous fields, which are sent to the coupler. In this case all data received from the coupler is ignored.
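The interpolation can be sketched as follows; this only illustrates linear interpolation between two successive daily means, and the model's exact placement of the daily averages in time is not specified here:

```python
# Sketch: linearly interpolate between two successive daily-mean
# values; f is the fraction of the way from day d to day d+1.
def interp(mean_d, mean_d1, f):
    """f = 0.0 returns mean_d, f = 1.0 returns mean_d1."""
    return (1.0 - f) * mean_d + f * mean_d1

t_mid = interp(280.0, 284.0, 0.5)  # halfway between the two daily means
```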
If the user explicitly activates the albedo feedback option (by default albedo feedback is not activated; see the input parameter flux_albfb), then the net shortwave radiation read from the input data file sequence is not used. Instead it is computed as: net shortwave = (1 - albedo) * (downward shortwave), where the downward shortwave radiation is read from the data file and the albedo data is received from the coupler. This keeps the net shortwave radiation consistent with the surface albedos (e.g. with the sea-ice extent).
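The calculation above can be sketched as follows. The guide gives only the scalar formula; summing over the four downward shortwave components with their matching albedos is an assumption for illustration, and the variable names and values are made up:

```python
# Sketch of: net shortwave = (1 - albedo) * (downward shortwave),
# summed over the four downward components (visible/near-infrared,
# direct/diffuse) with the matching albedos from the coupler.
def net_shortwave(down, albedo):
    """down, albedo: dicts keyed by the same component names."""
    return sum((1.0 - albedo[k]) * down[k] for k in down)

down   = {"vis_dir": 200.0, "nir_dir": 150.0, "vis_dif": 50.0, "nir_dif": 40.0}
albedo = {"vis_dir": 0.1,   "nir_dir": 0.1,   "vis_dif": 0.1,  "nir_dif": 0.1}
swnet = net_shortwave(down, albedo)  # (1 - 0.1) * 440.0 = 396.0
```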
The model reads an input namelist from stdin at runtime. Following is a list and description of available namelist input parameters.
ncpl
Description: This specifies how many times per day the model communicates (exchanges data) with the coupler.
data_dir
Description: This specifies the MSS directory containing the input atmosphere data file sequence. Files in this directory are assumed to be named "yyyy-mm.nc", where yyyy is the four-digit year and mm is the two-digit month of the data file.
data_year0
Required: no (assuming files 0001-01.nc through 0001-12.nc exist in data_dir)
Description: This specifies the first year in the input atmosphere data file sequence.
data_nyear
Description: This specifies the number of years in the input atmosphere data file sequence.
data_oyear
Description: This specifies the "offset year" for the input atmosphere data file sequence. The data file for data_year0 will coincide with simulation year data_oyear.
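The offset-year bookkeeping can be sketched as below, assuming the data sequence simply repeats with period data_nyear (consistent with the model cycling through a multi-year sequence); the function name is made up:

```python
# Sketch: which data year is read during a given simulation year,
# given data_year0 (first data year), data_nyear (length of the
# sequence), and data_oyear (simulation year aligned with data_year0).
def data_year(sim_year, data_year0, data_nyear, data_oyear):
    return data_year0 + (sim_year - data_oyear) % data_nyear

# e.g. data_year0=5, data_nyear=3, data_oyear=1:
# simulation years 1, 2, 3, 4 read data years 5, 6, 7, 5
years = [data_year(y, 5, 3, 1) for y in (1, 2, 3, 4)]
```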
data_rmlf
Default: 0 (note: 0 <=> false)
Description: This specifies whether to remove the local copy of an input atmosphere data file after it has been read in by the model. If data_rmlf = 0, the model will not remove the local file.
flux_albfb
Default: 0 (note: 0 <=> false)
Description: If flux_albfb is nonzero (i.e. true), then albedo feedback is activated.
Description: Debugging information level: 0, 1, 2, or 3.
- 0 => write the least amount of debugging information to stdout
- 1 => write a small amount of debugging information to stdout
- 2 => write a medium amount of debugging information to stdout
- 3 => write a large amount of debugging information to stdout
Preparing the model for execution
The model's setup script, called atm.setup.csh, is invoked prior to the execution of the model. The setup script builds the executable, documents the source code used, and gathers the required input data files.
Each CSM component gets its own separate subdirectory, in which its setup script is run, in which its executable resides, and in which all of its input and output files are kept. A set of environment variables is set in a parent NQS script and is available for use (a model may not actually use all of these variables).
Below is an example setup script, followed by some explanation.
#! /bin/csh -f
#======================================================================
# Purpose:
#  (a) build an executable model (datm4 climatological data atm model)
#  (b) document the source code used
#  (c) gather or create necessary input files
#======================================================================

echo '================================================================='
echo ' Preparing model for execution '
echo '================================================================='
echo ' '
echo Date: `date`
echo ' '
echo 'Env variables by a parent shell:'
echo ' $CASE     = ' $CASE
echo ' $CASESTR  = ' $CASESTR
echo ' $RUNTYPE  = ' $RUNTYPE
echo ' $ARCH     = ' $ARCH
echo ' $CSMSHARE = ' $CSMSHARE
echo ' $MAXCPUS  = ' $MAXCPUS
echo ' $SSD      = ' $SSD
echo ' $MSS      = ' $MSS
echo ' $MSSDIR   = ' $MSSDIR
echo ' $MSSRPD   = ' $MSSRPD
echo ' $MSSPWD   = ' $MSSPWD
echo ' $RPTDIR   = ' $RPTDIR
echo ' $MSGLIB   = ' $MSGLIB

echo '-----------------------------------------------------------------'
echo ' (a) Build an executable '
echo '-----------------------------------------------------------------'
if ( -e atm ) then
  echo 'Note: using an existing binary'
  ls -lFt atm src/Build.log.* | head
else
  #--- create a src code sub-directory ---
  mkdir src
  cd src
  #--- document the build ---
  set BLDLOG = Build.log."`date +%y%m%d-%H%M%S`"
  echo 'Note: (re)building a binary'
  echo "See: $BLDLOG "
  echo "Build log " >! $BLDLOG
  echo "Date: `date` " >>& $BLDLOG
  echo "Dir : `pwd` " >>& $BLDLOG
  echo "User: $LOGNAME " >>& $BLDLOG
  #--- gather source code ---
  echo "/fs/cgd/csm/models/atm/datm4.0 " >! Filepath
  #echo "/insert/filepath/for/patches " >> Filepath
  foreach SRCDIR (`cat Filepath`)
    echo "o gathering src code from $SRCDIR" >>& $BLDLOG
    cat $SRCDIR/README >>& $BLDLOG
    ls -lF $SRCDIR >>& $BLDLOG
    cp -fp $SRCDIR/* . >>& $BLDLOG
  end
  #--- select resolution ---
  rm -f dims.h ; ln -s dims.h.ncom_x2 dims.h
  #--- create make's include files & invoke make ---
  Makeprep >>& $BLDLOG
  make EXEC=datm4 ARCH=$ARCH >>& $BLDLOG || exit 2
  #--- link binary into /. directory ---
  cd ..
  rm -f atm ; ln -s src/datm4 atm
endif

echo '-----------------------------------------------------------------'
echo ' (b) document the source code used '
echo '-----------------------------------------------------------------'
echo "o contents of /src:" ; ls -alFt src ; echo ' '
echo "o revision control info:" ; grep 'CVS' src/*.[hF] ; echo ' '

echo '-----------------------------------------------------------------'
echo ' (c) gather or create necessary input files '
echo '-----------------------------------------------------------------'
cat >! atm.parm << EOF
 &inparm
 ncpl       = 24
 data_dir   = '/BOVILLE/csm/f015.00/datm/data'
 data_nyear = 3
 data_year0 = 5
 data_oyear = 1
 flux_albfb = 1
 /
EOF
echo "o contents of atm.parm:" ; cat atm.parm ; echo ' '
echo "o contents of `pwd`:" ; ls -alF ; echo ' '

echo '================================================================='
echo ' End of setup shell script '
echo '================================================================='
Items (a) through (c) in the above setup script are now reviewed.
(a) Build an executable
Building the executable involves:
- identifying a source code directory (directories),
- acquiring the source code from that directory (directories),
- selecting a resolution-dependent "dims.h" file,
- executing the Makeprep script which creates the include files required by the makefile (e.g. a list of dependencies), and
- executing the Makefile.
The model resolution (i.e. the number of x and y grid points) must be known at compile time and is specified in one resolution-dependent file: dims.h. Note how the setup script selects an appropriate dims.h file by linking it into place. Several dims.h files are provided, corresponding to frequently used resolutions, but it is easy to create a dims.h file for any desired resolution. The details of how to build the executable (e.g. preprocessor and compiler options) are contained in the Makefile.
(b) Document the source code used
Here we make a detailed listing of the source code used, a list of revision control system (CVS) information, and a list of the contents of the current working directory. This information can be used to identify the source code used in a particular simulation.
(c) Gather or create necessary input files
An input namelist file is constructed. The namelist variables specify how often the model will communicate with the coupler (24 times per day) and what the input atmosphere data sequence will be (three years of data, starting at year five, with the first year in the data sequence corresponding with simulation year one). See the section describing the input namelist variables for more details.
Source Code Maintenance
The distributed Fortran source code for the model comes with a Makefile suitable for use on Cray C90, Cray J90, or SGI architectures. By examining how compilation is handled on these various machines, it should be easy to port the code to other machines as well. The code is written almost entirely in standard Fortran 77.
The code must be compiled with a particular model resolution in mind. The only source code file that must change with respect to resolution (i.e. has a "hard-coded" resolution dependence) is dims.h. See the section describing the setup script for an example of how this dependency is handled at compile time. Recall the input data files (read in at runtime) must have the same resolution as the source code.
This code was developed using the CVS revision control system, but only one "tagged" version of the code is available within any one distribution. Each source code file contains detailed revision control information.