CESM User's Guide (CESM1.1 Release Series User's Guide)
Chapter 5. Porting and Validating CESM on a new platform
This section describes how to set up a case using the "userdefined" machine name and then, within that case, how to modify the scripts to get that case running on a local machine.
Run create_newcase with a "userdefined" machine name, then run cesm_setup in the new case directory.
> cd $CCSMROOT/scripts
> create_newcase -case test1 \
  -res f45_g37 \
  -compset X \
  -mach userdefined
> cd test1
> cesm_setup
The output from cesm_setup will indicate which xml variables you are now required to set.
ERROR: must set xml variable OS to generate Macros file
ERROR: must set xml variable MAX_TASKS_PER_NODE to build the model
ERROR: must set xml variable MPILIB to build the model
ERROR: must set xml variable RUNDIR to build the model
ERROR: must set xml variable DIN_LOC_ROOT to build the model
ERROR: must set xml variable COMPILER to build the model
ERROR: must set xml variable EXEROOT to build the model
Correct above and issue cesm_setup again
The definition of every env variable can be found on the CASEROOT xml page. Enter appropriate settings for the above xml variables in env_build.xml, env_mach_pes.xml, and env_run.xml. Calling cesm_setup again should now produce a Macros file that can be used as a starting point for your port. In addition, build and run scripts will be generated.
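As a sketch, the required variables can be set with the xmlchange tool from the case directory. Every value below is illustrative only (a generic Linux cluster with GNU compilers is assumed), and the file that holds each variable can be confirmed by searching the env_*.xml files:

```shell
# Illustrative values only -- substitute the paths, compiler,
# MPI library, and tasks-per-node appropriate to your machine.
./xmlchange -file env_build.xml    -id OS       -val Linux
./xmlchange -file env_build.xml    -id COMPILER -val gnu
./xmlchange -file env_build.xml    -id MPILIB   -val mpich
./xmlchange -file env_build.xml    -id EXEROOT  -val /scratch/$USER/test1/bld
./xmlchange -file env_mach_pes.xml -id MAX_TASKS_PER_NODE -val 16
./xmlchange -file env_run.xml      -id RUNDIR       -val /scratch/$USER/test1/run
./xmlchange -file env_run.xml      -id DIN_LOC_ROOT -val /scratch/$USER/inputdata
```

Editing the xml files directly in a text editor works equally well.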
The next step is to edit the env_mach_specific and Macros files to prepare to build the model. The string USERDEFINED in these files indicates the locations where modifications are likely needed. In particular, env_mach_specific is where modules, paths, or machine environment variables need to be set, especially those related to compilers, mpi, and netcdf. Macros is where the Makefile variables are set; you can find the Makefile itself in the Tools directory. In the Macros file, modify SLIBS to include whatever machine-specific libraries are desired, including the netcdf library or libraries, and set NETCDF_PATH to the path of the netcdf directory. This might be a hardwired path, or it might be an environment variable set in env_mach_specific or through modules. You might also need to modify other Macros variables such as MPI_PATH, depending on your particular system setup; often mpi is wrapped automatically in compiler commands like mpif90.
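For instance, a minimal Macros fragment for a machine with netcdf in a fixed location might look like the following. The install path here is an assumption, and -lnetcdff is only needed on installations that split the Fortran and C libraries:

```
# Hardwired netcdf location (illustrative path)
NETCDF_PATH := /usr/local/netcdf-4.1.3
SLIBS += -L$(NETCDF_PATH)/lib -lnetcdff -lnetcdf
```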
As an example, suppose your machine uses Modules (i.e. the Modules package provides for the dynamic modification of a user's environment via modulefiles). The following settings from env_mach_specific.bluewaters set the compiler and netcdf versions.
# invoking modules sets $MPICH_DIR and $NETCDF_DIR
if ( $COMPILER == "pgi" ) then
  module load PrgEnv-pgi
  module switch pgi pgi/11.10.0
endif
module load torque/2.5.10
module load netcdf-hdf5parallel/4.1.3
module load parallel-netcdf/1.2.0
that produces some env variables which can then be used in the generated Macros as follows:
MPI_PATH:= $(MPICH_DIR)
NETCDF_PATH:= $(NETCDF_DIR)
So in this example the system module defines the variable NETCDF_DIR, but CESM1.1 expects NETCDF_PATH to be set, so that assignment is made in the Macros file. While CESM1.1 supports use of pnetcdf in PIO (which requires setting PNETCDF_PATH in Macros), it is generally best to ignore that feature during initial porting. PIO works well with standard NetCDF.
Build the case
This step will often fail if paths to compilers, compiler versions, libraries, compiler options, or machine environment variables are not set properly. Review and edit the env_mach_specific and Macros files, clean the build, and try rebuilding again.
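cesm_setup generates the build scripts alongside the run script; assuming the standard $CASE.$MACH naming, for this case the clean-and-rebuild cycle looks like:

```shell
> ./test1.userdefined.clean_build
> ./test1.userdefined.build
```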
Finally, ./test1.userdefined.run is the job submission or run script. Modifications are needed to specify the local batch environment and the job launch command. Again, the string USERDEFINED indicates where those changes are needed. Once the batch and launch commands are set, run the model using your local job submission command; qsub is used here as an example.
> qsub test1.userdefined.run
The job will fail to submit if the batch commands are not set properly. The job could fail to run if the launch command is incorrect or if the batch commands are not consistent with the job's resource needs. Review the run script and try resubmitting.
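As a concrete sketch, on a PBS/Torque system the USERDEFINED sections of the run script might end up looking like the following. The directive values, queue name, and mpiexec launcher are all assumptions; match them to your site and to the PE layout in env_mach_pes.xml:

```shell
# batch directives (illustrative; adjust queue, nodes, walltime)
#PBS -N test1
#PBS -q batch
#PBS -l nodes=4:ppn=16
#PBS -l walltime=01:00:00

# launch command: the task count must match the case PE layout
mpiexec -n 64 ./cesm.exe >&! cesm.log.$LID
```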