The following port validation is recommended for any new machine. These are largely functional tests; carrying them out does not guarantee that the model runs properly in all cases, nor that the model is scientifically valid on the new machine. In addition to these tests, detailed validation should be carried out for any new production run: verify that model restarts are bit-for-bit identical with a baseline run, verify that the model is bit-for-bit reproducible when identical cases are run for several months, and monitor production cases very carefully as they integrate forward so that any potential problems are identified as early as possible. Users are responsible for their own validation process, especially with respect to science validation.
Verify functionality by running the following tests:
ERS_D.f19_g16.X
ERS_D.T31_g37.A
ERS_D.f19_g16.B1850CN
ERI.ne30_g16.X
ERI.T31_g37.A
ERI.f19_g16.B1850CN
ERS.ne30_ne30.F
ERS.f19_g16.I
ERS.T62_g16.C
ERS.T62_g16.D
ERT.ne30_g16.B1850CN
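The tests above are normally driven through the model's create_test script. As a minimal sketch, the loop below generates one create_test command line per test; the `scripts/create_test` path and the `-testname <test>.<machine>` argument style are assumptions based on CESM-era scripts, and the machine name `newmachine` is a placeholder — check the usage of create_test in your model version.

```python
# Sketch: generate create_test invocations for the port-validation suite.
# The script path, flag names, and machine name below are assumptions;
# adapt them to your model version and machine entry.
TESTS = [
    "ERS_D.f19_g16.X", "ERS_D.T31_g37.A", "ERS_D.f19_g16.B1850CN",
    "ERI.ne30_g16.X", "ERI.T31_g37.A", "ERI.f19_g16.B1850CN",
    "ERS.ne30_ne30.F", "ERS.f19_g16.I", "ERS.T62_g16.C",
    "ERS.T62_g16.D", "ERT.ne30_g16.B1850CN",
]

def create_test_commands(machine, scripts_dir="scripts"):
    """Return one create_test command line per functionality test."""
    return [f"{scripts_dir}/create_test -testname {t}.{machine}" for t in TESTS]

for cmd in create_test_commands("newmachine"):
    print(cmd)
```

Printing the commands first makes it easy to review them (or feed them to a batch wrapper) before actually submitting the tests.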
Verify performance and scaling:
Create one or two load-balanced configurations to check into Machines/config_pes.xml for the new machine.
Verify that performance and scaling are reasonable.
Review timing summaries in $CASEROOT for load balance and throughput.
Review the coupler "daily" timing output for inconsistencies. As mentioned in the section on load balancing a case, useful timing information is contained in the cpl.log.$date file that is produced for every run. The cpl.log file records the wall-clock time for each model day, and this diagnostic is written as the model runs; search for tStamp in the file to find these entries. This timing information is useful for tracking down temporal variability in model cost, whether due to inherent variability in the model's cost (I/O, spin-up, seasonal cycles, etc.) or to hardware variability. The daily cost is generally fairly constant unless I/O is written intermittently, such as at the end of the month.
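One way to spot such variability is to pull the per-day cost out of the tStamp lines and summarize it. The sketch below does this with a regular expression; the exact tStamp line format varies across model versions, so both the regex and the sample lines here are illustrative assumptions.

```python
import re

# Sketch: summarize per-day wall-clock cost from cpl.log tStamp lines.
# The "dt = <seconds>" field at the end of each line is an assumption
# about the log format; adjust the regex to match your model version.
DT_RE = re.compile(r"dt\s*=\s*([0-9.]+)\s*$")

def daily_costs(lines):
    """Return the per-day wall-clock cost (seconds) from tStamp lines."""
    costs = []
    for line in lines:
        if "tStamp" not in line:
            continue
        m = DT_RE.search(line)
        if m:
            costs.append(float(m.group(1)))
    return costs

def summarize(costs):
    """Return (min, mean, max); a large spread suggests I/O spikes or
    hardware-induced variability worth investigating."""
    return min(costs), sum(costs) / len(costs), max(costs)

# Hypothetical log lines for illustration only:
sample = [
    "tStamp_write: model date = 10102 0 wall clock ... avg dt = 30.50 dt = 30.30",
    "tStamp_write: model date = 10103 0 wall clock ... avg dt = 30.40 dt = 30.10",
    "tStamp_write: model date = 10104 0 wall clock ... avg dt = 45.20 dt = 74.80",
]
print(summarize(daily_costs(sample)))
```

In practice you would read the lines from the actual cpl.log.$date file; a day whose cost is far above the mean (like the last sample line) is the kind of outlier to chase down.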
Perform validation (both functional and scientific):
Perform two one-year runs (using the expected load-balanced configuration) as separate job submissions and verify that the atmosphere history files are bit-for-bit (bfb) identical for the last month. Do this after some performance testing is complete. You may also combine this with the production test by running the first year as a single run and the second year as a multi-submission production run; this tests reproducibility, exact restart over the one-year timescale, and production capability all in one test.
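As a quick first pass on the bit-for-bit check, the two history files can be compared byte for byte; the field-by-field comparison is normally done with the cprnc tool, so the checksum sketch below is only a fast screening step, and the file names in the usage note are hypothetical.

```python
import hashlib

# Sketch: byte-level bit-for-bit comparison of two history files.
# A passing check means the files are identical; for a field-by-field
# diff of netCDF history files, use the standard cprnc tool instead.
def sha256(path):
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_bfb(path_a, path_b):
    """True when the two files are byte-for-byte identical."""
    return sha256(path_a) == sha256(path_b)
```

Usage might look like `is_bfb("run1/atm.h0.0001-12.nc", "run2/atm.h0.0001-12.nc")` for the last month's atmosphere history files (paths hypothetical).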
Carry out a 20-30 year simulation at 1.9x2.5_gx1v6 resolution with the B_1850_CN compset and compare the results with the diagnostics plots for the 1.9x2.5_gx1v6 Pre-Industrial Control (see the CCSM4.0 diagnostics). Model output data for these runs will also be available on the Earth System Grid (ESG).