POP supports both binary and netCDF file formats. The format for each type of file (e.g. restart, history, movie) is set in the individual namelist for that operation. For binary output, POP can perform parallel input/output to speed up I/O when writing large files. Because most files read or written by POP use direct-access I/O, with one horizontal slice written to each binary record, the parallel I/O routines allow several processors to write individual records to the same file. The user can specify how many processors participate in parallel I/O, subject to some restrictions. The number of I/O processors obviously cannot exceed the total number of processors assigned to the job, and it is not productive to assign more processors than the number of vertical levels, because the extra processors will generally remain idle (or even perform unnecessary work). There may also be architecture-specific restrictions: some architectures limit the number of effective I/O units that can be open simultaneously, and some (e.g. loose clusters of workstations) may not have a file system accessible to all participating processors, in which case the user must set the number of I/O processors appropriately. Lastly, note that netCDF does not support parallel I/O, so any netCDF-formatted files will be read or written from a single processor regardless of the num_iotasks setting.
The POP model writes a variety of information, including the model configuration and many diagnostics, to standard output. Typically in LANL POP, standard output is redirected to a log file using the Unix redirect operator (>). In some cases this is not possible, so the namelist flag lredirect_stdout can be turned on to redirect standard output to a log file. The log file will be named log_filename.date.time, where the date and time are the actual wallclock time, not the model simulation time.
During production runs, it is inconvenient to change the pop_in file for every run; typically, the only necessary changes are the names of the restart input files. To avoid editing these filenames in the pop_in file for every run, the option luse_pointer_files exists. If this flag is .true., the names of restart output files are written to pointer files named pointer_filename.suffix, where suffix is currently either restart or tavg, covering restart files and tavg restart files. When a simulation is started from restart, it reads these pointer files to determine the location and name of the actual restart files.
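As an illustration, with the default pointer_filename of 'pop_pointer' and the two suffixes named above, a run would maintain two pointer files (the comments are explanatory, not file contents):

```
pop_pointer.restart    ! holds the name/path of the latest restart file
pop_pointer.tavg       ! holds the name/path of the latest tavg restart file
```

Each pointer file is rewritten whenever the corresponding restart file is written, so a continuation run always picks up the most recent restart without any edits to pop_in.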
As model resolution increases and/or more fields are written to netCDF output files, it becomes increasingly likely that the files will exceed the default netCDF file-size limit of 2 GB. netCDF version 3.6 and higher supports files larger than 2 GB, activated by passing the NF_64BIT_OFFSET flag when creating a new netCDF file. In CESM1 POP, the io_nml variable luse_nf_64bit_offset allows a user to select large-file support for the netCDF output files. For further information, see the FAQ created by Unidata, the developer of netCDF.
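Putting the options above together, the io_nml namelist group in pop_in might look like the following sketch. The variable names come from the table below; the particular values are illustrative, not recommended settings, and the assumption that all of these options belong to the single io_nml group follows from the table grouping:

```
&io_nml
  num_iotasks          = 1              ! processors used for parallel binary I/O
  lredirect_stdout     = .true.         ! send stdout to a log file
  log_filename         = 'pop.out'      ! root name (with path) of the log file
  luse_pointer_files   = .true.         ! write/read restart pointer files
  pointer_filename     = 'pop_pointer'  ! root name (with path) of pointer files
  luse_nf_64bit_offset = .true.         ! 64-bit-offset (large-file) netCDF output
/
```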
**Options for controlling I/O**

|Option|LANL default|CESM1 default|Valid values|Description|
|---|---|---|---|---|
|num_iotasks|1|1|(1, min(km, nprocs_clinic))|number of I/O processes for parallel binary I/O|
|lredirect_stdout|.false.|.true.|.true., .false.|flag to write stdout to a log file|
|log_filename|'pop.out'|'$rundir/ocn.log.$LID'|string ≤ 256 characters|root filename (with path) of optional output log file|
|luse_pointer_files|.false.|.true.|.true., .false.|flag to turn on use of pointer files|
|pointer_filename|'pop_pointer'|'rpointer.ocn'|string ≤ 256 characters|root filename (with path) of pointer files|
|luse_nf_64bit_offset|N/A|.true.|.true., .false.|CESM1 POP2 flag for turning on 64-bit netCDF output in ocn history files|
The following description of the Parallel I/O implementation in CESM1 was contributed by Mariana Vertenstein. See also http://web.ncar.teragrid.org/~dennis/pio_doc/html/.
Parallel I/O is increasingly needed in the CESM1 system for two reasons: to significantly reduce the memory footprint required to perform I/O and to address performance limitations in high-resolution, high-processor-count simulations. Serial I/O is normally implemented by gathering the data onto one task before writing it out. As a result, it is one of the largest sources of global memory in CESM1 and will always result in a memory bottleneck as the model resolution is increased. Consequently, the absence of parallel I/O in a model component will always give rise to a resolution "cut-off" on a given computational platform. Serial I/O is also associated with serious performance penalties at higher processor counts.
To address these issues, a new parallel I/O library, PIO, has been developed as a collaborative effort by NCAR/CISL, DOE/SciDAC and NCAR/CSEG. PIO was initially designed to allow 0.1-degree POP to execute and write history and restart files on Blue Gene/L in less than 256 MB per MPI task.
Since that initial prototype version, PIO has developed into a general-purpose parallel I/O library that currently supports serial netCDF, pnetcdf and MPI-IO, and it has been implemented throughout the entire CESM1 system. PIO is a software interface layer designed to encapsulate the complexities of parallel I/O and to make it easier to replace the lower-level software backend. PIO calls are collective: an MPI communicator is set in a call to PIO_init, and all tasks associated with that communicator must participate in all subsequent calls to PIO.
One of the key features of PIO is that it takes the model's decomposition and redistributes it to an I/O-"friendly" decomposition on the requested number of I/O tasks. When using the PIO library for netCDF or pnetcdf I/O, the user must specify the number of I/O tasks, the stride (the number of tasks between I/O tasks), and whether the I/O will use the serial netCDF or the pnetcdf library. This information is set in the io_pio_nml namelist. By increasing the number of I/O tasks, the user can easily reduce the serial I/O memory bottleneck, even with serial netCDF.
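A sketch of the io_pio_nml group follows. Only io_pio_num_iotasks is named in the table below; the variable names for the stride and the I/O type (io_pio_stride, io_pio_type_name) are assumptions introduced here for illustration, and the values are likewise illustrative:

```
&io_pio_nml
  io_pio_num_iotasks = -1         ! -1 lets PIO choose the number of I/O tasks
  io_pio_stride      = 4          ! hypothetical name: tasks between I/O tasks
  io_pio_type_name   = 'netcdf'   ! hypothetical name: serial netCDF vs. pnetcdf
/
```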
PIO has been implemented in CESM1 POP to support all netCDF I/O. It is important to note that it is now the only mechanism for producing netCDF history files, as well as for reading and writing netCDF restart files.
**Options for controlling PIO**

|Option|Default|Valid values|
|---|---|---|
|io_pio_num_iotasks|1 on dipole grids; -1 on tripole grids|-1 or (1, nprocs_clinic)|