This page serves as a gateway to the results of the production testing framework. It aims to give the experiment an overview of how a given NOvA software release performs against a set of simple metrics.
This site indexes the contents of novagpvm10.fnal.gov:/nusoft/app/web/htdoc/nova/production/testing/. Any folders that contain test results are displayed. The plan is to eventually have tests run (semi-)automatically. This page itself is generated by a cron job. In case of any issues, contact Matthew Tamsett.
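The cron job essentially scans the testing directory and turns each populated results folder into a link. A minimal sketch of that idea is below; the function names, the "non-empty directory" criterion, and the HTML layout are illustrative assumptions, not the actual script.

```python
import os
import html

def list_result_dirs(root):
    """Return sorted sub-directories of `root` that contain test results,
    approximated here as any non-empty directory."""
    dirs = []
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path) and os.listdir(path):
            dirs.append(entry)
    return dirs

def render_index(dirs):
    """Render a bare-bones HTML index linking to each results folder."""
    items = "\n".join(
        '<li><a href="%s/">%s</a></li>' % (html.escape(d), html.escape(d))
        for d in dirs
    )
    return "<ul>\n%s\n</ul>" % items
```

Run periodically from cron, the rendered string would be written over the page's index section so that new test results appear without manual edits.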
NOvA data processing can be divided into a number of logical chains based on the type of data being handled. The currently configured chains are:
Each of these chains is subdivided into a number of tiers. Each tier is a single NOvA art job configured using a single FHiCL file. The available chains and tiers are defined in TierConfigurations.py. Tier configurations are used so that multiple tiers can be executed inside a single loop; tier-to-tier differences are small and can be expressed through a small number of options. The currently configured tiers are:
Tests are run on the requested chain and tiers using a Python wrapper program that spawns NOvA art jobs via the subprocess module. The performance of each job is monitored during its run time using the psutil module, which periodically records metrics such as CPU and memory usage. Simultaneously, the STDOUT and STDERR generated by the job are captured and interlaced with time stamps. After the job completes, this log is parsed to extract information about the art process, such as when each event began processing and when calls were made to the database.
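The spawn-monitor-capture loop described above can be sketched roughly as follows. This is a simplified illustration, not the wrapper's actual code: the function name, return shape, and sampling scheme are assumptions, and the psutil sampling degrades gracefully if the module is missing.

```python
import subprocess
import time

try:
    import psutil  # third-party; used only for CPU/memory sampling
except ImportError:
    psutil = None

def run_and_monitor(cmd, interval=1.0):
    """Spawn `cmd`, timestamp every line of its merged STDOUT/STDERR, and
    periodically sample CPU and memory usage. Returns (log, samples, rc)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    ps = psutil.Process(proc.pid) if psutil else None
    log, samples, last_sample = [], [], 0.0
    for line in proc.stdout:  # blocks until the job emits a line
        now = time.time()
        log.append((now, line.rstrip("\n")))  # interlace output with time stamps
        if ps is not None and now - last_sample >= interval:
            try:
                samples.append({"time": now,
                                "cpu_percent": ps.cpu_percent(interval=None),
                                "rss_bytes": ps.memory_info().rss})
            except psutil.NoSuchProcess:
                ps = None  # job exited between reads; stop sampling
            last_sample = now
    proc.wait()
    return log, samples, proc.returncode
```

Because each captured line carries its own timestamp, the later log-parsing step can recover when each event began processing simply by matching the relevant art output lines and reading off the attached times.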
The resulting metrics and logs are stored in a pickle file by the Python job. These metrics are later unpickled, visualised, and formatted for web display using HTML parsing, Google Charts, and Bokeh.
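The hand-off between the two stages could look like the sketch below; the file layout and the contents of the metrics object are assumptions for illustration, since only the use of pickle is stated above.

```python
import pickle

def save_metrics(path, metrics):
    """Serialise the collected metrics and parsed log to a pickle file
    (written by the monitoring job)."""
    with open(path, "wb") as f:
        pickle.dump(metrics, f)

def load_metrics(path):
    """Load the metrics back in a later step, ready to be turned into
    Google Charts or Bokeh plots for the web pages."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

Keeping the raw metrics in an intermediate pickle file means the web-display step can be re-run (e.g. with new plot styles) without re-executing the jobs themselves.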