Benchmark parser notes

Here are notes about different elements of the Benchmark log parser and plot processing.

FIXTHIS - add information about metrics.json file, when that goes live in the next fuego release

files

There are several files used:

- tests.info
- parser.py
- reference.log
- Benchmark.[testname].info.json
- Benchmark.[testname].[metricname].json

overall flow

A benchmark test will do the following:

In the Jenkins interface:

parser elements

tests.info

This is a system-wide file that defines the metrics associated with each benchmark test.

It is in json format, with each line consisting of a test name as the key, mapped to a list of metric names.

ex: "Dhrystone": ["Dhrystone"],

In this example, the metric name is the same as the test name.
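As an illustration, this mapping can be loaded and queried with standard json tools. This is just a sketch; the inline string stands in for the actual tests.info file contents:

```python
import json

# Inline stand-in for the system-wide tests.info contents
tests_info = json.loads('{"Dhrystone": ["Dhrystone"]}')

# Look up the list of metric names defined for a benchmark test
metrics = tests_info.get("Dhrystone", [])
print(metrics)  # ['Dhrystone']
```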

parser.py

This is a python program that parses the log file for the value (otherwise known as the Benchmark 'metric'), and calls a function called 'process_data' in the common.py benchmark parser library.

The arguments to process_data are:

- ref_section_pat = pattern used to find the threshold expression in the reference.log file
- dictionary = map of values parsed from the log file. There should be a key:value pair for each metric gathered by this parser
- 3rd arg = ???
- 4th arg = ???

plib.CUR_LOG is the filename for the current log file
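The parsing step can be sketched as follows. The log text, search pattern, and metric name here are illustrative stand-ins, and the final process_data call is shown only as a comment, since its later arguments are not documented here:

```python
import re

# Stand-in for the contents of plib.CUR_LOG (illustrative Dhrystone-style output)
log_text = "Dhrystones per Second:      2500000.0\n"

# Regular expression that captures the metric value from the log
cur_search_pat = re.compile(r"Dhrystones per Second:\s+([\d.]+)")

m = cur_search_pat.search(log_text)
# Dictionary passed to process_data: one key:value pair per metric
measurements = {"Dhrystone": float(m.group(1))} if m else {}
print(measurements)

# A real parser.py would finish with something like:
#   plib.process_data(ref_section_pat, measurements, <3rd arg>, <4th arg>)
```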

See parser.py and Parser module API

reference.log

Each section in the reference.log file has a threshold specifier, which consists of 2 lines: a metric operator line and a threshold value line.

The metric operator line specifies the name of the metric, a vertical bar ('|'), and either 'ge' or 'le', all inside square brackets.

The threshold value is a numeric value to compare the benchmark result with, to indicate success or failure.

Example:

[Dhrystone|ge]
1

See reference.log
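To make the two-line format concrete, here is a sketch of parsing and applying a threshold specifier. The section text and result value are hypothetical, not taken from a real reference.log:

```python
import re

# Hypothetical reference.log section: metric operator line plus threshold value line
ref_text = "[Dhrystone|ge]\n1\n"

# Metric name, a vertical bar, and 'ge' or 'le' inside square brackets,
# followed by the numeric threshold value on the next line
m = re.search(r"\[(\w+)\|(ge|le)\]\s*([\d.]+)", ref_text)
metric, op, threshold = m.group(1), m.group(2), float(m.group(3))

result = 2500000.0  # hypothetical benchmark result for this metric
# 'ge': result must be >= threshold to pass; 'le': result must be <= threshold
passed = result >= threshold if op == "ge" else result <= threshold
print(metric, op, threshold, passed)
```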

tests.info

This is found in 5 places:

A javascript program does the following lookup:

var jenurl = 'http://' + location['host'] + '/' + prefix + '/userContent/fuego.logs/';

The resulting url is: http://<host>/<prefix>/userContent/fuego.logs//tests.info

The following URL retrieves the fuego-core tests.info file:

file format

json, with a map of test names to lists of metric names.

Benchmark.[testname].info.json

mod.js has:

The resulting url is:

file format

list of device maps:

Each device map has:

- "device" = the name of the device (board)
- "info" = a list of 3 lists (run numbers, kernel versions, and platform names)

The 3 lists must have the same number of elements.

Sample:

[
  { "device": "bbb-poky-sdk",
    "info": [
      ["1", "2", "3"],
      ["3.8.13-bone50", "3.8.13-bone50", "3.8.13-bone50"],
      ["poky-qemuarm", "poky-qemuarm", "poky-qemuarm"]
    ]
  },
  { "device": "qemu-test-arm",
    "info": [
      ["4"],
      ["4.4.18-yocto-standard"],
      ["poky-qemuarm"]
    ]
  }
]
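Because the three lists are required to be the same length, they can be zipped into per-run records. Here is a sketch using the first device from the sample above; the record field names are my own:

```python
import json

# First device map from the sample info.json above
info_json = """
[
  { "device": "bbb-poky-sdk",
    "info": [
      ["1", "2", "3"],
      ["3.8.13-bone50", "3.8.13-bone50", "3.8.13-bone50"],
      ["poky-qemuarm", "poky-qemuarm", "poky-qemuarm"]
    ]
  }
]
"""

records = []
for dev in json.loads(info_json):
    lists = dev["info"]
    # The 3 lists must have the same number of elements
    assert len({len(l) for l in lists}) == 1, "info lists differ in length"
    # Zip the parallel lists into one record per test run
    for run, kernel, platform in zip(*lists):
        records.append({"device": dev["device"], "run": run,
                        "kernel": kernel, "platform": platform})
print(records[0])
```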

Benchmark.[testname].[metricname].json file

Each benchmark test run has one or more metric value json files (depending on the number of metrics defined for the test).

They are placed in the log file directory for this test by the dataload.py program.

They have a name including the test name and the metric name:

These are used by the 'flot' plugin to do dynamic charting for benchmark tests.

file format

Contents: a list of data series maps, one per plotted series. Each series map has:

- "data" = a list of [run number, metric value] pairs
- "label" = the series label (device name, test name, and metric name; labels ending in '.ref' mark reference-value series)
- "points" = the plot style (point symbol) for the series

Sample:

[
  { "data": [
      ["1", 2500000.0],
      ["2", 2500000.0],
      ["3", 2500000.0]
     ],
    "label": "bbb-poky-sdk-Dhrystone.Dhrystone",
    "points": {"symbol": "circle"}
  },
  { "data": [
     ["1", 1.0],
     ["2", 1.0],
     ["3", 1.0]],
    "label": "bbb-poky-sdk-Dhrystone.Dhrystone.ref",
    "points": {"symbol": "cross"}
  },
  { "data": [
     ["4", 909090.9]],
    "label": "qemu-test-arm-Dhrystone.Dhrystone",
    "points": {"symbol": "circle"}
  },
  { "data": [
     ["4", 1.0]],
    "label": "qemu-test-arm-Dhrystone.Dhrystone.ref",
    "points": {"symbol": "cross"}
  }
]
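As a sketch of how dataload.py-style code might assemble this structure, one data/ref series pair could be built like this. The make_series helper is hypothetical, and the input values are taken from the sample above:

```python
import json

def make_series(label, pairs, symbol):
    # One flot series: data points, a label, and a point symbol
    return {"data": pairs, "label": label, "points": {"symbol": symbol}}

runs = ["1", "2", "3"]
values = [2500000.0, 2500000.0, 2500000.0]
ref = 1.0  # reference (threshold) value plotted alongside the results

series = [
    make_series("bbb-poky-sdk-Dhrystone.Dhrystone",
                [[r, v] for r, v in zip(runs, values)], "circle"),
    make_series("bbb-poky-sdk-Dhrystone.Dhrystone.ref",
                [[r, ref] for r in runs], "cross"),
]
print(json.dumps(series, indent=2))
```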