Parser module API

The file common.py is the Python module that performs benchmark log file processing, and results processing and aggregation.

It is used by the parser.py program in each Benchmark test directory to process the log after each test run. The data produced is used to check for Benchmark regressions (by checking values against a reference threshold) and to provide data for plotting results from multiple test runs.

Main functions:

  • parse_log()
  • process(measurements)
  • CUR_LOG
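
For example, a Benchmark test's parser.py is typically only a few lines long: it calls parse_log() to pull measures out of the test log, builds a dictionary of measurements, and hands that dictionary to process(). The sketch below is hypothetical: the import-path setup, the regular expression, and the test case and measure names are placeholders, not part of the documented API.

    #!/usr/bin/python
    # Hypothetical benchmark parser.py sketch.  The sys.path setup assumes
    # common.py is reachable under $FUEGO_CORE; adjust for your installation.
    import os, sys
    sys.path.insert(0, os.environ["FUEGO_CORE"] + "/engine/scripts/parser")
    import common as plib

    measurements = {}

    # e.g. log lines like "real time: 12.3 seconds" (hypothetical format);
    # parse_log() returns a list of tuples with the matching groups
    matches = plib.parse_log(r"(\w+) time: (\d+\.?\d*) seconds")
    if matches:
        measurements["default.run"] = [
            {"name": name, "measure": float(value)} for name, value in matches
        ]

    # process() checks the measurements against the reference criteria,
    # updates the results files, and returns a status for sys.exit()
    sys.exit(plib.process(measurements))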

deprecated functions

  • parse()
  • process_data()

sys.argv

The common.py module uses the arguments passed to parser.py on its command line. The positional arguments used are:
  • $1 - JOB_NAME
  • $2 - PLATFORM
  • $3 - BUILD_ID
  • $4 - BUILD_NUMBER
  • $5 - FIRMWARE
  • $6 - SDK
  • $7 - DEVICE

(see parser.py for invocation details)
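
As an illustration, common.py could pick these up as follows. This is a minimal sketch; the variable names simply follow the list above, and the real implementation may differ.

    import sys

    # positional arguments passed through from parser.py's command line
    (JOB_NAME, PLATFORM, BUILD_ID, BUILD_NUMBER,
     FIRMWARE, SDK, DEVICE) = sys.argv[1:8]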

parse()

  • input:
    • cur_search_pattern - compiled re search pattern
  • output:
    • list of regular expression matches for each line matching the specified pattern

This routine scans the current log file using the provided regular expression. It returns a list containing an re match object for each line of the log file that matches the expression.

This list is used to populate a dictionary of metric/value pairs that can be passed to the process_data function.
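
A hypothetical legacy-style use of parse() might look like this; the pattern and metric names are placeholders, not part of the documented API.

    import re
    import common as plib

    # one metric name and value per matching log line (hypothetical format)
    cur_search_pattern = re.compile(r"^(\w+):\s*(\d+\.?\d*)")
    matches = plib.parse(cur_search_pattern)

    # build the metric/value dictionary handed to process_data()
    cur_dict = {m.group(1): float(m.group(2)) for m in matches}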

process_data

This is the main routine of the module. It processes the list of metrics, and populates various output files for the test.

  • input:
    • ref_section_pat - regular expression used to read reference.log
    • cur_dict - dictionary of metric/value pairs
    • m - indicates the size of the plot. It should be one of: 's', 'm', 'l', 'xl'
      • if 'm', 'l', or 'xl' are used, then a multiplot is created
    • label - label for the plot

This routine has the following outline:

  • write_report_results
  • read the reference thresholds
  • check the values against the reference thresholds
  • store the plot data to a file (plot.data)
  • create the plot
  • save the plot to an image file (plot.png)
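
Continuing the legacy example from the parse() section above, a call might look like the following sketch; all values are hypothetical, including the assumed reference.log section pattern.

    # medium-size multiplot, labeled with the test name
    ref_section_pat = r"^\[(\w+)\]"     # assumed reference.log section format
    plib.process_data(ref_section_pat, cur_dict, 'm', 'Dhrystone')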

CUR_LOG

This variable holds the name of the current log file, which can be opened and read directly by parser.py. It is not needed if the parse() function is used.
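
For example, a parser.py that does its own scanning could read the log directly (a sketch; any custom parsing logic is up to the test):

    import common as plib

    # read the whole log and scan it by hand instead of using parse()/parse_log()
    with open(plib.CUR_LOG) as f:
        log_contents = f.read()
    # ... custom scanning of log_contents goes here ...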

Notes about Daniel's rewrite of common.py

functions

  • hls - print a big warning or error message
  • parse_log(regex_str) - specify a regular expression string to use to parse lines in the log
    • this is a helper function that returns a list of matches (with groups) that parser.py can use to populate its dictionary of measurements
  • parse(regex_compiled_object)
    • similar to parse_log, but it takes a compiled regular expression object, and returns a list of matches (with groups)
    • this is deprecated, but left to support legacy tests
  • split_tguid()
  • split_test_id()
  • get_test_case()
  • add_results()
  • init_run_data()
  • get_criterion()
  • check_measure()
  • decide_status()
  • convert_reference_log_to_criteria()
  • load_criteria()
  • apply_criteria()
  • create_default_ref()
  • prepare_run_data()
  • extract_test_case_ids()
  • update_results_json()
  • delete()
  • save_run_json()
  • process(results)
    • results is a dictionary with
      • key=test_case_id (not including measure name)
        • for a functional test, the test_case_id is usually "default.<test_name>"
      • value=list of measures (for a benchmark)
      • or value=string (PASS|FAIL|SKIP) (for a functional test)
  • process_data(ref_section_pat, test_results, plot_type, label)
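
To illustrate the shape of the dictionary passed to process(), here are a benchmark-style and a functional-style example; the test and measure names are hypothetical.

    import common as plib   # assumes common.py is on the import path

    # benchmark test: test_case_id -> list of measures
    benchmark_results = {
        "default.copy": [{"name": "MB_per_sec", "measure": 187.5}],
    }

    # functional test: test_case_id -> status string
    functional_results = {
        "default.hello_world": "PASS",
    }

    plib.process(benchmark_results)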

call trees

     process_data(ref_section_pat, test_results, plot_type, label)
       process(measurements)
          prepare_run_data(results)
             run_data = (prepare non-results data structure)
             ref = read reference.json
                or ref = create_default_ref(results)
             init_run_data(run_data, ref)
                (put ref into run_data structure)
                (mark some items as SKIP)
             add_results(results, run_data)
                 for each item in results dictionary:
                    (check for results type: list or str)
                    if list, add measure
                    if str, set status for test_case
             apply_criteria(run_data)
                 load_criteria()
                    (load criteria.json)
                    or convert_reference_log_to_criteria()
                 check_measure()
                    get_criterion()
                 decide_status()
                    get_criterion()
          save_run_json(run_data)
          update_results_json()
          (return appropriate status)
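
The process() path of the call tree can be paraphrased as the following sketch. These are stub functions only; this is not the real common.py code, it just shows the order of the major calls.

    def prepare_run_data(results):
        # read reference.json (or create_default_ref), init_run_data,
        # add_results, apply_criteria
        return {"results": results}

    def save_run_json(run_data):
        pass   # write this run's run.json

    def update_results_json():
        pass   # fold the run into the aggregated results.json

    def process(results):
        run_data = prepare_run_data(results)
        save_run_json(run_data)
        update_results_json()
        return 0   # placeholder for the "appropriate status"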

miscellaneous notes

  • create_default_ref_tim (for docker.hello-fail.Functional.hello_world)
    • ref={'test_sets': [{'test_cases': [{'measurements': [{'status': 'FAIL', 'name': 'Functional'}], 'name': 'default'}], 'name': 'default'}]}
  • create_default_ref
    • ref={'test_sets': [{'test_cases': [{'status': 'FAIL', 'name': 'default'}], 'name': 'default'}]}

tguid rules

New benchmark:
  • measurements[test_case_id] = [{"name": measure_name, "measure": value}]

Old benchmark:

  • From reference.log - if a single word is given, then use that word as the measure name and "default" as the test_case.
    • Benchmark.aim, reference.log "short" -> test_name = aim, test_case = default, test_case_id = aim, measure = short

New functional: measurements["status"] = "PASS|FAIL"

Old functional: measurements["status"] = "PASS|FAIL"

Parser API

Here is a list of APIs available for the parser.py program associated with a test:
  • parse_log - parse the data from a test log and return a list of tuples with matching groups from a specified regular expression
  • process - process results from a test
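
For a functional test, a complete parser.py can be as small as the following sketch; the import-path setup, pattern, and test name are assumptions, not part of the documented API.

    #!/usr/bin/python
    import os, sys
    sys.path.insert(0, os.environ["FUEGO_CORE"] + "/engine/scripts/parser")
    import common as plib

    # PASS if the expected marker appears anywhere in the log (hypothetical)
    matches = plib.parse_log(r"TEST.*OK")
    results = {"default.hello_world": "PASS" if matches else "FAIL"}
    sys.exit(plib.process(results))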
