Adding a test

Overview of Steps

To add a new test to Fuego, you need to perform the following steps:

  1. Decide on a test name and type
  2. Make the test directory
  3. Get the source (or binary) for the test
  4. Write a test script for the test
  5. Add the test_specs (if any) for the test
  6. (if a functional test) Add "expected results" (p/n) logs for the test
  7. (if a benchmark) Add parser.py and reference.log files
  8. Create the Jenkins test configuration for the test

Decide on a test name

The first step to creating a test is deciding the test name. There are two types of tests supported by Fuego: functional tests and benchmark tests. A functional test either passes or fails, while a benchmark test produces one or more numbers representing some performance metric for the system.

Usually, the name of the test will be a combination of the test type and a name identifying the test itself. Here are some examples: "bonnie" is a popular disk performance test; the name of this test in the Fuego system is Benchmark.bonnie. A test which runs portions of the POSIX test suite is a functional test (it either passes or fails), and in Fuego is named Functional.posixtestsuite. The test name should be all one word (no spaces).

This name is used as the directory name where the test materials will live in the Fuego system.

Create the directory for the test

The main test directory is located in /fuego-core/engine/tests/<test_name>

So if you just created a new Functional test called 'foo', you would create the directory:

  • /fuego-core/engine/tests/Functional.foo
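
For example, on a system where the fuego-core directory is visible at /fuego-core (as it is inside the standard Fuego container), creating the directory is a single command:

    $ mkdir /fuego-core/engine/tests/Functional.foo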

Get the source for a test

The actual creation of the test program itself is outside the scope of Fuego. Fuego is intended to execute an existing test program, for which source code or a script already exists.

This page describes how to integrate such a test program into the Fuego test system.

A test program in Fuego is provided in source form, so that it can be compiled for whatever processor architecture is used by the target under test. The source must be in the form of a tarfile, optionally accompanied by one or more patches.

Create the tarfile by downloading the test source manually and packing it up yourself, then place the tarfile in the test directory (see the example after the list below).

The tarfile may be compressed. Supported compression schemes and their associated extensions are:

  • uncompressed (extension='.tar')
  • compressed with gzip (extension='.tar.gz' or '.tgz')
  • compressed with bzip2 (extension='.bz2')
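
As an illustration, if the source for your test is in a local directory named hello-test-1.0 (a placeholder name here, matching the tarball used in the sample script below), you could create the tarfile and place it in the test directory like this:

    $ tar czf hello-test-1.0.tgz hello-test-1.0
    $ cp hello-test-1.0.tgz /fuego-core/engine/tests/Functional.hello_world/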

Test script

The test script is a small script, written in the shell scripting language. It specifies the source tarfile containing the test program, and provides implementations for the functions needed to build, deploy, execute, and evaluate the results from the test program.

The test script for a functional test should contain the following:

  • tarball (the name of the source tarfile)
  • function test_pre_check (optional)
  • function test_build
  • function test_deploy
  • function test_run
  • function test_processing (for a Functional test)

The test_pre_check function is optional, and is used to check that the test environment and target configuration are set up correctly before running the test.
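
For example, a minimal test_pre_check for the hello_world test shown in the next section might just verify that the variable it needs is defined. This is a sketch only, assuming the Fuego helper function assert_define (used by many Fuego test scripts to abort when a required test variable is missing) is available to the script:

    function test_pre_check {
        # abort the test early if the spec did not define the argument variable
        assert_define FUNCTIONAL_HELLO_WORLD_ARG
    }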

Sample test script

Here is the test script for the test Functional.hello_world, shown with brief explanatory comments added. This script demonstrates many of the core elements of a test script. The name of this script is hello_world.sh.

    #!/bin/bash
    
    # name of the source tarfile, located in this test's directory
    tarball=hello-test-1.0.tgz
    
    function test_build {
        # build the test program and mark the build as complete, or abort via build_error
        make && touch test_suite_ready || build_error "error while building test"
    }
    
    function test_deploy {
        # copy the test program to the test directory on the target board
        put hello  $BOARD_TESTDIR/fuego.$TESTDIR/
    }
    
    function test_run {
        # run the test on the target, capturing its output in the test log
        report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello $FUNCTIONAL_HELLO_WORLD_ARG"
    }
    
    function test_processing {
        # pass if exactly one line of the test log matches "SUCCESS"
        log_compare "$TESTDIR" "1" "SUCCESS" "p"
    }
    
    # source the Fuego core script for functional tests
    . $FUEGO_CORE/engine/scripts/functional.sh

Description of base test functions

The base test functions (test_build, test_deploy, test_run, and test_processing) are fairly simple. Each one contains a few statements to accomplish that phase of the test execution.

You can find more information about each of these functions on their respective pages in the Fuego documentation.

Test spec and plan

Define the test spec(s) for this test, and add an entry to the testplan_default file for it.

Each test in the system must have a test spec file. This file can be used to list customizable variables for the test.

If a test program has no customizable variables, or none are desired, then at a minimum a "default" test spec must be defined.

The test spec file:

  • is in JSON format,
  • has the same name as the test, with a '.spec' extension (e.g. Functional.hello_world.spec),
  • should be placed in the test directory,
  • provides a testName attribute, and a specs attribute, which is a list,
  • may include any named spec you want, but must define at least the 'default' spec for the test
    • Note that the 'default' spec can be empty, if desired.

Here is an example that defines no variables:

    {
        "testName": "Benchmark.openssl",
        "specs":
        [
            {
                "name":"default"
            }
        ]
    }
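
As a further (hypothetical) illustration, a spec that does define a variable for Functional.hello_world might look like the example below. This assumes that Fuego exposes each spec variable to the test script as an environment variable prefixed with the test name (for example, "ARG" becoming FUNCTIONAL_HELLO_WORLD_ARG, which is the variable the sample test script above references); check an existing test's spec file to confirm the exact convention in your version of Fuego.

    {
        "testName": "Functional.hello_world",
        "specs":
        [
            {
                "name":"default",
                "ARG":"1"
            }
        ]
    }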

Next, you may want to add an entry to one of the testplan files. These files are located in the directory: /fuego-core/engine/overlays/testplans

Choose a testplan in which you would like to include this test, and edit the corresponding file. For example, to add your test to the list of tests executed when the 'default' testplan is used, add an entry to the 'testplan_default.json' file indicating that the 'default' test spec should be used.

Here is what the added entry looks like for the Functional.hello_world test:

            {
                "testName": "Functional.hello_world",
                "spec": "default"
            }

Note that you should add a comma after your entry, if it is not the last one in the list of "tests".
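
For instance, once your entry is added, the surrounding "tests" list in the testplan file might look roughly like the following (the Benchmark.bonnie entry is just a placeholder for whatever other tests the plan already contains; the rest of the file is unchanged):

    "tests": [
        {
            "testName": "Benchmark.bonnie",
            "spec": "default"
        },
        {
            "testName": "Functional.hello_world",
            "spec": "default"
        }
    ]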

Please read Test Specs and Plans for more details. This is especially worthwhile if you have a filesystem test, or if you want to create your own test specs and test plans to add flexibility to your test execution.

Test results parsers and reference log files

FIXTHIS - add information about results parsing
  • log_compare function
  • <name>_p.log, <name>_n.log
  • parser.py
  • reference.log
  • tests.info
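
In the meantime, the log_compare call in the sample script above gives a feel for how functional results are evaluated. The annotated copy below is an informal reading of its arguments, inferred from the sample usage rather than taken from the full Fuego documentation:

    # log_compare <test_name> <expected_count> <match_pattern> <p|n>
    # The test log is searched for <match_pattern>, and the test passes when
    # the number of matching lines equals <expected_count>.  "p" marks the
    # pattern as a positive (success) indicator, "n" as a negative one.
    log_compare "$TESTDIR" "1" "SUCCESS" "p"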

Jenkins job definition file

The last step in creating the test is to create the Jenkins job for it.

A Jenkins job describes to Jenkins what board to run the test on, what variables to pass to the test (including the test spec, or variant), and what script to run for the test.

Jenkins jobs are created using the command-line tool 'ftc'.

A Jenkins job has the name <node_name>.<spec>.<test_type>.<test_name>

You create a Jenkins job using a command like the following:

  • $ ftc add-jobs -b myboard -t Functional.mytest [-s default]

In this case, the name of the job that would be created would be:

  • myboard.default.Functional.mytest

This results in the creation of a file called config.xml, in the /var/lib/jenkins/jobs/<job_name> directory.
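
Continuing the hello_world example, and assuming a board named 'myboard' has already been registered with Fuego, the command would be:

  • $ ftc add-jobs -b myboard -t Functional.hello_world -s default

which would create the job 'myboard.default.Functional.hello_world', with its config.xml under /var/lib/jenkins/jobs/myboard.default.Functional.hello_world.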

Publishing the test

Tests that are of general interest should be submitted for inclusion into fuego-core.

Right now, the method of doing this is to create a commit and send it to the Fuego mailing list for review, and hopefully acceptance and integration by the Fuego maintainers.

In the future, a server will be provided where test developers can share tests that they have created in a kind of "test marketplace". Tests will be available for browsing and downloading, with results from other developers available to compare with your own results. There is already preliminary support for packaging a test using the 'ftc package-test' feature. More information about this service will be made available in the future.

Technical details

This section has technical details about a test.

Directory structure

The directory structure used by Fuego is documented at Fuego directories

Files

A test consists of the following files or items:

^ File or item ^ format ^ location ^ description ^ test type ^
| config.xml | Jenkins XML | /var/lib/jenkins/jobs/{test_name} | Has the Jenkins (front-end) configuration for the test | all |
| tarfile | tar format | /fuego-core/engine/tests/{test_name} | Has the source code for the test program | all |
| patches | patch format | /fuego-core/engine/tests/{test_name} | Zero or more patches to customize the test program (applied during the unpack phase) | all |
| base script | shell script | /fuego-core/engine/tests/{test_name} | Is the shell script that implements the different test phases in Fuego | all |
| test spec | JSON | /fuego-core/engine/tests/{test_name} | Has groups of variables (and their values) that can be used with this test | all |
| test plan(s) | JSON | /fuego-core/engine/overlays/testplans/testplan_default.json (and others) | Has the testplan(s) for this test | all |
| p/n logs | text | /fuego-core/engine/tests/{test_name} | Are logs with the results (positive or negative) parsed out, for determination of test pass/fail | functional only |
| parser.py | python | /fuego-core/engine/tests/{test_name} | Python program to parse benchmark metrics out of the log, and provide a dictionary to the Fuego plotter | benchmarks only |
| reference.log | Fuego-specific | /fuego-core/engine/tests/{test_name} | Has the threshold values and comparison operators for benchmark metrics measured by the test | benchmarks only |
| line added to tests.info | JSON | /fuego-rw/logs/tests.info | Provides the names of metrics to be plotted for this benchmark | benchmarks only |