Test server system

This page describes a proposed "Test server system" for Fuego.

Introduction

One of the long-term goals of Fuego is to allow a network of boards to serve as a distributed board farm, where an engineer can design a test and schedule it to run on boards with certain characteristics, in order to validate software on hardware that the engineer does not have in front of them.

One of the significant problems with automated test frameworks is that people don't look at the results. That is, if someone sets up a continuous integration test, it is quite common for the test to run on every iteration of the software (or every day), but for no one to be dedicated to examining the results and following up with bug fixes.

I envision the test server as a "test hub" (something like an app store), where people can publish new test packages and individual sites can select the tests they want to use with their systems. There would be facilities for browsing tests, downloading and installing them, and possibly rating tests or reporting issues with them, to support the "app store"-like functionality.

A company could set up its own test hub, which its own internal nodes interact with, if it wants to do private testing. The goal, however, is to get thousands of nodes registered and interacting with the main open test hub for Fuego, to give unrelated developers opportunities to test their software on a variety of hardware, platforms, and toolchains.
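To make the browse/download/rate facilities above concrete, here is a minimal sketch of what a test-hub package record and a browse operation might look like. Every field name and value here is hypothetical; nothing in this sketch is an existing Fuego interface.

```python
# Hypothetical test-hub package records; field names are illustrative only.
TEST_PACKAGES = [
    {"name": "Functional.hello_world",
     "version": "1.0",
     "description": "Basic toolchain sanity test",
     "download_url": "http://example.org/tests/hello_world.tar.gz",
     "rating": 4.5,
     "issues": []},
]

def browse(packages, keyword):
    """Return packages whose name or description mentions the keyword."""
    keyword = keyword.lower()
    return [p for p in packages
            if keyword in p["name"].lower()
            or keyword in p["description"].lower()]

print([p["name"] for p in browse(TEST_PACKAGES, "hello")])
# → ['Functional.hello_world']
```

A real hub would serve these records over HTTP and add install/report endpoints, but the record shape is the key design decision.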

Specification

I believe the following items are needed to create a test server system:

Architecture

How much of this does Jenkins already have?

work in progress

Tim is working on the test server system, as of February 2017. Here are some notes about that:

to do

Principles:

questions

existing test server analysis

For each of these, describe the:

The purpose is to evaluate the fields used, the format of each item, and so on.

jenkins

Build results: here is a Jenkins build.xml file (for Functional.hello_world):

Simplifying all this, we get the following attributes for a build (a test run):

variable                          | value                                              | notes
Device                            | bbb-poky-sdk                                       |
Reboot                            | false                                              |
Rebuild                           | false                                              |
Target_Cleanup                    | true                                               |
TESTPLAN                          | testplan_default                                   |
CauseAction                       | UserIdCause                                        |
iconPath                          | help.gif                                           |
textBuilder                       | Firmware revision 3.8.13-bone50                    |
GroovyPostbuildSummaryAction.text | bbb-poky-sdk / 3.8.13-bone50                       |
number                            | 1                                                  |
startTime                         | 1487118317469                                      | in milliseconds since the epoch
result                            | SUCCESS                                            |
description                       | Example hello-world test&lt;br&gt;                 |
duration                          | 7702                                               | in milliseconds
charset                           | US-ASCII                                           |
keepLog                           | false                                              |
builtOn                           | bbb-poky-sdk                                       |
workspace                         | /home/jenkins/buildzone                            |
hudsonVersion                     | 1.509.2                                            |
scm_class                         | "hudson.scm.NullChangeLogParser"                   |
culprits_class                    | "com.google.common.collect.EmptyImmutableSortedSet" |
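Since build.xml is plain XML, the flat attributes above can be pulled out with a short script. This is only a sketch: a real Jenkins build.xml nests several of these fields under &lt;actions&gt;, so the sample document below is a simplified stand-in, not Jenkins' exact layout.

```python
# Sketch: extract scalar build attributes from a (simplified) Jenkins
# build.xml. Element names follow the table above; real build.xml files
# nest some of these under <actions>.
import xml.etree.ElementTree as ET

SAMPLE = """<build>
  <number>1</number>
  <startTime>1487118317469</startTime>
  <result>SUCCESS</result>
  <duration>7702</duration>
  <builtOn>bbb-poky-sdk</builtOn>
  <workspace>/home/jenkins/buildzone</workspace>
</build>"""

def build_attrs(xml_text):
    """Return a dict of the scalar (leaf) child elements of <build>."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root if len(child) == 0}

attrs = build_attrs(SAMPLE)
print(attrs["result"])         # → SUCCESS
print(int(attrs["duration"]))  # → 7702
```

A test server that ingests Jenkins results would normalize these fields into its own run record rather than storing the raw XML.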

lava

Here is a test definition in LAVA:

Here's a job description for LAVA. It's in JSON.

Results are in the form of something called a Bundle Stream.
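For reference, a JSON-era LAVA job description pairs device selection with a list of actions. The sketch below builds one in Python; the action names follow LAVA's old JSON job format as I recall it, but treat all field names and values here as assumptions to check against LAVA's own documentation.

```python
# Illustrative LAVA (JSON-era) job description. All names, URLs, and
# values below are hypothetical examples, not a tested job.
import json

job = {
    "job_name": "hello-world-smoke",
    "device_type": "beaglebone-black",
    "timeout": 900,
    "actions": [
        {"command": "deploy_linaro_image",
         "parameters": {"image": "http://example.com/images/bbb.img"}},
        {"command": "lava_test_shell",
         "parameters": {"testdef_urls": ["http://example.com/tests/hello.yaml"]}},
    ],
}
print(json.dumps(job, indent=2))
```

The point of interest for a Fuego test server is that the job description bundles device requirements and test steps in one submittable document.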

kernelci

server elements

The kernelci server is divided into two main components: the front end and the back end.

Build results and boot results are sent back to the server using the Web App API.

Uploads are stored on a storage server.
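Reporting a result to the backend amounts to a token-authenticated HTTP POST of a JSON document. The sketch below only constructs the request rather than sending it; the /build path and payload field names are assumptions modeled on kernelci's REST API and should be checked against its documentation.

```python
# Sketch: build (but do not send) a POST request reporting a build result.
# The endpoint path, token scheme, and payload fields are assumptions.
import json
import urllib.request

def build_report_request(base_url, token, payload):
    """Construct the POST request for submitting one build result."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        base_url + "/build",
        data=data,
        headers={"Authorization": token,
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_report_request(
    "https://api.example.org",   # hypothetical server
    "secret-token",
    {"job": "mainline", "kernel": "v4.10", "arch": "arm",
     "defconfig": "bbb_defconfig", "status": "PASS"},
)
print(req.full_url)      # → https://api.example.org/build
print(req.get_method())  # → POST
```

A Fuego test server would need an equivalent ingestion endpoint, which makes kernelci's schema worth studying closely.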

Here are the objects in the kernelci system:

Here are some other things that have schemas in kernelci:

Here is the test case schema:

powerci.org

avocado