LTP-Notes

{{TableOfContents}}
Here are some miscellaneous notes about LTP
See http://ltp.sourceforge.net/documentation/how-to/ltp.php

= Fuego LTP execution =
== Sequence of events ==
Sequence of events (a sketch of the fuego_test.sh phase skeleton follows this list):
 - Jenkins launches the jobs (something like bbb.default.Functional.LTP)
   - Jenkins starts fuego_test.sh
   - fuego_test.sh reads the desired spec from Functional.LTP/spec.json
     - the spec has variables which select the LTP sub-tests to run, and whether to do a buildonly or runonly of the tests
 - Fuego does a build in /fuego-rw/buildzone/<test_name>-<platform>/
   - materials for the target board are put in <LTP_DIR>/target_bin
 - Fuego does a deploy
   - it installs the materials to the board
   - this is usually about 400M of binaries on the target
   - in directory $TEST_HOME
     - usually something like /home/a/fuego.Functional.LTP
 - fuego_test.sh then runs (on the target board)
   - ltp_target_run.sh
   - which runs:
     - for regular tests:
        - runltp
       - which runs:
         - ltp-pan
         - which runs an individual test program
           - e.g. abort01
     - for posix tests:
       - run-posix-option-group-test.sh
     - for realtime tests:
       - test_realtime.sh
  - Fuego then fetches results from the target board
    - it collects the testlog for the test
    - it calls test_fetch_results in fuego_test.sh
      - this gathers the results from the board and puts it into a 'result' directory in the log directory for the run
  - Fuego then processes the results:
    - it calls ltp_process.py to create the results.xlsx file and/or rt.log
    - parser.py is called to process regular test results output and create the run.json file
      - the parser also creates the files used for charting (results.json, flat_plot_data.txt, flot_chart_data.json)
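For orientation, here is a minimal sketch of the shape of the Fuego base script for this test. This is not the actual Functional.LTP fuego_test.sh: test_fetch_results is the function mentioned above, the other phase names are the generic Fuego phase functions, the bodies are ':' placeholders, and the variable names are illustrative only.
{{{#!YellowBox
# sketch only - not the real Functional.LTP fuego_test.sh
function test_build {
    # cross-compile LTP in /fuego-rw/buildzone/<test_name>-<platform>/;
    # binaries for the board are collected in target_bin
    :
}

function test_deploy {
    # copy target_bin (roughly 400M) to $TEST_HOME on the board
    :
}

function test_run {
    # run the selected sub-tests on the board (see "invocation lines" below)
    report "cd $TEST_HOME; TESTS=\"$TESTS\"; PTSTESTS=\"$PTSTESTS\"; RTTESTS=\"$RTTESTS\"; . ./ltp_target_run.sh"
}

function test_fetch_results {
    # gather $TEST_HOME/result from the board into the run's log directory
    :
}

function test_processing {
    # ltp_process.py builds results.xlsx and/or rt.log;
    # parser.py builds run.json and the charting files
    :
}
}}}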

== defining a new spec ==
Variables - the following variables can be defined for an LTP test in a test spec (a hypothetical example spec follows this list):
 - tests - this is a space-separated list of LTP sub-tests to execute on the board
   - tests come in 3 categories (regular, posix and realtime).  The end user should not have to know about these categories, as fuego_test.sh will filter the entries in the 'tests' list into the right groups for execution on the board.
   - a full list of all available tests is in fuego_test.sh in the variable ALL_TESTS
 - buildonly - if defined and set to true, then the build is performed and the software deployed to the target, but no run phase is executed
 - runonly - if defined and set to true, then LTP is not built, but only run on the target
 - runfolder - specifies the location on the target where LTP has been installed.
   * currently, the deploy step does not use this - it is intended for people who manually install LTP somewhere other than the default Fuego test directory on the target board.
 - skiplist - contains a space-separated list of individual LTP test programs that should not be executed on the target
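As a concrete illustration, here is a hypothetical spec.json fragment using the variables above. The spec name "quick_syscalls" and the exact file layout are made up for illustration (check an existing Fuego spec.json for the authoritative structure); the test and skiplist names are ones that appear elsewhere in these notes.
{{{#!YellowBox
{
    "testName": "Functional.LTP",
    "specs": {
        "quick_syscalls": {
            "tests": "quickhit syscalls",
            "skiplist": "inotify06 fsync02 fork13",
            "runonly": "true"
        }
    }
}
}}}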

== invocation lines ==
Here are different invocation lines for these scripts and programs, in Fuego:
=== ltp_target_run.sh ===
 * report 'cd /fuego-rw/tests/fuego.Functional.LTP; TESTS="quickhit "; PTSTESTS=""; RTTESTS=""; . ./ltp_target_run.sh'
ltp_target_run.sh looks for the environment variables TESTS, PTSTESTS and RTTESTS, and executes each test in each list (a simplified sketch follows this list).
  * In the TESTS list, it executes runltp for each test
    * results for each test go into $TEST_HOME/results/<test>/*.log
  * in the PTSTESTS list, it executes run-posix-option-group-test.sh for each test
    * pts.log is generated and put into $TEST_HOME/results/pts.log
  * in the RTTESTS list, it executes test_realtime.sh for each
    * log files under testcases/realtime/logs are combined into $TEST_HOME/results/rt.log
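Here is a simplified sketch of that flow, assuming the variable and file names used in these notes. It is not the real ltp_target_run.sh; in particular, the exact arguments to run-posix-option-group-test.sh and test_realtime.sh are guesses.
{{{#!YellowBox
# simplified sketch of ltp_target_run.sh - not the real script
OUTPUT_DIR=$(pwd)/result

for i in $TESTS; do
    mkdir -p ${OUTPUT_DIR}/${i}
    # regular tests go through runltp (full flag set shown in the next section)
    ./runltp -S ./skiplist.txt -f $i \
             -l ${OUTPUT_DIR}/${i}/result.log \
             -o ${OUTPUT_DIR}/${i}/output.log \
             -C ${OUTPUT_DIR}/${i}/failed.log > ${OUTPUT_DIR}/${i}/head.log 2>&1
done

for i in $PTSTESTS; do
    # posix option-group tests; a pts.log is produced
    ./run-posix-option-group-test.sh $i
done

for i in $RTTESTS; do
    # realtime tests; per-test logs land under testcases/realtime/logs
    ./test_realtime.sh $i
done

# combine the realtime logs into a single rt.log
cat testcases/realtime/logs/*.log > ${OUTPUT_DIR}/rt.log 2>/dev/null
}}}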

=== runltp ===
ltp_target_run.sh calls runltp as follows:
Note that OUTPUT_DIR is $(pwd)/result (usually $TEST_HOME/result).
{{{#!YellowBox
    ./runltp -C ${OUTPUT_DIR}/${i}/failed.log \
             -l ${OUTPUT_DIR}/${i}/result.log \
             -o ${OUTPUT_DIR}/${i}/output.log \
             -d ${TMP_DIR} \
             -S ./skiplist.txt \
             -f $i > ${OUTPUT_DIR}/${i}/head.log 2>&1
}}}
This says to:
 * put the list of commands that failed into failed.log (-C)
 * put the machine-readable list of program executions and statuses into result.log (-l)
   * this does not have full testcase status in it, just the exit code, duration, and utime of each test program run
 * put the human-readable output into output.log (-o)
 * use $TMP_DIR for temporary data
 * use skiplist.txt to skip specific tests (a hypothetical example skiplist follows this list)
 * run the $i test scenario (-f)
 * put runltp's own output and all errors into head.log (>, 2>&1)
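The skiplist file passed with -S contains one test program tag per line. A hypothetical example, using the tests that these notes later identify as hanging or very slow on the bbb board:
{{{#!YellowBox
inotify06
fsync02
fork13
}}}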

=== ltp-pan ===
From $TEST_HOME/result/quickhit/output.log (the output from runltp), here is a sample ltp-pan invocation:
{{{#!YellowBox
COMMAND:    /fuego-rw/tests/fuego.Functional.LTP/bin/ltp-pan  -e -S  \
  -a 16855 -n 16855 \
  -f /fuego-rw/tests/fuego.Functional.LTP/tmp/ltp-Ua4obsCcQw/alltests \
  -l /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/result.log \
  -o /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/output.log \
  -C /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/failed.log \
  -T /fuego-rw/tests/fuego.Functional.LTP/output/LTP_RUN_ON-output.log.tconf
LOG File: /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/result.log
OUTPUT File: /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/output.log
FAILED COMMAND File: /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/failed.log
TCONF COMMAND File: /fuego-rw/tests/fuego.Functional.LTP/output/LTP_RUN_ON-output.log.tconf
Running tests.......
INFO: ltp-pan reported all tests PASS
LTP Version: 20170116
}}}
Let's break this out:
 * -e - exit non-zero if any command exits non-zero
 * -S - run tests sequentially (not multi-threaded)
 * -a 16855 - specify an active-file of 16855
 * -n 16855 - specify a tag name of 16855
 * -f <tmp>/alltests - specify a command file of 'alltests'
 * -l - specify the result.log file (machine-readable)
 * -o - specify the output.log file (human-readable test output)
 * -C - specify the failed.log file (list of failed tests)
 * -T - specify the TCONF file
   - the source says that test cases that are not fully tested will be recorded in this file
The ltp man page says the usage is:
{{{#!YellowBox
ltp-pan -n tagname [-SyAehp] [-t #s|m|h|d time] [-s starts]
        [-x nactive] [-l logfile] [-a active-file] [-f command-file]
        [-d debug-level] [-o output-file] [-O buffer_directory]
        [-r report_type] [-C fail-command-file] [cmd]
}}}

== output logs ==
Output is placed in the following files, on target (for regular tests):
 * $TEST_HOME/output/LTP_RUN_ON-output.log.tconf - tests recorded as TCONF (not fully tested)
 * $TEST_HOME/result/syscalls/failed.log - list of failed tests
 * $TEST_HOME/result/syscalls/head.log - meta-data about full run
 * $TEST_HOME/result/syscalls/output.log - full test output and info
 * $TEST_HOME/result/syscalls/result.log - summary in machine-readable format (see the snippet after this list)
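A quick way to summarize a result.log, assuming it uses the pan-style "stat=" field (stat=0 means the test program exited 0):
{{{#!YellowBox
# tally test-program exit statuses from a result.log
grep -o 'stat=[0-9]*' result.log | sort | uniq -c
}}}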

=== output key ===
Certain keywords are used in test program output (a quick way to tally them follows this list):
(copied from [[http://www.lineo.co.jp/ltp/linux-3.10.10-results/result.html|here]])
 * TPASS - Indicates that the test case had the expected result and passed.
 * TFAIL - Indicates that the test case had an unexpected result and failed.
 * TBROK - Indicates that the remaining test cases are broken and will not execute correctly, because some precondition was not met, such as a resource not being available.
 * TCONF - Indicates that the test case was not written to run on the current hardware or software configuration, such as machine type or kernel version.
 * TRETR - Indicates that the test case has been retired and should not be executed any longer.
 * TWARN - Indicates that the test case experienced an unexpected or undesirable event that should not affect the test itself, such as being unable to clean up resources after the test finished.
 * TINFO - Specifies useful information about the status of the test that does not affect the result and does not indicate a problem.
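A quick way to tally these keywords in a test's output.log:
{{{#!YellowBox
# count occurrences of each result keyword
grep -E -o 'T(PASS|FAIL|BROK|CONF|RETR|WARN)' output.log | sort | uniq -c
}}}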

= LTP status =
== in Tim's lab on August 3, 2017 ==
How long to execute:
 * bbb.default.Functional.LTP - hangs in inotify06
   * some long tests:
     * fsync02
     * fork13
 * min1.default.Functional.LTP - 45 minutes with build
 * docker.docker.Functional.LTP - 22 minutes (not sure if it includes the build)

= how to find the long-running tests =
 * use: sed s/dur=// result.log | sort -k3 -n  (sorts the test programs by duration; see the example below)
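For context, result.log lines look roughly like the example below (field layout quoted from memory, so treat it as unverified); stripping the dur= prefix leaves the duration as a bare number in field 3, which is what the numeric sort keys on.
{{{#!YellowBox
# a result.log line looks roughly like:
#   tag=fork13 stime=<start time> dur=806 exit=exited stat=0 core=no cu=946 cs=24970
# after "sed s/dur=//" the duration is a plain number in field 3,
# so this lists the 20 longest-running test programs:
sed s/dur=// result.log | sort -k3 -n | tail -20
}}}
The table below shows the slowest test programs found this way on the bbb board (2017-08-03):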
{{{#!Table:ltp_duration
||board||test       ||duration ||user||system||test date||
||bbb||creat06            ||30 ||5   ||21    ||2017-08-03|| 
||bbb||gettimeofday02     ||30 ||647 ||2324  ||2017-08-03||                                               
||bbb||ftruncate04_64     ||31 ||5   ||22    ||2017-08-03||                                                  
||bbb||chown04            ||32 ||6   ||23    ||2017-08-03||                                                        
||bbb||fchown04_16        ||32 ||3   ||14    ||2017-08-03||                                                     
||bbb||fchown04           ||32 ||2   ||12    ||2017-08-03||                                                        
||bbb||access04           ||33 ||5   ||26    ||2017-08-03||                                                       
||bbb||ftruncate04        ||33 ||5   ||23    ||2017-08-03||                                                    
||bbb||fchmod06           ||34 ||5   ||20    ||2017-08-03||                                                      
||bbb||chown04_16         ||35 ||4   ||25    ||2017-08-03||                                                      
||bbb||chmod06            ||36 ||4   ||21    ||2017-08-03||                                                         
||bbb||acct01             ||37 ||4   ||29    ||2017-08-03||                                                         
||bbb||clock_nanosleep2_01||51 ||2   ||1     ||2017-08-03||                                           
||bbb||fsync02            ||292 ||1  ||42    ||2017-08-03||                                                         
||bbb||fork13             ||806 ||946||24970 ||2017-08-03||  
}}}

== inotify06 oops ==
inotify06 causes the kernel to oops; here are two captured reports:
{{{#!YellowBox
[57540.087504] Kernel panic - not syncing: softlockup: hung tasks                                                              
[57540.093608] [<c00111f1>] (unwind_backtrace+0x1/0x9c) from [<c04c8955>] (panic+0x59/0x158)                                   
[57540.102136] [<c04c8955>] (panic+0x59/0x158) from [<c00726d9>] (watchdog_timer_fn+0xe5/0xfc)                                 
[57540.110862] [<c00726d9>] (watchdog_timer_fn+0xe5/0xfc) from [<c0047b4b>] (__run_hrtimer+0x4b/0x154)                         
[57540.120307] [<c0047b4b>] (__run_hrtimer+0x4b/0x154) from [<c00483f7>] (hrtimer_interrupt+0xcf/0x1fc)                        
[57540.129842] [<c00483f7>] (hrtimer_interrupt+0xcf/0x1fc) from [<c001e98b>] (omap2_gp_timer_interrupt+0x1f/0x24)              
[57540.140275] [<c001e98b>] (omap2_gp_timer_interrupt+0x1f/0x24) from [<c0072dd3>] (handle_irq_event_percpu+0x3b/0x188)        
[57540.151246] [<c0072dd3>] (handle_irq_event_percpu+0x3b/0x188) from [<c0072f49>] (handle_irq_event+0x29/0x3c)                
[57540.161505] [<c0072f49>] (handle_irq_event+0x29/0x3c) from [<c007489b>] (handle_level_irq+0x53/0x8c)                        
[57540.171026] [<c007489b>] (handle_level_irq+0x53/0x8c) from [<c00729ff>] (generic_handle_irq+0x13/0x1c)                      
[57540.180728] [<c00729ff>] (generic_handle_irq+0x13/0x1c) from [<c000d0df>] (handle_IRQ+0x23/0x60)                            
[57540.189902] [<c000d0df>] (handle_IRQ+0x23/0x60) from [<c00085a9>] (omap3_intc_handle_irq+0x51/0x5c)                         
[57540.199339] [<c00085a9>] (omap3_intc_handle_irq+0x51/0x5c) from [<c04cea9b>] (__irq_svc+0x3b/0x5c)                          
[57540.208679] Exception stack(0xcb7e5e98 to 0xcb7e5ee0)                                                                       
[57540.213943] 5e80:                                                       de03ef54 df34f180                                   
[57540.222478] 5ea0: 68836882 00000000 de03ef54 cb7e4000 de03ef54 ffffffff df34f180 df34f1dc                                   
[57540.231008] 5ec0: 00000010 df016450 cb7e5ee8 cb7e5ee0 c00e2e33 c025eb88 60000033 ffffffff                                   
[57540.239544] [<c04cea9b>] (__irq_svc+0x3b/0x5c) from [<c025eb88>] (do_raw_spin_lock+0xa4/0x114)                              
[57540.248534] [<c025eb88>] (do_raw_spin_lock+0xa4/0x114) from [<c00e2e33>] (fsnotify_destroy_mark_locked+0x17/0xec)           
[57540.259240] [<c00e2e33>] (fsnotify_destroy_mark_locked+0x17/0xec) from [<c00e316b>] (fsnotify_clear_marks_by_group_flags+0x57/0x74)
[57540.271577] [<c00e316b>] (fsnotify_clear_marks_by_group_flags+0x57/0x74) from [<c00e2869>] (fsnotify_destroy_group+0x9/0x24)
[57540.283284] [<c00e2869>] (fsnotify_destroy_group+0x9/0x24) from [<c00e3c91>] (inotify_release+0x1d/0x20)                    
[57540.293182] [<c00e3c91>] (inotify_release+0x1d/0x20) from [<c00bbb69>] (__fput+0x65/0x16c)                                  
[57540.301808] [<c00bbb69>] (__fput+0x65/0x16c) from [<c004323d>] (task_work_run+0x6d/0xa4)                                    
[57540.310238] [<c004323d>] (task_work_run+0x6d/0xa4) from [<c000f3eb>] (do_work_pending+0x6f/0x70)                            
[57540.319402] [<c000f3eb>] (do_work_pending+0x6f/0x70) from [<c000c893>] (work_pending+0x9/0x1a)                              
[57540.328394] drm_kms_helper: panic occurred, switching back to text console                                                  
[57572.431529] CAUTION: musb: Babble Interrupt Occurred                                                                        
[57572.503646] CAUTION: musb: Babble Interrupt Occurred                                                                        
[57572.590984]  gadget: high-speed config #1: Multifunction with RNDIS                                                         
[57577.609329] CAUTION: musb: Babble Interrupt Occurred                                                                        
[57577.683922] CAUTION: musb: Babble Interrupt Occurred                                                                        
[57577.772461]  gadget: high-speed config #1: Multifunction with RNDIS 
}}}
{{{#!YellowBox
[ 3592.037885] BUG: soft lockup - CPU#0 stuck for 23s! [inotify06:1994]                                                        
[ 3592.045691] BUG: scheduling while atomic: inotify06/1994/0x40010000                                                         
[ 3592.069728] Kernel panic - not syncing: softlockup: hung tasks                                                              
[ 3592.076169] [<c00111f1>] (unwind_backtrace+0x1/0x9c) from [<c04c8955>] (panic+0x59/0x158)                                   
[ 3592.085129] [<c04c8955>] (panic+0x59/0x158) from [<c00726d9>] (watchdog_timer_fn+0xe5/0xfc)                                 
[ 3592.094308] [<c00726d9>] (watchdog_timer_fn+0xe5/0xfc) from [<c0047b4b>] (__run_hrtimer+0x4b/0x154)                         
[ 3592.104241] [<c0047b4b>] (__run_hrtimer+0x4b/0x154) from [<c00483f7>] (hrtimer_interrupt+0xcf/0x1fc)                        
[ 3592.114276] [<c00483f7>] (hrtimer_interrupt+0xcf/0x1fc) from [<c001e98b>] (omap2_gp_timer_interrupt+0x1f/0x24)              
[ 3592.125246] [<c001e98b>] (omap2_gp_timer_interrupt+0x1f/0x24) from [<c0072dd3>] (handle_irq_event_percpu+0x3b/0x188)        
[ 3592.136766] [<c0072dd3>] (handle_irq_event_percpu+0x3b/0x188) from [<c0072f49>] (handle_irq_event+0x29/0x3c)                
[ 3592.147522] [<c0072f49>] (handle_irq_event+0x29/0x3c) from [<c007489b>] (handle_level_irq+0x53/0x8c)                        
[ 3592.157518] [<c007489b>] (handle_level_irq+0x53/0x8c) from [<c00729ff>] (generic_handle_irq+0x13/0x1c)                      
[ 3592.167710] [<c00729ff>] (generic_handle_irq+0x13/0x1c) from [<c000d0df>] (handle_IRQ+0x23/0x60)                            
[ 3592.177328] [<c000d0df>] (handle_IRQ+0x23/0x60) from [<c00085a9>] (omap3_intc_handle_irq+0x51/0x5c)                         
[ 3592.187235] [<c00085a9>] (omap3_intc_handle_irq+0x51/0x5c) from [<c04cea9b>] (__irq_svc+0x3b/0x5c)                          
[ 3592.197024] Exception stack(0xdf74dec8 to 0xdf74df10)                                                                       
[ 3592.202568] dec0:                   de7f1e50 de3786c0 00000000 de7f1e54 de7f1e50 de7f1e50                                
[ 3592.211507] dee0: de378748 ffffffff de3786c0 de37871c 00000010 df016450 df74dee8 df74df10                                   
[ 3592.220470] df00: c00e3171 c00e2d16 80000033 ffffffff                                                                       
[ 3592.226025] [<c04cea9b>] (__irq_svc+0x3b/0x5c) from [<c00e2d16>] (fsnotify_put_mark+0xa/0x34)                               
[ 3592.235380] [<c00e2d16>] (fsnotify_put_mark+0xa/0x34) from [<c00e3171>] (fsnotify_clear_marks_by_group_flags+0x5d/0x74)     
[ 3592.247195] [<c00e3171>] (fsnotify_clear_marks_by_group_flags+0x5d/0x74) from [<c00e2869>] (fsnotify_destroy_group+0x9/0x24)
[ 3592.259493] [<c00e2869>] (fsnotify_destroy_group+0x9/0x24) from [<c00e3c91>] (inotify_release+0x1d/0x20)                    
[ 3592.269880] [<c00e3c91>] (inotify_release+0x1d/0x20) from [<c00bbb69>] (__fput+0x65/0x16c)                                  
[ 3592.278939] [<c00bbb69>] (__fput+0x65/0x16c) from [<c004323d>] (task_work_run+0x6d/0xa4)                                    
[ 3592.287811] [<c004323d>] (task_work_run+0x6d/0xa4) from [<c000f3eb>] (do_work_pending+0x6f/0x70)                            
[ 3592.297458] [<c000f3eb>] (do_work_pending+0x6f/0x70) from [<c000c893>] (work_pending+0x9/0x1a)                              
[ 3592.306894] drm_kms_helper: panic occurred, switching back to text console                                                  
[ 3624.208005] CAUTION: musb: Babble Interrupt Occurred                                                                        
[ 3624.280120] CAUTION: musb: Babble Interrupt Occurred                                                                        
[ 3624.367738]  gadget: high-speed config #1: Multifunction with RNDIS                                                         
[ 3629.389807] CAUTION: musb: Babble Interrupt Occurred                                                                        
[ 3629.464524] CAUTION: musb: Babble Interrupt Occurred                                                                        
[ 3629.552870]  gadget: high-speed config #1: Multifunction with RNDIS
}}}

= Examples of LTP analysis =
== ARC LTP instructions ==
See https://github.com/foss-for-synopsys-dwc-arc-processors/ltp/blob/master/README.ARC
It lists things like:
 * tests that don't build with the ARC toolchain
 * tests that use BSD signal calls, which are not configured for ARC by default
 * tests that require different default parameters because they take too much time on an embedded system
 * tests that hang the system
 * tests that are not applicable to ARC but fail anyway, so they should be disabled
It has a section of notes indicating requirements for the tests, including:
 * tests that require a loopback device
 * kernel config options (that enable a loopback device)
 * binaries required (util-linux, e2fsprogs, bash)
   * the buildroot config options to enable those binaries

= Examples of LTP visualization =
== Lineo ==
Lineo has some nice color-coded tables with LTP results:
http://www.lineo.co.jp/ltp/linux-3.10.10-results/result.html

= Bugs or issues =
 * rmdir05 - has some tests that are not implemented for Linux (TCONF)
 * f00f - only applies to i386 (TCONF)
 * select01,2,3,4 - on bbb, expects a different C library than is present
   * expects GLIBC_2.15 but bbb has GLIBC_2.13
 * access02, access04 - require root

= Questions =
 * Q: Where do parameters for test command line come from?
   * A: LTP docs say they come from the runtest directory (see runtest/syscalls)
      * the runtest/syscalls entry for fork13 is: "fork13 fork13 -i 1000000" (which takes too long on bbb; see the note after this list)
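A hypothetical way to shorten such a test on a slow board is to edit its entry in the runtest file before building and deploying; for example, cutting fork13's iteration count (the original entry is the one quoted above; the replacement count of 10000 is arbitrary):
{{{#!YellowBox
# reduce fork13 from 1000000 iterations to 10000 in the LTP source tree
sed -i 's/^fork13 fork13 -i 1000000$/fork13 fork13 -i 10000/' runtest/syscalls
}}}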