Lab Reduction for z-3-01-poweronoff-fail
v102103 z-3-01-poweronoff-fail.txt s_z-3-01-poweronoff-fail.html ../logev

Lab Reduction is a web-based companion tool to Data Reduction, except that it operates on real test data rather than simulated test data. The data reduction is performed in real time as the test executes, and anyone on the network can view the test results while the test runs.

Running Real Time Test Results Script

This tool is started as a DOS application in the directory that contains the test data. The syntax is:

perl labreduction.pl [filename.txt] [delayinsecs] [TocCor] [TocCov] [LabData] [LeFind] [LeNoFind] [Results] [CPC] [Filter]

where all [fields] are optional; [blank] or [-] selects the default. The fields are set as follows:
filename.txt   The test file name; the default is testlog.txt. The report name is derived from this file name; the default report name is testlog.html. When entering a file name, the .txt extension must be used (e.g., AAAa.txt, AAAb.txt).
delayinsecs    The delay between executions; the script executes 100 times before ending. The default is 20 seconds.
TocCor         If set to [y], correlates the log events to requirements.
TocCov         If set to [y], shows which requirements were addressed and which requirements were not addressed by the test results.
LabData        If set to [y], shows the raw test data.
LeFind         If set to [y], shows the log events that were found. The default is [y].
LeNoFind       If set to [y], shows the log events NOT found in these test results.
Results        If set to [y], shows the test results correlated to the original log events. The default is [y].
CPC            If set to [all], [red], or [a cpc], shows req's satisfied and NOT satisfied by these test results.
Filter         If set to [y], filters items containing certain words from the reports. The default is [y].
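The positional "[-] means default" convention above can be sketched as follows. This is a hypothetical re-implementation in Python, not the actual labreduction.pl code; the field names and defaults are taken from the option list, but the real script's parsing may differ.

```python
# Hypothetical sketch of the script's positional option handling:
# every field is optional, and "-" (or a missing argument) means
# "use the default". Defaults mirror the option list above.
DEFAULTS = {
    "filename": "testlog.txt",
    "delayinsecs": "20",
    "TocCor": "-",
    "TocCov": "-",
    "LabData": "-",
    "LeFind": "y",
    "LeNoFind": "-",
    "Results": "y",
    "CPC": "-",
    "Filter": "y",
}

def parse_options(argv):
    """Fill each positional field, treating '-' or a missing arg as default."""
    opts = dict(DEFAULTS)
    for name, value in zip(DEFAULTS, argv):  # dicts keep insertion order
        if value != "-":
            opts[name] = value
    return opts
```

For example, parse_options(["AAAa.txt", "-", "y"]) would run against AAAa.txt with the default 20-second delay and requirements correlation enabled.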

The file containing the test results is automatically updated by the BOOT application for some test cases. In all other BOOT and RED test cases, the file is manually updated by the testers, either by copying log events from the emulator window or by entering log events based on emulator profile points. With appropriate file naming conventions, parallel testing can be performed.

This application requires the inclusion of a special file (translate.dat) that is created by the logevent application and stored with the instrumentation analysis results. This file translates the log event codes to the original English equivalents. It also requires a version of the TOC (s_toc.dat) to correlate the TOC requirements with the lab test results.
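The code-to-English translation step can be sketched as a simple lookup table. The actual layout of translate.dat is not documented here, so this sketch assumes a minimal "code, tab, original text" line format; the function names are illustrative, not part of the tool.

```python
# Minimal sketch of applying a translate.dat-style mapping, assuming
# a "code<TAB>original text" layout per line (an assumption; the real
# file written by the logevent application may be structured differently).
def load_translate(lines):
    """Build a log-event-code -> English-text table."""
    table = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        code, _, text = line.partition("\t")
        table[code] = text
    return table

def translate_event(table, code):
    """Return the English equivalent of a raw log event code."""
    return table.get(code, "<untranslated %s>" % code)
```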

Analyzing The Test Results

The test results analysis begins during informal testing by examining the real time test results.

  1. Are the start and stop log events as expected?
  2. Are any log events grossly out of place?
  3. Do the log events and sequence look reasonable?
  4. Does the number of log events match previous test runs?
  5. Do the source files and function names in the log events look reasonable?
  6. Does the layout of the log event sequence appear to be similar to previous test runs?
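Checks 4 and 6 above lend themselves to a quick automated comparison against a previous run. The sketch below is an illustrative helper, not part of the tool; the 10% drift tolerance is an assumed threshold, and events are modeled as (source file, function) pairs.

```python
# Sketch of comparing the current run's log event sequence against a
# previous run: flag a noticeable drift in event count and any change
# in the start/stop events. Tolerance value is illustrative.
def compare_runs(current, previous, tolerance=0.10):
    """Return a list of human-readable issues; empty list means OK."""
    if not previous:
        return ["no previous run to compare against"]
    issues = []
    drift = abs(len(current) - len(previous)) / len(previous)
    if drift > tolerance:
        issues.append("event count drifted %.0f%%" % (drift * 100))
    if current[:1] != previous[:1] or current[-1:] != previous[-1:]:
        issues.append("start/stop events differ")
    return issues
```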

If the results appear to be reasonable, copy and paste them into the test procedures. They will form the factory template that will be used during formal testing.

As part of an offline analysis, use the IAT reports and software source to locate the test result log events. Examine the software and look for potential missing log events or wrong log events. This is a final examination to confirm that the test results are as expected.

For the dry run, create a single complete test results report following the sequence that is expected during the formal test. Use these dry run test results to support the formal test.

At the conclusion of all the DRY RUN tests, enable the log events found, log events not found, requirements correlation, and requirements coverage reports. Look for log events and requirements that were not captured during all the test activity. If needed, create new tests to capture the missing requirements.

Analysis Report   Script Option   Comment
Test Comments     Comments        Summary of the comments entered by testers into the test file.
PUIs Verified     Comments        The list of PUIs verified in this test sequence. Use it in the filter service to compare different test sequences.
Req Coverage      Comments        A list of PUIs and the number of times each PUI was encountered in a log event in this test sequence. Use this to gain confidence in the test coverage. For example, a PUI that has been hit 50 times was probably the result of slightly different code paths.
Test Results      Results         The test results in log event format. This analysis connects the raw numerical log events captured during lab testing with the original source code comment statements, source file, and C-function.
Missing Req's     Results         A list of req's that should have been in this test but are missing. It is based on the IAT filter service, where one set of req's is subtracted from another set of req's.

NOTE: These reports are option sensitive. If an option was not selected, the report link is not present.
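The Req Coverage tally can be sketched as a token count over the log event lines. This assumes PUIs are the tokens beginning with "TOC-", as in the sample data later in this report; the real report's matching rules may be broader.

```python
from collections import Counter

# Sketch of the Req Coverage report: count how often each PUI appears
# across the captured log event lines. Assumes PUIs look like "TOC-nnn"
# (an assumption based on the sample data in this report).
def req_coverage(event_lines):
    """Return {PUI: hit count} across all log event lines."""
    hits = Counter()
    for line in event_lines:
        for token in line.split():
            if token.startswith("TOC-"):
                hits[token] += 1
    return dict(hits)
```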

Other Analysis Reports . TOC req's . simulated logevents . translate file . Error_Codes.txt


Settings

Paths
Main Path
Simulation Directory
Analysis Directory
Analysis Report Name
Test File Name

Settings case sensitive
Script Rerun Delay (sec)
CPC, Release, or Test Case Considered
Trace Events

Requirements Accounting - Fixed case sensitive
Future Events
De-Instrumented
Negative Paths

Requirements Accounting - User Defined case sensitive Categories
Name             Description                                         PUI's








Display Filters - flatten display case insensitive
Raw LCD Log Events
Raw PAD Log Events
Raw PIN Log Events
Raw LOC Log Events
Raw Any Log Events
Text Results

Ignore Data - for very fast bypassing of noise case insensitive
Bypass Noise Raw Events
Bypass Noise Text Results

Keywords - copy/paste REQs from a Word table column; they are converted to OR format - case insensitive
Red
Blue
Navy
Purple
Green
Maroon
case sensitive
Orange

Expected Req's From Test Procedure - copy & paste from doc case insensitive
Orange LE SV

LabData Test Comments Test Results LE Found LE NOT Found Show Only Keyword Events
TOC Correlation TOC Coverage Not in Software Baseline Missed Req's Func Call Seq Show Only Expected Req's
Filter Classified Server Config

11/16/2003 00:55:26


Data Reduction Reports for z-3-01-poweronoff-fail


Test Comments Summary Comments entered by testers into test data file

Log Events Data

3-1 Power On Off zz-iatdemo

Tester: ws

1. Power On
1. Done

2. Power Off
2. Done

     


Test Results Lab Data Connected To File Name, C-function, and Comment Statements 30 z-3-01-poweronoff-fail.txt 11/16/2003 00:55:26
1 0000 time I Translate File Created 11/14/2003 23:55:36
3-1 Power On Off zz-iatdemo

Tester: ws

1. Power On
1 1000 1001 I badcoding.c function_a LE SV TOC-001 this is a real rocket scientist - will not compile here
2 1001 1002 I badcoding.c function_a LE SV TOC-001 this is a real rocket scientist - will not compile here
3 1002 1003 I badcoding.c function_b LE SV TOC-002 this is paraphrase of req 2
4 1003 1004 I badcoding.c function_b LE SV TOC-003 this is paraphrase of req 3
5 1004 1005 I badcoding.c function_b LE SV r e q 7 a b o u t i t e m 7 s t u f f AUTO COMMENT // too lazy to paraphrase req
6 1005 1006 I badcoding.c function_b LE SV TOC-008 we really should log all ERROR calls
7 1006 1001 I badcoding.c function_a LE SV TOC-001 this is a real rocket scientist - will not compile here
8 1007 1003 I badcoding.c function_b LE SV TOC-002 this is paraphrase of req 2
9 1008 1002 I badcoding.c function_a LE SV TOC-001 this is a real rocket scientist - will not compile here
10 1009 1005 I badcoding.c function_b LE SV r e q 7 a b o u t i t e m 7 s t u f f AUTO COMMENT // too lazy to paraphrase req
11 1010 1003 I badcoding.c function_b LE SV TOC-002 this is paraphrase of req 2
12 1011 1001 I badcoding.c function_a LE SV TOC-001 this is a real rocket scientist - will not compile here
13 1012 1002 I badcoding.c function_a LE SV TOC-001 this is a real rocket scientist - will not compile here
14 1013 1005 I badcoding.c function_b LE SV r e q 7 a b o u t i t e m 7 s t u f f AUTO COMMENT // too lazy to paraphrase req
15 1014 1005 I badcoding.c function_b LE SV r e q 7 a b o u t i t e m 7 s t u f f AUTO COMMENT // too lazy to paraphrase req
16 1015 1005 I badcoding.c function_b LE SV r e q 7 a b o u t i t e m 7 s t u f f AUTO COMMENT // too lazy to paraphrase req
17 1016 1004 I badcoding.c function_b LE SV TOC-003 this is paraphrase of req 3
18 1017 1005 I badcoding.c function_b LE SV r e q 7 a b o u t i t e m 7 s t u f f AUTO COMMENT // too lazy to paraphrase req
19 1018 1002 I badcoding.c function_a LE SV TOC-001 this is a real rocket scientist - will not compile here
20 1019 1001 I badcoding.c function_a LE SV TOC-001 this is a real rocket scientist - will not compile here
1. Done

2. Power Off
2. Done

99XX series reserved for IAT self tests


PUIs Verified - All in this thread. Use this in the filter service to determine test run differences.

s_z-3-01-poweronoff-fail.html |TOC-001 |TOC-002 |TOC-003 |TOC-008 |done

Total Verified Req's 4


Req Coverage - All in this thread

8 TOC-001 3 TOC-002 2 TOC-003 1 TOC-008

Total Verified Req's 4


Req's Missed In This Test Run This report with the default settings shows the req's that were MISSED in this test. It is based on the filter tool, where an initial set of items is found and a second set is subtracted from the initial set. In this case, the initial set is the expected req's as defined in the test procedure, and the second set is the req's found in the actual test data. Req's are Marked if they are not in the software. Req's are also Marked if they are turned off in the debugger.
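The subtraction described above is plain set arithmetic. The sketch below is illustrative (the function name and return shape are assumptions, not the tool's interface): expected req's minus found req's gives the missed set, and the Marked annotation flags any missed req known to be absent from the software or turned off in the debugger.

```python
# Sketch of the "Missed Req's" computation: expected req's from the
# test procedure minus req's found in the actual test data, with a
# separate Marked list for req's not in the software baseline.
def missed_reqs(expected, found, marked=()):
    """Return (missed, marked_hits) as sorted lists of PUIs."""
    missed = sorted(set(expected) - set(found))
    marked_hits = sorted(set(missed) & set(marked))
    return missed, marked_hits
```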

Don't Show Located Items Show Removed Items Flip Locate & Remove

Locate: none

Remove: s_z-3-01-poweronoff-fail.html |TOC-001 |TOC-002 |TOC-003 |TOC-008 |done

Marked: Bx_xx|TOC-007 |TOC-008 |done

Check if it was Manually verified

Reqs Verification Summary . . . Missed 0 . . . Marked 0 . . . DeInst 0 . . . Manual 0 . . . Found 0


11/16/2003 00:55:26 start
11/16/2003 00:55:26 end
0 secs
done