The IAT produces metrics reports automatically and transparently as its services
are provided to support peer reviews, code walkthroughs, and instrumentation.
The reports include automatic comparisons to a previous baseline and to a
selected "gold standard" baseline, so an analyst can quickly determine whether
things are getting better, worse, or staying the same. A great deal of data is
collected, categorized, and visually organized so that a user can quickly spot
potential issues. There is also a big-picture summary at the end of the detailed
metrics areas which offers an "IAT judgment" on the collection of code. The IAT
judgment call is marked as WORSE, BETTER, or SAME.
Analysis Group Stats: This area gathers traditional metrics such as LOC and
number of Req's, and it also provides different views of those fundamental
metrics, such as the ratio of "comments" to LOC.
Possible Problems: This area is a remnant from the stats-problems report, but
it provides a description of each metric item in the graphical sections of the
report.
Metrics - Percents Within Collections: This area provides metrics in terms of
percent of the code analyzed. This is a great way to compare peer reviews
(small collections of code) against a previous baseline release (the whole
thing). It is also a great way to compare different projects that follow
similar, but not identical, coding standards. For example, assume version A has
10 items flagged and is 10,000 LOC, while version B has 20 items flagged but is
50,000 LOC. Even though version B has more items flagged, one could argue that
it is better than version A, since as a percentage the flagged items are lower
(0.04% versus 0.10%); a sketch of this normalization follows these overview
notes.
Metrics - Counts Within Collections: This area provides metrics in terms of
absolute counts. It is used to compare version releases on the same project and
to get an assessment of the exact number of problems.
In all cases the bar charts represent a deviation from the norm, and the norm
is the green area. In general, for the detailed problems report area, anything
to the left on the bar chart is better and anything to the right is worse.
A pegged needle is red. The bar charts at the top of the report are not in
terms of good or bad, just different. The up/down triangles are another
indication of how the software is progressing: green tends to indicate progress
and red tends to indicate entropy, hence the name of the report, entropy.html
:)
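
As a concrete illustration of the percent normalization described above, here
is a minimal sketch in C. It is not part of the IAT; the version labels and
counts are the hypothetical values from the example, and the function name is
made up for illustration.

    #include <stdio.h>

    /* Illustrative only: normalize flagged items by collection size so that
     * collections of very different LOC can be compared fairly. */
    static double flagged_percent(int flagged_items, int loc)
    {
        return 100.0 * (double)flagged_items / (double)loc;
    }

    int main(void)
    {
        /* Hypothetical values from the example: version A vs. version B. */
        double a = flagged_percent(10, 10000);  /* 0.10% */
        double b = flagged_percent(20, 50000);  /* 0.04% */

        printf("Version A: %.2f%% flagged\n", a);
        printf("Version B: %.2f%% flagged\n", b);
        printf("%s\n", (b < a) ? "Version B is better on a percentage basis"
                               : "Version A is better on a percentage basis");
        return 0;
    }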
The following is a brief description of the Analysis Group Stats metric
items.
- Total Source Lines: This is a count of all the lines.
- Total Non Blank Lines: This is a count of all the comments, headers, and code.
- Total Logical: This is a subset of the total LOC.
- Total SemiColons: This is a subset of the total LOC.
- Total LOC: This is the sum of the total logical and semicolon counts.
- All Files: Count of all the files analyzed.
- Files - C: Count of all the C files analyzed.
- Files - H: Count of all the H files analyzed.
- Files - ASM: Count of all the ASM files analyzed.
- Functions - C: Count of all the functions located by the IAT. The IAT uses
  the function headers to locate functions.
- Log Events in Code: Count of the log events.
- TOC Reqs in Code: Count of all the TOC and PS req's located in the files
  analyzed.
- TOC Reqs in SRDB: Count of the req's in the SRDB file fed to the IAT. Rules
  are used to separate a requirement from a description.
- Debug Events: Count of the debug log events.
- Code Req per SRDB Req: The number of req's located in the code divided by the
  number of req's located in the SRDB. (The derived ratio items in this list
  are illustrated in the sketch after the list.)
- LOC per File: Average lines of code per file.
- LOC per Function: Average lines of code per function.
- Functions per File - C: Average number of functions per file.
- Non Blank Lines Per LOC: If headers are considered comments, this is a
  comment-to-LOC ratio.
- LOC Per Logical: Indicates LOC for each decision.
- C Per H: Ratio of C to H files. It should be greater than 1.
- LOC per Req: Ratio of the lines of code to req's found in the code.
- Functions per Req: Ratio of functions to req's found in the code.
- Logevents per Req: Ratio of log events to req's found in the code.
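
The derived ratio items above are simple quotients of the raw counts. The
following is a rough sketch of how they can be computed; the struct, field
names, sample values, and exact denominators (for example, which file types
count toward LOC per File) are assumptions for illustration, not the IAT's
internal representation.

    #include <stdio.h>

    /* Illustrative raw counts gathered for one collection (made-up values). */
    struct group_stats {
        int total_loc;
        int non_blank_lines;
        int total_logical;
        int files_c;
        int files_h;
        int functions_c;
        int reqs_in_code;
        int reqs_in_srdb;
    };

    /* Guarded division so an empty collection does not divide by zero. */
    static double ratio(int num, int den)
    {
        return (den != 0) ? (double)num / (double)den : 0.0;
    }

    int main(void)
    {
        struct group_stats s = { 12000, 20000, 1500, 40, 35, 300, 120, 150 };

        printf("Code Req per SRDB Req : %.2f\n", ratio(s.reqs_in_code, s.reqs_in_srdb));
        printf("LOC per File          : %.1f\n", ratio(s.total_loc, s.files_c + s.files_h));
        printf("LOC per Function      : %.1f\n", ratio(s.total_loc, s.functions_c));
        printf("Non Blank Lines / LOC : %.2f\n", ratio(s.non_blank_lines, s.total_loc));
        printf("LOC per Logical       : %.1f\n", ratio(s.total_loc, s.total_logical));
        printf("C per H               : %.2f\n", ratio(s.files_c, s.files_h));
        printf("LOC per Req           : %.1f\n", ratio(s.total_loc, s.reqs_in_code));
        return 0;
    }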
The following is a brief description of the Analysis Summary metric
items. These items are reported in terms of flagged LOC, files, and/or functions.
- Fatal Errors: These are artifacts of instrumentation that will lead to
  compile errors most of the time, such as printf's, breaking if-else
  sequences, splitting declaration areas, and instrumenting previously
  instrumented code.
- C Header: The .c, .h, and assembly headers are separately checked for the
  appropriate fields. Missing fields, wrong sequence of the fields, or missing
  headers are detected. Extra fields are ignored. Valid format:
- H Header: The .c, .h, and assembly headers are separately checked for the
  appropriate fields. Missing fields, wrong sequence of the fields, or missing
  headers are detected. Extra fields are ignored. Thread ID is optional and
  will only appear in the extra fields note if there is another header problem.
  Valid format:
- ASM Header: The .c, .h, and assembly headers are separately checked for the
  appropriate fields. Missing fields, wrong sequence of the fields, or missing
  headers are detected. Extra fields are ignored. Valid format:
- Must Fix Classification: The header classification field must be marked; no
  other markings are permitted. The file naming convention is checked against
  the header classification marking. Keywords that may trigger a higher
  classification are compared against the file naming convention. Missing
  headers against classified file names are detected.
- Possible Classification Issues: The header classification field must be
  marked; no other markings are permitted. The file naming convention is
  checked against the header classification marking. Keywords that may trigger
  a higher classification are compared against the file naming convention.
  Missing headers against classified file names are detected.
- SV Marking: The SV field is checked for a YES or NO marking. Blank or other
  markings are detected and noted in the detailed message. The SV header field
  must be present for this check to detect an error. If the SV header field is
  missing, that is noted in the bad header area.
- CV Marking: The CV field is checked for a YES or NO marking. Blank or other
  markings are detected and noted in the detailed message. The CV header field
  must be present for this check to detect an error. If the CV header field is
  missing, that is noted in the bad header area.
- Fixed Keywords Code: Although there is a separate keywords report, there are
  several keywords that must be explained or removed. The bad fixed keywords
  form this set. Examples include tbd, tbs, tbsl, demo, ?'s, etc.
- Fixed Keywords Prologue: Although there is a separate keywords report, there
  are several keywords that must be explained or removed. The bad fixed
  keywords form this set. Examples include tbd, tbs, tbsl, demo, ?'s, etc.
- Conditional Compiles ifdefs: Conditional compiles can become a problem if
  they are not properly managed.
- Missing Curly Braces: If and else statements should enclose their operations
  in curly braces. Missing curly braces can lead to problems as the code
  changes and if or else blocks are modified to include more than one
  statement. (The control-flow checks in this list are illustrated in the first
  sketch after the list.)
- Switch Default balance (green is good, red is bad): All switch statements
  must be paired with a default statement. Missing default statements are
  detected exclusive of comments.
- Default gsiSwError balance (green is good, red is bad): All default
  statements must call gsiSwError. Missing gsiSwError calls and calls that are
  commented out are detected.
- Case Break balance: All case statements with code should be paired with a
  break or return statement. Missing break statements are detected exclusive of
  comments. Without a break the code is less robust and prone to disintegration
  when modified.
- Nested Switches: Some case statements have a nested switch sequence.
- Stacked Case Statements: Some case statements are stacked.
- Calling Rules: The software is checked to determine if calling rules are
  violated.
- Files with No Error Exits: Most of the software should include an error exit.
  This is not a hard and fast rule, but exceptions may need to be justified.
- Files with 15 or more Functions: Good practices limit the number of functions
  in any given source file.
- Files with 500 or more LOC: Good practices limit the LOC in any given source
  file.
- Functions with 100 or more LOC: For proper analysis there must be a valid
  header termination before the start of a function. Good practices limit the
  LOC in any given function.
- C Functions with less than 5 LOC: For proper analysis there must be a valid
  header termination before the start of a function. Good practices limit the
  LOC in any given function.
- Dead Code: This is a very simple detector looking for single-line
  commented-out code. Code that is block commented out is not detected. Dead
  code that is the result of logical errors is not detected.
- Log Event: Logevents are subjected to several checks to determine if there
  may be a potential problem. The checks include redundant text, consecutive
  events without program code in between events, encoding problems (LE SV not
  LE, not SV, not TE, etc.), potentially placing a logevent at the start of a
  procedure rather than at the conclusion, and potentially placing a logevent
  within a loop.
- Line Length: Lines longer than the character limit, counted up to the CR/LF,
  are reported. Lines generated by ClearCase are ignored even though they
  exceed the limit.
- do Loops: do loops have long been considered a bad practice since they tend
  to invite endless loops into the code. (The discouraged constructs in the
  remaining items are illustrated in the second sketch after the list.)
- goto Statements: goto statements have long been considered a bad practice and
  result in unstructured code.
- ?: Operator: This operator often leads to confusion and should be avoided.
- Storage malloc free: Requesting storage using malloc, alloc, or calloc should
  be minimized and should include a symmetrical free or cfree sequence. Storage
  that is not properly freed can lead to memory leaks. There are also SV
  related issues.
- Type ReCasting: Recasting variables should not be practiced since it can lead
  to truncation of data.
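
The following is a minimal, hypothetical C fragment, not drawn from any
analyzed project, showing the kinds of constructs the curly-brace,
switch-default, gsiSwError, case-break, and stacked-case checks look for. The
gsiSwError stub is only a stand-in; its real signature is project specific.

    #include <stdio.h>

    /* Stand-in for the project's gsiSwError call; the real signature is
     * project specific and only assumed here. */
    static void gsiSwError(const char *msg)
    {
        printf("SW ERROR: %s\n", msg);
    }

    static void example(int state, int flag)
    {
        if (flag)
            printf("no braces\n");       /* flagged: if body without curly braces */
        else {
            printf("braced else\n");     /* braced block, not flagged */
        }

        switch (state) {
        case 1:
            printf("case 1\n");
            break;                       /* case with code paired with break */
        case 2:
            printf("case 2\n");          /* flagged: case with code but no break */
        case 3:                          /* stacked case statements are counted */
        case 4:
            printf("case 3 or 4\n");
            break;
        default:                         /* switch paired with a default ... */
            gsiSwError("unexpected state");  /* ... and the default calls gsiSwError */
            break;
        }
    }

    int main(void)
    {
        example(2, 0);
        return 0;
    }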
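
Similarly, this hypothetical fragment gathers several of the discouraged
constructs described above in one place: a do loop, a goto, the ?: operator, a
malloc without a matching free, and single-line commented-out code. It is a
sketch of what gets flagged, not a recommendation.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical function that collects several flagged constructs. */
    static int risky(int count)
    {
        char *buf = malloc(64);            /* flagged: malloc use should be minimized */
        if (buf == NULL)
            goto fail;                     /* flagged: goto statement */

        /* strcpy(buf, "old version");        flagged: single-line commented-out code */
        strcpy(buf, "current version");

        int i = 0;
        do {                               /* flagged: do loop */
            printf("%s %d\n", buf, i);
            i++;
        } while (i < count);

        int status = (count > 0) ? 0 : 1;  /* flagged: ?: operator */

        /* buf is never freed: the symmetrical free is missing (memory leak). */
        return status;

    fail:
        return 1;
    }

    int main(void)
    {
        return risky(3);
    }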
The following is a brief description of the Analysis Summary metric items at
the end of the metric details. They represent totals of the individual metrics
and include the following items.
- LOC: Total number of LOC flagged from the individual metric item areas.
- Functions: Total number of functions flagged from the individual metric item
  areas.
- Files: Total number of files flagged from the individual metric item areas.
- Total Up Flags: Total number of up ticks. As the up ticks increase, things
  get worse.
- Total Down Flags: Total number of down ticks. As the down ticks increase,
  things get better.
- Total Same Flags: Total number of neutral ticks. As the neutral ticks
  increase or decrease, examine the up and down ticks to get a feel for what
  happened.
- Final Tally of Flags: This is the IAT judgment call indicating whether this
  release is better or worse than the baseline. This result appears in both the
  percent and count areas. Remember that the percent area is an attempt to
  normalize against the size of the analysis package. If both areas agree, the
  IAT is firm in its judgment. If there is a discrepancy between the two areas,
  the normalized result in the percent area is the more prudent answer.
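
The report text does not spell out the exact tally algorithm, but as a rough
sketch of the idea, with assumed logic and made-up totals, the final judgment
can be thought of as a comparison of the up and down flag totals in each area,
preferring the normalized percent area when the two disagree:

    #include <stdio.h>

    /* Assumed, illustrative tally logic: the actual IAT rules may differ. */
    static const char *judgment(int up_flags, int down_flags)
    {
        if (up_flags > down_flags)
            return "WORSE";
        if (down_flags > up_flags)
            return "BETTER";
        return "SAME";
    }

    int main(void)
    {
        /* Hypothetical totals for the percent and count areas. */
        int percent_up = 3, percent_down = 7;
        int count_up = 6, count_down = 5;

        const char *percent_call = judgment(percent_up, percent_down);
        const char *count_call = judgment(count_up, count_down);

        printf("Percent area : %s\n", percent_call);
        printf("Count area   : %s\n", count_call);

        /* When the two areas disagree, the normalized (percent) result is the
         * more prudent answer, as noted above. */
        printf("Final        : %s\n", percent_call);
        return 0;
    }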