IAT

Instrumentation & Analysis Tool
Overview

Software Instrumentation and Analysis
Using The Analysis Reports
Running Your Own Software Analysis
Testing and Lab Reduction
Regression Analysis
Process and Traceability
Other Services
Lessons Learned
Internal Operations
Regular Expressions

The IAT produces metrics reports automatically and transparently as its services are provided to support peer reviews, code walkthroughs, and instrumentation. The reports include automatic comparisons to a previous baseline and to a selected "gold standard" baseline, so an analyst can quickly determine whether things are getting better, worse, or staying the same. A great deal of data is collected, categorized, and visually organized so that a user can quickly spot potential issues. A big-picture summary at the end of the detailed metrics areas offers an "IAT judgment" on the collection of code, marked as WORSE, BETTER, or SAME.

Analysis Group Stats This area gathers traditional metrics such as LOC and the number of req's, but it also provides different views of those fundamental metrics, such as the ratio of "comments" to LOC.

Possible Problems This area is a remnant from the stats-problems report, but it provides a description of each metric item in the graphical sections of the report.

Metrics - Percents Within Collections This area provides metrics in terms of percent of the code analyzed. This is a great way to compare peer reviews (small collections of code) against a previous baseline release (the whole thing). It is also a great way to compare different projects that follow similar, but not identical, coding standards. For example, assume version A has 10 items flagged and is 10,000 LOC, while version B has 20 items flagged but is 50,000 LOC. Even though version B has more items flagged, one could argue that it is better than version A since, as a percentage, its flagged items are lower.
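
A minimal sketch of that normalization, using the hypothetical counts from the example above (the IAT's actual report generator is not shown here):

  #include <stdio.h>

  /* Sketch of the percent normalization described above. The counts are
     the hypothetical ones from the example, not real IAT output. */
  int main(void)
  {
      double flagged_a = 10.0, loc_a = 10000.0;   /* version A */
      double flagged_b = 20.0, loc_b = 50000.0;   /* version B */

      /* Flagged items as a percentage of the code analyzed. */
      double pct_a = 100.0 * flagged_a / loc_a;   /* 0.10% */
      double pct_b = 100.0 * flagged_b / loc_b;   /* 0.04% */

      printf("A: %.2f%%  B: %.2f%%\n", pct_a, pct_b);
      printf("%s\n", pct_b < pct_a ? "B is better as a percentage"
                                   : "A is better as a percentage");
      return 0;
  }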

Metrics - Counts Within Collections This area provides metrics in terms of absolute counts. It is used to compare version releases on the same project and to get an assessment of the exact number of problems.

Metrics - Compared with prev release:
ft-red-B1D5B2D6_Preliminary/entropy.html
ft-red-B1B2D6_R1/entropy.html
ft-red-B1B2D6_R2/entropy.html
ft-red-B1B2D6_R3/entropy.html
ft-red-B1B2D6_R3-012902/entropy.html
ft-red-AAA_REL_1/entropy.html
ft-red-AAA_REL_1.08/entropy.html
ft-red-AAA_REL_1.52/entropy.html
ft-red-AAA_REL_1.54/entropy.html
ft-red-AAA_REL_1.56/entropy.html
ft-red-EngRel_1.7/entropy.html

Metrics - Compared with release 1.08:
ft-red-B1D5B2D6_Preliminary/entropy-REL_1.08.html
ft-red-B1B2D6_R1/entropy-REL_1.08.html
ft-red-B1B2D6_R2/entropy-REL_1.08.html
ft-red-B1B2D6_R3/entropy-REL_1.08.html
ft-red-B1B2D6_R3-012902/entropy-REL_1.08.html
ft-red-AAA_REL_1/entropy-REL_1.08.html
ft-red-AAA_REL_1.08/entropy-REL_1.08.html
ft-red-AAA_REL_1.52/entropy-REL_1.08.html
ft-red-AAA_REL_1.54/entropy-REL_1.08.html
ft-red-AAA_REL_1.56/entropy-REL_1.08.html
ft-red-EngRel_1.7/entropy-REL_1.08.html

In all cases the bar charts represent a deviation from the norm, and the norm is the green area. In general, for the detailed problems report area, anything to the left on the bar chart is better and anything to the right is worse. A pegged needle is red. The bar charts at the top of the report are not in terms of good or bad, just different. The up/down triangles are another indication of how the software is progressing: green tends to indicate progress and red tends to indicate entropy, hence the name of the reports, entropy.html :)

The following is a brief description of the Analysis Group Stats metric items. A short sketch of how the derived ratios (items 15 through 24) are computed follows the list.

  1. Total Source Lines This is a count of all the lines.
  2. Total Non Blank Lines This is a count of all the comments, headers, and code.
  3. Total Logical This is a subset of the total LOC.
  4. Total SemiColons This is a subset of the total LOC.
  5. Total LOC This is the sum of the total logical and semicolon counts.
  6. All Files Count of all the files analyzed.
  7. Files - C Count of all the C files analyzed.
  8. Files - H Count of all the H files analyzed.
  9. Files - ASM Count of all the ASM files analyzed.
  10. Functions - C Count of all the functions located by the IAT. The IAT uses the function headers to locate functions.
  11. Log Events in Code Count of the log events.
  12. TOC Reqs in Code Count of all the TOC and PS req's located in the files analyzed.
  13. TOC Reqs in SRDB Count of the req's in the SRDB file fed to the IAT. Rules are used to separate a requirement from a description.
  14. Debug Events Count of the debug log events.
  15. Code Req per SRDB Req The number of req's located in the code divided by the number of req's located in the SRDB.
  16. LOC per File Average lines of code per file.
  17. LOC per Function Average lines of code per function.
  18. Functions per File - C Average number of functions per file.
  19. Non Blank Lines Per LOC If headers are considered comments, this is a comment to LOC ratio.
  20. LOC Per Logical Indicates LOC for each decision.
  21. C Per H Ratio of C to H files. It should be greater than 1.
  22. LOC per Req Ratio of the lines of code to req's found in the code.
  23. Functions per Req Ratio of functions to req's found in the code.
  24. Logevents per Req Ratio of log events to req's found in the code.
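
Items 15 through 24 above are simple ratios of the raw counts in items 1 through 14, as the sketch below shows. The structure and function names are illustrative assumptions, not the IAT's internal names.

  /* Sketch of the derived Analysis Group Stats ratios (items 15-24).
     All names are illustrative; division by zero is guarded. */
  struct group_stats {
      long total_non_blank;   /* item 2  */
      long total_logical;     /* item 3  */
      long total_loc;         /* item 5: logical + semicolon counts */
      long files_all;         /* item 6  */
      long files_c;           /* item 7  */
      long files_h;           /* item 8  */
      long functions_c;       /* item 10 */
      long log_events;        /* item 11 */
      long reqs_in_code;      /* item 12 */
      long reqs_in_srdb;      /* item 13 */
  };

  static double ratio(long num, long den)
  {
      return den ? (double)num / (double)den : 0.0;
  }

  double code_req_per_srdb_req(const struct group_stats *s) { return ratio(s->reqs_in_code, s->reqs_in_srdb); }
  double loc_per_file(const struct group_stats *s)          { return ratio(s->total_loc, s->files_all); }
  double loc_per_function(const struct group_stats *s)      { return ratio(s->total_loc, s->functions_c); }
  double functions_per_file_c(const struct group_stats *s)  { return ratio(s->functions_c, s->files_c); }
  double non_blank_per_loc(const struct group_stats *s)     { return ratio(s->total_non_blank, s->total_loc); }
  double loc_per_logical(const struct group_stats *s)       { return ratio(s->total_loc, s->total_logical); }
  double c_per_h(const struct group_stats *s)               { return ratio(s->files_c, s->files_h); }  /* should be > 1 */
  double loc_per_req(const struct group_stats *s)           { return ratio(s->total_loc, s->reqs_in_code); }
  double functions_per_req(const struct group_stats *s)     { return ratio(s->functions_c, s->reqs_in_code); }
  double logevents_per_req(const struct group_stats *s)     { return ratio(s->log_events, s->reqs_in_code); }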

The following is a brief description of the Analysis Summary metric items. These items are reported in terms of flagged LOC, files, and/or functions. An illustrative code fragment showing several of the flagged constructs follows the list.

  1. Fatal Errors These are artifacts of instrumentation that will lead to compile errors most of the time, such as printf's, breaking if-else sequences, splitting declaration areas, and instrumenting previously instrumented code.
  2. C Header The .c .h and assembly headers are separately checked for the appropriate fields. Missing fields, wrong sequence of the fields, or missing headers are detected. Extra fields are ignored. Valid format:
  3. H Header The .c .h and assembly headers are separately checked for the appropriate fields. Missing fields, wrong sequence of the fields, or missing headers are detected. Extra fields are ignored. Thread ID is optional and will only appear in the extra fields note if there is another header problem. Valid format:
  4. ASM Header The .c .h and assembly headers are separately checked for the appropriate fields. Missing fields, wrong sequence of the fields, or missing headers are detected. Extra fields are ignored. Valid format:
  5. Must Fix Classification The header classification field must be marked, no other markings are permitted. The file naming convention is checked against the header classification marking. Keywords that may trigger a higher classification are compared against the file naming convention. Missing headers against classified file names are detected.
  6. Possible Classification Issues The header classification field must be marked, no other markings are permitted. The file naming convention is checked against the header classification marking. Keywords that may trigger a higher classification are compared against the file naming convention. Missing headers against classified file names are detected.
  7. SV Marking The SV field is checked for YES or NO marking. Blank or other markings are detected and noted in the detailed message. The SV header field must be present for this check to detect an error. If the SV header field is missing, that is noted in the bad header area.
  8. CV Marking The CV field is checked for YES or NO marking. Blank or other markings are detected and noted in the detailed message. The CV header field must be present for this check to detect an error. If the CV header field is missing, that is noted in the bad header area.
  9. Fixed Keywords Code Although there is a separate keywords report, there are several keywords that must be explained or removed. The bad fixed keywords form this set. Examples include tbd, tbs, tbsl, demo, ?'s etc.
  10. Fixed Keywords Prologue Although there is a separate keywords report, there are several keywords that must be explained or removed. The bad fixed keywords form this set. Examples include tbd, tbs, tbsl, demo, ?'s etc.
  11. Conditional Compiles ifdefs Conditional compiles can become a problem if they are not properly managed.
  12. Missing Curly Braces If and else statements should enclose their operations with curly braces. Missing curly braces can lead to problems as the code changes and if or else blocks are modified to include more than one statement.
  13. Switch Default balance (green is good, red is bad) All switch statements must be paired with a default statement. Missing default statements are detected exclusive of comments.
  14. Default gsiSwError balance (green is good, red is bad) All default statements must call gsiSwError. Missing gsiSwError calls and calls that are commented out are detected.
  15. Case Break balance All case statements with code should be paired with a break or return statement. Missing break statements are detected exclusive of comments. Without a break the code is less robust and prone to disintegration when modified.
  16. Nested Switches Some case statements have a nested switch sequence.
  17. Stacked Case Statements Some case statements are stacked.
  18. Calling Rules The software is checked to determine if calling rules are violated.
  19. Files with: No Error Exits Most of the software should include an error exit. This is not a hard and fast rule, but exceptions may need to be justified.
  20. Files with: 15 or more Functions Good practices limit the number of functions in any given source file.
  21. Files with: 500 or more LOC Good practices limit the number of LOC in any given source file.
  22. Functions with: 100 or more LOC For proper analysis there must be a valid header termination before the start of a function. Good practices limit the LOC in any given function.
  23. C Functions with: fewer than 5 LOC For proper analysis there must be a valid header termination before the start of a function. Good practices limit the LOC in any given function.
  24. Dead Code This is a very simple detector looking for single line commented out code. Code that is block commented out is not detected. Dead code that is the result of logical errors is not detected.
  25. Log Event Logevents are subjected to several checks to determine if there may be a potential problem. The checks include redundant text, consecutive events without program code between them, encoding problems (LE SV not LE, not SV, not TE, etc.), a logevent placed at the start of a procedure rather than at its conclusion, and a logevent placed within a loop.
  26. Line Length Reports lines greater than the character limit, where the final characters are CR/LF. Lines generated by ClearCase are ignored since they exceed the limit.
  27. do Loops do loops have long been considered a bad practice since they tend to invite endless loops into the code.
  28. goto Statements goto statements have long been considered a bad practice and result in unstructured code.
  29. ?: Operator This operator often leads to confusion and should be avoided.
  30. Storage malloc free Requesting storage using malloc, alloc, or calloc should be minimized and should include a symmetrical free or cfree sequence. Storage that is not properly freed can lead to memory leaks. There are also SV related issues.
  31. Type ReCasting Recasting variables should be avoided since it can lead to truncation of data.
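
Many of the constructs above are easiest to recognize in code. The fragment below is an illustrative collection of patterns the report would flag, not output from the IAT; gsiSwError is the error routine named in items 13 and 14, and every other name is hypothetical.

  /* Illustrative fragment containing constructs the Analysis Summary flags.
     Function and variable names are hypothetical. */
  #include <stdlib.h>

  extern void gsiSwError(int code);

  void flagged_examples(int mode, long big_value)
  {
      char *buf = malloc(64);           /* item 30: malloc with no matching free */
      short small = (short)big_value;   /* item 31: recast that can truncate data */

      if (mode > 0)                     /* item 12: if/else without curly braces */
          small++;
      else
          small--;

      switch (mode)
      {
          case 1:
              small = 0;                /* item 15: case with code but no break */
          case 2:
          case 3:                       /* item 17: stacked case statements */
              small = 1;
              break;
          /* item 13: no default statement, so no gsiSwError call (item 14) */
      }

      do {                              /* item 27: do loop */
          small += (mode > 0) ? 1 : -1; /* item 29: ?: operator */
          if (small > 100)
              goto done;                /* item 28: goto statement */
      } while (mode != 0);

  done:
      /* small = 99; */                 /* item 24: single-line dead code */
      return;                           /* buf is never freed */
  }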

The following is a brief description of the Analysis Summary items at the end of the metric details. They represent totals of the individual metrics and include the following items. A small sketch of the final tally logic follows the list.

  1. LOC Total number of LOC flagged from the individual metric item areas.
  2. Functions Total number of functions flagged from the individual metric item areas.
  3. Files Total number of files flagged from the individual metric item areas.
  4. Total Up Flags Total number of up tic's. As the up tic's increase, things get worse.
  5. Total Down Flags Total number of down tic's. As the down tic's increase, things get better.
  6. Total Same Flags Total number of neutral tic's. As the neutral tic's increase or decrease, examine the up and down tic's to get a feel for what happened.
  7. Final Tally of Flags This is the IAT judgment call indicating whether this release is better or worse than the baseline. This result appears in both the percent and count areas. Remember that the percent area is an attempt to normalize against the size of the analysis package. If both areas agree, the IAT is firm in its judgment. If there is a discrepancy between the two areas, the normalized result in the percent area is the more prudent answer.
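
A minimal sketch of how the flag totals could be reduced to the WORSE/BETTER/SAME call. The decision rule and the comparison of the two areas are assumptions for illustration, not the IAT's documented algorithm.

  /* Sketch of reducing the up/down/same flag totals to a judgment call.
     The rule below is an assumption for illustration only. */
  #include <stdio.h>

  typedef enum { SAME, BETTER, WORSE } judgment;

  static const char *name(judgment j)
  {
      return j == WORSE ? "WORSE" : (j == BETTER ? "BETTER" : "SAME");
  }

  static judgment final_tally(int up_flags, int down_flags)
  {
      if (up_flags > down_flags) return WORSE;   /* more up tic's: entropy   */
      if (down_flags > up_flags) return BETTER;  /* more down tic's: progress */
      return SAME;
  }

  int main(void)
  {
      judgment pct   = final_tally(3, 7);  /* percent area (normalized) */
      judgment count = final_tally(5, 4);  /* count area (absolute)     */

      if (pct == count)
          printf("IAT is firm: %s\n", name(pct));
      else
          printf("Areas disagree; the normalized percent result (%s) is the more prudent answer\n", name(pct));
      return 0;
  }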