Is there quality evaluation and assurance in your code during development and after deployment?

Mariana Azevedo
8 min read · Mar 15, 2019


If you’ve been a software developer for some time, you’ve probably caught a glimpse of that old code in your IDE, in your GitHub or Bitbucket projects, and felt a quasi-compulsive urge to ̶d̶e̶l̶e̶t̶e̶ ̶i̶t̶ refactor it, whether to make it more readable, eliminate some code smells, or even update it to use new features of the chosen language.

In the first text on code quality with Java, we talked about a set of very simple good-practice tips that should be taken into account while you are developing your code. However, we know that this level of knowledge and fluency in writing good code is only acquired with time, effort, practice, and study.

You may be wondering: is there a way to speed up this learning process for a software development team? Yes; in fact, many teams use the code review step for that purpose as well. But should code review be used purely and exclusively for this type of evaluation? Aren’t there better tools to evaluate the code and ensure that developers are adhering to established conventions?

YES! In this article, we’ll talk about the type of tool you can use to find out whether your code is well built or not: static code analysis tools (SCAT).

Static code analysis tools (SCAT)

Static analysis refers to automated analysis that aims to identify errors and defects in the source code through software metrics, without executing the program. Among the benefits of using a SCAT are:

  1. You can more easily find errors, problematic snippets, and code smells;
  2. The analyzers promote a more objective view of the code, allowing developers to recognize points they had overlooked;
  3. Project leaders can study the code with greater care and identify, from a different perspective, the technical weak points the team needs to improve;
  4. Once problems are identified and removed from the project, the team can focus on other kinds of improvements and evolutions.

There are paid tools and MANY free tools that do a good job of quality analysis. In general, we can divide these tools into three categories: metric-based code analyzers (Metric Tools), Style Checkers, and Linters.

Metric-based code analyzers (Metric Tools)

These are standalone tools or plugins for a specific IDE that measure various quality aspects of the analyzed project, generally in relation to:

  • Complexity: evaluates the number of lines of code, classes, methods, and attributes, and the depth of the inheritance tree. These measures help identify whether a system has many deviations from the main flow (many if, switch, for, or while statements, which drive up cyclomatic complexity), whether classes and methods are too long, and how inheritance is used (maximum inheritance tree depth and number of children, for example); see the sketch after this list. The most common measures used for this purpose are LOC (Lines of Code), NOM (Number of Methods), NOA (Number of Attributes), DIT (Depth of Inheritance Tree), CC (Cyclomatic Complexity), and WMC (Weighted Methods per Class).
  • Coupling: evaluates the dependency relationships between software modules. Basically, it checks how much one module depends on another to function. Ideally, the parts of the software should be as decoupled as possible, that is, they should not rely on others to do their basic work (low coupling). The most common measures are RFC (Response For a Class) and CBO (Coupling Between Objects).
  • Cohesion: refers to how closely the members of a module are related, whether they have a direct and meaningful relationship with one another. Highly cohesive code is code whose members are closely linked by a common goal or, as Uncle Bob puts it, by a single responsibility. The most common measures are LCOM (Lack of Cohesion of Methods), in its several versions, TCC (Tight Class Cohesion), and LCC (Loose Class Cohesion).
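
To make these metrics more concrete, here is a small, contrived Java class annotated with the values a metric tool would typically report for it. The class and the numbers are illustrative only, not the output of any specific tool, and tools may count some of them (constructors, the DIT of direct Object subclasses) slightly differently.

import java.util.List;

// Illustrative class annotated with approximate metric values.
public class OrderValidator {                 // DIT is minimal: it extends only Object

    private final int maxItems;               // NOA = 2 (two attributes)
    private final double maxTotal;

    public OrderValidator(int maxItems, double maxTotal) {
        this.maxItems = maxItems;
        this.maxTotal = maxTotal;
    }

    // CC = 4 for this method: one base path plus three decision points
    // (two if statements and one loop condition).
    public boolean isValid(List<Double> itemPrices) {
        if (itemPrices.size() > maxItems) {   // decision 1
            return false;
        }
        double total = 0;
        for (double price : itemPrices) {     // decision 2
            total += price;
        }
        if (total > maxTotal) {               // decision 3
            return false;
        }
        return true;
    }
    // NOM counts the declared methods; WMC sums the CC of all of them.
    // Both methods use both attributes, so cohesion measures like LCOM stay low (good).
}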

Most of the available tools are open source, can evaluate source code and/or bytecode, and can be installed as plugins in the Eclipse IDE. Others are paid, with trial versions. Below is a list of the most popular tools on the market, several of which come from academic work in Software Engineering:

  • Metrics: the oldest on this list. With its first implementations made available in the early 2000s, Metrics is an Eclipse plugin that analyzes Java code. It calculates 23 metrics at class and project level, with mean and standard deviation, and has a cyclic dependency analyzer for packages. The author’s inspirations for the implemented metrics were Object-Oriented Metrics: Measures of Complexity, a book by Brian Henderson-Sellers (1996); the article OO Design Quality Metrics: An Analysis of Dependencies (1994); and Agile Software Development: Principles, Patterns, and Practices (2002), the latter two by Robert C. Martin (a.k.a. Uncle Bob).
Grid example with Metrics results (Source: http://metrics.sourceforge.net/)
  • SonarQube: a software quality platform, written in Java, with Eclipse and Maven integration, which uses several tools to obtain software metrics. Not counting the test-oriented metrics, SonarQube offers 40 metrics for evaluating Java projects. Among them, the best known are LOC (Lines of Code) and LCOM4 (Lack of Cohesion of Methods, version 4), which flags suspected violations of the SRP (the Single Responsibility Principle I quoted above, in the cohesion item) by checking how many “connected components” exist within a class; an example appears in the sketch after this list. In addition, it has a complete dashboard that shows the number of bugs, vulnerabilities, code smells, and the project’s test coverage.
  • Understand: a tool developed by Scitools. It provides 29 object-oriented metrics, such as CBO and DIT, and 27 code complexity metrics, such as cyclomatic complexity. The company provides a 15-day trial version; the full version is paid (and the license is VERY expensive). It runs on all three major OSs: Linux, Mac, and Windows.
  • JDepend: a tool developed by Mike Clark at Clarkware Consulting, Inc., which analyzes the dependencies of Java packages in terms of extensibility, reusability, and maintainability. The metrics implemented in JDepend are the so-called “Martin Metrics”, described in the book Designing Object-Oriented C++ Applications Using the Booch Method (Prentice Hall, 1995) by our beloved and ever-cited Uncle Bob: Afferent Coupling (Ca), Efferent Coupling (Ce), Abstractness (A), Instability (I), and Distance from the Main Sequence (D).
  • o3smeasures: a tool I developed during my master’s degree. o3smeasures, an abbreviation of object-oriented open (o3) source measures, implements the 15 metrics most commonly used to assess the internal quality of Java projects (a result of a systematic review), e.g., LOC (Lines of Code), NOM (Number of Methods), DIT (Depth of Inheritance Tree), WMC (Weighted Methods per Class), TCC (Tight Class Cohesion), RFC (Response For a Class), CBO (Coupling Between Objects), and also the Number of Classes measure. After measuring a project, you can export the results to .csv and .xml files, visualize which classes have the most technical problems, and identify quality weaknesses by factor.
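
To illustrate what LCOM4’s “connected components” means in the SonarQube item above, here is a deliberately contrived class; the class and its members are hypothetical, invented for this sketch. Its fields and methods split into two groups that never share anything, so LCOM4 would report two connected components, a hint that the class carries two responsibilities and could be split in two.

// Contrived class with two "connected components" (LCOM4 = 2).
// Group 1: emailServer + sendEmail(); Group 2: taxRate + calculateTax().
// The groups share no fields and call none of each other's methods,
// which suggests an SRP violation: two classes hiding inside one.
public class CustomerService {

    private String emailServer;   // used only by sendEmail()
    private double taxRate;       // used only by calculateTax()

    public void sendEmail(String customerAddress, String message) {
        System.out.printf("Sending '%s' to %s via %s%n",
                message, customerAddress, emailServer);
    }

    public double calculateTax(double amount) {
        return amount * taxRate;
    }
}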

Style Checkers

These are the rule checkers of programming style. They analyze whether a given piece of code conforms to the language’s conventions. For Java, for example, the verification is based on the Java Code Conventions: brace placement, declaration order, Javadoc, and so on.

These conventions are defined in files or guides known as profiles, which can be imported into IDEs such as Eclipse, NetBeans, or IntelliJ. Each of these IDEs already ships with a default style guide. If you want to use a specific one, you can import it into your IDE; one of the most widely used external style guides is the Google Java Style Guide. If you want to customize a style guide in Eclipse, you can create a profile by going to Window -> Preferences -> Java -> Code Style -> Formatter. Then, every time you press Ctrl + Shift + F in a class, the code is formatted following your own settings.
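
As a tiny illustration of what the formatter does, the hypothetical snippet below shows the same method before and after pressing Ctrl + Shift + F with a conventional Java profile (the two class names exist only so that both versions compile side by side):

// Before Ctrl + Shift + F: inconsistent indentation, spacing, and brace placement.
class PriceCalculatorBefore{
public double applyDiscount(double price,double rate){
if(rate>0){ return price-(price*rate); }
return price;}}

// The same logic after Ctrl + Shift + F with a conventional Java profile:
class PriceCalculatorAfter {
    public double applyDiscount(double price, double rate) {
        if (rate > 0) {
            return price - (price * rate);
        }
        return price;
    }
}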

Among the most popular tools in this category is Checkstyle. It checks, among other things, the order of method declarations, naming conventions, and the position of the default clause in switch statements. It should be noted that these style violations are not always actual mistakes; still, fixing them can be important to avoid formatting conflicts during merges, among other headaches.
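
Here is a contrived snippet with the kind of violations Checkstyle typically reports, marked in comments. The names in the comments correspond to standard Checkstyle checks (TypeName, ConstantName, DefaultComesLast), but which ones actually fire depends on the checks enabled in your configuration:

// Hypothetical class with common Checkstyle violations marked in comments.
public class report_generator {          // TypeName: class names should be UpperCamelCase
    static final int maxRows = 100;      // ConstantName: constants should be UPPER_SNAKE_CASE

    public String describe(int status) {
        switch (status) {
            default:                     // DefaultComesLast: default should be the last label
                return "unknown";
            case 1:
                return "ok";
        }
    }
}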

Linters

These are structural quality checkers for code. Linters help identify and fix problems without having to run the application. Tools in this category report the problems found, classified by severity, and describe what each problem is about.

The problems reported by linters are generally classified into 7 categories: Architecture and Design, Comments, Code Duplication, Coding Standards, Testing (Code Coverage), Cyclomatic Complexity, and Potential Bugs.

Among the tools most used in this category are:

  • SpotBugs: the successor of the old FindBugs, an error-checking tool originally developed at the University of Maryland. SpotBugs can verify more than 400 bug patterns, and version 3.0.0 already supports Java 8. It integrates with Ant, Maven, Gradle, and Eclipse (as a plugin).
  • PMD: a tool that analyzes common programming faults, such as unused variables, empty catch blocks, unnecessary object creation, and so on (a few of these appear in the snippet after the screenshots below). It also ships a copy-and-paste detector, CPD (very handy!), to find duplicated code snippets. It integrates with various well-known tools, such as Eclipse, NetBeans, JBuilder, JDeveloper, and IntelliJ, and is available as a Maven plugin (it can be enabled when executing the “test” goal) and a Gradle plugin.
  • SonarLint: of this list of linters (and of SCATs in general), Sonar is undoubtedly the most complete tool available. Its Lint version has several features that are very useful day-to-day and that other tools do not have. For example, in on-the-fly analysis, it shows problems as you code, underlining new issues so that you can stay focused on the code being written. In on-changes analysis, the problems found are listed for all the files you have added or updated. In summary, SonarQube + SonarLint is the most accurate combo for identifying errors and technical debt in your project. In addition, it integrates with IDEs and editors such as Eclipse, Visual Studio, VS Code, Atom, and IntelliJ.
SonarLint on-the-fly example (Source: https://www.sonarlint.org/eclipse/index.html)
SonarQube dashboard example (Source: https://www.sonarqube.org/)
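
To give a feel for what these linters catch, here is a short hypothetical class with the kind of findings PMD and SpotBugs typically report, marked in comments. The rule names (UnusedLocalVariable, EmptyCatchBlock, string comparison with ==) correspond to real checks in those tools, but the exact messages and severities depend on your configuration:

import java.io.FileReader;
import java.io.IOException;

// Hypothetical class with typical linter findings marked in comments.
public class ReportLoader {

    public String load(String path) {
        String unused = "temporary";                 // PMD: UnusedLocalVariable
        StringBuilder content = new StringBuilder();
        try (FileReader reader = new FileReader(path)) {
            int c;
            while ((c = reader.read()) != -1) {
                content.append((char) c);
            }
        } catch (IOException e) {
            // PMD: EmptyCatchBlock (swallowing the exception hides failures)
        }
        return content.toString();
    }

    public boolean isDefault(String name) {
        return name == "default";                    // SpotBugs: String compared with == instead of equals()
    }
}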

Although the focus of this article has been evaluating and assuring code quality in Java, on GitHub there are several projects for evaluating code in different languages. Some of the tools I cited in the text also provide metrics and functionality for other languages, like C, C++, C#, Groovy, JavaScript, PHP, and Python. In this link, we have a collection of very well evaluated projects that can be very useful on a daily basis.

As the title of the article itself asks, every project and team that values quality should have quality evaluation and assurance in the code during development and after deployment. Automating this via one or more of these tools (which ones will depend on your goals) is the way to achieve that quality. Think about it!

I hope you enjoyed the post. Hugs, and see you in the next one!

References

  1. Clean Code: A Handbook of Agile Software Craftsmanship (Robert C. Martin, 2011)
  2. GitHub: https://github.com/
  3. SonarQube: https://www.sonarqube.org/
  4. PMD: https://pmd.github.io/
  5. SpotBugs: https://spotbugs.github.io/
  6. Understand: https://scitools.com/
  7. Checkstyle: http://checkstyle.sourceforge.net/
  8. JDepend: https://github.com/clarkware/jdepend
  9. Metrics: http://metrics.sourceforge.net/


Written by Mariana Azevedo

Senior Software Developer/Tech Lead, master in Computer Science/Software Engineering, Java, open source, and software quality enthusiast.
