Writing Code Like a Guru: Code Quality Control

Mofei is a huge fan of open source and has recently come across several incredible open source projects. I particularly envy the code quality control in those projects. Moreover, our company has also been emphasizing code quality lately. Along the way, stumbling into pitfalls and climbing back out of them, I've picked up some experience that I'd like to share with you.

Today I want to discuss continuous integration, unit testing, and code coverage.

[Image: article cover]

From my personal perspective, while unit testing and code coverage may reduce development efficiency to some extent, the benefits they bring to multi-person collaborative projects far outweigh the cost. After all, I often run into frustrating issues, and I don't want you to end up in a situation where changing a piece of code you thought was safe breaks a completely unrelated part. It's like chopping down a tree in front of your house and then finding yourself wanted by the FBI: unbelievable! With code quality control in place, you can know whether your code will have such unbelievable consequences before you even upload it.

Today we will take a boring JS library as an example: https://github.com/zmofei/emojily. I call this library boring because it can convert any string into emoji (for example, 你好 becomes a long string of emoji) and convert the emoji back into text. But! Even its author Mofei doesn't know what such a library is actually good for...

We will start by discussing what happens after code submission.

0. Continuous Integration with Travis

If you think your job is done once the code is pushed to GitHub, then great! You might be exactly the person I'm about to fool.

0.1 What is Continuous Integration

In fact, writing code is only a small part of our development work. After the code is written, you need to think about how to build, test, deploy, and so on. Continuous integration refers to these things that happen after you submit your code. There are many continuous integration tools on the market; for our discussion we will use Travis CI, a favorite among GitHub project authors, as our base tool. Of course, you can easily swap in a CI tool you are familiar with. In our example, Travis handles unit testing and code coverage checks after each code submission.

[Image: Travis CI]

0.2 How to Use Travis

Visit the Travis website and log in with your GitHub account to link some of your projects on GitHub. Travis CI will check the .travis.yml file in the root directory of your project. This file is known as the configuration file for Travis, used to instruct the tool on how to execute integration tasks after you submit code. You can refer to Travis’s official documentation for details on the specific syntax.

For example, we can look at the .travis.yml file in our project:

language: node_js
node_js:
  - 8
after_script:
  - npm run upload-coverage

Here we have a very simple configuration telling Travis that we are using Node.js version 8 and that, after the main script phase finishes, it should run npm run upload-coverage, a script we will introduce later.
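For Node.js projects, Travis fills in sensible defaults for the phases we didn't spell out: it runs npm install to fetch dependencies and npm test as the main script. Written out explicitly, the configuration above is roughly equivalent to this sketch:

language: node_js
node_js:
  - 8
install:
  - npm install   # the default install phase for node_js projects
script:
  - npm test      # the default script phase; our unit tests run here
after_script:
  - npm run upload-coverage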

With this configuration file, Travis will take over the subsequent tasks every time we upload code. Now everything is ready, just waiting for the east wind! Let’s look at unit testing.

1. Unit Testing with npm test

As the name suggests, unit testing is about checking and validating the smallest testable units in software. For example, if we want to conduct unit testing on a bicycle, we can disassemble the bicycle into several small parts such as wheels, seats, chains, and so on, then test whether each of these parts works normally. If all these components operate normally, we can say the bicycle is most likely functional. Note that I say "most likely," not definitely; what if someone installed the front and rear wheels backward?

Now that we know what unit testing is, let's focus on how to perform unit tests.

1.1 How to Run Unit Tests

In a Node.js environment, we can run npm test directly in the project root to start our test script. npm conveniently provides tst (and even just t) as aliases for test, so npm tst works as well. Congratulations, you now know how to start testing! But don't celebrate too early: if you run npm test in a freshly initialized project (created with npm init), you will probably see the following result:

> echo "Error: no test specified" && exit 1
Error: no test specified
npm ERR! Test failed.  See above for more details.

Error?!! This is because we haven't written any test script yet; the placeholder generated by npm init looks like this:

 "test": "echo \\"Error: no test specified\\" && exit 1"

After running it, we naturally get this error. So we need to change it! But where does this default test script live? Open the package.json file in the project root and you will find a field called scripts, with the test command inside it. Yes, you guessed it: this is where the npm test command comes from.

npm test is actually shorthand for npm run test, and npm run test (where test can be replaced by any script name) means looking up the scripts field in package.json and executing the command registered under test (or whatever name you wrote).
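For example, a scripts field might look like this (the hello script is a made-up example to show that the names are arbitrary):

"scripts": {
    "test": "node test/index.test.js",
    "hello": "echo \"Hi from an npm script\""
}

Here npm run hello prints the greeting, while npm test (or npm tst) runs the test entry.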

1.2 How to Write Unit Tests

Now that we know how to run unit tests, we can proceed to write them. The principle is simple: as long as the executed script does not throw an exception and exits normally (with exit code 0), the unit test is considered successful.
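To see this principle in isolation, here is a minimal sketch using nothing but Node's built-in assert module. A failed assertion throws, the process exits with a non-zero code, and the run counts as failed:

// minimal-test.js: a test is just a script whose exit code signals pass or fail
const assert = require('assert');

assert.strictEqual(1 + 1, 2); // throws on failure, so the process exits non-zero
console.log('ok');            // only reached if every assertion above passed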

Therefore, a normal unit test does the following: we take the corresponding method, class, or other minimal execution units, pass parameters to run it, and then check whether the execution result matches our expectations. The principle is straightforward, but we know that the return value can be highly variable; sometimes it is null, sometimes JSON, sometimes an array, or other formats. So we need a mechanism or tool that can easily determine whether this return value matches our expectations. In this project, we chose tape (npm install tape) as our testing tool. It provides various assertion interfaces, allowing us to easily write various test methods. Of course, if you have enough time and energy and want to write every condition check and every test prompt yourself, that is also possible, but in many cases, this is highly inefficient.

In this project, we wrote a simple test file index.test.js in the /test directory. Let me show you a part:

const test = require('tape');
// pull in the method under test (adjust the path to match the real project layout)
const { decode } = require('../index.js');

test('test decode with error input', (assert) => {
    assert.equal('Error Input, Please do not try to change any character!', decode('????????????????????????'));
    assert.end();
});

In test('description', callback), test declares a test task and description explains it; you can write any descriptive text you like. The callback contains the body of the test and receives a single argument, on which we call methods like equal or end to describe the task. I won't go into the details here (if anyone needs it, feel free to leave me a message; if there are enough requests, I can write a separate article about tape).
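Beyond equal, tape ships a whole family of assertion methods. A short sampler (the values under test are made up for illustration):

const test = require('tape');

test('a sampler of tape assertions', (assert) => {
    assert.plan(3);                                     // declare the expected number of assertions instead of calling end()
    assert.equal(1 + 1, 2);                             // strict comparison for primitives
    assert.deepEqual({ a: 1 }, { a: 1 });               // recursive comparison for objects and arrays
    assert.throws(() => { throw new Error('boom'); });  // passes when the function throws
});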

After writing the test, we try running it:

node test/index.test.js

Then you will get results like this:

ok 1 should be equal
# test decode with error input
ok 2 should be equal
# test decode with error input
ok 3 should be equal
# test decode with changed input
ok 4 should be equal
# test commend from slack
ok 5 should be equal

1..5
# tests 5
# pass  5

# ok

If there is an error, you will see a result like this:

ok 1 should be equal
# test decode with error input
not ok 2 should be equal
  ...
# test decode with error input
ok 3 should be equal
# test decode with changed input
ok 4 should be equal
# test commend from slack
ok 5 should be equal

1..5
# tests 5
# pass  4
# fail  1

In this example, we made the second assertion fail. From these results we can easily see which assertion failed, marked not ok (and the summary at the bottom shows 5 assertions in total, tests 5, of which 4 passed, pass 4, and 1 failed, fail 1).

Now we can put this script into the test entry in package.json and run it with npm test:

"scripts": {
    "test": "node test/index.test.js",
}

You may be wondering: what if we have multiple test files and want to execute them all? tape solves this too; we just change the command to:

"scripts": {
    "test": "tape test/*.test.js",
},

Now, when you run npm test, tape will pick up every file ending in .test.js under the test directory.
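For instance, you could drop a second file next to the first and the same glob would pick it up (the encode import path here is an assumption about the project layout):

// test/encode.test.js
const test = require('tape');
const { encode } = require('../index.js'); // hypothetical path; adjust to the real export

test('encode returns a string', (assert) => {
    assert.equal(typeof encode('hi'), 'string');
    assert.end();
});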

Once you've written the test scripts, Travis will run them automatically every time you upload code to GitHub. If they pass, it marks your commit with a green check; if they fail, it unceremoniously slaps on a red cross.

[Image: commit status checks on GitHub]

Congratulations! So far, you have learned how to write test files and how to let continuous integration automatically run test commands.

But does that end the story?

2. Code Coverage with codecov.io

Things are far from as simple as we might imagine. Suppose Mofei had a little bro who, pushed into it by Mofei, reluctantly started writing unit tests. To cut corners, this clever little bro wrote a script like this:

const test = require('tape');

test('go to hell test!!', (assert) => {
    assert.ok(true); // always passes, tests nothing
    assert.end();
});

He figured this script would succeed every single time, so every submission would fool Travis, right?

Of course, Mofei is not someone to be trifled with! He quickly came up with a countermeasure, and that’s where code coverage comes into play!

Code coverage, which should more accurately be called test code coverage, measures how much of your code is actually exercised by your tests, and it is an effective reflection of unit test quality.
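To build an intuition for what coverage counts, consider this made-up function. A test suite that only ever calls sign(3) executes the first branch and never touches the negative path, so a coverage tool will flag the untested line:

// sign.js: a function with two branches (a made-up example)
function sign(n) {
    if (n >= 0) return '+';
    return '-'; // never executed if tests only pass in non-negative numbers
}

module.exports = { sign };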

2.1 How to Check Code Coverage?

To check code coverage, we can use existing tools. Here, we introduce nyc (npm install nyc), which can conveniently produce a code coverage report.

Once installed, we simply need to add a command to package.json:

"scripts": {
    "coverage": "nyc --reporter html tape test/*.test.js",
},

The first part nyc --reporter html means using nyc to export an html report (for the usage of this tool, you can check the official documentation or let me know. If needed, I can write a separate article to introduce this tool), and the second part tape test/*.test.js is self-explanatory.
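nyc supports several reporters, and they can be combined; for example, adding a text reporter prints a summary table straight to the terminal alongside the HTML report:

"coverage": "nyc --reporter text --reporter html tape test/*.test.js"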

Next, we can run npm run coverage (note that you cannot shorten this to npm coverage; unlike test, custom scripts don't get a top-level alias and must go through npm run). Once it completes, you'll find a new coverage directory in your project. Feel free to open it, locate src/index.html, and open that file in a browser. You should see something like this:

[Image: coverage report, 100% everywhere]

PS: This is a very good example from Mofei, where every coverage metric is at 100%. Learn from Mofei!

However, below is a non-standard unit test written by Mofei's little bro:

[Image: coverage report with low coverage]

Here we see that encode.js has a shocking coverage of only 18.18%!! (Salary deducted! Salary deducted!)

Where exactly isn’t complete? Let's click on encode.js.

[Image: line-by-line coverage view of encode.js]

Isn’t that exciting?! All statements not covered by unit tests are highlighted in red! Now Mofei's little bro can't be lazy anymore and must work to improve his code coverage, haha!

In actual projects, a coverage rate of 100% is the ideal, but achieving it consumes a lot of effort and is sometimes simply not feasible. Therefore, in practice we can set an acceptable threshold, such as 90% or 85%; this should be decided based on your team's specific situation.
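nyc can even enforce such a threshold for you: with --check-coverage, the coverage run exits with an error when a metric falls below the limit, which in turn makes Travis fail the build (the 90 here is just an example value):

"coverage": "nyc --check-coverage --lines 90 --reporter html tape test/*.test.js"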

At this point, we can use tools to check the quality of our unit tests, but don’t think this is the end; in fact, it can get even more interesting!

2.2 How to Use codecov.io with GitHub

Think about it: having to run npm run coverage by hand every time is a bit cumbersome, isn't it? Can we push this even further, say, automatically run coverage after every code submission? Recalling the continuous integration setup from earlier, we can.

Let me introduce a helpful tool: codecov.io. It not only fulfills this wish but is also clever enough to tell you how the code quality is doing every time you submit a PR.

[Image: codecov coverage comment on a pull request]

Pretty slick, right?! Let's see how to make it work!

Open https://codecov.io and log in with GitHub. Find the Add new repository button (I know this button is hidden a bit deep, but you will find it), then select one of your GitHub projects. The system will issue you an Upload Token. This token proves your identity, so you can upload reports to your codecov project from anywhere.

[Image: codecov Upload Token page]

To upload reports, we need to install the official tool codecov (npm install codecov) and then write an upload script in package.json:

"upload-coverage": "nyc report --reporter json && codecov -f ./coverage/coverage-final.json"

The first part nyc report --reporter json is used to generate a json report, and the second part codecov -f ./coverage/coverage-final.json is for uploading to codecov.
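One caveat: nyc report does not rerun the tests; it only reformats the coverage data saved by a previous nyc run, so in the Travis flow the script phase should run the tests under nyc first (for example, by using the coverage command as the test script). Putting all the pieces from this article together, the scripts section might look something like this:

"scripts": {
    "test": "tape test/*.test.js",
    "coverage": "nyc --reporter html tape test/*.test.js",
    "upload-coverage": "nyc report --reporter json && codecov -f ./coverage/coverage-final.json"
}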

Now, you just need to run:

CODECOV_TOKEN="21639620-4627-42ef-a21f-f270c6358671" npm run upload-coverage

And your code report will be uploaded.

But!

Having to enter this token every time can be a bit annoying, can’t we let our buddy Travis solve this problem? The answer is yes! Remember our Travis configuration? We added npm run upload-coverage to after_script:

after_script:
  - npm run upload-coverage

But if you look closely, where is the token? Why don’t we need to write it here? How does it know our permissions?

Imagine if we wrote the token directly into the configuration file: everyone could see it and abuse our permissions, which is highly unsafe. Travis anticipated this and provides a way to set environment variables outside the repository.

To set environment variables, you need to visit Travis’s website and find the settings.

[Image: Travis repository settings]

Add your token as CODECOV_TOKEN under Environment Variables.

[Image: Travis environment variables]

Now, every time Travis runs upload-coverage, the token is injected as an environment variable and picked up automatically.

Mission accomplished!

Let’s upload some code to GitHub and see what happens!

[Image: commit list with build and coverage status]

After a successful upload, you will find a status indicator in the commits list. You can click on Details to view specifics. In the details, you can see the detailed logs of each test execution and your code coverage records. You can even see a very nice code coverage graph. I’ll leave this as a teaser; go discover it yourself.

Finally, one last little trick: don't forget to add these little badges to your README. They update with every code submission and make your repo look that much more professional!

[![Build Status](https://travis-ci.com/zmofei/emojily.svg?branch=master)](https://travis-ci.com/zmofei/emojily) 
[![codecov](https://codecov.io/gh/zmofei/emojily/branch/master/graph/badge.svg)](https://codecov.io/gh/zmofei/emojily) 
[![GitHub](https://img.shields.io/github/license/mashape/apistatus.svg)](LICENSE) 
[![npm](https://img.shields.io/npm/v/emojily.svg)](https://www.npmjs.com/package/emojily)

The final result is:

[Image: README with build, codecov, license, and npm badges]

Each time you submit code, the corresponding build status and codecov percentage will be displayed on the page according to the results of the run, allowing anyone to see whether your latest unit tests have passed and understand the quality of your unit tests. Isn’t that cool? Hurry up and try it out!!!

THE END
