What are the purposes of testing a piece of software? Who should test the software? Should developers write tests for their own code? The answers to these questions vary from project to project and from team to team; in this post I'll discuss the methods and steps I've taken in testing my code.
Tests are, at their core, a way of making sure that the software the team produces gives the results it was designed to produce. Given a set of requirements, a piece of software is written to satisfy them. But how do we test the software to make sure those requirements are satisfied, you might ask. There are four levels of software testing:
Unit testing
These are tests that make sure a small section of code works as intended. The code under test could be a function or a class.
Integration testing
A step up from unit testing. A piece of software often consists of many components split off from each other to reduce coupling. Integration tests exercise the interfaces and interactions between those components.
System testing
Tests the whole system to make sure it's integrated properly. This differs from integration testing in that integration testing tests the components of the software, while system testing tests the whole build of the software. This might include interactions with external services.
Acceptance testing
Often the user provides a set of criteria that must be satisfied for the software to be accepted as complete. Also known as User Acceptance Testing, acceptance testing verifies that these acceptance criteria are satisfied so the software can be accepted by the users, customers, or stakeholders of the project.
In this article, we'll discuss unit and acceptance testing. Examples will be given in TypeScript on Node.js, using the Loopback 4 framework.
A good and short answer to this question is that it is simply to make sure that our code works as we expect it to work. If we write a function that receives two integer arguments, adds them together, and returns the result, we would expect to receive 3 when given the integers 1 and 2.
function addTwoIntegerTogether(a: number, b: number): number {
  return a + b
}
You might think the function is pretty straightforward, right? Why would we write tests for a simple function like this? You would be right to think so. However, now imagine that the function is a little more complex.
function iKnowWhatThisFunctionDoesBecauseIWroteItDuhh(a: number, b: number): number {
  const c = (a + b) ** a
  const d = c * (b + (a - 42))
  return d
}
Now the function has multiple lines and a few more mathematical operations. Is it complex enough to write tests for? We could test the function manually, of course: get out a calculator and go through each line of the function. But the main problem isn't really making sure the function works as intended now (technically that's half the problem); it's making sure the function still works as intended later in the life of the software. The function might be refactored or rewritten entirely in the future, for performance or code restructuring reasons. The tests for the function are there to make sure that, after these changes, it still works as intended.
// HEYY, someone refactored me!
// Do I still produce the same result, given the same input as before?
function iKnowWhatThisFunctionDoesBecauseIWroteItDuhh(a: number, b: number): number {
  return ((a + b) ** a) * (b + (a - 42))
}
This brings us to another concept in software testing: regression testing. Regression tests are tests written in the past and run again after a change to the software, to make sure it still works as intended. These tests also include tests for bugs that were found along the way, so the test suite often grows over time.
The project is written with the Loopback 4 framework in TypeScript. You can clone the project from GitHub. The project structure consists of the following:
/src/datasources
: Datasources are connectors that represent data in external systems. These external systems could be database storage such as PostgreSQL, or a cache such as Redis. A datasource can also be a connector for an external REST API. In this project there are two datasources: a PostgreSQL database datasource, and an external REST API datasource (the collection API).
/src/models
: Models represent the business domain objects. A model's properties, once configured, are set up as tables in the database. This project consists of one local model (item), which is stored in the PostgreSQL database, and one external model (collection), which exists in a remote system and is accessed through the REST API.
/src/repositories
: A repository is a connector between a model and a datasource, providing the model access to the underlying database (datasource). Since there is only one local model (item), there is only one repository: the item repository, which connects the item model to the PostgreSQL database.
/src/services
: Services are classes that perform operations and are resolved by bindings (dependency injection) within the Loopback context system. These could be custom classes you create to do some operations, or Loopback proxy service classes. A Loopback proxy service class is created to link to a connector datasource (REST, SOAP, or gRPC). This project contains two service classes: a proxy service class for the collection external REST API datasource, and a calculator service class for basic calculation operations (addition, subtraction, multiplication, and division).
/src/controllers
: Controllers represent the API operations available in the application. In a REST API project, the controllers represent the application's available REST APIs. In this project there are two controllers: the item controller, used for accessing the item model in the underlying database, and the calculator controller, which exposes the calculator service class's methods. Note that the item controller is injected with the collection proxy class: when an item is created, a new collection is created too, by calling the collection POST REST API through the proxy service.
/src/__tests__
: A collection of tests for the controllers and services. The tests include acceptance tests for the item and calculator controllers, and a unit test for the calculator service class.

src/
  __tests__/
    acceptance/
      model/
        collection-service.model.ts
        memory-db.model.ts
      calculator.controller.acceptance.ts
      item.controller.acceptance.ts
      test-helper.ts
    unit/
      calculator.unit.ts
  controllers/
    calculator.controller.ts
    item.controller.ts
  datasources/
    collection.datasource.ts
    datasource.ts
  models/
    collection.model.ts
    item.model.ts
  repositories/
    item.repository.ts
  services/
    calculator.service.ts
    collection.service.ts
To run a simulated production version of the application, go into the project directory and run the following command in the terminal:
make
This command runs the makefile, which installs the NodeJS dependencies, builds a Docker image of the application, and brings up a Docker Compose network consisting of two containers: the application container (running on port 3000) and a PostgreSQL container (running on port 5432). The application container takes its environment variables from the file container.env.
This simulated production version has the application connect to a real PostgreSQL database, and includes a dummy template URL for the collection API endpoint. All endpoints from the controllers can be called and will give the expected responses, except one: the POST /item endpoint. This is the only endpoint that makes a request to the collection API. Because the application is running in a simulated production environment, the external collection API doesn't actually exist, which prevents us from fully testing our application. If only there were a way to mock the service easily, without having to actually write and run the mock service itself. (Hint: interfaces.)
Interfaces state method signatures and properties that a class can choose to implement. The method signatures an interface states don't include any implementation; the class has to implement them itself. Interfaces are a great way to decouple a code base. For example, take two classes (A and B) that are used for storing data into some sort of data store. The two classes have different methods for accessing and storing data in their respective data stores.
class A {
  constructor(private db: DBA) {}

  public async createData(data: Data): Promise<DataFromA> {
    return this.db.create(data)
  }

  public async findData(id: number): Promise<DataFromA> {
    const data = await this.db.find({
      eql: {
        id: id
      }
    })
    if (data.length === 0) {
      throw new Error(`Data with id: ${id} doesn't exist.`)
    }
    return data[0]
  }
}

class B {
  constructor(private db: DBB) {}

  public async create(data: Data): Promise<DataFromB> {
    return this.db.create(data)
  }

  public async findByID(id: number, options: OptionB): Promise<DataFromB> {
    return this.db.findByID(id, options)
  }
}
As you can see, the method signatures are different, so it would be difficult to use the two classes interchangeably. Imagine that when starting off the project, the A class was used as its data store: all calls to store and retrieve data go through A's methods. Then imagine that, some time into the project's life, it's decided to change the data store from the A class to the B class. The method signatures of the two classes are different, so there'll be quite a bit of refactoring to do to change from A's methods to B's.
This extra work could have been avoided if both classes shared their method signatures, so that the main code base wouldn't have to change its method calls. The interface IDB below is an example of such an interface, to be implemented by the A and B classes. It states that any class implementing it must have two public async methods, create and find, each returning a promise of the Data object.
interface IDB {
  create(data: Data): Promise<Data>
  find(id: number, options: OptionB): Promise<Data>
}
Here the two classes implement the IDB interface. As you can see, the underlying logic of each method is similar (though not completely unchanged) compared to the earlier versions without the interface. The changes needed are to the method signatures, to satisfy the interface: the method names change, as do the parameters and return types. Each method's underlying logic also needs a small change to satisfy the return type; for example, the return types DataFromA and DataFromB need to be mapped/converted to the return type Data.
class A implements IDB {
  constructor(private db: DBA) {}

  public async create(data: Data): Promise<Data> {
    const dataFromA = await this.db.create(data)
    return this.mapToData(dataFromA)
  }

  public async find(id: number): Promise<Data> {
    const data = await this.db.find({
      eql: {
        id: id
      }
    })
    if (data.length === 0) {
      throw new Error(`Data with id: ${id} doesn't exist.`)
    }
    return this.mapToData(data[0])
  }

  private mapToData(data: DataFromA): Data {
    ...
  }
}

class B implements IDB {
  constructor(private db: DBB) {}

  public async create(data: Data): Promise<Data> {
    const dataFromB = await this.db.create(data)
    return this.mapToData(dataFromB)
  }

  public async find(id: number, options: OptionB): Promise<Data> {
    const dataFromB = await this.db.findByID(id, options)
    return this.mapToData(dataFromB)
  }

  private mapToData(data: DataFromB): Data {
    ...
  }
}
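The mapToData bodies are elided above. As a sketch, a hypothetical implementation might look like this (the field names record_id and payload are made up for illustration; they aren't from the project):

```typescript
// Hypothetical shapes, for illustration only.
interface DataFromA {
  record_id: number
  payload: string
}

interface Data {
  id: number
  value: string
}

// Convert the store-specific shape (DataFromA) into the shared Data
// shape that the IDB interface requires.
function mapToData(data: DataFromA): Data {
  return {
    id: data.record_id,
    value: data.payload,
  }
}
```

The point is simply that each class keeps its store-specific details private and exposes only the shared Data shape.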
Now for the actual use of interfaces in testing. Imagine there's a class (X) implementing an interface (IX), used for broadcasting some data from our application to some external service. When testing our application, we don't want to actually send data to the external service, so we'll have to mock the class. We can write a mock class (XMock) that implements the IX interface, where the logic of the mock is up to you: implement as much or as little as you want. How to configure the application to use the mock class XMock varies between frameworks and languages, and I won't get into that in this article.
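A minimal sketch of this pattern (IX, X, and XMock are placeholder names, not from the project):

```typescript
// The interface both the real class and the mock implement.
interface IX {
  broadcast(data: string): Promise<void>
}

// The real class would send the data to the external service,
// e.g. via an HTTP request (elided here).
class X implements IX {
  public async broadcast(data: string): Promise<void> {
    // ...send `data` over the network...
  }
}

// The mock satisfies the same interface but does nothing, so tests
// never touch the external service.
class XMock implements IX {
  public async broadcast(data: string): Promise<void> {}
}

// Anywhere the application depends on IX, either class can be supplied.
const service: IX = process.env.NODE_ENV === 'test' ? new XMock() : new X()
```

Because the application only depends on IX, swapping the real class for the mock requires no changes to the calling code.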
Below is a real use case of using an interface to mock an external service class. The file /src/services/collection.service.ts states an interface and a class for the collection REST API proxy service. The class uses a REST config from the file /src/datasources/collection.datasource.ts (you don't need to worry about this file). Here you can see that the CollectionProvider class implements the Provider<CollectionService> interface. The interface specifies that the class must have a value method returning a class that implements the CollectionService interface. So the provider class takes the REST config from the collection datasource, composes a class implementing the CollectionService interface from that config, and returns it in the value method.
/*
/src/services/collection.service.ts
*/
import {BindingScope, inject, injectable, Provider} from '@loopback/core'
import {getService} from '@loopback/service-proxy'
import {CollectionDataSource} from '../datasources'
import {Collection} from '../models/collection.model'

export interface CollectionService {
  // this is where you define the Node.js methods that will be
  // mapped to REST/SOAP/gRPC operations as stated in the datasource
  // json file.
  getAllCollection(): Promise<Collection[]>
  createCollection(body: Collection): Promise<Collection>
  getCollection(id: number): Promise<Collection>
  updateCollection(id: number, body: Collection): Promise<void>
  deleteCollection(id: number): Promise<void>
}

@injectable({scope: BindingScope.APPLICATION})
export class CollectionProvider implements Provider<CollectionService> {
  constructor(
    // Collection must match the name property in the datasource json file
    @inject('datasources.Collection')
    protected dataSource: CollectionDataSource = new CollectionDataSource(),
  ) {}

  value(): Promise<CollectionService> {
    return getService(this.dataSource)
  }
}
The collection service is injected and used in the controller file /src/controllers/item.controller.ts. In the ItemController class constructor, the collection service is injected and set as a private property of the class. Notice that the type of the injected collection service is CollectionService, an interface, not a concrete class. The @inject decorator from the Loopback framework is used here. Loopback's dependency injection works by registering/binding each class instance (services, datasources, etc.) to the application context; from there, the bindings (e.g. "services.Collection") can be used to inject a class's dependencies. Once injected into the controller class, the collection service is used in the create method to create a collection with the created item's information. When testing our application, we'll want to mock the collection service so that our application doesn't send test data to the external service.
/*
/src/controllers/item.controller.ts
*/
export class ItemController {
  constructor(
    @repository(ItemRepository)
    private itemRepository: ItemRepository,
    @inject('services.Collection')
    private collectionService: CollectionService,
  ) {}

  @post('/item', {
    responses: {
      '200': {
        description: 'Item model instance',
        content: {
          'application/json': {schema: getModelSchemaRef(Item)},
        },
      },
    },
  })
  async create(
    @requestBody({
      content: {
        'application/json': {
          schema: getModelSchemaRef(Item, {
            title: 'NewItem',
            exclude: ['id'],
          }),
        },
      },
    })
    item: Omit<Item, 'id'>,
  ): Promise<Item> {
    // Create and store an item in our db.
    const newItem = await this.itemRepository.create(item)
    // Create a collection with the information from the created item.
    const newCollection: Collection = {
      name: newItem.name,
      item_id: newItem.id,
    }
    // Send the collection to the collection REST service. This function makes
    // an HTTP POST request to an external service.
    await this.collectionService.createCollection(newCollection)
    // Return the created item.
    return newItem
  }

  ...
}
The Loopback framework provides a testing library that integrates the Mocha testing framework. All test files are located in the /src/__tests__ directory. Our project's tests directory consists of acceptance tests and a unit test. We'll look at the acceptance test for the item controller first. The test file structure is as follows:
describe
: Describes the thing we're testing.
before
: A function in the describe scope that runs once at the start of the describe.
after
: A function in the describe scope that runs once at the end of the describe.
it
: A test case within the describe. One describe can have multiple it test cases.
Note that before and after only run once in the describe scope. If you want something similar to run before and/or after each it test case, use beforeEach and afterEach instead.
/*
/src/__tests__/acceptance/item.controller.acceptance.ts
*/
import {Count} from '@loopback/repository'
import {Client, expect} from '@loopback/testlab'
import {Loopback4TestingApplication} from '../../application'
import {Item} from '../../models'
import {mockServices, setupApplication} from './test-helper'

describe('Acceptance: Item Controller', () => {
  let app: Loopback4TestingApplication
  let client: Client

  before('setupApplication', async () => {
    ;({app, client} = await setupApplication())
    // Mock any services that won't be used in testing this application.
    await mockServices(app)
  })

  after(async () => {
    // Drop all tables in the db to start anew.
    await app.stop()
  })

  ...

  it('invokes POST /item: expect to get the created item', async () => {
    const res = await client
      .post(`/item`)
      .expect(200)
      .send(<Partial<Item>>{
        name: 'item_1',
        colour: 'blue',
      })
    const expectedRes: Partial<Item> = {
      id: 1,
      name: 'item_1',
      colour: 'blue',
    }
    expect(res.body).to.deepEqual(expectedRes)
  })
})
In before, a function (mockServices) is called to mock the services for each test run. The collection service is mocked inside this function and bound to the application context. The code below, from the file /src/__tests__/acceptance/test-helper.ts, shows the implementation: the mockServices function takes the main application class as app and binds an instance of the class MockCollectionService to the application context with the binding key "services.Collection".
/*
/src/__tests__/acceptance/test-helper.ts
*/
import {Loopback4TestingApplication} from '../..'
import {MockCollectionService} from './model'

...

export async function mockServices(
  app: Loopback4TestingApplication,
): Promise<void> {
  // Bind the service that makes external REST call to a mock service.
  app.bind('services.Collection').toClass(MockCollectionService)
}
The MockCollectionService class implementation, from the file /src/__tests__/acceptance/model/collection-service.model.ts, is given below. The class implements the CollectionService interface, where the implementation of each method does nothing; it only needs to have all the methods to satisfy the interface it's implementing. Now when the item controller test tries to create an item via POST /item and it calls the createCollection method on the mock service, the method simply does nothing and returns the given Collection argument.
/*
/src/__tests__/acceptance/model/collection-service.model.ts
*/
import {Collection} from '../../../models/collection.model'
import {CollectionService} from '../../../services'

// A class which mocks the Collection class by implementing the
// CollectionService interface.
// The implemented functions are simply mocked to do nothing.
export class MockCollectionService implements CollectionService {
  public async getAllCollection(): Promise<Collection[]> {
    return []
  }

  public async createCollection(body: Collection): Promise<Collection> {
    return body
  }

  public async getCollection(id: number): Promise<Collection> {
    return {
      name: 'string',
      item_id: 1,
    }
  }

  public async updateCollection(id: number, body: Collection): Promise<void> {}

  public async deleteCollection(id: number): Promise<void> {}
}
You can see how powerful interfaces can be in helping us decouple our code base and write better code. From here on, this article discusses Loopback-specific methods for testing the application. If you're interested, read on; otherwise, I hope this article has been helpful to you.
This application uses a dynamic datasource instead of the static datasource provided by default in Loopback 4. A dynamic datasource is one that can be created programmatically. The file /src/datasources/datasource.ts shows how this is done. The function initDatasource, when called, checks whether the application is running in a production environment; if so, it creates a PostgreSQL-type datasource and binds it to the application context. Remember earlier in the article where the makefile simulates a production environment? In a production environment, a PostgreSQL database is set as the datasource, whereas in other environments, such as dev, no datasource is set (we'll see why in a moment).
/*
/src/datasources/datasource.ts
*/
import {juggler} from '@loopback/repository'
import {Loopback4TestingApplication} from '../application'

export async function initDatasource(
  app: Loopback4TestingApplication,
): Promise<void> {
  if (process.env.NODE_ENV === 'production') {
    await postgreSQLDataSource(app)
  }
}

async function postgreSQLDataSource(
  app: Loopback4TestingApplication,
): Promise<void> {
  const dsName = 'PostgreSQL'
  const datasource = new juggler.DataSource({
    name: 'PostgreSQL',
    connector: 'postgresql',
    url: '',
    host: process.env.DATABASE_SERVER,
    port: process.env.DATABASE_PORT,
    user: process.env.DATABASE_USERNAME,
    password: process.env.DATABASE_PASSWORD,
    database: process.env.DATABASE_DB_NAME,
  })
  app.dataSource(datasource, dsName)
}
The initDatasource function is called in the main function of the /src/index.ts file, which is the entry point of the application. After instantiating a new application instance, main attempts to create the PostgreSQL datasource. Note that this is called before booting the application; this is done to satisfy Loopback's Life Cycle requirements.
/*
/src/index.ts
*/
import {ApplicationConfig, Loopback4TestingApplication} from './application'
import {initDatasource} from './datasources'
require('dotenv').config()

export * from './application'

export async function main(options: ApplicationConfig = {}) {
  const app = new Loopback4TestingApplication(options)
  await initDatasource(app)
  await app.boot()
  await app.migrateSchema()
  await app.start()

  const url = app.restServer.url
  console.log(`Server is running at ${url}`)
  console.log(`Try ${url}/ping`)

  return app
}

...
Now that we've discussed the production case of the application datasource, let's discuss how we can switch to a memory based datasource when we're testing the application via the Loopback testing framework.
There's a function called setupApplication in the file /src/__tests__/acceptance/test-helper.ts. This function is called by the controller acceptance tests to create and set up the application instance the tests run against. Below you can see setupApplication being called in the before function.
/*
/src/__tests__/acceptance/item.controller.acceptance.ts
*/
describe('Acceptance: Item Controller', () => {
  let app: Loopback4TestingApplication
  let client: Client

  before('setupApplication', async () => {
    ;({app, client} = await setupApplication())
    // Mock any services that won't be used in testing this application.
    await mockServices(app)
  })
})
The setupApplication function instantiates the application class, instantiates a memory datasource, and binds it to the application context with the binding key "datasources.PostgreSQL", so that any injection of the "real" PostgreSQL datasource in the application gets the memory datasource instead. The memory datasource options are pretty standard; the only one to take special note of is the file option. It's set to null so that data doesn't persist between test sets (describes).
/*
/src/__tests__/acceptance/test-helper.ts
*/
import {
  createRestAppClient,
  givenHttpServerConfig,
} from '@loopback/testlab'
import {Loopback4TestingApplication} from '../..'
import {MemoryDataSource} from './model'

export async function setupApplication(): Promise<AppWithClient> {
  const restConfig = givenHttpServerConfig({
    // Customize the server configuration here.
    // Empty values (undefined, '') will be ignored by the helper.
    //
    // host: process.env.HOST,
    // port: +process.env.PORT,
  })

  const app = new Loopback4TestingApplication({
    rest: restConfig,
  })

  // Create a memory based DB and bind it to the PostgreSQL datasource.
  // Use memory db instead of postgreSQL db for easier and efficient testing.
  const datasourceSQL = new MemoryDataSource({
    name: 'testdb',
    connector: 'memory',
    localStorage: '',
    file: null,
  })
  app.bind('datasources.PostgreSQL').to(datasourceSQL)

  await app.boot()
  await app.migrateSchema()
  await app.start()

  const client = createRestAppClient(app)

  return {app, client}
}

...
The file /src/__tests__/acceptance/model/memory-db.model.ts shows the memory datasource class, which extends the default juggler.DataSource class. juggler.DataSource is the class all datasource types extend from (e.g. PostgreSQL, Redis, memory, etc.). The MemoryDataSource class needed to be written because of a limitation of Loopback's memory connector: it doesn't implement database transactions the way the PostgreSQL connector does. That's not a problem if the application code doesn't use the transaction feature, but it results in errors if it does. Therefore, the Transaction class is mocked here in the MemoryDataSource class.
/*
/src/__tests__/acceptance/model/memory-db.model.ts
*/
import {
  IsolationLevel,
  juggler,
  Options,
  Transaction,
} from '@loopback/repository'

/**
 * A mock class of the Transaction class. Used in conjunction with the
 * `MemoryDataSource` class.
 */
class MemoryDataSourceTransaction implements juggler.Transaction {
  id: string

  public async commit(): Promise<void> {}

  public async rollback(): Promise<void> {}

  public isActive(): boolean {
    return true
  }
}

/**
 * A datasource class, extending the memory datasource class, used for testing.
 * The class implements the method `beginTransaction` to mock the ability to do
 * SQL transactions like the production database PostgreSQL can.
 */
export class MemoryDataSource extends juggler.DataSource {
  constructor(settings: Options) {
    super(settings)
  }

  public async beginTransaction(
    options?: IsolationLevel | Options,
  ): Promise<Transaction> {
    return new MemoryDataSourceTransaction()
  }
}
By binding a memory datasource to the application using the binding key of the real PostgreSQL datasource, we're able to minimise test setup and teardown time (no PostgreSQL container) and decouple our code even more. Note that if you use the Loopback 4 CLI to generate the datasource and repository, by default the datasource won't be dynamic, and the repository will take a concrete datasource class (a subclass) instead of the superclass juggler.DataSource. You'll have to refactor your datasource to be dynamic, as shown earlier in the article, and configure the repositories to receive the superclass, as shown in the constructor in the example below.
/*
/src/repositories/item.repository.ts
*/
import {inject} from '@loopback/core'
import {DefaultCrudRepository, juggler} from '@loopback/repository'
import {Item, ItemRelations} from '../models'

export class ItemRepository extends DefaultCrudRepository<
  Item,
  typeof Item.prototype.id,
  ItemRelations
> {
  constructor(
    @inject('datasources.PostgreSQL')
    dataSource: juggler.DataSource,
  ) {
    super(Item, dataSource)
  }
}
Remember the CalculatorService class we discussed in the project code section? The service class lives in /src/services/calculator.service.ts and is injected into the CalculatorController at /src/controllers/calculator.controller.ts. The controller class basically just exposes the methods of the calculator service class as REST APIs via the application. In this project you'll see two test files related to the calculator feature: the controller acceptance test file /src/__tests__/acceptance/calculator.controller.acceptance.ts and the service class unit test file /src/__tests__/unit/calculator.unit.ts. The two files basically run the same test cases, but one runs via the controller class, which depends on the service class, and the other runs via the service class directly. We'll talk about the unit test, which runs via the CalculatorService service class. The file below shows the unit test file for CalculatorService. The unit test doesn't depend on the application class, only on the CalculatorService class: in the before function it simply instantiates it, and from there the class instance is used in the test cases.
/*
/src/__tests__/unit/calculator.unit.ts
*/
import {expect} from '@loopback/testlab'
import {CalculatorService} from '../../services'

describe('Class: Calculator', () => {
  let calculatorService: CalculatorService

  before('setupApplication', async () => {
    calculatorService = new CalculatorService()
  })

  it('Addition with 0 arguments: expect to get a result of 0.', async () => {
    const result = calculatorService.addition([])
    expect(result).to.deepEqual({
      result: 0,
    })
  })

  it('Subtraction with 0 arguments: expect to get a result of 0.', async () => {
    const result = calculatorService.subtraction([])
    expect(result).to.deepEqual({
      result: 0,
    })
  })

  ...
})
To run the acceptance and unit tests, simply go to the root directory of the project and run the npm command:
npm run test
This will run all the tests inside the __tests__ directory.
The Loopback 4 framework also provides a way to do code coverage. With code coverage, we as developers can see how much of our code is covered by tests. This is especially useful when you want to cover as many branches of a feature as possible. To set up code coverage you'll need to:
Create a file .nycrc in your project's root directory and fill it with the following settings. These settings describe what to include and exclude in your code coverage. If there are files you think shouldn't be included in the coverage report, add them to the exclude property. The reporter option states what formats to write the code coverage report in; with this setting, the report is also output as an HTML web page.
{
  "include": ["dist"],
  "exclude": ["dist/__tests__/"],
  "extension": [".js", ".ts"],
  "reporter": ["text", "html"],
  "exclude-after-remap": false
}
Add coverage to the file .prettierignore so that Prettier ignores the generated coverage report files. If you don't do this, Prettier will output errors.
# .prettierignore
dist
*.json
coverage
Add the following scripts to the package.json file:
{
  "scripts": {
    "precoverage": "npm test",
    "coverage": "open coverage/index.html",
    "coverage:ci": "lb-nyc report --reporter=text-lcov | coveralls",
    "test:ci": "lb-nyc lb-mocha",
    "test": "lb-nyc lb-mocha --allow-console-logs \"dist/__tests__\""
  }
}
Now, to get a code coverage report simply run the following command:
npm run coverage
There are many different types of tests, and many ways to test your software. This article only scratched the surface of testing and its benefits. I hope it can be a good introduction and motivation for you to write more and better tests for your software in the future.