Chapter X

Example Projects

Example projects you can follow along with!

Express Starter Project

This example project is the first in a series that builds a complete full-stack web application: Node.js and Express on the backend providing a RESTful API connected to a database, and a Vue single-page application on the frontend.

In doing so, we’ll explore some of the standard ways web developers use existing tools, frameworks, and libraries to perform many of the operations we’ve learned how to do manually throughout this course. In essence, you’ve already learned how to build these things from scratch, but now we’ll look at how professionals use dependencies to accomplish many of the same things.

We’ll also explore techniques for writing good, clean JavaScript code that includes documentation and API information, unit testing, and more.

Finally, we’ll learn how to do all of this using GitHub Codespaces, so everything runs directly in the web browser with no additional software or hardware needed. Of course, you can also do everything locally using Docker Desktop and Visual Studio Code as well.

Project Deliverables

At the end of this example, we will have a project with the following features:

  1. A working GitHub Codespace containing Node.js
  2. A bare-bones Express application
  3. An Express application converted from CommonJS to ES Modules
  4. Application logging with Winston and Morgan
  5. Other useful Express libraries installed
  6. A better development server using Nodemon
  7. A tool for managing environment variables
  8. Code documentation with JSDoc and OpenAPI comments
  9. Linting and formatting with ESLint and Prettier

Let’s get started!

GitHub Codespace

Creating a Codespace

To begin, we will start with an empty GitHub repository. You can either create one yourself, or you may be working from a repository provided through GitHub Classroom.

At the top of the page, you may see either a Create a Codespace button in an empty repository, or a Code button that opens a panel with a Codespaces tab and a Create Codespace on main button in an initialized repository. Go ahead and click that button.

Codespace in Empty Repository

Codespace in Initialized Repository

Once you do, GitHub will start creating a new GitHub Codespace for your project. This process may take a few moments.

Once it is done, you’ll be presented with a window that looks very similar to Visual Studio Code’s main interface. In fact - it is! It is just a version of Visual Studio Code running directly in a web browser. Pretty neat!

For the rest of this project, we’ll do all of our work here in GitHub Codespaces directly in our web browser.

Working Locally?

If you would rather do this work on your own computer, you’ll need to install the following prerequisites:

  • Docker Desktop
  • Visual Studio Code, along with its Dev Containers extension

For now, you’ll start by cloning your GitHub repository to your local computer, and opening it in Visual Studio Code. We’ll create some configuration files, and then reopen the project using a Dev Container in Docker. When looking in the Command Palette, just swap the “Codespaces” prefix with the “Dev Containers” prefix in the command names.

Once you’ve created your GitHub Codespace, you can always find it again by visiting the repository in your web browser, clicking the Code button and choosing the Codespaces tab.

Existing Codespace

Configuring the Codespace

When we first create a GitHub Codespace, GitHub will use a default dev container configuration. It includes many tools that are preinstalled for working on a wide variety of projects. Inside of the Codespace, you can run the following command in the terminal to get a URL that contains a list of all tools installed and their versions:

$ devcontainer-info content-url

The current default configuration as of this writing can be found here.

Documenting Terminal Commands

In these example projects, we’ll prefix any terminal commands with a dollar sign $ symbol, representing the standard Linux terminal command prompt. You should not enter this character into the terminal, just the content after it. This makes it easy to see individual commands in the documentation, and also makes it easy to tell the difference between commands to be executed and the output produced by that command.

You can learn more in the Google Developer Documentation Style Guide.

For this project, we are going to configure our own dev container that just contains the tools we need for this project. This also allows us to use the same configuration both in GitHub Codespaces as well as locally on our own systems using Docker.

To configure our own dev container, we first must open the Visual Studio Code Command Palette. We can do this by pressing CTRL+SHIFT+P, or by clicking the top search bar on the page and choosing Show and Run Commands >.

In the Command Palette, search for and choose the Codespaces: Add Dev Container Configuration Files… option, then choose Create a new configuration…. In the list that appears, search for “node” to find the container titled “Node.js & TypeScript” and choose that option.

Choosing a Dev Container Configuration

You’ll then be prompted to choose a version to use. We’ll use 22-bookworm for this project. That refers to Node version 22 LTS running on a Debian Bookworm LTS Linux image. Both of these are current, long term supported (LTS) versions of the software, making them an excellent choice for a new project.

Finally, the last question will ask if we’d like to add any additional features to our dev container configuration. We’ll leave this blank for now, but in the future you may find some of these additional features useful and choose to add them here.

Once that is done, a .devcontainer folder will be created, with a devcontainer.json file inside of it. The content of that file should match what is shown below:

// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/typescript-node
{
	"name": "Node.js & TypeScript",
	// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
	"image": "mcr.microsoft.com/devcontainers/typescript-node:1-22-bookworm"

	// Features to add to the dev container. More info: https://containers.dev/features.
	// "features": {},

	// Use 'forwardPorts' to make a list of ports inside the container available locally.
	// "forwardPorts": [],

	// Use 'postCreateCommand' to run commands after the container is created.
	// "postCreateCommand": "yarn install",

	// Configure tool-specific properties.
	// "customizations": {},

	// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
	// "remoteUser": "root"
}

Over time, we’ll come back to this file to add additional features to our dev container. For now, we’ll just leave it as-is.

Dependabot

You may also see a second file, .github/dependabot.yml, that was created alongside it. This file is used by GitHub Dependabot to keep your dev container configuration up to date. You may get occasional notices from GitHub in the future if there are any updates to software included in your dev container configuration.

At this point, we are ready to rebuild our GitHub Codespace to use our new dev container configuration. To do this, open the Command Palette once again and look for the Codespaces: Rebuild Container option. Click that option, then select the Full Rebuild option in the popup window since we have completely changed our dev container configuration.

Now, we can sit back and be patient while GitHub Codespaces rebuilds our environment using the new configuration. This may take several minutes.

Once it is complete, we can confirm that Node.js is installed and running the correct version by running the following command and checking the output matches our expected version of Node.js:

$ node --version
v22.12.0

If that works, then our dev container environment in GitHub Codespaces should be set up and ready to go!

Now is a good time to commit our current work to git and push it to GitHub. Even though we are working in a GitHub Codespace, we still have to commit and push our work to get it saved. You can do this using the Source Control sidebar tab on the page, or using the classic terminal commands as shown below.

$ git add .
$ git commit -m "Dev Container"
$ git push -u origin main

For the rest of this exercise, we’ll assume that you are comfortable with git and GitHub and can take care of committing and pushing your work yourself, but we’ll point out several good opportunities to save your work along the way.

Express Starter

Generating an Express Application

Now that we have our dev container configured, we can start setting up an Express application. The recommended method in the documentation is to use the Express application generator, so we’ll use that method. You may want to refer to the documentation for this command to see what options are available.

Express Documentation

You may also want to bookmark the Express Documentation website, since it contains lots of helpful information about how Express works that may not be covered in this tutorial.

For this project, we’ll use the following command to build our application:

$ npx express-generator --no-view --git server

Let’s break down that command to see what it is doing:

  • npx - The npx command is included with Node.js and npm and allows us to run a command from an npm package, even one that isn’t currently installed! This is the preferred way to run commands provided by npm packages.
  • express-generator - This is the express-generator package in npm that contains the command we are using to build our Express application.
  • --no-view - This option will generate a project without a built-in view engine.
  • --git - This option will add a .gitignore file to our project.
  • server - This is the name of the directory where we would like to create our application.

When we run that command, we may be prompted to install the express-generator package, so we can press y to install it.

That command will produce a large amount of output, similar to what is shown below:

Need to install the following packages:
express-generator@4.16.1
Ok to proceed? (y) y

npm warn deprecated mkdirp@0.5.1: Legacy versions of mkdirp are no longer supported. Please update to mkdirp 1.x. (Note that the API surface has changed to use Promises in 1.x.)

   create : server/
   create : server/public/
   create : server/public/javascripts/
   create : server/public/images/
   create : server/public/stylesheets/
   create : server/public/stylesheets/style.css
   create : server/routes/
   create : server/routes/index.js
   create : server/routes/users.js
   create : server/public/index.html
   create : server/.gitignore
   create : server/app.js
   create : server/package.json
   create : server/bin/
   create : server/bin/www

   change directory:
     $ cd server

   install dependencies:
     $ npm install

   run the app:
     $ DEBUG=server:* npm start

As we can see, it created quite a few files for us! Let’s briefly review what each of these files and folders are for:

  • public - this folder contains the static HTML, CSS, and JavaScript files that will be served from our application. Much later down the road, we’ll place the compiled version of our Vue frontend application in this folder. For now, it just serves as a placeholder for where those files will be placed.
  • routes - this folder contains the Express application routers for our application. There are currently only two routers: the index.js router connected to the / path, and the users.js router connected to the /users path. An example router is shown just after this list.
  • .gitignore - this file tells git which files or folders can be ignored when committing to the repository. We’ll discuss this file in detail below.
  • app.js - this is the main file for our Express application. It loads all of the libraries, configurations, and routers and puts them all together into a single Express application.
  • package.json - this file contains information about the project, including some metadata, scripts, and the list of external dependencies. More information on the structure and content of that file can be found in the documentation.
  • bin/www - this file is the actual entrypoint for our web application. It loads the Express application defined in app.js, and then creates an http server to listen for incoming connections and sends them to the Express application. It also handles figuring out which port the application should listen on, as well as some common errors.
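For reference, here is what the generated routes/users.js file contains (the exact contents may vary slightly between versions of the generator):

var express = require('express');
var router = express.Router();

/* GET users listing. */
router.get('/', function(req, res, next) {
  res.send('respond with a resource');
});

module.exports = router;

The index.js router follows the same pattern: create a router, attach one or more route handlers to it, and export it.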

Since we are only building a RESTful API application, there are a few files that we can delete or quickly modify:

  • Delete everything in the public folder except the file index.html
  • Inside of the public/index.html file, remove the line referencing the stylesheet: <link rel="stylesheet" href="/stylesheets/style.css"> since it has been deleted.

At this point, we should also update the contents of the package.json file to describe our project. It currently contains information similar to this:

{
  "name": "server",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "cookie-parser": "~1.4.4",
    "debug": "~2.6.9",
    "express": "~4.16.1",
    "morgan": "~1.9.1"
  }
}

For now, let’s update the name and version entries to match our project:

{
  "name": "example-project",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "cookie-parser": "~1.4.4",
    "debug": "~2.6.9",
    "express": "~4.16.1",
    "morgan": "~1.9.1"
  }
}

In a stand-alone application like ours, these values really don’t matter, but if we do decide to publish this application as an npm module in the future, these values will be used to build the module itself.

Exploring App.js

Let’s quickly take a look at the contents of the app.js file to get an idea of what this application does:

var express = require('express');
var path = require('path');
var cookieParser = require('cookie-parser');
var logger = require('morgan');

var indexRouter = require('./routes/index');
var usersRouter = require('./routes/users');

var app = express();

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/', indexRouter);
app.use('/users', usersRouter);

module.exports = app;

At the top, the file loads several libraries, including cookie-parser for parsing cookies sent from the browser, and morgan for logging requests. It then also loads the two routers, index and users.

Next, we see the line var app = express() - this line actually creates the Express application and stores a reference to it in the app variable.

The next few lines add various middlewares to the Express application using the app.use() function. Each of these is effectively a function that is called each time the application receives a request, one after the other, until a response is generated and sent. See Using middleware in the Express documentation for more details.
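For example, a middleware is just a function that receives the request, the response, and a next callback. This short sketch (purely illustrative - we don’t need to add it to our project) would log every incoming request before passing control along the chain:

// A hypothetical custom middleware - it runs for every request that
// reaches this point in the chain
app.use(function (req, res, next) {
  console.log('Received a ' + req.method + ' request for ' + req.url);
  // hand the request off to the next middleware; without this call,
  // the request would hang and never receive a response
  next();
});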

The last line of that group uses the express.static middleware to serve static files from the public directory (it uses the path library and the __dirname global variable to construct the correct absolute path to those files). So, if the user requests any path that matches a static file, that file will be sent to the user. This will happen even if a static file matches an existing route, since this middleware is added to the application before the routes. So, there are some instances where we may want to connect this middleware to the application after adding some important routes - we’ll discuss that in the future as we continue to build this application.

After that, the two routers are added as well. Each router is given a base path - the index router is given the / path, then the users router is given the /users path. These are the URL paths that are used to determine where each incoming request should be sent in the application. See routing in the Express documentation for more details.

Finally, the Express application referenced in app is exported from this file. It is used by the bin/www file and attached to an http server to listen for incoming requests.

Order Matters

Because Express is a routing and middleware framework, the order in which you add middlewares and routers determines how the application functions. So, we must be very thoughtful about the order in which we add middlewares and routers to our application. In this example, notice that we add the logger first, then parse any incoming JSON requests, then decode any URL-encoded requests, then parse any cookies, before doing anything else.

This is a common error that trips up many first-time Express developers, so be mindful as you add and adjust content in this file!

Installing Dependencies

Now that we’ve generated a basic Express web application, we need to install all of its dependencies. This is also the first step we’ll need to do anytime we clone this project or rebuild our GitHub Codespace or dev container.

To do this, we need to go to the terminal and change directory to the server folder:

$ cd server
Working Directory

Remember that we can always see the current working directory by looking at the command prompt in the terminal, or by typing the pwd command:

Working Directory in Terminal

Present Working Directory

Once inside of the server folder, we can install all our dependencies using the following command:

$ npm install

When we run that command, we’ll see output similar to the following:

added 53 packages, and audited 54 packages in 4s

7 vulnerabilities (3 low, 4 high)

To address all issues, run:
  npm audit fix --force

Run `npm audit` for details.

It looks like we have some out of date packages and vulnerabilities to fix!

Updating Dependencies

Thankfully, there is a very useful command called npm-check-updates that we can use to check for and update outdated dependencies. We can run that package’s command using npx as we saw earlier:

$ npx npm-check-updates

As before, we’ll be prompted to install the package if it isn’t installed already. Once it is done, we’ll see output like this:

Need to install the following packages:
npm-check-updates@17.1.14
Ok to proceed? (y) y

Checking /workspaces/example-project/server/package.json
[====================] 4/4 100%

 cookie-parser   ~1.4.4  →   ~1.4.7
 debug           ~2.6.9  →   ~4.4.0
 express        ~4.16.1  →  ~4.21.2
 morgan          ~1.9.1  →  ~1.10.0

Run npx npm-check-updates -u to upgrade package.json

When we run the command, it tells us which packages are out of date and lists a newer version of the package we can install.

Tread Carefully!

In an actual production application, it is important to make sure your dependencies are kept up to date. At the same time, you’ll want to carefully read the documentation for these dependencies and test your project after any dependency updates, just to ensure that your application works correctly using the new versions.

For example, in the output above, we see this:

 debug           ~2.6.9  →   ~4.4.0

This means that the debug library is two major versions out of date (see Semantic Versioning for more information on how to interpret version numbers)! If we check the debug versions list on npm, we can see that version 2.6.9 was released in September 2017 - a very long time ago.
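As an aside, the ~ prefix on these version numbers is an npm version range operator. Here is a quick comparison of the common prefixes (illustrative lines only, not changes to our project):

"debug": "2.6.9"     // exact: only version 2.6.9 is acceptable
"debug": "~2.6.9"    // tilde: also accepts newer patch releases (2.6.x)
"debug": "^2.6.9"    // caret: also accepts newer minor and patch releases (2.x.x)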

When a package undergoes a major version change, it often comes with incompatible API changes. So, we may want to consult the documentation for each major version or find release notes or upgrade guides to refer to. In this case, we can refer to the release notes for each version on GitHub; we may even need to check some of the release notes for minor releases as well.

Thankfully, the latest version of the debug library is compatible with our existing code, and later in this project we’ll replace it with a better logging infrastructure anyway.

Now that we know which dependencies can be updated, we can use the same command with the -u option to update our package.json file easily:

$ npx npm-check-updates -u

We should see output similar to this:

Upgrading /workspaces/example-project/server/package.json
[====================] 4/4 100%

 cookie-parser   ~1.4.4  →   ~1.4.7
 debug           ~2.6.9  →   ~4.4.0
 express        ~4.16.1  →  ~4.21.2
 morgan          ~1.9.1  →  ~1.10.0

Run npm install to install new versions.

We can also check our package.json file to see the changes:

{
  "name": "example-project",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "cookie-parser": "~1.4.7",
    "debug": "~4.4.0",
    "express": "~4.21.2",
    "morgan": "~1.10.0"
  }
}

Finally, we can install those dependencies:

$ npm install

Now when we run that command, we should see that everything is up to date!

added 36 packages, changed 24 packages, and audited 90 packages in 4s

14 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

There we go! We now have a sample Express application configured with updated dependencies.

Testing the Application

At this point, we are ready to actually test our application. To do this, we can run the following command from within the server directory in our project:

$ npm start

When we do, we’ll see a bit of information on the terminal:

> example-project@0.0.1 start
> node ./bin/www

We’ll also see a small popup in the bottom right corner of the screen, telling us that it has detected that our application is listening on port 3000.

Listening Port Popup

So, to access our application, we can click on the Open in Browser button on that popup. If everything works correctly, we should be able to see our application running in our web browser:

Running Example Application

Take a look at the long URL in the browser - that URL includes the name of the GitHub Codespace (laughing-computing-machine in this example), followed by a random Codespace ID (jj5j9p97vx435jqj), followed by the port our application is listening on (3000). We’ll look at ways we can build this URL inside of our application in the future, but for now it is just worth noting.

Finding Listening Ports

If you didn’t see the popup appear, or you cannot find where your application is running, check the PORTS tab above the console in GitHub Codespaces:

Listening Port List

We can click on the URL under the Forwarded Addresses heading to access the port in our web browser. We can also use this interface to configure additional ports that we want to be able to access outside of the GitHub Codespace.

We can also access any routes that are configured in our application. For example, the default Express application includes a /users route, so we can just add /users to the end of the URL in our web browser to access it. We should see this page when we do:

Running Example Users Path

Great! It looks like our example application is running correctly.

Committing to GitHub

Now is a great time to commit and push our project to GitHub. Before we do, however, we should double-check that our project has a proper server/.gitignore file. It should have been created by the Express application generator if we used the --git option, but it is always important to double-check that it is there before trying to commit a new project.

Gitignore File

A .gitignore file is used to tell git which files should not be committed to a repository. For a project using Node.js, we especially don’t want to commit our node_modules folder. This folder contains all of the dependencies for our project, and can often be very large.

Why don’t we want to commit it? Because it contains lots of code that isn’t ours, and it is much better to just install the dependencies locally whenever we develop or use our application. That is the whole function of the package.json file and the npm command - it lets us focus on only developing our own code, and it will find and manage all other external dependencies for us.

So, as a general rule of thumb, we should NEVER commit the node_modules folder to our repository.
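The key entry in that file is the one excluding the dependencies folder - our server/.gitignore should contain a line similar to this:

# dependency directories
node_modules/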

Missing gitignore file?

If your project does not have a .gitignore file, you can usually find one for the language or framework you are using in the excellent gitignore GitHub Repository. Just look for the appropriate file and add the contents to a .gitignore file in your project. For example, you can find a Node.gitignore file to use in this project.

At long last, we are ready to commit and push all of our changes to this project. If it works correctly, it should only commit the code files we’ve created, but none of the files that are ignored in the .gitignore file.

Convert to ES Modules

CommonJS vs ES Modules

By default, the Express application generator creates an application using the CommonJS module format. This is the original way that JavaScript modules were packaged. However, many libraries and frameworks have been moving to the newer ECMAScript module format (commonly referred to as ES modules), which is the current official standard for packaging JavaScript modules.

Since we want to build an industry-grade application, it would be best to update our application to use the new ES module format. This format will become more and more common over time, and many dependencies on npm have already started to shift to only supporting the ES module format. So, let’s take the time now to update our application to use that new format before we go any further.

Enabling ES Module Support

To enable ES module support in our application, we must simply add "type": "module", to the package.json file:

{
  "name": "example-project",
  "version": "0.0.1",
  "type": "module",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "cookie-parser": "~1.4.7",
    "debug": "~4.4.0",
    "express": "~4.21.2",
    "morgan": "~1.10.0"
  }
}

Now, let’s try to run our application:

$ npm start

When we do, we’ll get some errors:

> example-project@0.0.1 start
> node ./bin/www

file:///workspaces/example-project/server/bin/www:7
var app = require('../app');
          ^

ReferenceError: require is not defined in ES module scope, you can use import instead
    at file:///workspaces/example-project/server/bin/www:7:11
    at ModuleJob.run (node:internal/modules/esm/module_job:271:25)
    at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:547:26)
    at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5)

Node.js v22.12.0

By changing that one line in package.json, the Node.js runtime is trying to load our project using ES modules instead of CommonJS modules, and it causes all sorts of errors. Thankfully, most of them are easy to fix! In most cases, we are simply making two updates:

  1. Replacing require statements with import statements
  2. Replacing module.exports statements with export default statements.

Let’s go file by file and make these updates. We’ll only show the lines that are commented out and their replacements directly below - you’ll need to look carefully at each file, find the commented line, and replace it with the new line.

  • bin/www
// var app = require('../app');
import app from '../app.js';

// var debug = require('debug')('server:server');
import debugLibrary from 'debug';
const debug = debugLibrary('server:server');

// var http = require('http');
import http from 'http';
  • app.js
// var express = require('express');
import express from 'express';

// var path = require('path');
import path from 'path';

// var cookieParser = require('cookie-parser');
import cookieParser from 'cookie-parser';

// var logger = require('morgan');
import logger from 'morgan';

// var indexRouter = require('./routes/index');
import indexRouter from './routes/index.js';

// var usersRouter = require('./routes/users');
import usersRouter from './routes/users.js';

// -=-=- other code omitted here -=-=-

//module.exports = app;
export default app;
  • routes/index.js and routes/users.js
// var express = require('express');
import express from 'express';

// var router = express.Router();
const router = express.Router();

// -=-=- other code omitted here -=-=-

// module.exports = router;
export default router;

At this point, let’s test our application again to see if we’ve updated everything correctly:

$ npm start

Now, we should get an error message similar to this:

file:///workspaces/example-project/server/app.js:25
app.use(express.static(path.join(__dirname, 'public')));
                                 ^

ReferenceError: __dirname is not defined in ES module scope
    at file:///workspaces/example-project/server/app.js:25:34
    at ModuleJob.run (node:internal/modules/esm/module_job:271:25)
    at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:547:26)
    at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5)

Node.js v22.12.0

This is a bit trickier to debug, but a quick Google search usually leads to the correct answer. In this case, the __dirname variable is a global variable that is defined when Node.js is running a CommonJS module, as discussed in the documentation. However, when Node.js is running an ES module, many of these global variables have been relocated to the import.meta property, as shown in the documentation. So, we can just replace __dirname with the import.meta.dirname variable in app.js:

//app.use(express.static(path.join(__dirname, 'public')));
app.use(express.static(path.join(import.meta.dirname, 'public')));

Let’s try to run our application again - it should be able to start this time:

$ npm start

Updating a Node.js application to use ES modules is not terribly difficult, especially if it is done early in development. However, since we’ve made this change, we’ll have to be careful as we continue to develop our application. Many online tutorials, documentation, and references assume that any Node.js and Express application is still using CommonJS modules, so we may have to translate any code we find to match our new ES module setup.

This is a good point to commit and push our work!

Debugging & Logging

Debugging with the Debug Utility

Now that we have a basic Express application, let’s add some helpful tools for developers to make our application easier to work with and debug in the future. These are some great quality of life tweaks that many professional web applications include, but often new developers fail to add them early on in development and waste lots of time adding them later. So, let’s take some time now to add these features before we start developing any actual RESTful endpoints.

First, you may have noticed that the bin/www file includes the debug utility. This is a very common debugging module that is included in many Node.js applications, and is modeled after how Node.js itself handles debugging internally. It is a very powerful module, and one that you should make use of anytime you are creating a Node.js library to be published on npm and shared with others.

Let’s quickly look at how we can use the debug utility in our application. Right now, when we start our application, we see very little output on the terminal:

$ npm start

That command produces this output:

> example-project@0.0.1 start
> node ./bin/www

As we access various pages and routes, we may see some additional lines of output appear, like this:

GET / 304 2.569 ms - -
GET /users 200 2.417 ms - 23
GET / 200 1.739 ms - 120

These lines come from the morgan request logging middleware, which we’ll discuss on the next page of this example.

To enable the debug library, we simply must set an environment variable in the terminal when we run our application, as shown here:

$ DEBUG=* npm start
Environment Variables

An environment variable is a value that is present in memory in a running instance of an operating system. These generally give running processes information about the system, but may also include data and information provided by the user or system administrator. Environment variables are very common ways to configure applications that run in containers, like our application will when it is finally deployed. We’ll cover this in detail later in this course; for now, just understand that we are setting a variable in memory that can be accessed inside of our application.
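Inside a Node.js application, environment variables can be read from the global process.env object. For example (just an illustration, not code we need to add):

// process.env contains every environment variable visible to the process
console.log(process.env.DEBUG); // prints '*' when started with DEBUG=*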

Now, we’ll be provided with a lot of debugging output from all throughout our application:

> example-project@0.0.1 start
> node ./bin/www

  express:router:route new '/' +0ms
  express:router:layer new '/' +1ms
  express:router:route get '/' +0ms
  express:router:layer new '/' +1ms
  express:router:route new '/' +0ms
  express:router:layer new '/' +0ms
  express:router:route get '/' +0ms
  express:router:layer new '/' +0ms
  express:application set "x-powered-by" to true +1ms
  express:application set "etag" to 'weak' +0ms
  express:application set "etag fn" to [Function: generateETag] +0ms
  express:application set "env" to 'development' +0ms
  express:application set "query parser" to 'extended' +0ms
  express:application set "query parser fn" to [Function: parseExtendedQueryString] +0ms
  express:application set "subdomain offset" to 2 +0ms
  express:application set "trust proxy" to false +0ms
  express:application set "trust proxy fn" to [Function: trustNone] +1ms
  express:application booting in development mode +0ms
  express:application set "view" to [Function: View] +0ms
  express:application set "views" to '/workspaces/example-project/server/views' +0ms
  express:application set "jsonp callback name" to 'callback' +0ms
  express:router use '/' query +1ms
  express:router:layer new '/' +0ms
  express:router use '/' expressInit +0ms
  express:router:layer new '/' +0ms
  express:router use '/' logger +0ms
  express:router:layer new '/' +0ms
  express:router use '/' jsonParser +0ms
  express:router:layer new '/' +0ms
  express:router use '/' urlencodedParser +1ms
  express:router:layer new '/' +0ms
  express:router use '/' cookieParser +0ms
  express:router:layer new '/' +0ms
  express:router use '/' serveStatic +0ms
  express:router:layer new '/' +0ms
  express:router use '/' router +0ms
  express:router:layer new '/' +1ms
  express:router use '/users' router +0ms
  express:router:layer new '/users' +0ms
  express:application set "port" to 3000 +2ms
  server:server Listening on port 3000 +0ms

Each line of output starts with a namespace, such as express:application, showing where the logging message came from (which usually corresponds to the library or module it is contained in), followed by the message itself. The last part of the line, which looks like +0ms, simply shows the time elapsed since the last debug message was printed.

At the very bottom we see the debug line server:server Listening on port 3000 +0ms - this line is what is actually printed in the bin/www file. Let’s look at that file and see where that comes from:

// -=-=- other code omitted here -=-=-

import debugLibrary from 'debug';
const debug = debugLibrary('server:server');

// -=-=- other code omitted here -=-=-

function onListening() {
  var addr = server.address();
  var bind = typeof addr === 'string'
    ? 'pipe ' + addr
    : 'port ' + addr.port;
  debug('Listening on ' + bind);
}

At the top of that file, we import the debug library, and then instantiate it using the name 'server:server'. This becomes the namespace for our debug messages printed using this instance of the debug library. Then, inside of the onListening() function, we call the debug function and provide a message to be printed.

When we run our application, we can change the value of the DEBUG environment variable to match a particular namespace to only see messages from that part of our application:

$ DEBUG=server:* npm start

This will only show output from our server namespace:

> example-project@0.0.1 start
> node ./bin/www

  server:server Listening on port 3000 +0ms

The debug utility is a very powerful tool for diagnosing issues with a Node.js and Express application. You can learn more about how to use and configure the debug utility in the documentation.

Logging with Winston

However, since we are focused on creating a web application and not a library, let’s replace debug with the more powerful winston logger. This allows us to create a robust logging system based on the traditional concept of severity levels of the logs we want to see.

To start, let’s install winston using the npm command (as always, we should make sure we are working in the server directory of our application):

$ npm install winston

We should see output similar to the following:

added 28 packages, and audited 118 packages in 2s

15 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
Multiple Dependencies

Notice how installing a single dependency actually installed 28 individual packages? This is a very useful feature of how Node.js and npm packages are structured, since each package can focus on doing only one task really well while reusing common tools and utilities that other packages may also use (thereby reducing the number of duplicated packages that may need to be installed). Unfortunately, this can also lead to situations where an issue with a single package can cause cascading failures and incompatibilities across the board. So, while it is very helpful to install these dependencies in our application, we always want to do so with caution and make sure we are always using dependencies that are well maintained and actually add value to our application.

The left-pad Incident

For a vivid case study of the concerns around using unnecessary dependencies, look at the npm left-pad incident. The left-pad library was a simple utility that added padding to the left side of a string. The entire library itself was a single function that contained less than 10 lines of actual code. However, when the developer of that library removed access to it due to a dispute, it ended up nearly breaking the entire npm ecosystem. Core development tools such as Babel, Webpack and more all used that library as a dependency, and with the rise of automated build systems, each tool broke as soon as the next rebuild cycle was initiated. It also caused issues with major online platforms such as Facebook, PayPal, Netflix, and Spotify.

Even today, nearly 9 years after the incident, the left-pad library is still present on npm, even though it is listed as deprecated since JavaScript now includes a method String.prototype.padStart() that performs the same action. As of January 2025, there are still 540 libraries on npm that list left-pad as a dependency, and it is downloaded over 1 million times per week!
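For reference, the built-in replacement is a one-liner:

// String.prototype.padStart() pads the start of a string to a target length
'5'.padStart(3, '0');   // returns '005'
'42'.padStart(5);       // returns '   42' (padded with spaces by default)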

Now that we’ve installed winston, we should configure it. We could place all of the code to configure it inside of each file where it is used, but let’s instead create a standalone configuration file for winston that we can use throughout our application.

To do this, let’s create a new folder named configs inside of our server folder to house configurations for various dependencies, and then inside of that folder create a new file named logger.js for this configuration. In that file, we can place the following content:

import winston from 'winston';
const { combine, timestamp, printf, colorize, align, errors } = winston.format;

// Log Levels
//   error: 0
//   warn: 1
//   info: 2
//   http: 3
//   verbose: 4
//   debug: 5
//   silly: 6

function level () {
  // Map accepted LOG_LEVEL values (numeric or named) to winston log levels
  const levels = {
    '0': 'error',   'error': 'error',
    '1': 'warn',    'warn': 'warn',
    '2': 'info',    'info': 'info',
    '3': 'http',    'http': 'http',
    '4': 'verbose', 'verbose': 'verbose',
    '5': 'debug',   'debug': 'debug',
    '6': 'silly',   'silly': 'silly'
  };
  // Fall back to 'http' if LOG_LEVEL is unset or unrecognized
  return levels[process.env.LOG_LEVEL] || 'http';
}

const logger = winston.createLogger({
    // call `level` function to get default log level
    level: level(),
    // Format configuration
    format: combine(
        colorize({ all: true }),
        errors({ stack: true}),
        timestamp({
            format: 'YYYY-MM-DD hh:mm:ss.SSS A',
        }),
        align(),
        printf((info) => `[${info.timestamp}] ${info.level}: ${info.stack ? info.stack : info.message}`)
    ),
    // Output configuration
    transports: [new winston.transports.Console()]
})

export default logger;

At the top, we see a helpful comment just reminding us which log levels are available by default in winston. Then, we have a level function that determines what our desired log level should be based on an environment variable named LOG_LEVEL. We’ll set that variable a bit later in this tutorial. Based on that log level, our system will print any logs at that level or lower in severity level. Finally, we create an instance of the winston logger and provide lots of configuration information about our desired output format. All of this is highly configurable. To fully understand this configuration, take some time to review the winston documentation.
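With this configuration in place, any file in our application can import the shared logger and log messages at any of these severity levels. A quick sketch of its use (for illustration - we’ll wire it into our application next):

// adjust the relative path as needed for the importing file
import logger from './configs/logger.js';

logger.error('an error message');        // always printed
logger.warn('a warning message');
logger.info('an informational message');
logger.http('a request-level message');  // printed at our default level
logger.debug('a debugging message');     // only printed when LOG_LEVEL is debug or higher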

Now, let’s update our bin/www file to use this logger instead of the debug utility. The old lines are shown commented out above their replacements:

// -=-=- other code omitted here -=-=-

// var debug = require('debug')('server:server');
// import debugLibrary from 'debug';
// const debug = debugLibrary('server:server');
import logger from '../configs/logger.js';

// -=-=- other code omitted here -=-=-

function onError(error) {
  if (error.syscall !== 'listen') {
    throw error;
  }

  var bind = typeof port === 'string'
    ? 'Pipe ' + port
    : 'Port ' + port;

  // handle specific listen errors with friendly messages
  switch (error.code) {
    case 'EACCES':
      // console.error(bind + ' requires elevated privileges');
      logger.error(new Error(bind + ' requires elevated privileges'));
      process.exit(1);
      break;
    case 'EADDRINUSE':
      // console.error(bind + ' is already in use');
      logger.error(new Error(bind + ' is already in use'));
      process.exit(1);
      break;
    default:
      throw error;
  }
}

/**
 * Event listener for HTTP server "listening" event.
 */

function onListening() {
  var addr = server.address();
  var bind = typeof addr === 'string'
    ? 'pipe ' + addr
    : 'port ' + addr.port;
  // debug('Listening on ' + bind);
  logger.debug('Listening on ' + bind)
}

Basically, we’ve replaced all instances of the debug method with logger.debug. We’ve also replaced a couple of uses of console.error with logger.error, wrapping each message in a new Error object, which causes winston to print a stack trace as well.

With that change in place, we can now remove the debug utility from our list of dependencies:

$ npm uninstall debug

Now, let’s run our program to see winston in action:

$ npm start

When we run it, we should see this output:

> example-project@0.0.1 start
> node ./bin/www

Notice how winston didn’t print any debug messages? That is because we haven’t set our LOG_LEVEL environment variable. So, let’s do that by creating two different scripts in our package.json file - one to run the application with a default log level, and another to run it with the debug log level:

{
  "name": "example-project",
  "version": "0.0.1",
  "type": "module",
  "private": true,
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "LOG_LEVEL=debug node ./bin/www"
  },
  "dependencies": {
    "cookie-parser": "~1.4.7",
    "express": "~4.21.2",
    "morgan": "~1.10.0",
    "winston": "^3.17.0"
  }
}

The npm run command can be used to run any of the scripts in the scripts section of our package.json file. So, if we want to run our application so we can see the debug messages, we can use the following command:

$ npm run dev

Now we should see some debug messages in the output:

> example-project@0.0.1 dev
> LOG_LEVEL=debug node ./bin/www

[2025-01-17 06:23:03.622 PM] info:      Listening on port 3000

Great! Notice how the logger outputs a timestamp, the log level, and the message, all on the same line? This matches the configuration we used in the configs/logger.js file. On most terminals, each log level will even be a different color!

Debug Logging in Color

Finally, since we really should make sure the message that the application is successfully listening on a port is printed by default, let’s change it to the info log level in our bin/www file:

// -=-=- other code omitted here -=-=-

function onListening() {
  var addr = server.address();
  var bind = typeof addr === 'string'
    ? 'pipe ' + addr
    : 'port ' + addr.port;
  // debug('Listening on ' + bind);
  logger.info('Listening on ' + bind)
}
Why Not Use NODE_ENV?

In many web applications written using Node.js and Express, you may have come across the NODE_ENV environment variable, which is often set to either development, production, or sometimes test to configure the application. While this may have made sense in the past, it is now considered an anti-pattern in Node.js. This is because there is no fundamental difference between development and production in Node.js, and it is often very confusing if an application runs differently in different environments. So, it is better to configure logging directly via its own environment variable instead of using an overall variable that configures multiple services. See the Node.js Documentation for a deeper discussion of this topic.

This is a good point to commit and push our work!

Request Logging

Logging Requests with Morgan

Now that we have configured a logging utility, let’s use it to also log all incoming requests sent to our web application. This will definitely make it much easier to keep track of what is going on in our application and make sure it is working correctly.

The Express application generator already installs a library for this, called morgan. We have already seen output from morgan before:

GET / 304 2.569 ms - -
GET /users 200 2.417 ms - 23
GET / 200 1.739 ms - 120

While this is useful, let’s reconfigure morgan to use our new winston logger and add some additional detail to the output.

Since morgan is technically a middleware in our application, let’s create a new folder called middlewares to store configuration for our various middlewares, and then we can create a new middleware file named request-logger.js in that folder. Inside of that file, we can place the following configuration:

import morgan from 'morgan';
import logger from '../configs/logger.js';

// Override morgan stream method to use our custom logger
// Log Format
// :method :url :status :response-time ms - :res[content-length]
const stream = {
    write: (message) => {
        // log using the 'http' severity
        logger.http(message.trim())
    }
}

// See https://github.com/expressjs/morgan?tab=readme-ov-file#api
const requestLogger = morgan('dev', { stream });

export default requestLogger;

In effect, this file tells morgan to write output through the logger.http() method instead of directly to the console. We are importing our winston configuration from configs/logger.js to accomplish this. We are also configuring morgan to use the dev logging format; more information on log formats can be found in the documentation.
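Instead of the predefined dev format, morgan also accepts a custom format string built from tokens. For example, this alternative (a sketch, not what we’ll use here) reproduces the same fields without the colorized status code:

// equivalent fields to the 'dev' format, minus the colors
const requestLogger = morgan(
  ':method :url :status :response-time ms - :res[content-length]',
  { stream }
);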

Finally, let’s update our app.js file to use this new request logger middleware instead of morgan:

import express from 'express';
import path from 'path';
import cookieParser from 'cookie-parser';
// import logger from 'morgan';
import requestLogger from './middlewares/request-logger.js';

import indexRouter from './routes/index.js';
import usersRouter from './routes/users.js';

var app = express();

// app.use(logger('dev'));
app.use(requestLogger);
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(import.meta.dirname, 'public')));

// -=-=- other code omitted here -=-=-

Now, let’s run our application and access a few of the routes via our web browser:

$ npm run dev

We should now see output from morgan included as http logs from winston:

> example-project@0.0.1 dev
> LOG_LEVEL=debug node ./bin/www

[2025-01-17 06:39:30.975 PM] info:      Listening on port 3000
[2025-01-17 06:39:37.430 PM] http:      GET / 200 3.851 ms - 120
[2025-01-17 06:39:40.665 PM] http:      GET /users 200 3.184 ms - 23
[2025-01-17 06:39:43.069 PM] http:      GET / 304 0.672 ms - -
[2025-01-17 06:39:45.424 PM] http:      GET /users 304 1.670 ms - -

When viewed on a modern terminal, they should even be colorized!

Request Logging

Here, we can see each log level is colorized, and also the HTTP status codes in our morgan log output are also colorized. The first time each page is accessed, the browser receives a 200 status code in green with the content. The second time, our application correctly sends back a 304 status code in light blue, indicating that the content has not been modified and that the browser can use the cached version instead.

This is a good point to commit and push our work!

Other Libraries

Other Useful Libraries

Before we move on, let’s install a few other useful libraries that perform various tasks in our Express application.

Compression

The compression middleware library does exactly what it says it will - it compresses any responses generated by the server and sent through the network. This can be helpful in many situations, but not all. Recall that compression is really just trading more processing time in exchange for less network bandwidth, so we may need to consider which of those we are more concerned about. Thankfully, adding or removing the compression middleware library is simple.

First, let’s install it using npm:

$ npm install compression

Then, we can add it to our app.js file, generally early in the chain of middlewares since it will impact all responses after that point in the chain.

import express from 'express';
import path from 'path';
import cookieParser from 'cookie-parser';
import compression from 'compression';
import requestLogger from './middlewares/request-logger.js';

import indexRouter from './routes/index.js';
import usersRouter from './routes/users.js';

var app = express();

app.use(compression());
app.use(requestLogger);
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(import.meta.dirname, 'public')));

app.use('/', indexRouter);
app.use('/users', usersRouter);

export default app;

To test this library, we can run our application with all built-in debugging enabled through the debug library as documented in the Express Documentation:

$ DEBUG=* npm run dev

We’ll see a bunch of output as our Express application is initialized. Once it is done, we can open the home page in our web browser to send an HTTP GET request to the server. This will produce the following log output:

  express:router dispatching GET / +1m
  express:router query  : / +0ms
  express:router expressInit  : / +1ms
  express:router compression  : / +0ms
  express:router logger  : / +0ms
  express:router urlencodedParser  : / +0ms
  body-parser:urlencoded skip empty body +1ms
  express:router cookieParser  : / +0ms
  express:router serveStatic  : / +0ms
  send stat "/workspaces/example-project/server/public/index.html" +0ms
  send pipe "/workspaces/example-project/server/public/index.html" +1ms
  send accept ranges +0ms
  send cache-control public, max-age=0 +0ms
  send modified Thu, 16 Jan 2025 23:17:14 GMT +0ms
  send etag W/"78-1947168173e" +1ms
  send content-type text/html +0ms
  compression no compression: size below threshold +1ms
  morgan log request +2ms
[2025-01-25 07:00:35.013 PM] http:      GET / 200 3.166 ms - 120

We can see from the compression line that the compression library did not apply any compression to the response because it was below the minimum size threshold. This is set to 1kb by default according to the compression documentation.
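If needed, that threshold can be adjusted when the middleware is added in app.js. For example (an optional tweak, shown here only for illustration):

// only compress responses larger than 10 kilobytes
app.use(compression({ threshold: '10kb' }));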

So, to really see what it does, let’s generate a much larger response by adding some additional text to our public/index.html file (this text was generated using Lorem Ipsum):

<html>

<head>
  <title>Express</title>
</head>

<body>
  <h1>Express</h1>
  <p>Welcome to Express</p>

  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam sed arcu tincidunt, porttitor diam a, porta nibh. Duis pretium tellus euismod, imperdiet elit id, gravida turpis. Fusce vitae pulvinar tellus. Donec cursus pretium justo, sed vehicula erat iaculis lobortis. Mauris dapibus scelerisque aliquet. Nullam posuere, magna vitae viverra lacinia, sapien magna imperdiet erat, ac sagittis ante ante tristique eros. Phasellus eget fermentum mauris. Integer justo lorem, finibus a ullamcorper in, feugiat in nunc. Etiam ut felis a magna aliquam consectetur. Duis eu mauris ut leo vehicula fringilla scelerisque vel mi. Donec placerat quam nulla, at commodo orci maximus sit amet. Curabitur tincidunt euismod enim, non feugiat nulla eleifend sed. Sed finibus metus sit amet metus congue commodo. Cras ullamcorper turpis sed mi scelerisque porta.</p>

  <p>Sed maximus diam in blandit elementum. Integer diam ante, tincidunt in pulvinar at, luctus in dui. Fusce tincidunt hendrerit dolor in suscipit. Nullam vitae tellus at justo bibendum blandit a vel ligula. Nunc sed augue blandit, finibus nisi nec, posuere orci. Maecenas ut egestas diam. Donec non orci nec ex rhoncus malesuada at eget ante. Proin ultricies cursus nunc eu mollis. Donec vel ligula vel eros luctus pulvinar. Proin vitae dui imperdiet, rutrum risus non, maximus purus. Vivamus fringilla augue tincidunt, venenatis arcu eu, dictum nunc. Mauris eu ullamcorper orci. Cras efficitur egestas ligula. Maecenas a nisl bibendum turpis tristique lobortis.</p>

</body>

</html>

Now, when we request that file, we should see the following in our debug output:

express:router dispatching GET / +24s
  express:router query  : / +1ms
  express:router expressInit  : / +0ms
  express:router compression  : / +0ms
  express:router logger  : / +0ms
  express:router urlencodedParser  : / +0ms
  body-parser:urlencoded skip empty body +0ms
  express:router cookieParser  : / +1ms
  express:router serveStatic  : / +0ms
  send stat "/workspaces/example-project/server/public/index.html" +0ms
  send pipe "/workspaces/example-project/server/public/index.html" +0ms
  send accept ranges +0ms
  send cache-control public, max-age=0 +0ms
  send modified Sat, 25 Jan 2025 19:05:18 GMT +0ms
  send etag W/"678-1949edaaa4c" +0ms
  send content-type text/html +0ms
  compression gzip compression +1ms
  morgan log request +1ms
[2025-01-25 07:05:20.234 PM] http:      GET / 200 1.232 ms - -

As we can see, the compression middleware is now compressing the response with the gzip compression algorithm before it is sent to the browser. We can also see this in our web browser’s debugging tools - in Google Chrome, we notice that the Content-Encoding header is set to gzip as shown below:

Compressed Server Response

We’ll go ahead and integrate the compression middleware into our project for this course, but as discussed above, it is always worth considering whether the tradeoff of additional processing time to save network bandwidth is truly worth it.

Helmet

Another very useful Express library is helmet. Helmet sets several headers in the HTTP response from an Express application to help improve security. This includes things such as setting an appropriate Content-Security-Policy and removing information about the web server that could be leaked in the X-Powered-By header.

To install helmet we can simply use npm as always:

$ npm install helmet

Similar to the compression library above, we can simply add helmet to our Express application’s app.js file:

import express from 'express';
import path from 'path';
import cookieParser from 'cookie-parser';
import compression from 'compression';
import helmet from 'helmet';
import requestLogger from './middlewares/request-logger.js';

import indexRouter from './routes/index.js';
import usersRouter from './routes/users.js';

var app = express();

app.use(helmet());
app.use(compression());
app.use(requestLogger);
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(import.meta.dirname, 'public')));

app.use('/', indexRouter);
app.use('/users', usersRouter);

export default app;

To really see what the helmet library does, we can examine the headers sent by the server with and without helmet enabled.

First, here are the headers sent by the server without helmet enabled:

Insecure Headers Insecure Headers

When helmet is enabled, we see an entirely different set of headers:

Secure Headers Secure Headers

In the second screenshot, notice that the Content-Security-Policy header is now present, while the X-Powered-By header has been removed. Those changes, along with many others, are provided by the helmet library.

In general, it is always a good idea to review the security of the headers sent by our application. Installing helmet is a good start, but as we continue to develop applications we may learn additional ways we can configure helmet to provide even more security for our applications.
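
As one hedged example of further configuration, helmet accepts an options object that can adjust individual headers, such as extending the default Content-Security-Policy directives. The cdn.example.com source below is purely illustrative, not something this project needs:

app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        // Start from helmet's default directives...
        ...helmet.contentSecurityPolicy.getDefaultDirectives(),
        // ...then also allow scripts from a hypothetical CDN
        'script-src': ["'self'", 'https://cdn.example.com'],
      },
    },
  }),
);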

Nodemon

Finally, let’s also install the nodemon package to make developing our application a bit easier. At its core, nodemon is a simple tool that will automatically restart our application anytime it detects that a file has changed. In this way, we can just leave our application running in the background, and any changes we make to the code will immediately be available for us to test without having to manually restart the server.

To begin, let’s install nodemon as a development dependency using npm with the --save-dev flag:

$ npm install nodemon --save-dev

Notice that this will cause the library to be listed in a new section of our package.json file called devDependencies:

{
  ...
  "dependencies": {
    "compression": "^1.7.5",
    "cookie-parser": "~1.4.7",
    "express": "~4.21.2",
    "helmet": "^8.0.0",
    "morgan": "~1.10.0",
    "winston": "^3.17.0"
  },
  "devDependencies": {
    "nodemon": "^3.1.9"
  }
}

These dependencies are only needed while we are developing our application. The default npm install command will install all dependencies, including development dependencies. However, we can instead use npm install --omit=dev or set the NODE_ENV environment variable to production to skip installing development dependencies.

Next, we can simply update our package.json file to use the nodemon command instead of node in the dev script:

{
  "name": "example-project",
  "version": "0.0.1",
  "type": "module",
  "private": true,
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "LOG_LEVEL=debug nodemon ./bin/www"
  },
  ...
}

Now, when we execute our application:

$ npm run dev

We should see additional output from nodemon confirming that it is working:

> example-project@0.0.1 dev
> LOG_LEVEL=debug nodemon ./bin/www

[nodemon] 3.1.9
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node ./bin/www`
[2025-01-25 09:37:24.734 PM] info:      Listening on port 3000

Now, with our application running, we can make any change to a file in our application, such as app.js, and it will automatically restart our application:

[nodemon] restarting due to changes...
[nodemon] starting `node ./bin/www`
[2025-01-25 09:39:02.858 PM] info:      Listening on port 3000

We can also always manually type rs in the terminal to restart the application when it is running inside of nodemon.

In general, using nodemon to develop a Node.js application is recommended, but we don’t want to use that in a production environment. So, we are careful to install nodemon as a development dependency only.
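
If we ever want finer control over what nodemon watches, it also reads an optional nodemon.json configuration file from the project root. The keys below are standard nodemon settings, but the values are just a hypothetical illustration matching this project’s layout:

{
  "watch": ["bin", "configs", "middlewares", "routes", "app.js"],
  "ext": "js,mjs,cjs,json",
  "ignore": ["public/*"]
}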

This is a good point to commit and push our work!

Environment

YouTube Video

Environment Variables

As discussed earlier, an environment variable is a value present in memory in the operating system environment where a process is running. Environment variables contain important information about the system where the application is running, but they can also be configured by the user or system administrator to provide information and configuration to any processes running in that environment. This approach is especially common when working with containers like the dev container we built for this project.

To explore this, we can use the printenv command in any Linux terminal:

$ printenv

When we run that command in our GitHub codespace, we’ll see output containing lines similar to this (many lines have been omitted as they contain secure information):

SHELL=/bin/bash
GITHUB_USER=russfeld
CODESPACE_NAME=laughing-computing-machine-jj5j9p97vx435jqj
HOSTNAME=codespaces-f1a983
RepositoryName=example-project
CODESPACES=true
YARN_VERSION=1.22.22
PWD=/workspaces/example-project/server
ContainerVersion=13
GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN=app.github.dev
USER=node
NODE_VERSION=22.12.0
OLDPWD=/workspaces/example-project
TERM_PROGRAM=vscode

As we can see, the environment contains many useful variables, including a CODESPACES variable showing that the application is running in GitHub Codespaces. We can also find our GITHUB_USER, CODESPACE_NAME and even the NODE_VERSION all in the environment.

Configuring the Environment

Because many web applications eventually run in a containerized environment anyway, it is very common practice to configure those applications through environment variables. Thankfully, we can more easily control and configure our application using a special library, dotenvx, which allows us to load a set of environment variables from a file named .env.

dotenv

The dotenvx library is a newer version of the dotenv library that has been used for this purpose for many years. dotenvx was developed by the same developer, and is often recommended as a modern replacement for dotenv for most users. It includes features that allow us to create multiple environments and even encrypt values. So, for this project we’ll use the newer library to take advantage of some of those features.

To begin, let’s install dotenvx using npm:

$ npm install @dotenvx/dotenvx

Next, we’ll need to import that library as early as possible in our application, so that the environment is fully loaded before any other configuration files are referenced, since those files may require environment variables to work properly. In this case, we want to do that as the very first thing in app.js:

import '@dotenvx/dotenvx/config';
import express from 'express';
import path from 'path';
import cookieParser from 'cookie-parser';
import compression from 'compression';
import helmet from 'helmet';
import requestLogger from './middlewares/request-logger.js';

// -=-=- other code omitted here -=-=-

Now, when we run our application, we should get a helpful message letting us know that our environment file is missing:

> example-project@0.0.1 dev
> LOG_LEVEL=debug nodemon ./bin/www

[nodemon] 3.1.9
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node ./bin/www`
[MISSING_ENV_FILE] missing .env file (/workspaces/example-project/server/.env)
[MISSING_ENV_FILE] https://github.com/dotenvx/dotenvx/issues/484
[dotenvx@1.34.0] injecting env (0)
[2025-01-25 08:15:56.135 PM] info:      Listening on port 3000

This is one of the many benefits that comes from using the newer dotenvx library - it will helpfully remind us when we are running without an environment file, just in case we forgot to create one.

So, now let’s create the .env file in the server folder of our application, and add an environment variable to that file:

LOG_LEVEL=error

This should set the logging level of our application to error, meaning that only errors will be logged to the terminal. So, let’s run our application and see what it does:

$ npm run dev

However, when we do, we notice that we are still getting http logging in the output:

> example-project@0.0.1 dev
> LOG_LEVEL=debug nodemon ./bin/www

[nodemon] 3.1.9
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node ./bin/www`
[dotenvx@1.34.0] injecting env (0) from .env
[2025-01-25 08:20:17.438 PM] info:      Listening on port 3000
[2025-01-25 08:23:56.896 PM] http:      GET / 304 3.405 ms -

This is because we are already setting the LOG_LEVEL environment variable directly in our package.json file:

{
  "name": "example-project",
  "version": "0.0.1",
  "type": "module",
  "private": true,
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "LOG_LEVEL=debug nodemon ./bin/www"
  },
  ...
}

This is actually a great feature! The dotenvx library will not override any existing environment variables - so, if the environment is already configured, or we want to override anything that may be present in our .env file, we can just set it in the environment before running our application, and those values will take precedence!

For now, let’s go ahead and remove that variable from the dev script in our package.json file:

{
  "name": "example-project",
  "version": "0.0.1",
  "type": "module",
  "private": true,
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "nodemon ./bin/www"
  },
  ...
}

Now, when we run our program, we should not see any logging output (unless we can somehow cause the server to raise an error, which is unlikely right now):

> example-project@0.0.1 dev
> nodemon ./bin/www

[nodemon] 3.1.9
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node ./bin/www`
[dotenvx@1.34.0] injecting env (1) from .env

Finally, let’s go ahead and set the value in our .env file back to the debug setting:

LOG_LEVEL=debug

Now, when we run our application, we can see that it is following that configuration:

> example-project@0.0.1 dev
> nodemon ./bin/www

[nodemon] 3.1.9
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node ./bin/www`
[dotenvx@1.34.0] injecting env (1) from .env
[2025-01-25 08:28:54.587 PM] info:      Listening on port 3000
[2025-01-25 08:28:58.625 PM] http:      GET / 200 3.475 ms - -

Great! We now have a powerful way to configure our application using a .env file.
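
And, because dotenvx never overrides variables that are already set in the environment, we can still override any single value for one run directly from the shell without editing the .env file:

$ LOG_LEVEL=error npm run dev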

Other Environment Variables

Right now, our program only uses one other environment variable, which can be found in the bin/www file:

#!/usr/bin/env node

import app from '../app.js';
import logger from '../configs/logger.js';
import http from 'http';

/**
 * Get port from environment and store in Express.
 */
var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

// -=-=- other code omitted here -=-=-

The code process.env.PORT || '3000' is a commonly used shorthand in JavaScript to provide a fallback value. If process.env.PORT is set, the expression resolves to that value. If not, the logical OR operator || falls back to the second option, the value '3000' that is hard-coded into our application.
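
For comparison, here is a small sketch of that pattern alongside the newer nullish coalescing operator ??, which only falls back when the value is null or undefined (an empty string would be kept, since environment variables are always strings):

// Falls back to '3000' for any falsy value, including an empty string
const port = process.env.PORT || '3000';

// Falls back to '3000' only when PORT is null or undefined
const portAlt = process.env.PORT ?? '3000';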

So, we can set that value explicitly in our .env file:

LOG_LEVEL=debug
PORT=3000

In general, it is always good practice to explicitly list all configurable values in the .env file when developing an application, since it helps us keep track of them.

However, each value should also have a logical default value if no configuration is provided. Ideally, our application should be able to run correctly with minimal configuration, or it should at least provide clear errors to the user when a configuration value is not provided. For example, we can look back at the level() function in configs/logger.js to see that it will set the logging level to http if it cannot find an appropriate LOG_LEVEL environment variable.

Environment Variable Security

Storing the configuration for our application in a .env file is a great option, and it is even included as item 3 of the twelve-factor methodology for developing modern web applications.

Unfortunately, this can present one major security flaw - often, the information stored in the .env file is very sensitive, since it may include database passwords, encryption keys, and more. So, we want to make absolutely sure that our .env file is never committed to git or GitHub, and it should never be shared between developers.

We can enforce this by ensuring that the .gitignore file inside of our server folder includes a line that prevents us from accidentally committing the .env file. Thankfully, both the .gitignore produced by the Express application generator and the one in the GitHub gitignore repository already include that line.

Instead, it is common practice to create a second file called .env.example (or similar) that contains a list of all configurable environment variables, along with safe default values for each. So, for this application, we might create a .env.example file that looks like this:

LOG_LEVEL=http
PORT=3000

This file can safely be committed to git and stored in GitHub. When a new developer or user clones our project, they can easily copy the .env.example file to .env and update it to match their desired configuration.
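
On a Linux system like our GitHub Codespace, that copy is a single command run from the server folder:

$ cp .env.example .env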

As we continue to add environment variables to our .env file, we should also make sure the .env.example file is kept up to date.

This is a good point to commit and push our work, but be extra sure that our .env file DOES NOT get committed to git!

OpenAPI Documentation

YouTube Video

OpenAPI Documentation

There are many different ways to document the features of a RESTful web application. One of the most commonly used methods is the OpenAPI Specification (OAS). OpenAPI was originally based on the Swagger specification, so we’ll sometimes still see references to the name Swagger in online resources.

At its core, the OpenAPI Specification defines a way to describe the functionality of a RESTful web application in a simple document format, typically structured as a JSON or YAML file. For example, we can find an example YAML file for a Petstore API that is commonly cited as an example project for understanding the OpenAPI Specification format.

That file can then be parsed and rendered as an interactive documentation website for developers and users of the API itself. So, we can find a current version of the Petstore API Documentation online and compare it to the YAML document to see how it works.

For more information on the OpenAPI Specification, consult their Getting Started page.

Configuring OpenAPI

For our project, we are going to take advantage of two helpful libraries to automatically generate and serve OpenAPI documentation for our code using documentation comments:

  • swagger-jsdoc - generates OpenAPI Specification based on JSDoc comments.
  • swagger-ui-express - serves an OpenAPI Documentation page based on the specification generated by other tools.

First, let’s install both of those libraries into our project:

$ npm install swagger-jsdoc swagger-ui-express

Next, we should create a configuration file for the swagger-jsdoc library that contains some basic information about our API. We can store that in the configs/openapi.js file with the following content:

import swaggerJSDoc from 'swagger-jsdoc'

function url() {
  if (process.env.OPENAPI_HOST) {
    return process.env.OPENAPI_HOST
  } else {
    const port = process.env.PORT || '3000'
    return `http://localhost:${port}`
  }
}

const options = {
  definition: {
    openapi: '3.1.0',
    info: {
      title: 'Example Project',
      version: '0.0.1',
      description: 'Example Project',
    },
    servers: [
      {
        url: url(),
      },
    ],
  },
  apis: ['./routes/*.js'],
}

export default swaggerJSDoc(options)

Let’s look at a few items in this file to see what it does:

  • url() - this function checks for the OPENAPI_HOST environment variable. If that is set, then it will use that value. Otherwise, it uses a sensible default value of http://localhost:3000 or whatever port is set in the environment.
  • options - the options object is used to configure the swagger-jsdoc library. We can read more about how to configure that library in the documentation. At a minimum, it provides some basic information about the API, as well as the URL where the API is located, and a list of source files to read information from. For now, we only want to read from the routes stored in the routes folder, so we include that path along with a wildcard filename.

We should also take a minute to add the OPENAPI_HOST environment variable to our .env and .env.example files. If we are running our application locally, we can figure out what this value should be pretty easily (usually it will look similar to http://localhost:3000). However, when we are running in GitHub Codespaces, our URL changes each time. Thankfully, we can find all the information we need in the environment variables provided by GitHub Codespaces (see the previous page for a full list).

So, the item we need to add to our .env file will look something like this:

LOG_LEVEL=debug
PORT=3000
OPENAPI_HOST=https://$CODESPACE_NAME-$PORT.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN

This is one of the key features of the dotenvx library we are using - it will expand environment variables based on the existing environment. So, we are using the values stored in the CODESPACE_NAME, PORT, and GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN environment variables to construct the appropriate URL for our application.

In our .env.example file, we might want to make a note of this in a comment, just to be helpful for future developers. Comments in the .env file format are prefixed with a hash #:

LOG_LEVEL=debug
PORT=3000
OPENAPI_HOST=http://localhost:3000
# For GitHub Codespaces
# OPENAPI_HOST=https://$CODESPACE_NAME-$PORT.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN

Once that configuration is created, we can add it to our app.js file, along with a few lines to actually make the documentation visible:

import '@dotenvx/dotenvx/config';
import express from 'express';
import path from 'path';
import cookieParser from 'cookie-parser';
import compression from 'compression';
import helmet from 'helmet';
import requestLogger from './middlewares/request-logger.js';
import logger from './configs/logger.js';
import openapi from './configs/openapi.js';
import swaggerUi from 'swagger-ui-express';

import indexRouter from './routes/index.js';
import usersRouter from './routes/users.js';

var app = express();

app.use(helmet());
app.use(compression());
app.use(requestLogger);
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(import.meta.dirname, 'public')));

app.use('/', indexRouter);
app.use('/users', usersRouter);

if (process.env.OPENAPI_VISIBLE === 'true') {
  logger.warn('OpenAPI documentation visible!');
  app.use('/docs', swaggerUi.serve, swaggerUi.setup(openapi, { explorer: true }));
}

export default app;

Notice that we are using the OPENAPI_VISIBLE environment variable to control whether the documentation is visible or not, and we print a warning to the terminal if it is enabled. This is because it is often considered very insecure to make the details of our API visible to users unless that is the explicit intent, so it is better to be cautious.

Of course, to make the documentation appear, we’ll have to set the OPENAPI_VISIBLE value to true in our .env file, and also add a default entry to the .env.example file as well:

LOG_LEVEL=debug
PORT=3000
OPENAPI_HOST=https://$CODESPACE_NAME-$PORT.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN
OPENAPI_VISIBLE=true

Now, let’s run our application and see what happens:

$ npm run dev

We should see the following output when our application initializes:

> example-project@0.0.1 dev
> nodemon ./bin/www

[nodemon] 3.1.9
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node ./bin/www`
[dotenvx@1.34.0] injecting env (4) from .env
[2025-01-25 09:10:37.646 PM] warn:      OpenAPI documentation visible!
[2025-01-25 09:10:37.649 PM] info:      Listening on port 3000

Now, let’s load our application in a web browser, and go to the /docs path. We should see our OpenAPI Documentation website!

OpenAPI Documentation Example OpenAPI Documentation Example

Notice that the Servers URL matches the URL at the top of the page! That means our complex OPENAPI_HOST environment variable is working correctly.

However, we notice that our server does not have any operations defined yet, so we need to add those before we can really make use of this documentation website.

Documenting Routes

To document our routes using the OpenAPI Specification, we can add a simple JSDoc comment above each route function with some basic information, prefixed by the @swagger tag. First, in our routes/index.js file:

/**
 * @swagger
 * tags:
 *   name: index
 *   description: Index Routes
 */
import express from 'express';

const router = express.Router();

/**
 * @swagger
 * /:
 *   get: 
 *     summary: index page
 *     description: Gets the index page for the application
 *     tags: [index]
 *     responses:
 *       200: 
 *         description: success
 */
router.get('/', function(req, res, next) {
  res.render('index', { title: 'Express' });
});

export default router;

And similarly in our routes/users.js file:

/**
 * @swagger
 * tags:
 *   name: users
 *   description: Users Routes
 */
import express from 'express';

const router = express.Router();

/**
 * @swagger
 * /users:
 *   get: 
 *     summary: users list page
 *     description: Gets the list of all users in the application
 *     tags: [users]
 *     responses:
 *       200: 
 *         description: a resource            
 */
router.get('/', function(req, res, next) {
  res.send('respond with a resource');
});

export default router;

Now, when we run our application and view the documentation, we see two operations:

OpenAPI Documentation With Operations OpenAPI Documentation With Operations

We can expand the operation to learn more about it, and even test it on a running server if our URL is set correctly:

OpenAPI Documentation Operation Example OpenAPI Documentation Operation Example

As we develop our RESTful API, this documentation tool will be a very powerful way for us to understand our own API’s design, and it will help us communicate easily with other developers who wish to use our API as well.

This is a good point to commit and push our work!

References

JSDoc Documentation

YouTube Video

JSDoc Documentation

It is also considered good practice to add additional documentation to all of the source files we create for this application. One common standard is JSDoc, which is somewhat similar to the JavaDoc comments we may have seen in previous courses. JSDoc can be used to generate documentation, but we won’t be using that directly in this project. However, we will be loosely following the JSDoc documentation standard to give our code comments some consistency. We can find a full list of the tags in the JSDoc Documentation.

For example, we can add a file header to the top of each source file with a few important tags. We may also want to organize our import statements and add notes for each group. We can also document individual functions, such as the normalizePort function in the bin/www file. Here’s a fully documented and commented version of that file:

/**
 * @file Executable entrypoint for the web application
 * @author Russell Feldhausen <russfeld@ksu.edu>
 */

// Import libraries
import http from 'http';

// Import Express application
import app from '../app.js';

// Import logging configuration
import logger from '../configs/logger.js';

// Get port from environment and store in Express
var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

// Create HTTP server
var server = http.createServer(app);

// Listen on provided port, on all network interfaces
server.listen(port);

// Attach event handlers
server.on('error', onError);
server.on('listening', onListening);

/**
 * Normalize a port into a number, string, or false.
 * 
 * @param {(string|number)} val - a value representing a port to connect to
 * @returns {(number|string|boolean)} the port or `false`
 */
function normalizePort(val) {
  var port = parseInt(val, 10);

  if (isNaN(port)) {
    // named pipe
    return val;
  }

  if (port >= 0) {
    // port number
    return port;
  }

  return false;
}

/**
 * Event listener for HTTP server "error" event.
 * 
 * @param {error} error - the HTTP error event
 * @throws error if the error cannot be determined
 */
function onError(error) {
  if (error.syscall !== 'listen') {
    throw error;
  }

  var bind = typeof port === 'string'
    ? 'Pipe ' + port
    : 'Port ' + port;

  // handle specific listen errors with friendly messages
  switch (error.code) {
    case 'EACCES':
      logger.error(new Error(bind + ' requires elevated privileges'));
      process.exit(1);
      break;
    case 'EADDRINUSE':
      logger.error(new Error(bind + ' is already in use'));
      process.exit(1);
      break;
    default:
      throw error;
  }
}

/**
 * Event listener for HTTP server "listening" event.
 */
function onListening() {
  var addr = server.address();
  var bind = typeof addr === 'string'
    ? 'pipe ' + addr
    : 'port ' + addr.port;
  logger.info('Listening on ' + bind);
}

Here is another example of a cleaned up, reorganized, and documented version of the app.js file. Notice that it also includes an @exports tag at the top to denote the type of object that is exported from this file.

/**
 * @file Main Express application
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports app Express application
 */

// Load environment (must be first)
import '@dotenvx/dotenvx/config';

// Import libraries
import compression from 'compression';
import cookieParser from 'cookie-parser';
import express from 'express';
import helmet from 'helmet';
import path from 'path';
import swaggerUi from 'swagger-ui-express';

// Import configurations
import logger from './configs/logger.js';
import openapi from './configs/openapi.js';

// Import middlewares
import requestLogger from './middlewares/request-logger.js';

// Import routers
import indexRouter from './routes/index.js';
import usersRouter from './routes/users.js';

// Create Express application
var app = express();

// Use libraries
app.use(helmet());
app.use(compression());
app.use(express.urlencoded({ extended: false })); 
app.use(cookieParser());
app.use(express.json());

// Use middlewares
app.use(requestLogger);

// Use static files
app.use(express.static(path.join(import.meta.dirname, 'public')));

// Use routers
app.use('/', indexRouter);
app.use('/users', usersRouter);

// Use SwaggerJSDoc router if enabled
if (process.env.OPENAPI_VISIBLE === 'true') {
  logger.warn('OpenAPI documentation visible!');
  app.use('/docs', swaggerUi.serve, swaggerUi.setup(openapi, { explorer: true }));
}

export default app;

Finally, here is a fully documented routes/index.js file, showing how routes can be documented both with JSDoc tags as well as OpenAPI Specification items:

/**
 * @file Index router
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports router an Express router
 * 
 * @swagger
 * tags:
 *   name: index
 *   description: Index Routes
 */

// Import libraries
import express from "express";

// Create Express router
const router = express.Router();

/**
 * Gets the index page for the application
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /:
 *   get: 
 *     summary: index page
 *     description: Gets the index page for the application
 *     tags: [index]
 *     responses:
 *       200: 
 *         description: success
 */
router.get('/', function(req, res, next) {
  res.render('index', { title: 'Express' });
});

export default router;

Now is a great time to document all of the JavaScript files in our application following the JSDoc standard.

This is a good point to commit and push our work!

Linting & Formatting

YouTube Video

Linting

Finally, let’s look at two other tools that will help us write clean and maintainable JavaScript code. The first tool is eslint, a linting tool that finds bugs and issues in JavaScript code by performing static analysis on it. This helps us avoid major issues that can be easily detected just by examining the overall style and structure of our code.

To begin, we can install eslint following the recommended process in their documentation:

$ npm init @eslint/config@latest

It will install the package and ask several configuration questions along the way. We can follow along with the answers shown in the output below:

Need to install the following packages:
@eslint/create-config@1.4.0
Ok to proceed? (y) y

@eslint/create-config: v1.4.0

✔ How would you like to use ESLint? · problems
✔ What type of modules does your project use? · esm
✔ Which framework does your project use? · none
✔ Does your project use TypeScript? · javascript
✔ Where does your code run? · node
The config that you've selected requires the following dependencies:

eslint, globals, @eslint/js
✔ Would you like to install them now? · No / Yes
✔ Which package manager do you want to use? · npm
☕️Installing...

added 70 packages, and audited 273 packages in 5s

52 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
Successfully created /workspaces/example-project/server/eslint.config.js file.

Once it is installed, we can run eslint using the following command:

$ npx eslint --fix .

When we do, we’ll probably get a couple of errors:

/workspaces/example-project/server/routes/index.js
  35:36  error  'next' is defined but never used  no-unused-vars

/workspaces/example-project/server/routes/users.js
  35:36  error  'next' is defined but never used  no-unused-vars

✖ 2 problems (2 errors, 0 warnings)

In both of our routes files, we have included the next parameter, but it is unused. We could remove it, but it is often considered good practice to include that parameter in case we need to explicitly use it. So, in our eslint.config.js we can add an option to ignore that parameter (pay careful attention to the formatting; by default the generated file uses very little spacing between the curly braces):

import globals from "globals";
import pluginJs from "@eslint/js";

/** @type {import('eslint').Linter.Config[]} */
export default [
  {
    languageOptions: { globals: globals.node },
    rules: {
      'no-unused-vars': [
        'error',
        {
          argsIgnorePattern: 'next'
        }
      ]
    }
  },
  pluginJs.configs.recommended,
];

Now, when we run that command, we should not get any output!

$ npx eslint --fix .

To make this even easier, let’s add a new script to the scripts section of our package.json file for this tool:

{
  ...
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "LOG_LEVEL=debug nodemon ./bin/www",
    "lint": "npx eslint --fix ."
  },
  ...
}

Now we can just run this command to check our project for errors:

$ npm run lint

Formatting

Another commonly used tool for JavaScript developers is prettier. Prettier will reformat our JavaScript code to match a defined coding style, making it much easier to read and maintain.

First, let’s install prettier using npm as a development dependency:

$ npm install prettier --save-dev

We also need to create a .prettierrc configuration file that just contains an empty JSON object for now:

{}

There are many options that can be placed in that configuration file - see the Prettier Documentation for details. For now, we’ll just leave it blank.
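
As a purely illustrative sketch, a team with different style preferences might later fill in options like these (these are not the settings for this project):

{
  "semi": true,
  "singleQuote": true,
  "printWidth": 80,
  "trailingComma": "all"
}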

We can now run the prettier command on our code:

$ npx prettier . --write

When we do, we’ll see output listing all of the files that have been changed:

.prettierrc 34ms
app.js 34ms
configs/logger.js 19ms
configs/openapi.js 7ms
eslint.config.js 5ms
middlewares/request-logger.js 5ms
package-lock.json 111ms (unchanged)
package.json 2ms (unchanged)
public/index.html 29ms
routes/index.js 4ms
routes/users.js 3ms

Notice that nearly all of the files have been updated in some way. Many times it simply aligns code and removes extra spaces, but other times it will rewrite long lines.

Just like with eslint, let’s add a new script to package.json to make this process simpler as well:

{
  ...
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "LOG_LEVEL=debug nodemon ./bin/www",
    "lint": "npx eslint --fix .",
    "format": "npx prettier . --write"
  },
  ...
}

With that script in place, we can clean up our code anytime using this command:

$ npm run format

Best Practice

Now that we have installed both eslint and prettier, it is always a good practice to run both tools before committing any code to git and pushing to GitHub. This ensures that your codebase is always clean, well formatted, and free of errors or bugs that could be easily spotted by these tools.
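
To make that habit even easier, we could chain both tools behind a single hypothetical check script in our package.json file:

{
  ...
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "nodemon ./bin/www",
    "lint": "npx eslint --fix .",
    "format": "npx prettier . --write",
    "check": "npm run lint && npm run format"
  },
  ...
}

Then a single npm run check before each commit runs both tools in order.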

This is a good point to commit and push our work!

Summary

YouTube Video

Summary

In this example project, we created an Express application with the following features:

  1. GitHub Codespaces
  2. A sample Express application
  3. Updated to ES Modules
  4. Application logs with Winston and Morgan
  5. Other useful libraries such as Compression and Helmet
  6. A better development server using Nodemon
  7. Environment variables through Dotenvx
  8. Code documentation with JSDoc and OpenAPI comments
  9. Linting and Formatting with ESLint and Prettier

This example project makes a great basis for building robust RESTful web APIs and other Express applications.

As you work on projects built from this framework, we welcome any feedback or additions to be made. Feel free to submit requests to the course instructor for updates to this project.

Adding a Database

This example project builds on the previous Express Starter Project by adding a database. A database is a powerful way to store and retrieve the data used by our web application.

To accomplish this, we’ll learn about different libraries that interface between our application and a database. Once we’ve installed a library, we’ll discover how to use that library to create database tables, add initial data to those tables, and then easily access them within our application.

Project Deliverables

At the end of this example, we will have a project with the following features:

  1. An SQLite database
  2. The Sequelize ORM tool
  3. The Umzug migration tool
  4. A simple migration to create tables for Users and Roles
  5. Seed data for those tables
  6. Automated processes to migrate and seed the data on application startup
  7. A simple route to query user information
Prior Work

This project picks up right where the last one left off, so if you haven’t completed that one yet, go back and do that before starting this one.

Let’s get started!

Subsections of Adding a Database

Sequelize

YouTube Video

Database Libraries

To begin, we must first select a library to use when interfacing with our database. There are many different types of libraries, and many different options to choose from.

  1. First and foremost, we can always just write raw SQL queries directly in our code. This is often very straightforward, but also can lead to very complex code and security issues. It also doesn’t offer many of the more advanced features such as mapping database results to object types and automatically managing database schemas.

  2. Another option is an SQL query library, such as Knex.js or Kysely. These libraries provide a helpful abstraction on top of SQL, allowing developers to build queries using syntax that is more comfortable and familiar to them. These libraries also include additional features to manage database schemas and sample data.

  3. The final option is an Object-Relational Mapping (ORM) library such as Objection or Sequelize. These libraries provide the most abstraction away from raw SQL, often allowing developers to store and retrieve data in a database as if it were stored in a list or dictionary data structure.

For this project, we’re going to use the Sequelize ORM, coupled with the Umzug migration tool. Both of these libraries are very commonly used in Node.js projects, and are actively maintained.

Database Engines

We also have many choices for the database engine we want to use for our web projects. Some common options include PostgreSQL, MySQL, MariaDB, MongoDB, Firebase, and many more.

For this project, we’re going to use SQLite. SQLite is unique because it is a database engine that only requires a single file, so it is self-contained and easy to work with. It doesn’t require any external database servers or software, making it perfect for a small development project. In fact, SQLite may be one of the most widely deployed software modules in the whole world!

Naturally, if we plan on growing a web application beyond a simple hobby project with a few users, we should spend some time researching a reliable database solution. Thankfully, the Sequelize ORM supports many different database engines, so it is easy to switch, as sketched below.
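
For example, switching this project from SQLite to PostgreSQL would mostly be a matter of installing the pg driver alongside sequelize and changing the dialect. A hypothetical sketch (the DATABASE_* variable names here are illustrative assumptions, not values this project defines):

// Hypothetical PostgreSQL configuration - requires `npm install pg`
import Sequelize from 'sequelize';

const sequelize = new Sequelize({
  dialect: 'postgres',
  host: process.env.DATABASE_HOST,
  database: process.env.DATABASE_NAME,
  username: process.env.DATABASE_USER,
  password: process.env.DATABASE_PASS,
});

export default sequelize;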

Installing Sequelize

To begin, let’s install both sequelize as well as the sqlite3 library using npm:

$ npm install sqlite3 sequelize

Once those libraries are installed, we can now configure sequelize following the information in the Sequelize Documentation. Let’s create a new file configs/database.js to store our database configuration:

/**
 * @file Configuration information for Sequelize database ORM
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports sequelize a Sequelize instance
 */

// Import libraries
import Sequelize from 'sequelize';

// Import logger configuration
import logger from "./logger.js";

// Create Sequelize instance
const sequelize = new Sequelize({
    dialect: 'sqlite',
    storage: process.env.DATABASE_FILE || ":memory:",
    logging: logger.sql.bind(logger)
})

export default sequelize;

This file creates a very simple configuration for sequelize that uses the sqlite dialect. It uses the DATABASE_FILE environment variable to control the location of the database in the file system, and it also uses the logger.sql log level to log any data produced by the library. If a DATABASE_FILE environment variable is not provided, it will default to storing data in the SQLite In-Memory Database, which is great for testing and quick development.

Of course, a couple of those items don’t actually exist yet, so let’s add those in before we move on! First, we need to add a DATABASE_FILE environment variable to both our .env and .env.example files:

In our .env file:

LOG_LEVEL=debug
PORT=3000
OPENAPI_HOST=https://$CODESPACE_NAME-$PORT.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN
OPENAPI_VISIBLE=true
DATABASE_FILE=database.sqlite

And in our .env.example file:

LOG_LEVEL=debug
PORT=3000
OPENAPI_HOST=http://localhost:3000
# For GitHub Codespaces
# OPENAPI_HOST=https://$CODESPACE_NAME-$PORT.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN
OPENAPI_VISIBLE=false
DATABASE_FILE=database.sqlite

We also need to add a new logging level called sql to our logger configuration in configs/logger.js. This is a bit more involved, because it means we have to now list all intended logging levels explicitly. See the highlighted lines below for what has been changed, but the entire file is included for convenience:

/**
 * @file Configuration information for Winston logger
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports logger a Winston logger object
 */

// Import libraries
import winston from "winston";

// Extract format options
const { combine, timestamp, printf, colorize, align, errors } = winston.format;

/**
 * Determines the correct logging level based on the Node environment
 *
 * @returns {string} the desired log level
 */
function level () {
  if (process.env.LOG_LEVEL) {
    if (process.env.LOG_LEVEL === '0' || process.env.LOG_LEVEL === 'error') {
      return 'error';
    }
    if (process.env.LOG_LEVEL === '1' || process.env.LOG_LEVEL === 'warn') {
      return 'warn';
    }
    if (process.env.LOG_LEVEL === '2' || process.env.LOG_LEVEL === 'info') {
      return 'info';
    }
    if (process.env.LOG_LEVEL === '3' || process.env.LOG_LEVEL === 'http') {
      return 'http';
    }
    if (process.env.LOG_LEVEL === '4' || process.env.LOG_LEVEL === 'verbose') {
      return 'verbose';
    }
    if (process.env.LOG_LEVEL === '5' || process.env.LOG_LEVEL === 'debug') {
      return 'debug';
    }
    if (process.env.LOG_LEVEL === '6' || process.env.LOG_LEVEL === 'sql') {
      return 'sql';
    }
    if (process.env.LOG_LEVEL === '7' || process.env.LOG_LEVEL === 'silly') {
      return 'silly';
    }
  }
  return 'http';
}

// Custom logging levels for the application
const levels = {
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  verbose: 4,
  debug: 5,
  sql: 6,
  silly: 7
}

// Custom colors
const colors = {
  error: 'red',
  warn: 'yellow',
  info: 'green',
  http: 'green',
  verbose: 'cyan',
  debug: 'blue',
  sql: 'gray',
  silly: 'magenta'
}

winston.addColors(colors)

// Creates the Winston instance with the desired configuration
const logger = winston.createLogger({
  // call `level` function to get default log level
  level: level(),
  levels: levels,
  // Format configuration
  // See https://github.com/winstonjs/logform
  format: combine(
    colorize({ all: true }),
    errors({ stack: true }),
    timestamp({
      format: "YYYY-MM-DD hh:mm:ss.SSS A",
    }),
    align(),
    printf(
      (info) =>
        `[${info.timestamp}] ${info.level}: ${info.stack ? info.stack : info.message}`,
    ),
  ),
  // Output configuration
  transports: [new winston.transports.Console()],
});

export default logger;

We have added a new sql logging level that is now part of our logging setup. One of the unique features of sequelize is that it will actually allow us to log all SQL queries run against our database, so we can enable and disable that level of logging by adjusting the LOG_LEVEL environment variable as desired.
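
For example, setting this value in our .env file during development will surface every SQL query, along with everything at the less verbose levels above it:

LOG_LEVEL=sql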

There! We now have a working database configuration. Before we can make use of it, however, we need to add additional code to create and populate our database. So, we’ll need to continue on in this tutorial before we can actually test our application.

Migrations

YouTube Video

Umzug

Now that we have a database configured in our application, we need to create some way to actually populate that database with the tables and information our app requires. We could obviously do that manually, but that really makes it difficult (if not impossible) to automatically build, test, and deploy this application.

Thankfully, most database libraries also have a way to automate building the database structure. This is known as schema migration or often just migration. We call it migration because it allows us to update the database schema along with new versions of the application, effectively migrating our data to new versions as we go.

The sequelize library recommends using another library, named Umzug, as the preferred way to manage database migrations. It is actually completely framework agnostic, and would even work with ORMs other than Sequelize.

Setting up Umzug

To begin, let’s install umzug using npm:

$ npm install umzug

Next, we can create a configuration file to handle our migrations, named configs/migrations.js, with the following content as described in the Umzug Documentation:

/**
 * @file Configuration information for Umzug migration engine
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports umzug an Umzug instance
 */

// Import Libraries
import { Umzug, SequelizeStorage } from 'umzug';

// Import database configuration
import database from "./database.js";
import logger from "./logger.js";

// Create Umzug instance
const umzug = new Umzug({
    migrations: {glob: 'migrations/*.js'},
    context: database.getQueryInterface(),
    storage: new SequelizeStorage({
        sequelize: database,
        modelName: 'migrations'
    }),
    logger: logger
})

export default umzug;

Notice that this configuration uses our existing sequelize database configuration, and also uses an instance of our logger as well. It is set to look for any migrations stored in the migrations/ folder.

The umzug library also has a very handy way to run migrations directly from the terminal using a simple JavaScript file, so let’s create a new file named migrate.js in the root of the server directory as well with this content:

// Load environment (must be first)
import "@dotenvx/dotenvx/config";

// Import configurations
import migrations from './configs/migrations.js'

// Run Umzug as CLI application
migrations.runAsCLI();

This file will simply load our environment configuration as well as the umzug instance for migrations, and then instruct it to run as a command-line interface (CLI) application. This is very handy, as we’ll see shortly.

Creating a Migration

Now we can create a new migration to actually start building our database structure for our application. For this simple example, we’ll build a users table with four fields:

Users ERD Users ERD

We can refer to both the Umzug Documentation and Examples as well as the Sequelize Documentation. So, let’s create a new folder named migrations to match our configuration above, then a new file named 00_users.js to hold the migration for our users table:

/**
 * @file Users table migration
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports up the Up migration
 * @exports down the Down migration
 */

// Import Libraries
import {Sequelize} from 'sequelize';

/**
 * Apply the migration
 * 
 * @param {queryInterface} context the database context to use 
 */
export async function up({context: queryInterface}) {
    await queryInterface.createTable('users', {
        id: {
            type: Sequelize.INTEGER,
            primaryKey: true,
            autoIncrement: true,
        },
        username: {
            type: Sequelize.STRING,
            unique: true,
            allowNull: false,
        },
        createdAt: {
            type: Sequelize.DATE,
            allowNull: false,
        },
        updatedAt: {
            type: Sequelize.DATE,
            allowNull: false,
        },
    })
}

/**
 * Roll back the migration
 * 
 * @param {queryInterface} context the database context to use 
 */
export async function down({context: queryInterface}) {
    await queryInterface.dropTable('users');
}

A migration consists of two functions. First, the up function is called when the migration is applied, and it should define or modify the database structure as desired. In this case, since this is the first migration, we can assume we are starting with a blank database and go from there. The other function, down, is called whenever we want to undo, or rollback, the migration. It should effectively undo any changes made by the up function, leaving the database in the state it was before the migration was applied.

Sequential File Names

Most migration systems, including umzug, apply the migrations in order according to the filenames of the migrations. Some systems automatically append a timestamp to the name of the migration file when it is created, such as 20250203112345_users.js. For our application, we will simply number them sequentially, starting with 00.

Finally, we can use the migrate.js file we created to run umzug from the command line to apply the migration:

$ node migrate up

If everything works correctly, we should receive some output showing that our migration succeeded:

[dotenvx@1.34.0] injecting env (5) from .env
[2025-02-03 10:59:35.066 PM] info:      { event: 'migrating', name: '00_users.js' }
[2025-02-03 10:59:35.080 PM] info:      { event: 'migrated', name: '00_users.js', durationSeconds: 0.014 }
[2025-02-03 10:59:35.080 PM] info:      applied 1 migrations.

We should also see a file named database.sqlite added to our file structure. If desired, we can install the SQLite Viewer extension in VS Code to explore the contents of that file to confirm it is working correctly.

Users Table in SQLite Users Table in SQLite

Add Extension to Dev Container

When installing a VS Code extension, we can also choose to have it added directly to our devcontainer.json file so it is available automatically whenever we open this repository in a new codespace or dev container. Just click the gear icon on the marketplace page and choose "Add to devcontainer.json" from the menu!

Add to Dev Container Add to Dev Container

If we need to roll back that migration, we can use a similar command:

$ node migrate down

There are many more commands available to apply migrations individually, check their status, and more; see the Umzug Documentation for details.
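
For instance, the CLI accepts commands like these (shown here as a hedged sketch - confirm the exact names and flags in the documentation before relying on them):

$ node migrate pending      # list migrations that have not been applied yet
$ node migrate executed     # list migrations that have already been applied
$ node migrate up --step 1  # apply only the next pending migration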

Seeds

YouTube Video

Database Seeding

Another useful task that umzug can handle is adding some initial data to a new database. This process is known as seeding the database. Thankfully, the process for seeding is nearly identical to the process for migrations - in fact, it uses the same operations in different ways! So, let’s explore how to set that up.

First, we’ll create a new configuration file at configs/seeds.js that contains nearly the same content as configs/migrations.js with a couple of important changes on the highlighted lines:

/**
 * @file Configuration information for Umzug seed engine
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports umzug an Umzug instance
 */

// Import Libraries
import { Umzug, SequelizeStorage } from 'umzug';

// Import database configuration
import database from "./database.js";
import logger from "./logger.js";

// Create Umzug instance
const umzug = new Umzug({
    migrations: {glob: 'seeds/*.js'},
    context: database.getQueryInterface(),
    storage: new SequelizeStorage({
        sequelize: database,
        modelName: 'seeds'
    }),
    logger: logger
})

export default umzug;

All we really have to do is change the folder where the migrations (in this case, the seeds) are stored, and we also change the name of the model, or table, where that information will be kept in the database.

Next, we’ll create a seed.js file that allows us to run the seeds from the command line. Again, this file is nearly identical to the migrate.js file from earlier, with a couple of simple changes:

// Load environment (must be first)
import "@dotenvx/dotenvx/config";

// Import configurations
import seeds from './configs/seeds.js'

// Run Umzug as CLI application
seeds.runAsCLI();

Finally, we can create a new folder seeds to store our seeds, and then create the first seed also called 00_users.js to add a few default users to our database:

/**
 * @file Users seed
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports up the Up migration
 * @exports down the Down migration
 */

// Timestamp in the appropriate format for the database
const now = new Date().toISOString().slice(0, 23).replace("T", " ") + " +00:00";

// Array of objects to add to the database
const users = [
    {
        id: 1,
        username: 'admin',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 2,
        username: 'contributor',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 3,
        username: 'manager',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 4,
        username: 'user',
        createdAt: now,
        updatedAt: now
    },
];

/**
 * Apply the seed
 * 
 * @param {queryInterface} context the database context to use 
 */
export async function up({context: queryInterface}) {
    await queryInterface.bulkInsert('users', users);
}

/**
 * Roll back the seed
 * 
 * @param {queryInterface} context the database context to use 
 */
export async function down({context: queryInterface}) {
    await queryInterface.bulkDelete("users", {}, { truncate: true });
}

This seed will add 4 users to the database. Notice that we are setting both the createdAt and updatedAt fields manually - while the sequelize library will manage those for us in certain situations, we must handle them manually when doing a bulk insert directly to the database.

At this point we can insert our seeds into the database using the command line interface:

$ node seed up
[dotenvx@1.34.0] injecting env (5) from .env
[2025-02-04 02:47:20.702 PM] info:      { event: 'migrating', name: '00_users.js' }
[2025-02-04 02:47:20.716 PM] info:      { event: 'migrated', name: '00_users.js', durationSeconds: 0.013 }
[2025-02-04 02:47:20.716 PM] info:      applied 1 migrations.

Now, once we’ve done that, we can go back to the SQLite Viewer extension in VS Code to confirm that our data was properly inserted into the database.

Seeded Data Seeded Data

Migrate Before Seeding

One common mistake that is very easy to make is trying to seed the database without first migrating it.

[2025-02-04 02:51:39.452 PM] info:      { event: 'migrating', name: '00_users.js' }

Error: Migration 00_users.js (up) failed: Original error: SQLITE_ERROR: no such table: users

Thankfully umzug gives a pretty helpful error in this case.

Another common error is to forget to roll back seeds before rolling back and resetting any migrations. In that case, when you try to apply your seeds again, they will not be applied since the database thinks the data is still present. So, remember to roll back your seeds before rolling back any migrations!
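
In other words, a safe order for a full reset looks like this:

$ node seed down       # roll back the seed data first
$ node migrate down    # then roll back the schema changes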

We’re almost ready to test our app! The last step is to create a model for our data, which we’ll cover on the next page.

Models

YouTube Video

Database Models

Now that we have our database table structure and sample data set up, we can finally configure sequelize to query our database by defining a model representing that data. At its core, a model is simply an abstraction that represents the structure of the data in a table of our database. We can equate this to a class in object-oriented programming - each row or record in our database can be thought of as an instance of our model class. You can learn more about models in the Sequelize Documentation.

To create a model, let’s first create a models folder in our app, then we can create a file user.js that contains the schema for the User model, based on the users table.

Singular vs. Plural

By convention, model names are usually singular like “user” while table names are typically pluralized like “users.” This is not a rule that must be followed, but many web frameworks use this convention so we’ll also follow it.

The User model schema should look very similar to the table definition used in the migration created earlier in this example:

/**
 * @file User schema
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports UserSchema the schema for the User model
 */

// Import libraries
import Sequelize from 'sequelize';

const UserSchema = {
    id: {
        type: Sequelize.INTEGER,
        primaryKey: true,
        autoIncrement: true,
    },
    username: {
        type: Sequelize.STRING,
        unique: true,
        allowNull: false,
    },
    createdAt: {
        type: Sequelize.DATE,
        allowNull: false,
    },
    updatedAt: {
        type: Sequelize.DATE,
        allowNull: false,
    },
}

export default UserSchema

At a minimum, a model schema defines the attributes that are stored in the database, but there are many more features that can be added over time, such as additional computed fields (for example, a fullName field that concatenates the givenName and familyName fields stored in the database, as sketched below). We’ll explore ways to improve our models in later examples.
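
As a sketch of how such a computed field could work, sequelize supports virtual attributes with custom getters. The givenName and familyName attributes below are hypothetical - they are not part of this project’s users table:

// Hypothetical schema fragment; givenName and familyName are assumed columns
import Sequelize from 'sequelize';

const PersonSchema = {
  givenName: {
    type: Sequelize.STRING,
  },
  familyName: {
    type: Sequelize.STRING,
  },
  fullName: {
    // VIRTUAL attributes are computed on access and never stored in the table
    type: Sequelize.VIRTUAL,
    get() {
      return `${this.givenName} ${this.familyName}`;
    },
  },
};

export default PersonSchema;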

Once we have the model schema created, we’ll create a second file named models.js that will pull together all of our schemas and actually build the sequelize models that can be used throughout our application.

/**
 * @file Database models
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports User a Sequelize User model
 */

// Import database connection
import database from "../configs/database.js";

// Import Schemas
import UserSchema from './user.js';

// Create User Model
const User = database.define(
    // Model Name
    'User',
    // Schema
    UserSchema,
    // Other options
    {
        tableName: 'users'
    }
)

export {
    User
}

It is also important to note that we can define the name of the table that stores instances of the model in the tableName option.

We will see why it is important to use this models.js file (instead of defining the model itself, rather than just the schema, in the user.js file) once we start adding relations between the models. For now, we’ll start with this simple scaffold that we can expand upon in the future.

Models vs. Migrations

One of the more interesting features of sequelize is that it can use just the models themselves to define the structure of the tables in the database. It has features such as Model Synchronization to keep the database structure updated to match the given models.

However, even in the documentation, sequelize recommends using migrations for more complex database structures. So, in our application, the migrations will represent the incremental steps required over time to construct our application’s database tables, whereas the models will represent the full structure of the database tables at this point in time. As we add new features to our application, this difference will become more apparent.
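For reference, model synchronization looks something like this short sketch; we won’t use it in this project, since we rely on migrations instead:

// Sketch only - ask sequelize to create or alter tables to match the
// models, bypassing migrations entirely
import database from "./configs/database.js";

// Creates any missing tables; { alter: true } also updates existing ones
await database.sync({ alter: true });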

Model Querying

Finally, we are at the point where we can actually use our database in our application! So, let’s update the route for the users endpoint to actually return a list of the users of our application in a JSON format:

/**
 * @file Users router
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports router an Express router
 *
 * @swagger
 * tags:
 *   name: users
 *   description: Users Routes
 */

// Import libraries
import express from "express";

// Create Express router
const router = express.Router();

// Import models
import { User } from '../models/models.js'

/**
 * Gets the list of users
 * 
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 * 
 * @swagger
 * /users:
 *   get: 
 *     summary: users list page
 *     description: Gets the list of all users in the application
 *     tags: [users]
 *     responses:
 *       200: 
 *         description: a resource         
 */
router.get("/", async function (req, res, next) {
  const users = await User.findAll();
  res.json(users)
});

export default router;

The only change we need to make is to import the User model we just created in the models/models.js file, and then use the User.findAll() query method inside of our first route method. A full list of all the querying functions in sequelize can be found in the Sequelize Documentation.
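For reference, here is a short sketch of a few other common sequelize query methods we could use inside similar async route functions (these are not part of our application yet):

// A few other common sequelize query methods (sketch only)
const userById = await User.findByPk(1);                            // primary key lookup
const admin = await User.findOne({ where: { username: "admin" } }); // single match
const userCount = await User.count();                               // number of rows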

Now, let’s start our application and see if it works! We should make sure we have migrated and seeded the database recently before starting. If everything works correctly, we should be able to navigate to the /users path and see the following JSON output on the page:

[
  {
    "id": 1,
    "username": "admin",
    "createdAt": "2025-02-04T15:36:32.000Z",
    "updatedAt": "2025-02-04T15:36:32.000Z"
  },
  {
    "id": 2,
    "username": "contributor",
    "createdAt": "2025-02-04T15:36:32.000Z",
    "updatedAt": "2025-02-04T15:36:32.000Z"
  },
  {
    "id": 3,
    "username": "manager",
    "createdAt": "2025-02-04T15:36:32.000Z",
    "updatedAt": "2025-02-04T15:36:32.000Z"
  },
  {
    "id": 4,
    "username": "user",
    "createdAt": "2025-02-04T15:36:32.000Z",
    "updatedAt": "2025-02-04T15:36:32.000Z"
  }
]

Awesome! We have now developed a basic web application that is able to query a database and present data to the user in a JSON format. This is the first big step toward actually building a RESTful API application.

This is a good point to commit and push our work!

Committing Database Files

One thing we might notice is that our database.sqlite file is in the list of files to be committed to our GitHub repository for this project. Whether or not you want to commit this file depends on what type of data you are storing in the database and how you are using it.

For this application, and the projects in this course, we’ll go ahead and commit our database to GitHub since that is the simplest way to share that information.
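If we ever decide not to commit the database file, we could instead exclude it with an entry in a .gitignore file, something like this sketch:

# Exclude the local SQLite database file from version control
database.sqlite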

Documenting Models

YouTube Video

Documenting Models with Open API

Before we move ahead, let’s quickly take a minute to add some documentation to our models using the Open API specification. The details can be found in the Open API Specification Document.

First, let’s update our configuration in the configs/openapi.js file to include the models directory:

// -=-=- other code omitted here -=-=-

// Configure SwaggerJSDoc options
const options = {
  definition: {
    openapi: "3.1.0",
    info: {
      title: "Example Project",
      version: "0.0.1",
      description: "Example Project",
    },
    servers: [
      {
        url: url(),
      },
    ],
  },
  apis: ["./routes/*.js", "./models/*.js"],
};

// -=-=- other code omitted here -=-=-

Next, at the top of our models/user.js file, we can add information in an @swagger tag about our newly created User model, usually placed right above the model definition itself:

// -=-=- other code omitted here -=-=-

/**
 * @swagger
 * components:
 *   schemas:
 *     User:
 *       type: object
 *       required:
 *         - username
 *       properties:
 *         id:
 *           type: integer
 *           description: autogenerated id
 *         username:
 *           type: string
 *           description: username for the user
 *         createdAt:
 *           type: string
 *           format: date-time
 *           description: when the user was created
 *         updatedAt:
 *           type: string
 *           format: date-time
 *           description: when the user was last updated
 *       example:
 *           id: 1
 *           username: admin
 *           createdAt: 2025-02-04T15:36:32.000Z
 *           updatedAt: 2025-02-04T15:36:32.000Z
 */
const UserSchema = {

// -=-=- other code omitted here -=-=-

Finally, we can now update our route in the routes/users.js file to show that it is outputting an array of User objects:

// -=-=- other code omitted here -=-=-

/**
 * Gets the list of users
 * 
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 * 
 * @swagger
 * /users:
 *   get: 
 *     summary: users list page
 *     description: Gets the list of all users in the application
 *     tags: [users]
 *     responses:
 *       200:
 *         description: the list of users
 *         content:
 *           application/json:
 *             schema:
 *               type: array
 *               items:
 *                 $ref: '#/components/schemas/User'       
 */
router.get("/", async function (req, res, next) {
  const users = await User.findAll();
  res.json(users)
});

// -=-=- other code omitted here -=-=-

With all of that in place, we can start our application with the Open API documentation enabled, then navigate to the /docs route to see our updated documentation. We should now see our User model listed as a schema at the bottom of the page:

User Schema User Schema

In addition, we can see that the /users route has also been updated to show that it returns an array of User objects, along with the relevant data:

User Route Documentation User Route Documentation

As we continue to add models and routes to our application, we should also make sure our Open API documentation is kept up to date with the latest information.

This is a good point to commit and push our work!

Automation

YouTube Video

Automating Database Deployment

One very helpful feature we can add to our application is the ability to automatically migrate and seed the database when the application first starts. This can be especially helpful when deploying this application in a container.

To do this, let’s add some additional code to our bin/www file that is executed when our project starts:

/**
 * @file Executable entrypoint for the web application
 * @author Russell Feldhausen <russfeld@ksu.edu>
 */

// Import libraries
import http from 'http';

// Import Express application
import app from '../app.js';

// Import configurations
import database from '../configs/database.js';
import logger from '../configs/logger.js';
import migrations from '../configs/migrations.js';
import seeds from '../configs/seeds.js';

// Get port from environment and store in Express.
var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

// Create HTTP server.
var server = http.createServer(app);

// Attach event handlers
server.on('error', onError);
server.on('listening', onListening);

// Call startup function
startup();

/**
 * Server startup function
 */
function startup() {
  try {
    // Test database connection
    database.authenticate().then(() => {
      logger.debug("Database connection successful")
      // Run migrations
      migrations.up().then(() => {
        logger.debug("Database migrations complete")
        if (process.env.SEED_DATA === 'true') {
          logger.warn("Database data seeding is enabled!")
          seeds.up().then(() => {
            logger.debug("Database seeding complete")
            server.listen(port)
          })
        } else {
          // Listen on provided port, on all network interfaces.
          server.listen(port)
        }
      })
    })
  } catch (error){
    logger.error(error)
  }
}

// -=-=- other code omitted here -=-=-

We now have a new startup function that will first test the database connection, then run the migrations, and finally it will seed the database if the SEED_DATA environment variable is set to true. Once all that is done, it will start the application by calling server.listen using the port.

Notice that this code uses the then() function to resolve promises instead of the async and await keywords. This is because startup() is declared as a regular function, and the await keyword can only be used inside of an async function.
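As a sketch of an alternative, we could declare startup() as an async function, which would let us use await and catch errors from the promises in the same try and catch block:

/**
 * Server startup function (async/await sketch - equivalent to the
 * then() version above)
 */
async function startup() {
  try {
    // Test database connection
    await database.authenticate();
    logger.debug("Database connection successful");
    // Run migrations
    await migrations.up();
    logger.debug("Database migrations complete");
    // Optionally seed the database
    if (process.env.SEED_DATA === "true") {
      logger.warn("Database data seeding is enabled!");
      await seeds.up();
      logger.debug("Database seeding complete");
    }
    // Listen on provided port, on all network interfaces.
    server.listen(port);
  } catch (error) {
    logger.error(error);
  }
}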

To enable this, let’s add the SEED_DATA environment variable to both .env and .env.example:

# .env
LOG_LEVEL=debug
PORT=3000
OPENAPI_HOST=https://$CODESPACE_NAME-$PORT.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN
OPENAPI_VISIBLE=true
DATABASE_FILE=database.sqlite
SEED_DATA=true

# .env.example
LOG_LEVEL=debug
PORT=3000
OPENAPI_HOST=http://localhost:3000
# For GitHub Codespaces
# OPENAPI_HOST=https://$CODESPACE_NAME-$PORT.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN
OPENAPI_VISIBLE=false
DATABASE_FILE=database.sqlite
SEED_DATA=false

To test this, we can delete the database.sqlite file in our repository, then start our project:

$ npm run dev

If it works correctly, we should see that our application is able to connect to the database, migrate the schema, and add the seed data, before fully starting:

> example-project@0.0.1 dev
> nodemon ./bin/www

[nodemon] 3.1.9
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node ./bin/www`
[dotenvx@1.34.0] injecting env (6) from .env
[2025-02-04 06:56:11.823 PM] warn:      OpenAPI documentation visible!
[2025-02-04 06:56:12.163 PM] debug:     Database connection successful
[2025-02-04 06:56:12.208 PM] info:      { event: 'migrating', name: '00_users.js' }
[2025-02-04 06:56:12.265 PM] info:      { event: 'migrated', name: '00_users.js', durationSeconds: 0.058 }
[2025-02-04 06:56:12.266 PM] debug:     Database migrations complete
[2025-02-04 06:56:12.266 PM] warn:      Database data seeding is enabled!
[2025-02-04 06:56:12.296 PM] info:      { event: 'migrating', name: '00_users.js' }
[2025-02-04 06:56:12.321 PM] info:      { event: 'migrated', name: '00_users.js', durationSeconds: 0.024 }
[2025-02-04 06:56:12.321 PM] debug:     Database seeding complete
[2025-02-04 06:56:12.323 PM] info:      Listening on port 3000

There we go! Our application will now always make sure the database is properly migrated, and optionally seeded, before it starts. Now, when another developer or user starts our application, it will be sure to have a working database.

This is a good point to commit and push our work!

Another Table

YouTube Video

Adding Another Table

Now that we have a working database, let’s explore what it takes to add a new table to our application to represent additional models and data in our database.

We’ve already created a users table, which contains information about the users of our application. Now we want to add a roles table to contain all of the possible roles that our users can hold. In addition, we need some way to associate a user with a number of roles. Each user can have multiple roles, and each role can be assigned to multiple users. This is known as a many-to-many database relation, and requires an additional junction table to implement it properly. The end goal is to create the database schema represented in this diagram:

User Roles Database Diagram User Roles Database Diagram

To do this, we’ll go through three steps:

  1. Create a migration to modify the database schema
  2. Create a model for each table
  3. Add additional seed data for these tables

Migration

First, we need to create a new migration to modify the database schema to include the two new tables. So, we’ll create a file named 01_roles.js in the migrations folder and add content to it to represent the two new tables we need to create:

/**
 * @file Roles table migration
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports up the Up migration
 * @exports down the Down migration
 */

// Import Libraries
import {Sequelize} from 'sequelize';

/**
 * Apply the migration
 * 
 * @param {queryInterface} context the database context to use 
 */
export async function up({context: queryInterface}) {
    await queryInterface.createTable('roles', {
        id: {
            type: Sequelize.INTEGER,
            primaryKey: true,
            autoIncrement: true,
        },
        role: {
            type: Sequelize.STRING,
            allowNull: false,
        },
        createdAt: {
            type: Sequelize.DATE,
            allowNull: false,
        },
        updatedAt: {
            type: Sequelize.DATE,
            allowNull: false,
        },
    })

    await queryInterface.createTable('user_roles', {
        user_id: {
            type: Sequelize.INTEGER,
            primaryKey: true,
            references: { model: 'users', key: 'id' },
            onDelete: "cascade"
        },
        role_id: {
            type: Sequelize.INTEGER,
            primaryKey: true,
            references: { model: 'roles', key: 'id' },
            onDelete: "cascade"
        }
    })
}

/**
 * Roll back the migration
 * 
 * @param {queryInterface} context the database context to use 
 */
export async function down({context: queryInterface}) {
    await queryInterface.dropTable('user_roles');
    await queryInterface.dropTable('roles');
}

In this migration, we are creating two tables. The first, named roles, stores the list of roles in the application. The second, named user_roles, is the junction table used for the many-to-many relationship between the users and roles table. Notice that we have to add the tables in the correct order, and also in the down method we have to remove them in reverse order. Finally, it is important to include the onDelete: "cascade" option for each of our reference fields in the user_roles table, as that will handle deleting associated entries in the junction table when a user or role is deleted.

The user_roles table also includes a great example of adding a foreign key reference between two tables. More information can be found in the Sequelize Documentation.

Models

Next, we need to create two models to represent these tables. The first is the role model schema, stored in models/role.js with the following content:

/**
 * @file Role model
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports RoleSchema the schema for the Role model
 */

// Import libraries
import Sequelize from 'sequelize';

/**
 * @swagger
 * components:
 *   schemas:
 *     Role:
 *       type: object
 *       required:
 *         - role
 *       properties:
 *         id:
 *           type: integer
 *           description: autogenerated id
 *         role:
 *           type: string
 *           description: name of the role
 *         createdAt:
 *           type: string
 *           format: date-time
 *           description: when the user was created
 *         updatedAt:
 *           type: string
 *           format: date-time
 *           description: when the user was last updated
 *       example:
 *           id: 1
 *           role: manage_users
 *           createdAt: 2025-02-04T15:36:32.000Z
 *           updatedAt: 2025-02-04T15:36:32.000Z
 */
const RoleSchema = {
    id: {
        type: Sequelize.INTEGER,
        primaryKey: true,
        autoIncrement: true,
    },
    role: {
        type: Sequelize.STRING,
        allowNull: false,
    },
    createdAt: {
        type: Sequelize.DATE,
        allowNull: false,
    },
    updatedAt: {
        type: Sequelize.DATE,
        allowNull: false,
    },
}

export default RoleSchema

Notice that this file is very similar to the models/user.js file created earlier, with a few careful changes made to match the table schema.

We also need to create a model schema for the user_roles table, which we will store in the models/user_role.js file with the following content:

/**
 * @file User role junction model
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports UserRoleSchema the schema for the UserRole model
 */

// Import libraries
import Sequelize from 'sequelize';

const UserRoleSchema = {
    userId: {
        type: Sequelize.INTEGER,
        primaryKey: true,
        references: { model: 'User', key: 'id' },
        onDelete: "cascade"
    },
    roleId: {
        type: Sequelize.INTEGER,
        primaryKey: true,
        references: { model: 'Role', key: 'id' },
        onDelete: "cascade"
    }
}

export default UserRoleSchema

Finally, we can now update our models/models.js file to create the Role and UserRole models, and also to define the associations between them and the User model.

/**
 * @file Database models
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports User a Sequelize User model
 * @exports Role a Sequelize Role model
 * @exports UserRole a Sequelize UserRole model
 */

// Import database connection
import database from "../configs/database.js";

// Import Schemas
import UserSchema from './user.js';
import RoleSchema from "./role.js";
import UserRoleSchema from "./user_role.js";

// Create User Model
const User = database.define(
    // Model Name
    'User',
    // Schema
    UserSchema,
    // Other options
    {
        tableName: 'users'
    }
)

// Create Role Model
const Role = database.define(
    // Model Name
    'Role',
    // Schema
    RoleSchema,
    // Other options
    {
        tableName: 'roles'
    }
)

// Create UserRole Model
const UserRole = database.define(
    // Model Name
    'UserRole',
    // Schema
    UserRoleSchema,
    // Other options
    {
        tableName: 'user_roles',
        timestamps: false,
        underscored: true
    }
)

// Define Associations
Role.belongsToMany(User, { through: UserRole, unique: false, as: "users" })
User.belongsToMany(Role, { through: UserRole, unique: false, as: "roles" })

export {
    User,
    Role,
    UserRole,
}

Notice that this file contains two lines at the bottom to define the associations included as part of this table, so that sequelize will know how to handle it. This will instruct sequelize to add additional attributes and features to the User and Role models for querying the related data, as we’ll see shortly.

We also added the line timestamps: false to the other options for the UserRole model to disable the creation and management of timestamps (the createdAt and updatedAt attributes), since they may not be needed for this relation.

Finally, we added the underscored: true line to tell sequelize that it should interpret the userId and roleId attributes (written in camel case as preferred by Sequelize) as user_id and role_id, respectively (written in snake case as we did in the migration).

Camel Case vs. Snake Case

The choice of either camelCase or snake_case naming for database attributes is a matter of preference. In this example, we show both methods, and it is up to each developer to select their own preferred style.

Seed

Finally, let’s create a new seed file in seeds/01_roles.js to add some default data to the roles and user_roles tables:

/**
 * @file Roles seed
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports up the Up seed
 * @exports down the Down seed
 */

// Timestamp in the appropriate format for the database
const now = new Date().toISOString().slice(0, 23).replace("T", " ") + " +00:00";

// Array of objects to add to the database
const roles = [
    {
        id: 1,
        role: 'manage_users',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 2,
        role: 'manage_documents',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 3,
        role: 'add_documents',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 4,
        role: 'manage_communities',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 5,
        role: 'add_communities',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 6,
        role: 'view_documents',
        createdAt: now,
        updatedAt: now
    },
    {
        id: 7,
        role: 'view_communities',
        createdAt: now,
        updatedAt: now
    }
];

const user_roles = [
    {
        user_id: 1,
        role_id: 1
    },
    {
        user_id: 1,
        role_id: 2
    },
    {
        user_id: 1,
        role_id: 4
    },
    {
        user_id: 2,
        role_id: 3
    },
    {
        user_id: 2,
        role_id: 5
    },
    {
        user_id: 3,
        role_id: 2
    },
    {
        user_id: 3,
        role_id: 4
    },
    {
        user_id: 4,
        role_id: 6
    },
    {
        user_id: 4,
        role_id: 7
    }
]

/**
 * Apply the seed
 * 
 * @param {queryInterface} context the database context to use 
 */
export async function up({context: queryInterface}) {
    await queryInterface.bulkInsert('roles', roles);
    await queryInterface.bulkInsert('user_roles', user_roles);
}

/**
 * Roll back the seed
 * 
 * @param {queryInterface} context the database context to use 
 */
export async function down({context: queryInterface}) {
    await queryInterface.bulkDelete('user_roles', {}, { truncate: true });
    await queryInterface.bulkDelete('roles', {}, { truncate: true });
}

Once again, this seed is very similar to what we’ve seen before. Notice that we use the truncate option to remove all entries from both the user_roles and roles tables when we undo this seed.

Seeding from a CSV File

It is also possible to seed the database from a CSV or other data file using a bit of JavaScript code. Here’s an example for seeding a table that contains all of the counties in Kansas, using a CSV file containing that data that is read with the convert-csv-to-json library:

// Import libraries
const csvToJson = import("convert-csv-to-json");

// Timestamp in the appropriate format for the database
const now = new Date().toISOString().slice(0, 23).replace("T", " ") + " +00:00";

export async function up({ context: queryInterface }) {
  // Read data from CSV file
  // id,name,code,seat,population,est_year
  // 1,Allen,AL,Iola,"12,464",1855
  let counties = (await csvToJson)
    .formatValueByType()
    .supportQuotedField(true)
    .fieldDelimiter(",")
    .getJsonFromCsv("./seeds/counties.csv");

  // append timestamps and parse fields
  counties.map((c) => {
    // handle parsing numbers with comma separators
    c.population = parseInt(c.population.replace(/,/g, ""));
    c.createdAt = now;
    c.updatedAt = now;
    return c;
  });
  
  // insert into database
  await queryInterface.bulkInsert("counties", counties);
}

export async function down({ context: queryInterface }) {
  await queryInterface.bulkDelete("counties", {}, { truncate: true });
}

Update User Model

Finally, let’s update the User model schema to include related roles. At this point, we just have to update the Open API documentation to match:

// -=-=- other code omitted here -=-=-
/**
 * @swagger
 * components:
 *   schemas:
 *     User:
 *       type: object
 *       required:
 *         - username
 *       properties:
 *         id:
 *           type: integer
 *           description: autogenerated id
 *         username:
 *           type: string
 *           description: username for the user
 *         createdAt:
 *           type: string
 *           format: date-time
 *           description: when the user was created
 *         updatedAt:
 *           type: string
 *           format: date-time
 *           description: when the user was last updated
 *         roles:
 *           type: array
 *           items:
 *             $ref: '#/components/schemas/Role'
 *       example:
 *           id: 1
 *           username: admin
 *           createdAt: 2025-02-04T15:36:32.000Z
 *           updatedAt: 2025-02-04T15:36:32.000Z
 *           roles:
 *             - id: 1
 *               role: manage_users
 *             - id: 2
 *               role: manage_documents
 *             - id: 4
 *               role: manage_communities
 */
// -=-=- other code omitted here -=-=-

Now we can modify our route in routes/users.js to include the data from the related Role model in our query:

/**
 * @file Users router
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports router an Express router
 *
 * @swagger
 * tags:
 *   name: users
 *   description: Users Routes
 */

// Import libraries
import express from "express";

// Create Express router
const router = express.Router();

// Import models
import { User, Role } from '../models/models.js'

/**
 * Gets the list of users
 * 
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 * 
 * @swagger
 * /users:
 *   get: 
 *     summary: users list page
 *     description: Gets the list of all users in the application
 *     tags: [users]
 *     responses:
 *       200:
 *         description: the list of users
 *         content:
 *           application/json:
 *             schema:
 *               type: array
 *               items:
 *                 $ref: '#/components/schemas/User'          
 */
router.get("/", async function (req, res, next) {
  const users = await User.findAll({
    include: {
      model: Role,
      as: "roles",
      attributes: ['id', 'role'],
      through: {
        attributes: [],
      },
    },
  });
  res.json(users)
});

export default router;

You can learn more about querying associations in the Sequelize Documentation.
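Because of the belongsToMany associations defined in models.js, sequelize also adds helper methods to each model instance based on the as aliases. A short sketch of a few of them, inside an async function (we don’t use these in our routes, but they are handy to know):

// Sketch of association helpers added by belongsToMany (method names
// come from the "as" aliases defined in models.js)
const user = await User.findByPk(1);
const roles = await user.getRoles();   // lazy-load this user's roles
await user.addRole(3);                 // attach the role with id 3
await user.removeRole(3);              // detach it again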

If everything works, we should see our roles now included in our JSON output when we navigate to the /users route:

[
  {
    "id": 1,
    "username": "admin",
    "createdAt": "2025-01-28T23:06:01.000Z",
    "updatedAt": "2025-01-28T23:06:01.000Z",
    "roles": [
      {
        "id": 1,
        "role": "manage_users"
      },
      {
        "id": 2,
        "role": "manage_documents"
      },
      {
        "id": 4,
        "role": "manage_communities"
      }
    ]
  },
  {
    "id": 2,
    "username": "contributor",
    "createdAt": "2025-01-28T23:06:01.000Z",
    "updatedAt": "2025-01-28T23:06:01.000Z",
    "roles": [
      {
        "id": 3,
        "role": "add_documents"
      },
      {
        "id": 5,
        "role": "add_communities"
      }
    ]
  },
  {
    "id": 3,
    "username": "manager",
    "createdAt": "2025-01-28T23:06:01.000Z",
    "updatedAt": "2025-01-28T23:06:01.000Z",
    "roles": [
      {
        "id": 2,
        "role": "manage_documents"
      },
      {
        "id": 4,
        "role": "manage_communities"
      }
    ]
  },
  {
    "id": 4,
    "username": "user",
    "createdAt": "2025-01-28T23:06:01.000Z",
    "updatedAt": "2025-01-28T23:06:01.000Z",
    "roles": [
      {
        "id": 6,
        "role": "view_documents"
      },
      {
        "id": 7,
        "role": "view_communities"
      }
    ]
  }
]

That should also exactly match the schema and route information in our Open API documentation provided at the /docs route.

There we go! That’s a quick example of adding an additional table to our application, including a relationship and more.

As a last step before finalizing our code, we should run the lint and format commands and deal with any errors they find. Finally, we can commit and push our work.

RESTful API

This example project builds on the previous Adding a Database project, extending it to create a RESTful API. That API can be used to access and modify the data in the database. We’ll also add a suite of unit tests to explore our API and ensure that it is working correctly.

Project Deliverables

At the end of this example, we will have a project with the following features:

  1. A RESTful API with several routes for creating, reading, updating, and deleting (CRUD) data in the database
  2. Open API Documentation for API Routes
  3. Full Unit Test Suite with Coverage Metrics

Prior Work

This project picks up right where the last one left off, so if you haven’t completed that one yet, go back and do that before starting this one.

Let’s get started!

Subsections of RESTful API

API Design

YouTube Video

Good API Design

There are many articles online that discuss best practices in API design. For this project, we’re going to follow a few of the most common recommendations, beginning with API versioning.

Versioning is straightforward - we can simply add a version number to our API’s URL paths. This allows us to make breaking changes to the API in the future without breaking any of the current functionality.

API Versioning

Our current application contains data for both a User and a Role model. For this example, we’ll begin by adding a set of RESTful API routes to work with the Role model. In order to add proper versioning to our API, we will want these routes visible at the /api/v1/roles path.

First, we should create the folder structure inside of our routes folder to match the routes used in our API. This means we’ll create an api folder, then a v1 folder, and finally a roles.js file inside of that folder:

API Folder Paths API Folder Paths

Before we create the content in that file, let’s also create a new file in the base routes folder named api.js that will become the base file for all of our API routes:

/**
 * @file API main router
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports router an Express router
 *
 * @swagger
 * tags:
 *   name: api
 *   description: API routes
 */

// Import libraries
import express from "express";

// Import v1 routers
import rolesRouter from "./api/v1/roles.js"

// Create Express router
const router = express.Router();

// Use v1 routers
router.use("/v1/roles", rolesRouter);

/**
 * Gets the list of API versions
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /api/:
 *   get:
 *     summary: list API versions
 *     tags: [api]
 *     responses:
 *       200:
 *         description: the list of users
 *         content:
 *           application/json:
 *             schema:
 *               type: array
 *               items:
 *                 type: object
 *                 properties:
 *                   version:
 *                     type: string
 *                   url: 
 *                     type: string
 *             example:
 *               - version: "1.0"
 *                 url: /api/v1/
 */
router.get('/', function (req, res, next) {
  res.json([
    {
      version: "1.0",
      url: "/api/v1/"
    }
  ])
})

export default router

This file is very simple - it outputs all available API versions (in this case, we only have one). It also imports and uses our new roles router. Finally, it includes some basic Open API documentation for the route it contains. Let’s quickly add some basic content to our roles router, based on the existing content in our users router from before:

/**
 * @file Roles router
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports router an Express router
 *
 * @swagger
 * tags:
 *   name: roles
 *   description: Roles Routes
 */

// Import libraries
import express from "express";

// Create Express router
const router = express.Router();

// Import models
import { Role } from "../../../models/models.js";

// Import logger
import logger from "../../../configs/logger.js"

/**
 * Gets the list of roles
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /api/v1/roles:
 *   get:
 *     summary: roles list page
 *     description: Gets the list of all roles in the application
 *     tags: [roles]
 *     responses:
 *       200:
 *         description: the list of roles
 *         content:
 *           application/json:
 *             schema:
 *               type: array
 *               items:
 *                 $ref: '#/components/schemas/Role'
 */
router.get("/", async function (req, res, next) {
  try {
    const roles = await Role.findAll();
    res.json(roles);
  } catch (error) {
    logger.error(error)
    res.status(500).end()
  }
});

export default router;

Notice that we have added an additional try and catch block to the route function. This will ensure any errors that are thrown by the database get caught and logged without leaking any sensitive data from our API. It is always a good practice to wrap each API method in a try and catch block.

Get All Route Only

For this particular application’s API design, we will only be creating the get all RESTful method for the Role model. This is because we don’t actually want any users of the application modifying the roles themselves, since those roles will eventually be used in the overall authorization structure of the application (to be added in a later example). However, when creating or updating users, we need to be able to access a full list of all available roles, which can be found using this particular API endpoint.

We’ll explore the rest of the RESTful API methods in the User model later in this example.

Controllers and Services

More complex RESTful API designs may include additional files such as controllers and services to add additional structure to the application. For example, there might be multiple API routes that access the same method in a controller, which then uses a service to perform business logic on the data before storing it in the database.

For this example project, we will place most of the functionality directly in our routes to simplify our structure.

You can read more about how to use controllers and services in the MDN Express Tutorial.
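As a rough sketch, that layered structure might look like the following, with hypothetical controllers and services folders that our project does not actually use. First, a hypothetical controllers/roles.js:

// controllers/roles.js (hypothetical sketch)
import { listRoles } from "../services/roles.js";

export async function getAllRoles(req, res) {
  // The controller handles HTTP concerns and delegates to the service
  res.json(await listRoles());
}

And a matching hypothetical services/roles.js:

// services/roles.js (hypothetical sketch)
import { Role } from "../models/models.js";

export async function listRoles() {
  // The service owns business logic and data access
  return Role.findAll();
}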

Since we are creating routes in a new subfolder, we also need to update our Open API configuration in configs/openapi.js so that we can see the documentation contained in those routes:

// -=-=- other code omitted here -=-=-

// Configure SwaggerJSDoc options
const options = {
  definition: {
    openapi: "3.1.0",
    info: {
      title: "Lost Communities",
      version: "0.0.1",
      description: "Kansas Lost Communities Project",
    },
    servers: [
      {
        url: url(),
      },
    ],
  },
  apis: ["./routes/*.js", "./models/*.js", "./routes/api/v1/*.js"],
};

export default swaggerJSDoc(options);

Now that we’ve created these two basic routers, let’s get them added to our app.js file so they are accessible to the application:

// -=-=- other code omitted here -=-=-

// Import routers
import indexRouter from "./routes/index.js";
import usersRouter from "./routes/users.js";
import apiRouter from "./routes/api.js";

// Create Express application
var app = express();

// Use libraries
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(helmet());
app.use(compression());
app.use(cookieParser());

// Use middlewares
app.use(requestLogger);

// Use static files
app.use(express.static(path.join(import.meta.dirname, "public")));

// Use routers
app.use("/", indexRouter);
app.use("/users", usersRouter);
app.use("/api", apiRouter);

// -=-=- other code omitted here -=-=-

Now, with everything in place, let’s run our application and see if we can access that new route at /api/v1/roles:

$ npm run dev

If everything is working correctly, we should see our roles listed in the output on that page:

List of Roles List of Roles

We should also be able to query the list of API versions at the path /api:

List of API Versions List of API Versions

Finally, we should also check and make sure our Open API documentation at the /docs path is up to date and includes the new routes:

API Documentation API Documentation

Roles Documentation Roles Documentation

There! This gives us a platform to build our new API upon. We’ll continue throughout this example project to add additional routes to the API as well as related unit tests.

Unit Testing

YouTube Video

Testing Web APIs

Now that we have created our first route in our RESTful API, we can start to write unit tests that will confirm our API works as intended. Adding unit testing early in the development process makes it much easier to keep up with unit tests as new features are added or even explore test-driven development!

There are many libraries that can be used to unit test a RESTful API using Node.js and Express. For this project, we’re going to use a number of testing libraries: the mocha test framework, the chai assertion library, the supertest HTTP testing library, the ajv JSON schema validator, and the chai-json-schema-ajv and chai-shallow-deep-equal plugins for chai.

To begin, let’s install these libraries as development dependencies in our project using npm:

$ npm install --save-dev mocha chai supertest ajv chai-json-schema-ajv chai-shallow-deep-equal

Now that we have those libraries in place, let’s make a few modifications to our project configuration to make testing more convenient.

ESLint Plugin

To help with formatting and highlighting of our unit tests, we should update the content of our eslint.config.js to recognize items from mocha as follows:

import globals from "globals";
import pluginJs from "@eslint/js";

/** @type {import('eslint').Linter.Config[]} */
export default [
  {
    languageOptions: {
      globals: {
        ...globals.node,
        ...globals.mocha,
      },
    },
    rules: { "no-unused-vars": ["error", { argsIgnorePattern: "next" }] },
  },
  pluginJs.configs.recommended,
];

If working properly, this should also fix any errors visible in VS Code using the ESLint plugin!

Mocha Root Hooks

In testing frameworks such as mocha, we can create hooks that contain actions that should be taken before each test is executed in a file. The mocha framework also has root-level hooks that are actions to be taken before each and every test in every file. We can use a root-level hook to manage setting up a simple database for unit testing, as well as configuring other aspects of our application for testing.

First, let’s create a new test directory in our server folder, and inside of that we’ll create a file hooks.js to contain the testing hooks for our application.

/**
 * @file Root Mocha Hooks
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports mochaHooks A Mocha Root Hooks Object
 */

// Load environment (must be first)
import dotenvx from "@dotenvx/dotenvx";
dotenvx.config({path: ".env.test"})

// Import configuration
import database from "../configs/database.js";
import migrations from '../configs/migrations.js';
import seeds from '../configs/seeds.js';

// Root Hook Runs Before Each Test
export const mochaHooks = {

  // Hook runs once before any tests are executed
  beforeAll(done) {
    // Test database connection
    database.authenticate().then(() => {
      // Run migrations
      migrations.up().then(() => {
        done() 
      });
    });
  },
  
  // Hook runs before each individual test
  beforeEach(done) {
    // Seed the database
    seeds.up().then(() => {
      done();
    })
  },

  // Hook runs after each individual test
  afterEach(done) {
    // Remove all data from the database
    seeds.down({to: 0}).then(() => {
      done();
    });
  }
}

This file contains three hooks. First, the beforeAll() hook, which runs once before any tests are executed, is used to migrate the database. Next, the beforeEach() hook, which runs before each individual test, seeds the database with some sample data for us to use in our unit tests. Finally, the afterEach() hook removes all data from the database by undoing all of the seeds, which truncates each table in the database.

Notice at the top that we are also loading our environment from a new environment file, .env.test. This allows us to use a different environment configuration when we perform testing. So, let’s create that file and populate it with the following content:

LOG_LEVEL=error
PORT=3000
OPENAPI_HOST=http://localhost:3000
OPENAPI_VISIBLE=false
DATABASE_FILE=:memory:
SEED_DATA=false

Here, the two major changes are to switch the log level to error so that we only see errors in the log output, and also to switch the database file to :memory: - a special filename that tells SQLite to create an in-memory database that is excellent for testing.
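While we haven’t repeated the contents of configs/database.js here, a sqlite connection using this environment variable might look something like the following sketch (our actual configuration file may differ):

// Sketch of a sqlite connection driven by the DATABASE_FILE variable
import { Sequelize } from "sequelize";

const database = new Sequelize({
  dialect: "sqlite",
  // ":memory:" creates a fresh in-memory database for each connection
  storage: process.env.DATABASE_FILE,
  logging: false,
});

export default database;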

At this point, we can start writing our unit tests.

Writing Basic Unit Tests

Let’s start with a very simple case - the /api route we created earlier. This is a simple route that only has a single method and outputs a single item, but it already clearly demonstrates how complex unit testing can become.

For these unit tests, we can create a file api.js in the test folder with the following content:

/**
 * @file /api Route Tests
 * @author Russell Feldhausen <russfeld@ksu.edu>
 */

// Load Libraries
import request from 'supertest'
import { use, should } from 'chai'
import chaiJsonSchemaAjv from 'chai-json-schema-ajv'
import chaiShallowDeepEqual from 'chai-shallow-deep-equal'
use(chaiJsonSchemaAjv.create({ verbose: true }))
use(chaiShallowDeepEqual)

// Import Express application
import app from '../app.js';

// Modify Object.prototype for BDD style assertions
should()

These lines will import the various libraries required for these unit tests. We’ll explore how they work as we build the unit tests, but it is also recommended to read the documentation for each library (linked above) to better understand how each one works together in the various unit tests.

Now, let’s write our first unit test, which can be placed right below those lines in the same file:

// -=-=- other code omitted here -=-=-

/**
 * Get all API versions
 */
const getAllVersions = () => {
  it('should list all API versions', (done) => {
    request(app)
      .get('/api/')
      .expect(200)
      .end((err, res) => {
        if (err) return done(err)
        res.body.should.be.an('array')
        res.body.should.have.lengthOf(1)
        done()
      })
  })
}

/**
 * Test /api route
 */
describe('/api', () => {
  describe('GET /', () => {
    getAllVersions()
  })
})

This code looks quite a bit different than the code we’ve been writing so far. This is because the mocha and chai libraries use the Behavior-Driven Development, or BDD, style for writing unit tests. The core idea is that the unit tests should be somewhat “readable” by anyone looking at the code. So, it defines functions such as it and describe that are used to structure the unit tests.

In this example, the getAllVersions function is a unit test function that uses the request library to send a request to our Express app at the /api/ path. When the response is received from that request, we expect the HTTP status code to be 200, and the body of that request should be an array with a length of 1. Hopefully it is clear to see all of that just by reading the code in that function.

The other important concept is the special done function, which is provided as an argument to any unit test function that is testing asynchronous code. Because of the way asynchronous code is handled, mocha cannot automatically determine when all promises have been resolved. So, once we are done with the unit test and are not waiting for any further async responses, we need to call the done() method. Notice that we call that both at the end of the function, but also in the if statement that checks for any errors returned from the HTTP request.
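As an aside, mocha will also accept an async test function that returns a promise, in which case no done callback is needed at all. Here is a sketch of the same test written in that style, assuming the same imports as above:

// Sketch: the same test using async/await instead of the done callback
const getAllVersionsAsync = () => {
  it('should list all API versions', async () => {
    // supertest requests can be awaited directly
    const res = await request(app).get('/api/').expect(200)
    res.body.should.be.an('array')
    res.body.should.have.lengthOf(1)
  })
}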

Finally, at the bottom of the file, we have a few describe statements that actually build the structure that runs each unit test. When the tests are executed, only functions called inside of the describe statements will be executed.

Running Unit Tests

Now that we have created a simple unit test, let’s run it using the mocha test framework. To do this, we’ll add a new script to the package.json file with all of the appropriate options:

{
  ...
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "nodemon ./bin/www",
    "lint": "npx eslint --fix .",
    "format": "npx prettier . --write",
    "test": "mocha --require test/hooks.js --recursive --parallel --timeout 2000 --exit"
  },
  ...
}

Here, we are using the mocha command with many options:

  • --require test/hooks.js - this requires the global hooks file to be used before each test
  • --recursive - this will recursively look for any tests in subdirectories
  • --parallel - this allows tests to run in parallel (this requires the SQLite in-memory database)
  • --timeout 2000 - this will stop any test if it runs for more than 2 seconds
  • --exit - this forces Mocha to stop after all tests have finished

So, now let’s run our tests using that script:

$ npm run test

If everything is working correctly, we should get the following output:

> lost-communities-solution@0.0.1 test
> mocha --require test/hooks.js --recursive --parallel --timeout 2000 --exit

[dotenvx@1.34.0] injecting env (6) from .env.test

[dotenvx@1.34.0] injecting env (0) from .env.test
[dotenvx@1.34.0] injecting env (0) from .env

  /api
    GET /
      ✔ should list all API versions


  1 passing (880ms)

Great! It looks like our test already passed!

Just to be sure, let’s quickly modify our test to look for an array of size 2 so that it should fail:

// -=-=- other code omitted here -=-=-

/**
 * Get all API versions
 */
const getAllVersions = () => {
  it('should list all API versions', (done) => {
    request(app)
      .get('/api/')
      .expect(200)
      .end((err, res) => {
        if (err) return done(err)
        res.body.should.be.an('array')
        res.body.should.have.lengthOf(2)
        done()
      })
  })
}

// -=-=- other code omitted here -=-=-

Now, when we run the tests, we should clearly see a failure report instead:

> lost-communities-solution@0.0.1 test
> mocha --require test/hooks.js --recursive --parallel --timeout 2000 --exit

[dotenvx@1.34.0] injecting env (6) from .env.test

[dotenvx@1.34.0] injecting env (0) from .env.test
[dotenvx@1.34.0] injecting env (0) from .env

  /api
    GET /
      1) should list all API versions


  0 passing (910ms)
  1 failing

  1) /api
       GET /
         should list all API versions:

      Uncaught AssertionError: expected [ { version: '1.0', url: '/api/v1/' } ] to have a length of 2 but got 1
      + expected - actual

      -1
      +2
      
      at Test.<anonymous> (file:///workspaces/lost-communities-solution/server/test/api.js:31:30)
      at Test.assert (node_modules/supertest/lib/test.js:172:8)
      at Server.localAssert (node_modules/supertest/lib/test.js:120:14)
      at Object.onceWrapper (node:events:638:28)
      at Server.emit (node:events:524:28)
      at emitCloseNT (node:net:2383:8)
      at process.processTicksAndRejections (node:internal/process/task_queues:89:21)

Thankfully, anytime a test fails, we get a very clear and easy to follow error report that pinpoints exactly which line in the test failed, and how the assertion was not met.

Before moving on, let’s update our test so that it should pass again.

Code Coverage

YouTube Video

Code Coverage

It is often helpful to examine the code coverage of our unit tests. Thankfully, there is an easy way to enable that in our project using the c8 library. So, we can start by installing it:

$ npm install --save-dev c8

Once it is installed, we can simply add it to a new script in the package.json file that will run our tests with code coverage:

{
  ...
  "scripts": {
    "start": "LOG_LEVEL=http node ./bin/www",
    "dev": "nodemon ./bin/www",
    "lint": "npx eslint --fix .",
    "format": "npx prettier . --write",
    "test": "mocha --require test/hooks.js --recursive --parallel --timeout 2000 --exit",
    "cov": "c8 --reporter=html --reporter=text mocha --require test/hooks.js --recursive --parallel --timeout 2000 --exit"
  },
  ...
}

All we have to do is add the c8 command with a few options in front of our existing mocha command.

Now, we can run our tests with code coverage using this script:

$ npm run cov

This time, we’ll see a bunch of additional output on the terminal:

> lost-communities-solution@0.0.1 cov
> c8 --reporter=html --reporter=text mocha --require test/hooks.js --recursive --parallel --timeout 2000 --exit

[dotenvx@1.34.0] injecting env (6) from .env.test

[dotenvx@1.34.0] injecting env (0) from .env.test
[dotenvx@1.34.0] injecting env (0) from .env

  /api
    GET /
      ✔ should list all API versions


  1 passing (1s)

------------------------|---------|----------|---------|---------|-------------------------------------------
File                    | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s                         
------------------------|---------|----------|---------|---------|-------------------------------------------
All files               |   93.53 |    83.33 |   55.55 |   93.53 |                                           
 server                 |   88.52 |       50 |     100 |   88.52 |                                           
  app.js                |   88.52 |       50 |     100 |   88.52 | 53-59                                     
 server/configs         |   91.86 |    47.36 |     100 |   91.86 |                                           
  database.js           |     100 |      100 |     100 |     100 |                                           
  logger.js             |   85.56 |    30.76 |     100 |   85.56 | 24-25,27-28,30-31,33-34,36-37,39-40,42-43 
  migrations.js         |     100 |      100 |     100 |     100 |                                           
  openapi.js            |   92.85 |    66.66 |     100 |   92.85 | 19-21                                     
  seeds.js              |     100 |      100 |     100 |     100 |                                           
 server/middlewares     |     100 |      100 |     100 |     100 |                                           
  request-logger.js     |     100 |      100 |     100 |     100 |                                           
 server/migrations      |   96.07 |      100 |      50 |   96.07 |                                           
  00_users.js           |   95.55 |      100 |      50 |   95.55 | 44-45                                     
  01_roles.js           |   94.91 |      100 |      50 |   94.91 | 57-59                                     
  02_counties.js        |   96.61 |      100 |      50 |   96.61 | 58-59                                     
  03_communities.js     |   96.61 |      100 |      50 |   96.61 | 58-59                                     
  04_metadata.js        |   96.66 |      100 |      50 |   96.66 | 88-90                                     
  05_documents.js       |   95.71 |      100 |      50 |   95.71 | 68-70                                     
 server/models          |     100 |      100 |     100 |     100 |                                           
  community.js          |     100 |      100 |     100 |     100 |                                           
  county.js             |     100 |      100 |     100 |     100 |                                           
  document.js           |     100 |      100 |     100 |     100 |                                           
  metadata.js           |     100 |      100 |     100 |     100 |                                           
  metadata_community.js |     100 |      100 |     100 |     100 |                                           
  metadata_document.js  |     100 |      100 |     100 |     100 |                                           
  models.js             |     100 |      100 |     100 |     100 |                                           
  role.js               |     100 |      100 |     100 |     100 |                                           
  user.js               |     100 |      100 |     100 |     100 |                                           
  user_role.js          |     100 |      100 |     100 |     100 |                                           
 server/routes          |   68.72 |      100 |     100 |   68.72 |                                           
  api.js                |     100 |      100 |     100 |     100 |                                           
  index.js              |   97.43 |      100 |     100 |   97.43 | 36                                        
  users.js              |    46.8 |      100 |     100 |    46.8 | 52-62,66-73,77-91,95-105,109-138          
 server/routes/api/v1   |   87.71 |      100 |     100 |   87.71 |                                           
  roles.js              |   87.71 |      100 |     100 |   87.71 | 48-54                                     
 server/seeds           |   95.09 |      100 |      50 |   95.09 |                                           
  00_users.js           |   96.36 |      100 |      50 |   96.36 | 54-55                                     
  01_roles.js           |   97.36 |      100 |      50 |   97.36 | 112-114                                   
  02_counties.js        |   95.83 |      100 |      50 |   95.83 | 47-48                                     
  03_communities.js     |   95.65 |      100 |      50 |   95.65 | 45-46                                     
  04_metadata.js        |   89.39 |      100 |      50 |   89.39 | 60-66                                     
  05_documents.js       |   94.82 |      100 |      50 |   94.82 | 56-58

Right away we see that a large part of our application achieves 100% code coverage with a single unit test! This highlights both how tightly interconnected all parts of our application are (such that a single unit test exercises much of the code) and how code coverage can be a very poor metric for unit test quality (seeing this result, we might wrongly assume our application is already well tested with just a single unit test).

We have also enabled the html reporter, so we can see similar results in a coverage folder that appears inside of our server folder. We can use various VS Code extensions, such as Live Preview, to view that report in our web browser.

Port Conflict

The Live Preview extension defaults to port 3000, so we recommend digging into the settings and changing the default port to something else before using it.

Coverage Example Coverage Example

In either case, we can see that we’ve already reached 100% coverage on our routes/api.js file. However, as we’ll see in the next section, that doesn’t always mean that we are done writing our unit tests.

Other Tests

YouTube Video

Testing for Other Issues

Let’s consider the scenario where our routes/api.js file was modified slightly to have some incorrect code in it:

// -=-=- other code omitted here -=-=-
router.get('/', function (req, res, next) {
  res.json([
    {
      versoin: "1.0",
      url: "/api/ver1/"
    }
  ])
})

In this example, we have misspelled the version attribute, and also used an incorrect URL for that version of the API. Unfortunately, if we actually make that change to our code, our existing unit test will not catch either error!

So, let’s look at how we can go about catching these errors and ensuring our unit tests are actually valuable.

JSON Schemas

First, it is often helpful to validate the schema of the JSON output by our API. To do that, we’ve installed the ajv JSON schema validator and a chai plugin for using it in a unit test. So, in our test/api.js file, we can add a new test:

// -=-=- other code omitted here -=-=-

/**
 * Check JSON Schema of API Versions
 */
const getAllVersionsSchemaMatch = () => {
  it('all API versions should match schema', (done) => {
    const schema = {
      type: 'array',
      items: {
        type: 'object',
        required: ['version', 'url'],
        properties: {
          version: { type: 'string' },
          url: { type: 'string' },
        },
        additionalProperties: false,
      },
    }
    request(app)
      .get('/api/')
      .expect(200)
      .end((err, res) => {
        if (err) return done(err)
        res.body.should.be.jsonSchema(schema)
        done()
      })
  })
}

/**
 * Test /api route
 */
describe('/api', () => {
  describe('GET /', () => {
    getAllVersions()
    getAllVersionsSchemaMatch()
  })
})

In this test, we create a JSON schema following the AJV Instructions that defines the various attributes that should be present in the output. It is especially important to include the additionalProperties: false line, which helps prevent leaking any unintended attributes.
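To see why that line matters, here is a minimal standalone sketch using the ajv library directly (illustrative only, not part of our test suite):

import Ajv from "ajv";

const ajv = new Ajv();
const validate = ajv.compile({
  type: "object",
  required: ["version", "url"],
  properties: {
    version: { type: "string" },
    url: { type: "string" },
  },
  additionalProperties: false,
});

// An object with an unexpected extra attribute fails validation
console.log(validate({ version: "1.0", url: "/api/v1/", secret: "leaked" })); // false
console.log(validate.errors[0].keyword); // "additionalProperties"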

Now, when we run our tests, we should see that this test fails:

  /api
    GET /
      ✔ should list all API versions
      1) all API versions should match schema


  1 passing (1s)
  1 failing

  1) /api
       GET /
         all API versions should match schema:
     Uncaught AssertionError: expected [ { versoin: '1.0', …(1) } ] to match json-schema
[ { instancePath: '/0', …(7) } ]
      at Test.<anonymous> (file:///workspaces/lost-communities-solution/server/test/api.js:59:28)
...

As we can see, the misspelled version attribute will not match the given schema, causing the test to fail! That shows the value of such a unit test in our code.

Protecting Attributes

Let’s update our route to include the correct attributes, but also add an additional item that shouldn’t be present in the output:

// -=-=- other code omitted here -=-=-
router.get('/', function (req, res, next) {
  res.json([
    {
      version: "1.0",
      url: "/api/ver1/",
      secure_data: "This should not be shared!"
    }
  ])
})

This is an example of Broken Object Property Level Authorization, one of the top 10 most common API security risks according to OWASP. Often our database models will include attributes that we don’t want to expose to our users, so we want to make sure they aren’t included in the output by accident.

If we run our test again, it should also fail:

  /api
    GET /
      ✔ should list all API versions
      1) all API versions should match schema


  1 passing (1s)
  1 failing

  1) /api
       GET /
         all API versions should match schema:
     Uncaught AssertionError: expected [ { version: '1.0', …(2) } ] to match json-schema
[ { instancePath: '/0', …(7) } ]
      at Test.<anonymous> (file:///workspaces/lost-communities-solution/server/test/api.js:59:28)
...

However, if we remove the line additionalProperties: false from our JSON schema unit test, it will now succeed. So, it is always important for us to remember to include that line in all of our JSON schemas if we want to avoid this particular security flaw.

Checking Values

However, we still have not caught our incorrect value in our API output:

// -=-=- other code omitted here -=-=-
router.get('/', function (req, res, next) {
  res.json([
    {
      version: "1.0",
      url: "/api/ver1/",
      secure_data: "This should not be shared!"
    }
  ])
})

To catch this, we need to write one additional unit test that checks the actual content of the output. For that, we’ll use a deep equality plugin for chai, chai-shallow-deep-equal:

// -=-=- other code omitted here -=-=-

/**
 * Check API version exists in list
 */
const findVersion = (version) => {
  it('should contain specific version', (done) => {
    request(app)
      .get('/api/')
      .expect(200)
      .end((err, res) => {
        if (err) return done(err)
        const foundVersion = res.body.find((v) => v.version === version.version)
        foundVersion.should.shallowDeepEqual(version)
        done()
      })
  })
}

/**
 * Test /api route
 */
describe('/api', () => {
  describe('GET /', () => {
    getAllVersions()
    getAllVersionsSchemaMatch()
  })

  describe('version: 1.0', () => {
    const version = {
      version: "1.0",
      url: "/api/v1/"
    }

    describe('GET /', () => {
      findVersion(version)
    })
  })
})

The findVersion unit test will check the actual contents of the output received from the API and compare it to the version object that is provided as input. In our describe statements below, we can see how easy it is to define a simple version object that we can use to compare to the output.

Use the Source!

One common mistake when writing these unit tests is to simply copy the object structure from the code that is being tested. This is considered bad practice, since it virtually guarantees that any typos or mistakes will go uncaught. Instead, when constructing these unit tests, we should always go back to the original source document, typically a design document or API specification, and build our unit tests using that as a guide. This will ensure that our tests actually catch things such as typos or missing data.

With that test in place, we should once again have a unit test that fails:

  /api
    GET /
      ✔ should list all API versions
      ✔ all API versions should match schema
    version: 1.0
      GET /
        1) should contain specific version


  2 passing (987ms)
  1 failing

  1) /api
       version: 1.0
         GET /
           should contain specific version:

      Uncaught AssertionError: Expected to have "/api/v1/" but got "/api/ver1/" at path "/url".
      + expected - actual

       {
      -  "url": "/api/ver1/"
      +  "url": "/api/v1/"
         "version": "1.0"
       }
      
      at Test.<anonymous> (file:///workspaces/lost-communities-solution/server/test/api.js:76:29)

Thankfully, the output clearly shows the error, and it is easy to go back to our original design document to correct our code.

Reusing Tests

While it may seem like we are using a very complex structure for these tests, there is actually a very important reason behind it. If done correctly, we can easily reuse most of our tests as we add additional data to the application.

Let’s consider the scenario where we add a second API version to our output:

// -=-=- other code omitted here -=-=-
router.get('/', function (req, res, next) {
  res.json([
    {
      version: "1.0",
      url: "/api/v1/"
    },
    {
      version: "2.0",
      url: "/api/v2/"
    }
  ])
})

To fully test this, all we need to do is update the expected array size in the getAllVersions test and add an additional describe block for the new version:

// -=-=- other code omitted here -=-=-

/**
 * Get all API versions
 */
const getAllVersions = () => {
  it('should list all API versions', (done) => {
    request(app)
      .get('/api/')
      .expect(200)
      .end((err, res) => {
        if (err) return done(err)
        res.body.should.be.an('array')
        res.body.should.have.lengthOf(2)
        done()
      })
  })
}

// -=-=- other code omitted here -=-=-

/**
 * Test /api route
 */
describe('/api', () => {
  describe('GET /', () => {
    getAllVersions()
    getAllVersionsSchemaMatch()
  })

  describe('version: 1.0', () => {
    const version = {
      version: "1.0",
      url: "/api/v1/"
    }

    describe('GET /', () => {
      findVersion(version)
    })
  })

  describe('version: 2.0', () => {
    const version = {
      version: "2.0",
      url: "/api/v2/"
    }

    describe('GET /', () => {
      findVersion(version)
    })
  })
})

With those minor changes, we see that our code now passes all unit tests:

  /api
    GET /
      ✔ should list all API versions
      ✔ all API versions should match schema
    version: 1.0
      GET /
        ✔ should contain specific version
    version: 2.0
      GET /
        ✔ should contain specific version

By writing reusable functions for our unit tests, we can often deduplicate and simplify our code.

Before moving on, let’s roll back our unit tests and the API to just have a single version. We should make sure all tests are passing before we move ahead!

Testing Roles

YouTube Video

Unit Testing Roles Routes

Now that we’ve created a basic unit test for the /api route, we can expand on that to test our other existing route, the /api/v1/roles route. Once again, there is only one method inside of this route, the GET ALL method, so the unit tests should be similar between these two routes. The only difference is that this route now reads from the database instead of just returning a static JSON array.

We can begin by creating a new api folder inside of the test folder, then a v1 folder inside of that, and finally a new roles.js file to contain our tests. By doing this, the path to each test file matches the path to the route it covers, making it easy to match up the tests with the associated routers.
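One quick way to create that structure from the terminal (any method of creating the folders and file works just as well):

$ mkdir -p test/api/v1
$ touch test/api/v1/roles.js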

Inside of that file, we can place the first unit test for the roles routes:

/**
 * @file /api/v1/roles Route Tests
 * @author Russell Feldhausen <russfeld@ksu.edu>
 */

// Load Libraries
import request from "supertest";
import { use, should } from "chai";
import chaiJsonSchemaAjv from "chai-json-schema-ajv";
import chaiShallowDeepEqual from "chai-shallow-deep-equal";
use(chaiJsonSchemaAjv.create({ verbose: true }));
use(chaiShallowDeepEqual);

// Import Express application
import app from "../../../app.js";

// Modify Object.prototype for BDD style assertions
should();

/**
 * Get all Roles
 */
const getAllRoles = () => {
  it("should list all roles", (done) => {
    request(app)
      .get("/api/v1/roles")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("array");
        res.body.should.have.lengthOf(7);
        done();
      });
  });
};


/**
 * Test /api/v1/roles route
 */
describe("/api/v1/roles", () => {
  describe("GET /", () => {
    getAllRoles();
  });
});

Just like before, this unit test simply sends an HTTP GET request to the /api/v1/roles endpoint and expects to receive a response containing an array of 7 elements, matching the 7 roles defined in the seeds/01_roles.js file.

Adding Additional Formats to AJV

Next, we can create a test to confirm that the structure of that response matches our expectation:

// -=-=- other code omitted here -=-=-

/**
 * Check JSON Schema of Roles
 */
const getRolesSchemaMatch = () => {
  it("all roles should match schema", (done) => {
    const schema = {
      type: "array",
      items: {
        type: "object",
        required: ["id", "role"],
        properties: {
          id: { type: "number" },
          role: { type: "string" },
          createdAt: { type: "string" },
          updatedAt: { type: "string" }
        },
        additionalProperties: false,
      },
    };
    request(app)
      .get("/api/v1/roles")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.jsonSchema(schema);
        done();
      });
  });
};


/**
 * Test /api/v1/roles route
 */
describe("/api/v1/roles", () => {
  describe("GET /", () => {
    getAllRoles();
    getRolesSchemaMatch();
  });
});

However, as we write that test, we might notice that the createdAt and updatedAt fields are just defined as strings, when really they should be storing a timestamp. Thankfully, the AJV Schema Validator has an extension called AJV Formats that adds many new formats we can use. So, let’s install it as a development dependency using npm:

$ npm install --save-dev ajv-formats

Then, we can add it to AJV at the top of our unit tests and use all of the additional types in the AJV Formats documentation in our tests:

/**
 * @file /api/v1/roles Route Tests
 * @author Russell Feldhausen <russfeld@ksu.edu>
 */

// Load Libraries
import request from "supertest";
import { use, should } from "chai";
import Ajv from 'ajv'
import addFormats from 'ajv-formats';
import chaiJsonSchemaAjv from "chai-json-schema-ajv";
import chaiShallowDeepEqual from "chai-shallow-deep-equal";

// Import Express application
import app from "../../../app.js";

// Configure Chai and AJV
const ajv = new Ajv()
addFormats(ajv)
use(chaiJsonSchemaAjv.create({ ajv, verbose: true }));
use(chaiShallowDeepEqual);

// Modify Object.prototype for BDD style assertions
should();

// -=-=- other code omitted here -=-=-

/**
 * Check JSON Schema of Roles
 */
const getRolesSchemaMatch = () => {
  it("all roles should match schema", (done) => {
    const schema = {
      type: "array",
      items: {
        type: "object",
        required: ["id", "role"],
        properties: {
          id: { type: "number" },
          role: { type: "string" },
          createdAt: { type: "string", format: "iso-date-time" },
          updatedAt: { type: "string", format: "iso-date-time"  }
        },
        additionalProperties: false,
      },
    };
    request(app)
      .get("/api/v1/roles")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.jsonSchema(schema);
        done();
      });
  });
};

// -=-=- other code omitted here -=-=-

Now we can use the iso-date-time string format to confirm that the createdAt and updatedAt fields match the expected format. The AJV Formats package supports a number of helpful formats, such as email, uri, uuid, and more.
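As an illustration only (these attributes are not part of our application), a schema using a few of those other formats might look like this:

// Hypothetical schema fragment showing other ajv-formats format names
const contactSchema = {
  type: "object",
  properties: {
    email: { type: "string", format: "email" },
    homepage: { type: "string", format: "uri" },
    token: { type: "string", format: "uuid" },
  },
  additionalProperties: false,
};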

Testing Each Role

Finally, we should also check that each role we expect to be included in the database is present and accounted for. We can write a single unit test function for this, but we’ll end up calling it several times with different roles:

// -=-=- other code omitted here -=-=-

/**
 * Check Role exists in list
 */
const findRole = (role) => {
  it("should contain '" + role.role + "' role", (done) => {
    request(app)
      .get("/api/v1/roles")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        const foundRole = res.body.find(
          (r) => r.id === role.id,
        );
        foundRole.should.shallowDeepEqual(role);
        done();
      });
  });
};


/**
 * Test /api/v1/roles route
 */
describe("/api/v1/roles", () => {
  describe("GET /", () => {
    getAllRoles();
    getRolesSchemaMatch();
    // List of all expected roles in the application
    const roles = [ 
      {
        id: 1, 
        role: "manage_users"
      },
      {
        id: 2,
        role: "manage_documents"
      },
      {
        id: 3,
        role: "add_documents"
      },
      {
        id: 4,
        role: "manage_communities"
      },
      {
        id: 5,
        role: "add_communities"
      },
      {
        id: 6,
        role: "view_documents"
      },
      {
        id: 7,
        role: "view_communities"
      }
    ]
    roles.forEach( (r) => {
      findRole(r)
    })
  });
});

Here we are creating a simple array of roles, which looks similar to the one that is already present in our seeds/01_roles.js seed file, but importantly it is not copied from that file! Instead, we should go back to the application’s original design documentation, if any, and read the roles from there to make sure they were all correctly added to the database. Since this project doesn’t have an original design document, we won’t worry about that here.

With all of that in place, let’s run our unit tests and confirm they are working:

$ npm run test

If everything is correct, we should find the following in our output showing all tests are successful:

  /api/v1/roles
    GET /
      ✔ should list all roles
      ✔ all roles should match schema
      ✔ should contain 'manage_users' role
      ✔ should contain 'manage_documents' role
      ✔ should contain 'add_documents' role
      ✔ should contain 'manage_communities' role
      ✔ should contain 'add_communities' role
      ✔ should contain 'view_documents' role
      ✔ should contain 'view_communities' role

There we go! We now have working unit tests for our roles. Now is a great time to lint, format, and then commit and push our work to GitHub before continuing. Below are a couple of important discussions on unit test structure and design that are highly recommended before continuing.

Unit Tests Based on Seed Data

In this application, we are heavily basing our unit tests on the seed data we created in the seeds directory. This is a design choice, and there are many different ways to approach this in practice:

  • Seed data for unit tests could be included as a hook that runs before each unit test (see the sketch after this list)
  • Unit tests could assume the database is completely blank and manually insert data as needed as part of the test
  • Different seed data files could be used for testing and production
  • A sample database file or connection could be used for testing instead of seed data
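
For example, the first strategy could be sketched as a mocha hook like the one below. This is illustrative only; seedDatabase is a hypothetical helper that is not part of this project:

// Sketch only: re-seed the database before every test.
// beforeEach is provided as a global by mocha at runtime.
beforeEach(async () => {
  await seedDatabase(); // hypothetical helper: truncate tables, re-apply seed data
});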

In this case, we believe it makes sense for the application we are testing to have a number of pre-defined roles and users that are populated via seed data when the application is tested and when it is deployed, so we chose to build our unit tests based on the assumption that the existing seed data will be used. However, other application designs may require different testing strategies, so it is always important to consider which method will work best for a given application!

Duplicated Unit Test Code

A keen-eyed observer may notice that the three unit test functions in the test/api.js file are nearly identical to the functions included in the test/api/v1/roles.js file. This is usually the case in unit testing - there is often a large amount of repeated code used to test different parts of an application, especially a RESTful API like this one.

This leads to two different design options:

  • Refactor the code to reduce duplication across unit tests, adding some complexity and interdependence between tests
  • Keep duplicated code to make unit tests more readable and independent of each other

For this application, we will follow the second approach. We feel that unit tests are much more useful if the large majority of each test can be easily seen and understood in a single file. This also means that a change in one test method will not impact other tests, for good and for bad: modifying and updating the entire test suite may be a bit more difficult, but updating individual tests should be much simpler.

Again, this is a design choice that we feel is best for this application, and other applications may be better off with other structures. It is always important to consider these implications when writing unit tests for an application!

Retrieve All

YouTube Video

Users Routes

Now that we have written and tested the routes for the Role model, let’s start working on the routes for the User model. These routes will be much more complex, because we want the ability to add, update, and delete users in our database.

To do this, we’ll create several RESTful routes, which pair HTTP verbs and paths to the various CRUD operations that can be performed on the database. Here is a general list of the actions we want to perform on most models in a RESTful API, based on their associated CRUD operation:

  • Create New (HTTP POST)
  • Retrieve All / Retrieve One (HTTP GET)
  • Update One (HTTP PUT)
  • Delete One (HTTP DELETE)

As we build this new API router, we’ll see each one of these in action.
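As a preview, here is a minimal sketch (not our final implementation) of how those verbs map onto an Express router:

// Sketch: mapping RESTful verbs to CRUD operations on an Express router
import express from "express";

const router = express.Router();

router.post("/", (req, res) => { /* Create New */ });
router.get("/", (req, res) => { /* Retrieve All */ });
router.get("/:id", (req, res) => { /* Retrieve One */ });
router.put("/:id", (req, res) => { /* Update One */ });
router.delete("/:id", (req, res) => { /* Delete One */ });

export default router;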

Retrieve All Route

The first operation we’ll look at is the retrieve all operation, which is one we’re already very familiar with. To begin, we should start by copying the existing file at routes/users.js to routes/api/v1/users.js and modifying it a bit to contain this content:

/**
 * @file Users router
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports router an Express router
 *
 * @swagger
 * tags:
 *   name: users
 *   description: Users Routes
 */

// Import libraries
import express from "express";

// Create Express router
const router = express.Router();

// Import models
import {
  User,
  Role,
} from "../../../models/models.js";

// Import logger
import logger from "../../../configs/logger.js";

/**
 * Gets the list of users
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /api/v1/users:
 *   get:
 *     summary: users list page
 *     description: Gets the list of all users in the application
 *     tags: [users]
 *     responses:
 *       200:
 *         description: the list of users
 *         content:
 *           application/json:
 *             schema:
 *               type: array
 *               items:
 *                 $ref: '#/components/schemas/User'
 */
router.get("/", async function (req, res, next) {
  try {
    const users = await User.findAll({
      include: {
        model: Role,
        as: "roles",
        attributes: ["id", "role"],
        through: {
          attributes: [],
        },
      },
    });
    res.json(users);
  } catch (error) {
    logger.error(error);
    res.status(500).end();
  }
});

export default router;

This is very similar to the code we included in our roles route. The major difference is that the users route also outputs the list of roles assigned to each user; the through: { attributes: [] } option tells Sequelize to omit the junction table’s columns from that output. There is a lot of great information in the Sequelize Documentation for how to properly query associated records.

We’ll also need to remove the line from our app.js file that directly imports and uses that router:

// -=-=- other code omitted here -=-=-

// Import routers
import indexRouter from "./routes/index.js";
import usersRouter from "./routes/users.js"; // delete this line
import apiRouter from "./routes/api.js";

// -=-=- other code omitted here -=-=-

// Use routers
app.use("/", indexRouter);
app.use("/users", usersRouter); // delete this line
app.use("/api", apiRouter);

// -=-=- other code omitted here -=-=-

Instead, we can now import and link the new router in our routes/api.js file:

// -=-=- other code omitted here -=-=-

// Import v1 routers
import rolesRouter from "./api/v1/roles.js";
import usersRouter from "./api/v1/users.js";

// Create Express router
const router = express.Router();

// Use v1 routers
router.use("/v1/roles", rolesRouter);
router.use("/v1/users", usersRouter);

// -=-=- other code omitted here -=-=-

Before moving on, let’s run our application and make sure that the users route is working correctly:

$ npm run dev

Once it loads, we can navigate to the /api/v1/users URL to see the output:

Retrieve All Output Retrieve All Output

Retrieve All Unit Tests

As we write each of these routes, we’ll also explore the related unit tests. The first three unit tests for this route are very similar to the ones we wrote for the roles routes earlier, so we won’t go into too much detail on these. As expected, we’ll place all of the unit tests for the users routes in the test/api/v1/users.js file:

/**
 * @file /api/v1/users Route Tests
 * @author Russell Feldhausen <russfeld@ksu.edu>
 */

// Load Libraries
import request from "supertest";
import { use, should } from "chai";
import Ajv from "ajv";
import addFormats from "ajv-formats";
import chaiJsonSchemaAjv from "chai-json-schema-ajv";
import chaiShallowDeepEqual from "chai-shallow-deep-equal";

// Import Express application
import app from "../../../app.js";

// Configure Chai and AJV
const ajv = new Ajv();
addFormats(ajv);
use(chaiJsonSchemaAjv.create({ ajv, verbose: true }));
use(chaiShallowDeepEqual);

// Modify Object.prototype for BDD style assertions
should();

// User Schema
const userSchema = {
  type: "object",
  required: ["id", "username"],
  properties: {
    id: { type: "number" },
    username: { type: "string" },
    createdAt: { type: "string", format: "iso-date-time" },
    updatedAt: { type: "string", format: "iso-date-time" },
    roles: {
      type: "array",
      items: {
          type: 'object',
          required: ['id', 'role'],
          properties: {
              id: { type: 'number' },
              role: { type: 'string' },
          },
      },
    }
  },
  additionalProperties: false,
};

/**
 * Get all Users
 */
const getAllUsers = () => {
  it("should list all users", (done) => {
    request(app)
      .get("/api/v1/users")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("array");
        res.body.should.have.lengthOf(4);
        done();
      });
  });
};

/**
 * Check JSON Schema of Users
 */
const getUsersSchemaMatch = () => {
  it("all users should match schema", (done) => {
    const schema = {
      type: "array",
      items: userSchema
    };
    request(app)
      .get("/api/v1/users")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.jsonSchema(schema);
        done();
      });
  });
};

/**
 * Check User exists in list
 */
const findUser = (user) => {
  it("should contain '" + user.username + "' user", (done) => {
    request(app)
      .get("/api/v1/users")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        const foundUser = res.body.find((u) => u.id === user.id);
        foundUser.should.shallowDeepEqual(user);
        done();
      });
  });
};

// List of all expected users in the application
const users = [
  {
    id: 1,
    username: "admin",
  },
  {
    id: 2,
    username: "contributor",
  },
  {
    id: 3,
    username: "manager",
  },
  {
    id: 4,
    username: "user",
  }
];

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  describe("GET /", () => {
    getAllUsers();
    getUsersSchemaMatch();
    
    users.forEach((u) => {
      findUser(u);
    });
  });
});

The major difference to note is the additional schema information in the userSchema object, which accounts for the associated roles attribute that is part of each user object. It is pretty self-explanatory: each object in the roles array has a set of attributes matching those we used in the unit tests for the roles routes.

We also moved the schema for the User response object out of that unit test so we can reuse it in other unit tests, as we’ll see later in this example.

However, we also should add a couple of additional unit tests to confirm that each user has the correct roles assigned, since that is a major part of the security and authorization mechanism we’ll be building for this application. While we could do that as part of the findUser test, let’s go ahead and add separate tests for each of these, which is helpful in debugging anything that is broken or misconfigured.

/**
 * @file /api/v1/users Route Tests
 * @author Russell Feldhausen <russfeld@ksu.edu>
 */

// Load Libraries
import request from "supertest";
import { use, should, expect } from "chai";
import Ajv from "ajv";
import addFormats from "ajv-formats";
import chaiJsonSchemaAjv from "chai-json-schema-ajv";
import chaiShallowDeepEqual from "chai-shallow-deep-equal";

// -=-=- other code omitted here -=-=-

/**
 * Check that User has correct number of roles
 */
const findUserCountRoles = (username, count) => {
  it("user '" + username + "' should have " + count + " roles", (done) => {
    request(app)
      .get("/api/v1/users")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        const foundUser = res.body.find((u) => u.username === username);
        foundUser.roles.should.be.an("array");
        foundUser.roles.should.have.lengthOf(count);
        done();
      });
  });
};

/**
 * Check that User has specific role
 */
const findUserConfirmRole = (username, role) => {
  it("user '" + username + "' should have '" + role + "' role", (done) => {
    request(app)
      .get("/api/v1/users")
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        const foundUser = res.body.find((u) => u.username === username);
        expect(foundUser.roles.some((r) => r.role === role)).to.equal(true)
        done();
      });
  });
};

// -=-=- other code omitted here -=-=-

// List of all users and expected roles
const user_roles = [
  {
    username: "admin",
    roles: ["manage_users", "manage_documents", "manage_communities"]
  },
  {
    username: "contributor",
    roles: ["add_documents", "add_communities"]
  },
  {
    username: "manager",
    roles: ["manage_documents", "manage_communities"]
  },
  {
    username: "user",
    roles: ["view_documents", "view_communities"]
  },
];

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  describe("GET /", () => {

    // -=-=- other code omitted here -=-=-
    
    user_roles.forEach((u) => {
      // Check that user has correct number of roles
      findUserCountRoles(u.username, u.roles.length)
      u.roles.forEach((r) => {
        // Check that user has each expected role
        findUserConfirmRole(u.username, r)
      })
    });
  });
});

This code uses an additional assertion style, expect, from the chai library, so we have to add it to the chai import at the top of the file. These two tests will confirm that the user has the expected number of roles, and also explicitly confirm that each user has each of the expected roles.

Testing Arrays for Containment

When writing unit tests that deal with arrays, it is always important to not only check that the array contains the correct elements, but also that it ONLY contains those elements and no additional elements. A great way to do this is to explicitly check that each element the array should contain is present, and then also check the size of the array so that it cannot contain anything beyond those listed elements. Of course, this assumes that each element is only present once in the array!

If we aren’t careful about how these unit tests are constructed, it is possible for arrays to contain additional items. In this case, it might mean that a user is assigned to more roles than they should be, which would be very bad for our application’s security!

With all of these tests in place, let’s go ahead and run them to confirm everything is working properly. Thankfully, with the mocha test runner, we can even specify a single file to run, as shown below:

$ npm run test test/api/v1/users.js

If everything is correct, we should see that this file has 19 tests that pass:

  /api/v1/users
    GET /
      ✔ should list all users
      ✔ all users should match schema
      ✔ should contain 'admin' user
      ✔ should contain 'contributor' user
      ✔ should contain 'manager' user
      ✔ should contain 'user' user
      ✔ user 'admin' should have 3 roles
      ✔ user 'admin' should have 'manage_users' role
      ✔ user 'admin' should have 'manage_documents' role
      ✔ user 'admin' should have 'manage_communities' role
      ✔ user 'contributor' should have 2 roles
      ✔ user 'contributor' should have 'add_documents' role
      ✔ user 'contributor' should have 'add_communities' role
      ✔ user 'manager' should have 2 roles
      ✔ user 'manager' should have 'manage_documents' role
      ✔ user 'manager' should have 'manage_communities' role
      ✔ user 'user' should have 2 roles
      ✔ user 'user' should have 'view_documents' role
      ✔ user 'user' should have 'view_communities' role


  19 passing (1s)

Great! Now is a great time to lint, format, and then commit and push our work to GitHub before continuing.

Retrieve One

YouTube Video

Retrieve One Route

Many RESTful web APIs also include the ability to retrieve a single object from a collection by providing the ID as a parameter to the route. So, let’s go ahead and build that route in our application as well.

Unused in Practice

While this route is an important part of many RESTful web APIs, it can often go unused, since most frontend web applications will simply use the retrieve all endpoint to get a list of items, cache that result, and filter the list to show a user a single entry. However, there are some use cases where this route is extremely useful, so we’ll go ahead and include it in our backend code anyway.

In our routes/api/v1/users.js file, we can add a new route to retrieve a single user based on the user’s ID number:

// -=-=- other code omitted here -=-=-

/**
 * Gets a single user by ID
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /api/v1/users/{id}:
 *   get:
 *     summary: get single user
 *     description: Gets a single user from the application
 *     tags: [users]
 *     parameters:
 *       - in: path
 *         name: id
 *         required: true
 *         schema:
 *           type: integer
 *         description: user ID
 *     responses:
 *       200:
 *         description: a user
 *         content:
 *           application/json:
 *             schema:
 *               $ref: '#/components/schemas/User'
 */
router.get("/:id", async function (req, res, next) {
  try {
    const user = await User.findByPk(req.params.id, {
      include: {
        model: Role,
        as: "roles",
        attributes: ["id", "role"],
        through: {
          attributes: [],
        },
      },
    });
    // if the user is not found, return an HTTP 404 not found status code
    if (user === null) {
      res.status(404).end();
    } else {
      res.json(user);
    }
  } catch (error) {
    logger.error(error);
    res.status(500).end();
  }
});

In this route, we have included a new route parameter id in the path, and we also documented that parameter in the Open API documentation comment. We then use that id parameter, which Express stores as req.params.id, in the findByPk method available in Sequelize. We can even confirm that our new method appears correctly in our documentation by visiting the /docs route in our application:

Retrieve One Route Retrieve One Route

When we visit that route, we’ll need to include the ID of the user to request in the path, as in /api/v1/users/1. If it is working correctly, we should see data for a single user returned in the browser:

Retrieve One Route Retrieve One Route

Retrieve One Unit Tests

The unit tests for the route to retrieve a single object are nearly identical to the ones used for the retrieve all route. Since we have already verified that each user exists and has the correct roles, we may not need to be as particular when developing these tests.

// -=-=- other code omitted here -=-=-

/**
 * Get single user
 */
const getSingleUser = (user) => {
  it("should get user '" + user.username + "'", (done) => {
    request(app)
      .get("/api/v1/users/" + user.id)
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.shallowDeepEqual(user);
        done();
      });
  });
};

/**
 * Get single user check schema
 */
const getSingleUserSchemaMatch = (user) => {
  it("user '" + user.username + "' should match schema", (done) => {
    request(app)
      .get("/api/v1/users/" + user.id)
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.jsonSchema(userSchema);
        done();
      });
  });
};

// -=-=- other code omitted here -=-=-

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("GET /{id}", () => {
    users.forEach((u) => {
      getSingleUser(u);
      getSingleUserSchemaMatch(u);
    })
  });
});

For these unit tests, we are once again simply checking that we can retrieve each individual user by ID, and also that the response matches the expected userSchema object we used in earlier tests.

However, these unit tests only check for the users that we expect the database to contain. What if we receive an ID parameter for a user that does not exist? We should test that situation as well.

// -=-=- other code omitted here -=-=-

/**
 * Tries to get a user using an invalid id
 */
const getSingleUserBadId = (invalidId) => {
  it("should return 404 when requesting user with id '" + invalidId + "'", (done) => {
    request(app)
      .get("/api/v1/users/" + invalidId)
      .expect(404)
      .end((err) => {
        if (err) return done(err);
        done();
      });
  });
};

// -=-=- other code omitted here -=-=-

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("GET /{id}", () => {
    users.forEach((u) => {
      getSingleUser(u);
      getSingleUserSchemaMatch(u);
    })

    getSingleUserBadId(0)
    getSingleUserBadId("test")
    getSingleUserBadId(-1)
    getSingleUserBadId(5)
  });
});

With this unit test, we can easily check that our API properly returns HTTP status code 404 for a number of invalid ID values, including 0, -1, "test", 5, and any others we can think of to try.

Create

YouTube Video

Create Route

Now that we’ve explored the routes we can use to read data from our RESTful API, let’s look at the routes we can use to modify that data. The first one we’ll cover is the create route, which allows us to add a new entry to the database. However, before we do that, let’s create some helpful utility functions that we can reuse throughout our application as we develop more advanced routes.

Success Messages

One thing we’ll want to be able to do is send well-formatted success messages to the user. While we could include this in each route, it is a good idea to abstract it into a utility function that we can write once and use throughout our application. Doing so makes it easier to restructure these messages as needed in the future.

So, let’s create a new utilities folder inside of our server folder, and then a new send-success.js file with the following content:

/**
 * @file Sends JSON Success Messages
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports sendSuccess a function to send JSON Success Messages
 */

/**
 * Send JSON Success Messages
 *
 * @param {string} message - the message to send
 * @param {integer} status - the HTTP status to use
 * @param {Object} res - Express response object
 *
 * @swagger
 * components:
 *   responses:
 *     Success:
 *       description: success
 *       content:
 *         application/json:
 *           schema:
 *             type: object
 *             required:
 *               - message
 *               - id
 *             properties:
 *               message:
 *                 type: string
 *                 description: the description of the successful operation
 *               id:
 *                 type: integer
 *                 description: the id of the saved or created item
 *             example:
 *               message: User successfully saved!
 */
function sendSuccess(message, id, status, res) {
  res.status(status).json({
    message: message,
    id: id
  });
}
  
export default sendSuccess;

In this file, we are defining a success message from our application as a JSON object with a message attribute, as well as the id of the object that was acted upon. The code itself is very straightforward, but we are including the appropriate Open API documentation as well, which we can reuse in our routes elsewhere.
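For example, calling sendSuccess("User saved!", 1, 201, res) responds with HTTP status 201 and the JSON body {"message": "User saved!", "id": 1}.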

To make the Open API library aware of these new files, we need to add the utilities folder to the apis list in our configs/openapi.js file:

// -=-=- other code omitted here -=-=-

const options = {
  definition: {
    openapi: "3.1.0",
    info: {
      title: "Lost Communities",
      version: "0.0.1",
      description: "Kansas Lost Communities Project",
    },
    servers: [
      {
        url: url(),
      },
    ],
  },
  apis: ["./routes/*.js", "./models/*.js", "./routes/api/v1/*.js", "./utilities/*.js"],
};

Validation Error Messages

Likewise, we may also want to send a well-structured message anytime our database throws an error or any of our model validation steps fail. So, we can create another file in the utilities folder, handle-validation-error.js, with the following content:

/**
 * @file Error handler for Sequelize Validation Errors
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports handleValidationError a handler for Sequelize validation errors
 */

/**
 * Gracefully handle Sequelize Validation Errors
 * 
 * @param {SequelizeValidationError} error - Sequelize Validation Error
 * @param {Object} res - Express response object
 * 
 * @swagger
 * components:
 *   responses:
 *     Success:
 *     ValidationError: 
 *       description: model validation error
 *       content:
 *         application/json:
 *           schema:
 *             type: object
 *             required:
 *               - error
 *             properties:
 *               error: 
 *                 type: string
 *                 description: the description of the error
 *               errors:
 *                 type: array
 *                 items:
 *                    type: object
 *                    required: 
 *                      - attribute
 *                      - message
 *                    properties:
 *                      attribute:
 *                        type: string
 *                        description: the attribute that caused the error
 *                      message:
 *                        type: string
 *                        description: the error associated with that attribute
 *             example:
 *               error: Validation Error
 *               errors:
 *                 - attribute: username
 *                   message: username must be unique
 */
function handleValidationError(error, res) {
  if (error.errors?.length > 0) {
    const errors = error.errors
    .map((e) => {
      return {attribute: e.path, message: e.message}
    })
    res.status(422).json({ 
      error: "Validation Error",
      errors: errors
    });
  } else {
    res.status(422).json({
      error: error.parent.message
    })
  }
}

export default handleValidationError;

Again, the code for this is not too complex. It builds upon the structure in the Sequelize ValidationError class to create a helpful JSON object that includes both an error attribute as well as an optional errors array that lists each attribute with a validation error, if possible. We also include the appropriate Open API documentation for this response type.
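For example, attempting to save a user with a duplicate username would produce an HTTP 422 response with a body like the example in the documentation comment above:

{
  "error": "Validation Error",
  "errors": [
    {
      "attribute": "username",
      "message": "username must be unique"
    }
  ]
}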

Trial & Error

If we look at the code in the handle-validation-error.js file, it may seem like it came from nowhere, or it may be difficult to see how this was constructed based on what little is given in the Sequelize documentation.

In fact, this code was actually constructed using a trial and error process by iteratively submitting broken models and looking at the raw errors that were produced by Sequelize until a common structure was found. For the purposes of this example, we’re leaving out some of these steps, but we encourage exploring the output to determine the best method for any given application.

Creating a New User

Now that we have created helpers for our route, we can add the code to actually create a new user when an HTTP POST request is received.

In our routes/api/v1/users.js file, let’s add a new route we can use to create a new entry in the users table:

// -=-=- other code omitted here -=-=-

// Import libraries
import express from "express";
import { ValidationError } from "sequelize";

// Create Express router
const router = express.Router();

// Import models
import { User, Role } from "../../../models/models.js";

// Import logger
import logger from "../../../configs/logger.js";

// Import database
import database from "../../../configs/database.js"

// Import utilities
import handleValidationError from "../../../utilities/handle-validation-error.js";
import sendSuccess from "../../../utilities/send-success.js";

// -=-=- other code omitted here -=-=-

/**
 * Create a new user
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /api/v1/users:
 *   post:
 *     summary: create user
 *     tags: [users]
 *     requestBody:
 *       description: user
 *       required: true
 *       content:
 *         application/json:
 *           schema:
 *             $ref: '#/components/schemas/User'
 *           example:
 *             username: newuser
 *             roles:
 *               - id: 6
 *               - id: 7
 *     responses:
 *       201:
 *         $ref: '#/components/responses/Success'
 *       422:
 *         $ref: '#/components/responses/ValidationError'         
 */
router.post("/", async function (req, res, next) {
  try {
    // Use a database transaction to roll back if any errors are thrown
    await database.transaction(async t => {
      const user = await User.create(
        // Build the user object using body attributes
        {
          username: req.body.username,
        },
        // Assign to a database transaction
        {
          transaction: t
        }
      );
  
      // If roles are included in the body
      if (req.body.roles) {
        // Find all roles listed
        const roles = await Promise.all(
          req.body.roles.map(({ id, ...next }) => {
            return Role.findByPk(id);
          }),
        );
  
        // Attach roles to user
        await user.setRoles(roles, { transaction: t });
      }
  
      // Send the success message
      sendSuccess("User saved!", user.id, 201, res);
    })
    
  } catch (error) {
    if (error instanceof ValidationError) {
      handleValidationError(error, res);
    } else {
      logger.error(error);
      res.status(500).end();
    }
  }
});

At the top of the file, we have added several additional import statements:

  • ValidationError - we import the ValidationError type from the Sequelize library
  • database - we import our Sequelize instance from configs/database.js so we can create a transaction
  • handleValidationError and sendSuccess - we import our two new utilities from the utilities folder

This route itself is quite a bit more complex than our previous routes, so let’s break down what it does piece by piece to see how it all works together.

  1. Start a database transaction
// -=-=- other code omitted here -=-=-
    await database.transaction(async t => {

      // perform database operations here

    });
// -=-=- other code omitted here -=-=-

First, since we will be updating the database using multiple steps, we should use a database transaction to ensure that we only update the database if all operations will succeed. So, we use the Sequelize Transactions feature to create a new managed database transaction. If we successfully reach the end of the block of code contained in this statement, the database transaction will be committed to the database and the changes will be stored.

  2. Create the User itself
// -=-=- other code omitted here -=-=-
      const user = await User.create(
        // Build the user object using body attributes
        {
          username: req.body.username,
        },
        // Assign to a database transaction
        {
          transaction: t
        }
      );
// -=-=- other code omitted here -=-=-

Next, we use the User model to create a new instance of the user and store it in the database. The Sequelize Create method will both build the new object in memory as well as save it to the database. This is an asynchronous process, so we must await the result before moving on. We also must give this method a reference to the current database transaction t in the second parameter.

  3. Associate Roles
// -=-=- other code omitted here -=-=-
      // If roles are included in the body
      if (req.body.roles) {
        // Find all roles listed
        const roles = await Promise.all(
          req.body.roles.map(({ id, ...next }) => {
            return Role.findByPk(id);
          }),
        );
  
        // Attach roles to user
        await user.setRoles(roles, { transaction: t });
      }
// -=-=- other code omitted here -=-=-

After that, we check to see if the roles attribute was provided as part of the body of the HTTP POST request. If it was, we need to associate those roles with the new user. Here, we are assuming that the submission includes the ID for each role at a minimum, but it may also include other data such as the name of the role. So, before doing anything else, we must first find each Role model in the database by ID using the findByPk method. Once we have a list of roles, we can add those roles to the User object using the special setRoles method, which Sequelize generates automatically because of the roles association on that model. If any roles are null and can’t be found, this will throw an error that we can catch later.

  4. Send Success Messages
      // Send the success message
      sendSuccess("User saved!", user.id, 201, res);

Finally, if everything is correct, we can send the success message back to the user using the sendSuccess utility method that we created earlier.

  5. Handle Exceptions
// -=-=- other code omitted here -=-=-
  } catch (error) {
    if (error instanceof ValidationError) {
      handleValidationError(error, res);
    } else {
      logger.error(error);
      res.status(500).end();
    }
  }
// -=-=- other code omitted here -=-=-

At the bottom of the file we have a catch block that will catch any exceptions thrown while trying to create our User and associate the correct Role objects. Notice that this catch block is outside the database transaction, so no database changes will be saved if we reach this block of code.

Inside, we check to see if the error is an instance of the ValidationError class from Sequelize. If so, we can use our new handleValidationError method to process it and send a well-structured JSON response back to the user. If not, we’ll simply log the error and send back a generic HTTP 500 response code.
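We can also exercise this route directly from a terminal before moving on. A quick sketch using curl, assuming the development server from npm run dev is listening on port 3000:

$ curl -X POST http://localhost:3000/api/v1/users \
  -H "Content-Type: application/json" \
  -d '{"username": "newuser", "roles": [{"id": 6}, {"id": 7}]}'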

Testing Create

YouTube Video

Manual Testing with Open API

Before we start unit testing this route, let’s quickly do some manual testing using the Open API documentation site. It is truly a very handy way to work with our RESTful APIs as we are developing them, allowing us to test them quickly in isolation to make sure everything is working properly.

So, let’s start our server:

$ npm run dev

Once it starts, we can navigate to the /docs URL, and we should see the Open API documentation for our site, including a new POST route for the users section:

Create API Documentation Create API Documentation

If we documented our route correctly, we can see that this documentation includes not only an example of what a new submission should look like, but also examples of what the success and model validation error outputs should be. To test it, we can use the Try it out button on the page to try to create a new user.

Create Example Create Example

Let’s go ahead and try to create the user that is suggested by our example input, which should look like this:

{
  "username": "newuser",
  "roles": [
    {
      "id": 6
    },
    {
      "id": 7
    }
  ]
}

This would create a user with the username newuser and assign them to the roles with IDs 6 (view_documents) and 7 (view_communities). So, we can click the Execute button to send that request to the server and see if it works.

Create Success Create Success

Excellent! We can see that it worked correctly, and we received our expected success message as part of the response. We can also scroll up and try the GET /api/v1/users API endpoint to see if that user appears in our list of all users in the system with the correct roles assigned. If it worked, we should see this entry in the output:

  {
    "id": 6,
    "username": "newuser",
    "createdAt": "2025-02-21T18:34:54.725Z",
    "updatedAt": "2025-02-21T18:34:54.725Z",
    "roles": [
      {
        "id": 6,
        "role": "view_documents"
      },
      {
        "id": 7,
        "role": "view_communities"
      }
    ]
  }

From here, we can try a couple of different scenarios to see if our server is working properly.

Duplicate Username

First, what if we try to create a user with a duplicate username? To test this, we can simply resubmit the default example again and see what happens. This time, we get an HTTP 422 response code with a very detailed error message:

Create Failure - Duplicate Username Create Failure - Duplicate Username

This is great! It tells us exactly what the error is. This is the output created by our handleValidationError utility function from the previous page.

Missing Attributes

We can also try to submit a new user, but this time leaving out some of the attributes, as in this example:

{
  "user": "testuser"
}

Here, we have mistakenly renamed the username attribute to just user, and we’ve left off the roles list entirely. When we submit this, we also get a helpful error message:

Create Failure - Username Null Create Failure - Username Null

Since the username attribute was not provided, it will be set to null and the database will not allow a null value for that attribute.

However, if we correct that, we do see that it will accept a new user without any listed roles! This is by design, since we may need to create users that don’t have any roles assigned.

Invalid Roles

Finally, what if we try to create a user with an invalid list of roles:

{
  "username": "baduser",
  "roles": [
    {
      "id": 6
    },
    {
      "id": 8
    }
  ]
}

In this instance, we’ll get another helpful error message:

Create Failure - Role Null Create Failure - Role Null

Since there is no role with ID 8 in the database, it finds a null value instead and tries to associate that with our user. This causes an SQL constraint error, which we can send back to our user.

Finally, we should use the GET /api/v1/users API endpoint above to double-check that our user baduser was not created. This is because we don’t want to create that user unless a list of valid roles is also provided.

Unit Testing

Now that we have a good handle on how this endpoint works in practice, let’s write some unit tests to confirm that it works as expected in each of these cases. First, we should have a simple unit test that successfully creates a new user:

// -=-=- other code omitted here -=-=-

/**
 * Creates a user successfully
 */
const createUser = (user) => {
  it("should successfully create a user '" + user.username + "'", (done) => {
    request(app)
      .post("/api/v1/users/")
      .send(user)
      .expect(201)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("message");
        res.body.should.have.property("id")
        const created_id = res.body.id
        // Find user in list of all users
        request(app)
          .get("/api/v1/users")
          .expect(200)
          .end((err, res) => {
            if (err) return done(err);
            const foundUser = res.body.find(
              (u) => u.id === created_id,
            );
            foundUser.should.shallowDeepEqual(user);
            done();
          });
      });
  });
};

// -=-=- other code omitted here -=-=-

// New user structure for creating users
const new_user = {
  username: "test_user",
  roles: [
    {
      id: 6
    },
    {
      id: 7
    }
  ]
}

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("POST /", () => {
    createUser(new_user);
  })
});

This first test is very straightforward since it just confirms that we can successfully create a new user in the system. It also confirms that the user now appears in the output from the get all route, which is helpful.

While this at least confirms that the route works in the success case, we should write several more unit tests to verify that it behaves correctly even when the user provides invalid input.

Missing Attributes

First, we should confirm that a user can still be created even when the list of roles is missing. We can do this by creating a second user object without any roles.

// -=-=- other code omitted here -=-=-

// New user structure for creating users without roles
const new_user_no_roles = {
  username: "test_user_no_roles",
}

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("POST /", () => {
    createUser(new_user);
    createUser(new_user_no_roles);
  })
});

We should also write a test to make sure the process will fail if any required attributes (in this case, just username) are missing. We can even check the output to make sure the missing attribute is listed:

// -=-=- other code omitted here -=-=-

/**
 * Fails to create user with missing required attribute
 */
const createUserFailsOnMissingRequiredAttribute = (user, attr) => {
  it("should fail when required attribute '" + attr + "' is missing", (done) => {
    // Create a copy of the user object and delete the given attribute
    const updated_user = {... user}
    delete updated_user[attr]
    request(app)
      .post("/api/v1/users/")
      .send(updated_user)
      .expect(422)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("error");
        res.body.should.have.property("errors")
        res.body.errors.should.be.an("array")
        // the error should be related to the deleted attribute
        expect(res.body.errors.some((e) => e.attribute === attr)).to.equal(true);
        done();
      });
  });
}

// -=-=- other code omitted here -=-=-

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("POST /", () => {
    createUser(new_user);
    createUser(new_user_no_roles);

    createUserFailsOnMissingRequiredAttribute(new_user, "username");
  })
});

Duplicate Username

We also should write a unit test that will make sure we cannot create a user with a duplicate username.

// -=-=- other code omitted here -=-=-

/**
 * Fails to create user with a duplicate username
 */
const createUserFailsOnDuplicateUsername = (user) => {
  it("should fail on duplicate username '" + user.username + "'", (done) => {
    request(app)
      .post("/api/v1/users/")
      .send(user)
      .expect(201)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("message");
        res.body.should.have.property("id")
        const created_id = res.body.id
        // Find user in list of all users
        request(app)
          .get("/api/v1/users")
          .expect(200)
          .end((err, res) => {
            if (err) return done(err);
            const foundUser = res.body.find(
              (u) => u.id === created_id,
            );
            foundUser.should.shallowDeepEqual(user);
            // Try to create same user again
            request(app)
              .post("/api/v1/users/")
              .send(user)
              .expect(422)
              .end((err, res) => {
                if (err) return done(err);
                res.body.should.be.an("object");
                res.body.should.have.property("error");
                res.body.should.have.property("errors");
                res.body.errors.should.be.an("array");
                // the error should be related to the username attribute
                expect(
                  res.body.errors.some((e) => e.attribute === "username"),
                ).to.equal(true);
                done();
              });
          });
      });
  });
};


// -=-=- other code omitted here -=-=-

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("POST /", () => {
    createUser(new_user);
    createUser(new_user_no_roles);

    createUserFailsOnMissingRequiredAttribute(new_user, "username");
    createUserFailsOnDuplicateUsername(new_user);
  })
});

This test builds upon the previous createUser test by first creating the user and confirming that it appears in the output, before trying to create it again. The second attempt should fail, so we can borrow some of the code from the createUserFailsOnMissingRequiredAttribute test to confirm that it is failing because of a duplicate username.

Invalid Roles

Finally, we should write a unit test that makes sure a user won’t be created if any invalid role IDs are used, and also that the database transaction is properly rolled back so that the user itself isn’t created.

// -=-=- other code omitted here -=-=-

/**
 * Fails to create user with bad role ID
 */
const createUserFailsOnInvalidRole = (user, role_id) => {
  it("should fail when invalid role id '" + role_id + "' is used", (done) => {
    // Create a copy of the user object
    const updated_user = { ...user };
    // Make a shallow copy of the roles array
    updated_user.roles = [... user.roles]
    // Add invalid role ID to user object
    updated_user.roles.push({
      id: role_id,
    });
    request(app)
      .post("/api/v1/users/")
      .send(updated_user)
      .expect(422)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("error");
        // User with invalid roles should not be created
        request(app)
          .get("/api/v1/users")
          .expect(200)
          .end((err, res) => {
            if (err) return done(err);
            expect(res.body.some((u) => u.username === updated_user.username)).to.equal(
              false,
            );
            done();
          });
      });
  });
};


// -=-=- other code omitted here -=-=-

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("POST /", () => {
    createUser(new_user);
    createUser(new_user_no_roles);

    createUserFailsOnMissingRequiredAttribute(new_user, "username");
    createUserFailsOnDuplicateUsername(new_user);

    createUserFailsOnInvalidRole(new_user, 0)
    createUserFailsOnInvalidRole(new_user, -1)
    createUserFailsOnInvalidRole(new_user, 8)
    createUserFailsOnInvalidRole(new_user, "test")
  })
});

This test will try to create a valid user, but it appends an invalid role ID to the list of roles to assign to the user. It also confirms that the user itself is not created by querying the get all endpoint and checking for a matching username.

There we go! We have a set of unit tests that cover most of the situations we can anticipate seeing with our route to create new users. If we run all of these tests at this point, they should all pass:

    POST /
      ✔ should successfully create a user 'test_user'
      ✔ should successfully create a user 'test_user_no_roles'
      ✔ should fail when required attribute 'username' is missing
      ✔ should fail on duplicate username 'test_user'
      ✔ should fail when invalid role id '0' is used
      ✔ should fail when invalid role id '-1' is used
      ✔ should fail when invalid role id '8' is used
      ✔ should fail when invalid role id 'test' is used
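
As a reminder, we can run the entire test suite from the terminal. Here we assume our package.json defines a test script, as set up earlier in this series; the exact command may differ in your setup:

$ npm test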

Great! Now is a good time to lint, format, and then commit and push our work to GitHub before continuing.

Update

YouTube Video

Update Route

Next, let’s add a route to our application that allows us to update a User model. This route is very similar to the route used to create a user, but there are a few key differences as well.

// -=-=- other code omitted here -=-=-

/**
 * Update a user
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /api/v1/users/{id}:
 *   put:
 *     summary: update user
 *     tags: [users]
 *     parameters:
 *       - in: path
 *         name: id
 *         required: true
 *         schema:
 *           type: integer
 *         description: user ID
 *     requestBody:
 *       description: user
 *       required: true
 *       content:
 *         application/json:
 *           schema:
 *             $ref: '#/components/schemas/User'
 *           example:
 *             username: updateduser
 *             roles:
 *               - id: 6
 *               - id: 7
 *     responses:
 *       201:
 *         $ref: '#/components/responses/Success'
 *       422:
 *         $ref: '#/components/responses/ValidationError'
 */
router.put("/:id", async function (req, res, next) {
  try {
    const user = await User.findByPk(req.params.id)

    // if the user is not found, return an HTTP 404 not found status code
    if (user === null) {
      res.status(404).end();
    } else {
      await database.transaction(async (t) => {
        await user.update(
          // Update the user object using body attributes
          {
            username: req.body.username,
          },
          // Assign to a database transaction
          {
            transaction: t,
          },
        );
  
        // If roles are included in the body
        if (req.body.roles) {
          // Find all roles listed
          const roles = await Promise.all(
            req.body.roles.map(({ id, ...next }) => {
              return Role.findByPk(id);
            }),
          );
  
          // Attach roles to user
          await user.setRoles(roles, { transaction: t });
        } else {
          // Remove all roles
          await user.setRoles([], { transaction: t });
        }
  
        // Send the success message
        sendSuccess("User saved!", user.id, 201, res);
      });
    }
  } catch (error) {
    if (error instanceof ValidationError) {
      handleValidationError(error, res);
    } else {
      logger.error(error);
      res.status(500).end();
    }
  }
});

// -=-=- other code omitted here -=-=-

As we can see, overall this route is very similar to the create route. The only major difference is that we must first find the user we want to update based on the id path parameter, and then we use the update database method to update the existing values in the database. The rest of the work updating the related Roles models is exactly the same. We can also reuse the utility functions we created for the previous route.
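
As a refresher, those helpers just standardize our responses. A minimal sketch of what sendSuccess could look like is shown below; our actual implementation from the earlier routes may differ slightly:

// Sketch of the sendSuccess utility reused by these routes
const sendSuccess = (message, id, status, res) => {
  // Send a JSON body containing a message and the affected ID
  res.status(status).json({
    message: message,
    id: id,
  });
};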

Just like we did earlier, we can test this route using the Open API documentation website to confirm that it is working correctly before we even move on to testing it.
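
We can also exercise the route from the terminal. This example assumes the development server is running locally on port 3000 (your port may differ) and updates user ID 3:

$ curl -X PUT http://localhost:3000/api/v1/users/3 \
    -H "Content-Type: application/json" \
    -d '{"username": "updateduser", "roles": [{"id": 6}, {"id": 7}]}'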

Unit Testing Update Route

The unit tests for the route to update a user are very similar to the ones used for creating a user. First, we need a test that will confirm we can successfully update a user entry:

// -=-=- other code omitted here -=-=-

/**
 * Update a user successfully
 */
const updateUser = (id, user) => {
  it("should successfully update user ID '" + id + "' to '" + user.username + "'", (done) => {
    request(app)
      .put("/api/v1/users/" + id)
      .send(user)
      .expect(201)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("message");
        res.body.should.have.property("id");
        expect(res.body.id).equal(id)
        // Find user in list of all users
        request(app)
          .get("/api/v1/users")
          .expect(200)
          .end((err, res) => {
            if (err) return done(err);
            const foundUser = res.body.find(
              (u) => u.id === id,
            );
            foundUser.should.shallowDeepEqual(user);
            done();
          });
      });
  });
};

// -=-=- other code omitted here -=-=-

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("PUT /{id}", () => {
    updateUser(3, new_user);
  });
});

Next, we also want to check that any updated users have the correct roles attached, including instances where the roles were completely removed:

// -=-=- other code omitted here -=-=-

/**
 * Update a user and roles successfully
 */
const updateUserAndRoles = (id, user) => {
  it("should successfully update user ID '" + id + "' roles", (done) => {
    request(app)
      .put("/api/v1/users/" + id)
      .send(user)
      .expect(201)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("message");
        res.body.should.have.property("id");
        expect(res.body.id).equal(id)
        // Find user in list of all users
        request(app)
          .get("/api/v1/users")
          .expect(200)
          .end((err, res) => {
            if (err) return done(err);
            const foundUser = res.body.find(
              (u) => u.id === id,
            );
            // Handle case where user has no roles assigned
            const roles = user.roles || []
            foundUser.roles.should.be.an("array");
            foundUser.roles.should.have.lengthOf(roles.length);
            roles.forEach((role) => {
              expect(foundUser.roles.some((r) => r.id === role.id)).to.equal(true);
            })
            done();
          });
      });
  });
};


// -=-=- other code omitted here -=-=-

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("PUT /{id}", () => {
    updateUser(3, new_user);
    updateUserAndRoles(3, new_user);
    updateUserAndRoles(2, new_user_no_roles);
  });
});

We should also check that, if an update is sent without a username attribute, the existing username is left unchanged while the rest of the update still succeeds. This works because Sequelize ignores attributes whose value is undefined when updating a record. For this test, we can just create a new mock object with only roles and no username included.

// -=-=- other code omitted here -=-=-

// Update user structure with only roles
const update_user_only_roles = {
  roles: [
    {
      id: 6,
    },
    {
      id: 7,
    },
  ],
};

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("PUT /{id}", () => {
    updateUser(3, new_user);
    updateUserAndRoles(3, new_user);
    updateUserAndRoles(2, new_user_no_roles);
    updateUserAndRoles(1, update_user_only_roles);
  });
});

Finally, we should include a couple of tests to handle the situation where a duplicate username is provided, or where an invalid role is provided. These are nearly identical to the tests used in the create route earlier in this example:

// -=-=- other code omitted here -=-=-

/**
 * Fails to update user with a duplicate username
 */
const updateUserFailsOnDuplicateUsername = (id, user) => {
  it("should fail on duplicate username '" + user.username + "'", (done) => {
    request(app)
      .put("/api/v1/users/" + id)
      .send(user)
      .expect(422)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("error");
        res.body.should.have.property("errors");
        res.body.errors.should.be.an("array");
        // the error should be related to the username attribute
        expect(
          res.body.errors.some((e) => e.attribute === "username"),
        ).to.equal(true);
        done();
      });
  });
};

/**
 * Fails to update user with bad role ID
 */
const updateUserFailsOnInvalidRole = (id, user, role_id) => {
  it("should fail when invalid role id '" + role_id + "' is used", (done) => {
    // Create a copy of the user object
    const updated_user = { ...user };
    // Make a shallow copy of the roles array
    updated_user.roles = [... user.roles]
    // Add invalid role ID to user object
    updated_user.roles.push({
      id: role_id,
    });
    request(app)
      .put("/api/v1/users/" + id)
      .send(updated_user)
      .expect(422)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("error");
        // User with invalid roles should not be updated
        request(app)
          .get("/api/v1/users")
          .expect(200)
          .end((err, res) => {
            if (err) return done(err);
            expect(res.body.some((u) => u.username === updated_user.username)).to.equal(
              false,
            );
            done();
          });
      });
  });
};

// -=-=- other code omitted here -=-=-

// Update user structure with duplicate username
const update_user_duplicate_username = {
  username: "admin",
};

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("PUT /{id}", () => {
    updateUser(3, new_user);
    updateUserAndRoles(3, new_user);
    updateUserAndRoles(2, new_user_no_roles);
    updateUserAndRoles(1, update_user_only_roles);

    updateUserFailsOnDuplicateUsername(2, update_user_duplicate_username);
    updateUserFailsOnInvalidRole(4, new_user, 0);
    updateUserFailsOnInvalidRole(4, new_user, -1);
    updateUserFailsOnInvalidRole(4, new_user, 8);
    updateUserFailsOnInvalidRole(4, new_user, "test");
  })
});

There we go! We have a set of unit tests that cover most of the situations we can anticipate seeing with our route to update users. If we run all of these tests at this point, they should all pass:

    PUT /{id}
      ✔ should successfully update user ID '3' to 'test_user'
      ✔ should successfully update user ID '3' roles
      ✔ should successfully update user ID '2' roles
      ✔ should successfully update user ID '1' roles
      ✔ should fail on duplicate username 'admin'
      ✔ should fail when invalid role id '0' is used
      ✔ should fail when invalid role id '-1' is used
      ✔ should fail when invalid role id '8' is used
      ✔ should fail when invalid role id 'test' is used

Great! Now is a good time to lint, format, and then commit and push our work to GitHub before continuing.

Delete

YouTube Video

Delete Route

Finally, the last route we need to add to our users router is the delete route. This route is very simple - it will remove the user with the given user ID if it exists in the database:

// -=-=- other code omitted here -=-=-

/**
 * Delete a user
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /api/v1/users/{id}:
 *   delete:
 *     summary: delete user
 *     tags: [users]
 *     parameters:
 *       - in: path
 *         name: id
 *         required: true
 *         schema:
 *           type: integer
 *         description: user ID
 *     responses:
 *       200:
 *         $ref: '#/components/responses/Success'
 */
router.delete("/:id", async function (req, res, next) {
  try {
    const user = await User.findByPk(req.params.id)

    // if the user is not found, return an HTTP 404 not found status code
    if (user === null) {
      res.status(404).end();
    } else {
      await user.destroy();

      // Send the success message
      sendSuccess("User deleted!", req.params.id, 200, res);
    }
  } catch (error) {
    logger.error(error);
    res.status(500).end();
  }
});

// -=-=- other code omitted here -=-=-

Once again, we can test this route using the Open API documentation website, or with a quick request from the terminal, before moving on to unit testing.
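
This example assumes the development server is running locally on port 3000 (your port may differ) and deletes user ID 4:

$ curl -X DELETE http://localhost:3000/api/v1/users/4

If the user exists, we should get back the success message; a second identical request should return an HTTP 404 status.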

Unit Testing Delete Route

The unit tests for this route are similarly simple. We really only have two cases - either the user is found and successfully deleted, or the user cannot be found and an HTTP 404 response is returned.

// -=-=- other code omitted here -=-=-

/**
 * Delete a user successfully
 */
const deleteUser = (id) => {
  it("should successfully delete user ID '" + id, (done) => {
    request(app)
      .delete("/api/v1/users/" + id)
      .expect(200)
      .end((err, res) => {
        if (err) return done(err);
        res.body.should.be.an("object");
        res.body.should.have.property("message");
        res.body.should.have.property("id")
        expect(res.body.id).to.equal(String(id))
        // Ensure user is not found in list of users
        request(app)
          .get("/api/v1/users")
          .expect(200)
          .end((err, res) => {
            if (err) return done(err);
            expect(res.body.some((u) => u.id === id)).to.equal(false);
            done();
          });
      });
  });
};

/**
 * Fail to delete a missing user
 */
const deleteUserFailsInvalidId = (id) => {
  it("should fail to delete invalid user ID '" + id + "'", (done) => {
    request(app)
      .delete("/api/v1/users/" + id)
      .expect(404)
      .end((err) => {
        if (err) return done(err);
        done();
      });
  });
};

// -=-=- other code omitted here -=-=-

/**
 * Test /api/v1/users route
 */
describe("/api/v1/users", () => {
  // -=-=- other code omitted here -=-=-

  describe("DELETE /{id}", () => {
    deleteUser(4);
    deleteUserFailsInvalidId(0)
    deleteUserFailsInvalidId(-1)
    deleteUserFailsInvalidId(5)
    deleteUserFailsInvalidId("test")
  });
});

There we go! That will cover all of the unit tests for the users route. If we try to run all of our tests, we should see that they succeed!

    DELETE /{id}
      ✔ should successfully delete user ID '4'
      ✔ should fail to delete invalid user ID '0'
      ✔ should fail to delete invalid user ID '-1'
      ✔ should fail to delete invalid user ID '5'
      ✔ should fail to delete invalid user ID 'test'

All told, we wrote just 5 API routes (retrieve all, retrieve one, create, update, and delete) but 53 different unit tests to fully test those routes.

Now is a great time to lint, format, and then commit and push our work to GitHub.

In the next example, we’ll explore how to add authentication to our RESTful API.

Authentication

This example project builds on the previous RESTful API project by adding user authentication. This will ensure users are identified within the system and are only able to perform operations according to the roles assigned to their user accounts.

Project Deliverables

At the end of this example, we will have a project with the following features:

  1. An authentication system using Passport.js and CAS
  2. Valid JSON Web Tokens (JWTs) for authentication within the RESTful API
  3. Proper middleware to verify users have the correct role for each operation in the API

Prior Work

This project picks up right where the last one left off, so if you haven’t completed that one yet, go back and do that before starting this one.

Let’s get started!

Subsections of Authentication

Bypass Auth

YouTube Video

Authentication Libraries

There are many different authentication libraries and methods available for Node.js and Express. For this project, we will use the Passport.js library. It supports many different authentication strategies, and it is a very common way that authentication is handled within JavaScript applications.

For our application, we’ll end up using two strategies to authenticate our users:

  • A unique token strategy from the passport-unique-token library, which we’ll use to bypass authentication during testing and debugging
  • A CAS strategy, which we’ll use to authenticate real users when the application is deployed

Let’s first set up our unique token strategy, which allows us to test our authentication routes before setting up anything else.

Authentication Router

First, we’ll need to create a new route file at routes/auth.js to contain our authentication routes. We’ll start with this basic structure and work on filling in each method as we go.

/**
 * @file Auth router
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports router an Express router
 *
 * @swagger
 * tags:
 *   name: auth
 *   description: Authentication Routes
 * components:
 *   responses:
 *     AuthToken:
 *       description: authentication success
 *       content:
 *         application/json:
 *           schema:
 *             type: object
 *             required:
 *               - token
 *             properties:
 *               token:
 *                 type: string
 *                 description: a JWT for the user
 *             example:
 *               token: abcdefg12345
 */

// Import libraries
import express from "express";
import passport from "passport";

// Import configurations
import "../configs/auth.js";

// Create Express router
const router = express.Router();

/**
 * Authentication Response Handler
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 */
const authSuccess = function (req, res, next) {

};

/**
 * Bypass authentication for testing
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /auth/bypass:
 *   get:
 *     summary: bypass authentication for testing
 *     description: Bypasses CAS authentication for testing purposes
 *     tags: [auth]
 *     parameters:
 *       - in: query
 *         name: token
 *         required: true
 *         schema:
 *           type: string
 *         description: username
 *     responses:
 *       200:
 *         description: success
 */
router.get("/bypass", function (req, res, next) {

});

/**
 * CAS Authentication
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /auth/cas:
 *   get:
 *     summary: CAS authentication
 *     description:  CAS authentication for deployment
 *     tags: [auth]
 *     responses:
 *       200:
 *         description: success
 */
router.get("/cas", function (req, res, next) {

});

/**
 * Request JWT based on previous authentication
 *
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next middleware function
 *
 * @swagger
 * /auth/token:
 *   get:
 *     summary: request JWT 
 *     description: request JWT based on previous authentication
 *     tags: [auth]
 *     responses:
 *       200:
 *         $ref: '#/components/responses/AuthToken'
 */
router.get("/token", function (req, res, next) {

});

export default router;

This file includes a few items to take note of:

  • In the top-level Open API comment, we define a new AuthToken response that we’ll send to the user when they request a token.
  • We create three routes. The first two, /auth/bypass and /auth/cas, correspond to our two authentication strategies. The last one, /auth/token, will be used by our frontend to request a token to access the API.
  • Finally, we’ll build an authSuccess function to handle actually sending the response to the user

Before moving on, let’s go ahead and add this router to our app.js file along with the other routers:

// -=-=- other code omitted here -=-=-

// Import routers
import indexRouter from "./routes/index.js";
import apiRouter from "./routes/api.js";
import authRouter from "./routes/auth.js";

// -=-=- other code omitted here -=-=-

// Use routers
app.use("/", indexRouter);
app.use("/api", apiRouter);
app.use("/auth", authRouter);

// -=-=- other code omitted here -=-=-

We’ll come back to this file once we are ready to link up our authentication strategies.

Unique Token Authentication

Next, let’s install both passport and the passport-unique-token authentication strategy:

$ npm install passport passport-unique-token

We’ll configure that strategy in a new configs/auth.js file with the following content:

/**
 * @file Configuration information for Passport.js Authentication
 * @author Russell Feldhausen <russfeld@ksu.edu>
 */

// Import libraries
import passport from "passport";
import { UniqueTokenStrategy } from "passport-unique-token";

// Import models
import { User, Role } from "../models/models.js";

// Import logger
import logger from "./logger.js";

/**
 * Authenticate a user
 * 
 * @param {string} username the username to authenticate
 * @param {function} next the next middleware function
 */
const authenticateUser = function(username, next) {
  // Find user with the username
  User.findOne({ 
    attributes: ["id", "username"],
    include: {
      model: Role,
      as: "roles",
      attributes: ["id", "role"],
      through: {
        attributes: [],
      },
    },
    where: { username: username },
  })
  .then((user) => {
    // User not found
    if (user === null) {
      logger.debug("Login failed for user: " + username);
      return next(null, false);
    }

    // User authenticated
    logger.debug("Login succeeded for user: " + user.username);

    // Convert Sequelize object to plain JavaScript object
    user = JSON.parse(JSON.stringify(user))
    return next(null, user);
  });
}

// Bypass Authentication via Token
passport.use(new UniqueTokenStrategy(
  // verify callback function
  (token, next) => {
    return authenticateUser(token, next);
  }
))

// Default functions to serialize and deserialize a session
passport.serializeUser(function(user, done) {
  done(null, user);
});

passport.deserializeUser(function(user, done) {
  done(null, user);
});

In this file, we created an authenticateUser function that will look for a user based on a given username. If found, it will return that user by calling the next middleware function. Otherwise, it will call that function and provide false.

Below, we configure Passport.js using the passport.use function to define the various authentication strategies we want to use. In this case, we’ll start with the Unique Token Strategy, which uses a token provided in the request (by default, a parameter named token).

In addition, we need to implement some default functions to handle serializing and deserializing a user from a session. These functions don’t really have any content in our implementation; we just need to include the default code.

Finally, since Passport.js acts as a global object, we don’t even have to export anything from this file!

Testing Authentication

To test this authentication strategy, let’s modify routes/auth.js to use this strategy. We’ll update the /auth/bypass route and also add some temporary code to the authSuccess function:

// -=-=- other code omitted here -=-=-

// Import libraries
import express from "express";
import passport from "passport";

// Import configurations
import "../configs/auth.js";

// -=-=- other code omitted here -=-=-
const authSuccess = function (req, res, next) {
  res.json(req.user);
};

// -=-=- other code omitted here -=-=-
router.get("/bypass", passport.authenticate('token', {session: false}), authSuccess);

// -=-=- other code omitted here -=-=-

In the authSuccess function, right now we are just sending the content of req.user, which is set by Passport.js on a successful authentication (it is the value we returned when calling the next function in our authentication strategy earlier). We’ll come back to this when we implement JSON Web Tokens (JWTs) later in this tutorial.

The other major change is that now the /auth/bypass route calls the passport.authenticate method with the 'token' strategy specified. It also uses {session: false} as one of the options provided to Passport.js since we aren’t actually going to be using sessions. Finally, if that middleware is satisfied, it will call the authSuccess function to handle sending the response to the user. This takes advantage of the chaining that we can do in Express!

With all of that in place, we can test our server and see if it works:

$ npm run dev

Once the page loads, we want to navigate to the /auth/bypass?token=admin path to see if we can log in as the admin user. Notice that we are including a query parameter named token to include the username in the URL.

Successful Authentication Successful Authentication

There we go! We see that it successfully finds our admin user and returns data about that user, including the roles assigned. This is what we want to see. We can also test this by providing other usernames to make sure it is working.
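
The same check works from the terminal as well, assuming the development server is running locally on port 3000 (your port may differ):

$ curl "http://localhost:3000/auth/bypass?token=admin"

We should get back a JSON object describing the admin user and its roles.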

Securing Authentication

Of course, we don’t want to have this bypass authentication system available all the time in our application. In fact, we really only want to use it for testing and debugging; otherwise, our application will have a major security flaw! So, let’s add a new environment variable BYPASS_AUTH to our .env, .env.test and .env.example files. We should set it to true in the .env.test file, and for now we’ll have it enabled in our .env file as well, but this option should NEVER be enabled in a production setting.

# -=-=- other settings omitted here -=-=-
BYPASS_AUTH=true

With that setting in place, we can add it to our configs/auth.js file to only allow bypass authentication if that setting is enabled:

// -=-=- other code omitted here -=-=-

// Bypass Authentication via Token
passport.use(new UniqueTokenStrategy(
  // verify callback function
  (token, next) => {
    // Only allow token authentication when enabled
    if (process.env.BYPASS_AUTH === "true") {
      return authenticateUser(token, next);
    } else {
      return next(null, false);
    }
  }
))

Before moving on, we should test this setting both enabled and disabled to make sure it actually controls bypass authentication. We want to be absolutely sure it works as intended!

Disabled Authentication Disabled Authentication

Cookie Sessions

YouTube Video

One of the most common methods for keeping track of users after they are authenticated is by setting a cookie on their browser that is sent with each request. We’ve already explored this method earlier in this course, so let’s go ahead and configure cookie sessions for our application, storing them in our existing database.

We’ll start by installing both the express-session middleware and the connect-session-sequelize library that we can use to store our sessions in a Sequelize database:

$ npm install express-session connect-session-sequelize

Once those libraries are installed, we can create a configuration for sessions in a new configs/sessions.js file:

/**
 * @file Configuration for cookie sessions stored in Sequelize
 * @author Russell Feldhausen <russfeld@ksu.edu>
 * @exports sequelizeSession a Session instance configured for Sequelize
 */

// Import Libraries
import session from 'express-session'
import connectSession from 'connect-session-sequelize'

// Import Database
import database from './database.js'
import logger from './logger.js'

// Initialize Store
const SequelizeStore = connectSession(session.Store)
const store = new SequelizeStore({
    db: database
})

// Create tables in Sequelize
store.sync();

if (!process.env.SESSION_SECRET) {
    logger.error("Cookie session secret not set! Set a SESSION_SECRET environment variable.")
}

// Session configuration
const sequelizeSession = session({
    secret: process.env.SESSION_SECRET,
    store: store, 
    resave: false,
    proxy: true,
})

export default sequelizeSession;

This file loads our Sequelize database connection and initializes the Express session middleware and the Sequelize session store. We also have a quick sanity check that ensures a SESSION_SECRET environment variable is set; otherwise, an error will be logged. Finally, we export that session configuration to our application.

So, we’ll need to add a SESSION_SECRET environment variable to our .env, .env.test and .env.example files. This is a secret key used to secure our cookies and prevent them from being modified.

There are many ways to generate a secret key, but one of the simplest is to just use the built in functions in Node.js itself. We can launch the Node.js REPL environment by just running the node command in the terminal:

$ node

From there, we can use this line to get a random secret key:

> require('crypto').randomBytes(64).toString('hex')
Documenting Terminal Commands

Just like we use $ as the prompt for Linux terminal commands, the Node.js REPL environment uses > as its prompt, so we will include that in our documentation. You should not include that character in your command.

If done correctly, we’ll get a random string that we can use as our secret key!

Secret Key Secret Key
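
If we’d rather not open the REPL at all, the same snippet also works as a one-liner directly from the terminal:

$ node -e "console.log(require('crypto').randomBytes(64).toString('hex'))"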

We can include that key in our .env file. To help remember how to do this in the future, we can even include the Node.js command as a comment above that line:

# -=-=- other settings omitted here -=-=-
# require('crypto').randomBytes(64).toString('hex')
SESSION_SECRET='46a5fdfe16fa710867102d1f0dbd2329f2eae69be3ed56ca084d9e0ad....'

Finally, we can update our app.js file to use this session configuration:

// -=-=- other code omitted here -=-=-

// Import libraries
import compression from "compression";
import cookieParser from "cookie-parser";
import express from "express";
import helmet from "helmet";
import path from "path";
import swaggerUi from "swagger-ui-express";
import passport from "passport";

// Import configurations
import logger from "./configs/logger.js";
import openapi from "./configs/openapi.js";
import sessions from "./configs/sessions.js";

// -=-=- other code omitted here -=-=-

// Use libraries
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(helmet());
app.use(compression());
app.use(cookieParser());

// Use sessions
app.use(sessions);
app.use(passport.authenticate('session'));

// Use middlewares
app.use(requestLogger);

// -=-=- other code omitted here -=-=-

There we go! Now we can enable cookie sessions in Passport.js by removing the {session: false} setting in our /auth/bypass route in the routes/auth.js file:

// -=-=- other code omitted here -=-=-
router.get("/bypass", passport.authenticate('token'), authSuccess);

// -=-=- other code omitted here -=-=-

Now, when we navigate to that route and authenticate, we should see our application set a session cookie as part of the response.

Cookie Session Cookie Session

We can match the SID in the session cookie with the SID in the Sessions table in our database to confirm that it is working:

Cookie Session in Database Cookie Session in Database

From here, we can use these sessions throughout our application to track users as they make additional requests.
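
For example, any route registered after the session middleware can now inspect req.user to see who is logged in. This is just a hypothetical sketch - the /whoami route is not part of our application:

// Hypothetical route demonstrating session-based user lookup
router.get("/whoami", function (req, res) {
  if (req.user) {
    // Passport deserialized this user from the session cookie
    res.json(req.user);
  } else {
    res.status(401).end();
  }
});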

JSON Web Token

YouTube Video

JSON Web Tokens (JWT)

Now that we have a working authentication system, the next step is to configure a method to request a valid JSON Web Token, or JWT, that contains information about the authenticated user. We’ve already learned a bit about JWTs in this course, so we won’t cover too many of the details here.

To work with JWTs, we’ll need to install the jsonwebtoken package from NPM:

$ npm install jsonwebtoken

Next, we’ll need to create a secret key that we can use to sign our tokens. We’ll add this as the JWT_SECRET_KEY setting in our .env, .env.test and .env.example files. We can use the same method discussed on the previous page to generate a new random key:

# -=-=- other settings omitted here -=-=-
# require('crypto').randomBytes(64).toString('hex')
JWT_SECRET_KEY='46a5fdfe16fa710867102d1f0dbd2329f2eae69be3ed56ca084d9e0ad....'

Once we have the library and a key, we can easily create and sign a JWT in the /auth/token route in the routes/auth.js file:

// -=-=- other code omitted here -=-=-

// Import libraries
import express from "express";
import passport from "passport";
import jsonwebtoken from "jsonwebtoken"

// -=-=- other code omitted here -=-=-
router.get("/token", function (req, res, next) {
  // If user is logged in
  if (req.user) {
    const token = jsonwebtoken.sign(
      req.user,
      process.env.JWT_SECRET_KEY,
      {
        expiresIn: '6h'
      }
    )
    res.json({
      token: token
    })
  } else {
    // Send unauthorized response
    res.status(401).send()
  }
});

Now, when we visit the /auth/token URL on our working website (after logging in through the /auth/bypass route), we should receive a JWT as a response:

JWT Response JWT Response
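
We can reproduce that two-step flow from the terminal by saving the session cookie between requests. This assumes the development server is running locally on port 3000 (your port may differ):

# Log in through the bypass route and store the session cookie
$ curl -c cookies.txt "http://localhost:3000/auth/bypass?token=admin"

# Use the stored cookie to request a JWT
$ curl -b cookies.txt http://localhost:3000/auth/token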

Of course, while that data may seem unreadable, we already know that JWTs are Base64 encoded, so we can easily view the content of the token. Thankfully, there are many great tools we can use to debug our tokens, such as Token.dev, to confirm that they are working correctly.

JWT Debugger JWT Debugger

Do Not Share Live Keys!

While sites like this will also help you confirm that your JWTs are properly signed by asking for your secret key, you SHOULD NOT share a secret key for a live production application with these sites. There is always a chance it has been compromised!
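
If we want to avoid pasting tokens into a third-party site entirely, we can decode the payload locally with a quick Node.js one-liner. Note that this only decodes the payload; it does not verify the signature:

$ node -e "console.log(Buffer.from(process.argv[1].split('.')[1], 'base64url').toString())" <paste token here>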

Vue.js Starter Project

This example project builds on the previous RESTful API project by scaffolding a frontend application using Vue.js. This will become the basis for a full frontend for the application over the next few projects.

Project Deliverables

At the end of this example, we will have a project with the following features:

Prior Work

This project picks up right where the last one left off, so if you haven’t completed that one yet, go back and do that before starting this one.

Let’s get started!

Vue.js CRUD App

This example project builds on the previous Vue.js starter project by scaffolding a CRUD frontend for the basic users and roles tables.

Project Deliverables

At the end of this example, we will have a project with the following features:

Prior Work

This project picks up right where the last one left off, so if you haven’t completed that one yet, go back and do that before starting this one.

Let’s get started!

Vue.js Components

This example project builds on the previous Vue.js CRUD app by building a few custom components to view and update data in the application.

Project Deliverables

At the end of this example, we will have a project with the following features:

Prior Work

This project picks up right where the last one left off, so if you haven’t completed that one yet, go back and do that before starting this one.

Let’s get started!