Automated Testing with REST Example Part 2

This is part two of a series about developing and testing a simple REST interface.  Code can be cloned or viewed from https://github.com/tscott8706/restexample.

I encourage you to follow along in this series if you want to learn any of the following:

  • TDD or unit testing in general
  • VIM/screen
  • Docker
  • REST implementation using Python
  • My normal development process – this is the process I use to develop code, as well as my thought process!

Docker… for Everything!

Docker is used for the following in this REST project:

  • Launching the application
  • Unit testing the application
  • System testing the application

[Image: a whale.  Ok, it’s not the Docker whale, but it is a whale.  Close enough?]

There are four files related to Docker that are used:

  • Dockerfile
  • docker-compose.yml
  • docker-compose-unit-test.yml
  • docker-compose-system-test.yml

The way to use each of these has been documented in the readme of the project.  This post is going to look into each of those files and what they do.

All of the docker-compose files depend on the Dockerfile.  At the time of this writing, here are the contents of the Dockerfile:
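
(The repository has the authoritative version; the sketch below is reconstructed from the steps described next, so the base image tag and the exact package names are assumptions.)

    # Sketch only -- reconstructed from the description below; the real file may differ.
    FROM python:3                               # 1. start from a Python image from the Internet
    ADD . /restexample                          # 2. copy the local code into /restexample
    WORKDIR /restexample
    RUN pip install .                           # 3. install restexample as a Python package
    RUN pip install nose nose-watch coverage    # 4. install the unit-testing packages
    EXPOSE 5000                                 # 5. the port the host connects through
    CMD ["restexample"]                         # 6. run restexample by default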

  1. This starts with a Python image for Docker from the Internet (the FROM statement).
  2. It then copies the code from the local directory into the container under the /restexample directory (the ADD command).
  3. The restexample itself is installed as a Python package.
  4. Unit testing packages are installed (I was a bit lazy and decided to use the same Docker image to both run and test the code rather than split it into two images).
  5. Port 5000 is used so the host machine can connect to the container through that port.
  6. When the image is launched, it runs restexample by default (the Python package, when installed, registers ‘restexample’ as a command that can be run from the command line).

Now let’s look at what each of the docker-compose files does.

Launching the Application

docker-compose.yml is used to start the REST application.  The contents of that file at the time of this writing:
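
(Again, the repository holds the real file; this sketch is pieced together from the description below, so the compose version, the network and volume names, and the subnet are assumptions.)

    # Sketch only -- names and exact syntax are assumptions; see the repo for the real file.
    version: '2'
    services:
      restexample:
        build: .
        image: restexample:latest
        volumes:
          - .:/restexample           # share the project code into the container
        ports:
          - "5000:5000"              # host port : container port
        networks:
          restnet:
            ipv4_address: 192.168.1.1
      mongodb:
        image: mongo
        volumes:
          - mongodata:/data/db       # persist the database in a Docker volume
        ports:
          - "27017:27017"
        networks:
          restnet:
            ipv4_address: 192.168.1.2
    volumes:
      mongodata:
    networks:
      restnet:
        ipam:
          config:
            - subnet: 192.168.1.0/24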

This file starts two services:

  1. restexample
    • The web server application.
    • Uses the Dockerfile to build the image for this service and names it restexample:latest (the build and image commands).
    • Shares all the code from the project into the container, so changes made locally will show up inside this container without restarting it (the volumes command).
    • Maps the port inside the container, 5000, to the host’s port 5000 (the ports command).  FYI, the left port is the host’s port and the right port is the container’s port.
    • Assigns the IP address 192.168.1.1.  So to access the web server from the host, you would use 192.168.1.1:5000.
  2. mongodb
    • The database that restexample uses to create/read/update/delete person objects.
    • Uses the mongo image from the Internet (I can tell it’s not built locally because there is no build command).
    • Maps the database’s /data/db directory to a Docker volume.  I can tell it’s a Docker volume because of the volumes section near the end of the docker-compose file.
    • Exposes port 27017 from the container to the same port on the host.
    • Is assigned IP 192.168.1.2.

By running the appropriate docker-compose command (see the readme file in the project), this REST application can be launched very easily.
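
(For reference, with the default file name the launch boils down to something like the following; the readme is the authoritative source.)

    # assumed invocation; --build rebuilds the restexample image first
    docker-compose up --build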

Unit Testing the Application


I’m a big fan of unit testing and TDD.  So before I wrote the first line of code for this project, I determined how to test it and put that test setup inside a container.  At the time of this writing, the contents of docker-compose-unit-test.yml are the following:
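
(As before, this is a sketch reconstructed from the bullets below; the service name and compose version are assumptions, and it assumes the restexample image has already been built via docker-compose.yml.)

    # Sketch only -- reconstructed from the description below.
    version: '2'
    services:
      unittests:
        image: restexample:latest
        volumes:
          - .:/restexample          # mount the project code into the container
        working_dir: /restexample   # change to that directory...
        command: nosetests          # ...and run the tests

Per the project readme, running it would be something along the lines of docker-compose -f docker-compose-unit-test.yml up.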

This one is pretty easy actually.

  • I start the restexample image
  • Don’t give it a custom network, so it falls back to Docker’s default bridge network
  • Take all the code in the project and put it in the container under /restexample
  • Change to that directory I just mapped in there and run nosetests

Unit test parameters are found in setup.cfg under the nosetests header, which looks like the following at the time of this writing:
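
(Sketch only; the option names below are the standard nose and nose-watch spellings that match the description, but the repository’s setup.cfg is the authoritative version.)

    [nosetests]
    # allow tests in files marked executable
    exe = 1
    # show stdout from code and test cases
    nocapture = 1
    # coverage: branch coverage, HTML report, include files no test touches,
    # and limit the report to the restexample package
    with-coverage = 1
    cover-branches = 1
    cover-html = 1
    cover-inclusive = 1
    cover-package = restexample
    # nose-watch: keep re-running the tests whenever a file changes
    with-watch = 1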

That config does the following:

  • Allow tests to be executable files (I’m not sure this is required here, but I’ve hit cases on Windows before where it was needed, since Windows treats all Linux files as executable).
  • nocapture shows the stdout output.  So if I print() something in code or in a test case, it shows up in the console output.
  • Turn on coverage, show coverage on branches, and put it in an HTML report.
  • For coverage, show files that may not be hit by any tests (cover-inclusive).
  • Only show coverage for the restexample package.
  • And my favorite, with-watch.  This nose plugin makes nosetests continuously monitor the file system for changes, so once you launch this docker-compose file, it re-runs nosetests every time a file gets saved (whether it is source code or test code).

I actually use nose-watch and screen (a Linux application) together for extremely quick feedback on my source and test code.  This will be the topic of another post.  For now, just know that with-watch makes tests run automatically after every file change event.

System Testing the Application


I did not start with this docker-compose file at the very beginning of the project; it did not really make sense at first.  Once I had the server up with one resource I could post to and get from (the person object), I started testing it manually.  It did not make sense to keep doing that by hand, so I searched for ways to do it automatically.  It just so happens there is a testing suite out there, pyresttest, built for testing REST interfaces (full disclosure: at the time of this writing I still don’t know how to use it properly, but I plan to learn it in the near future.  For that reason, the system test currently fails; fixing it is on the to-do list).

Let’s look at the current docker-compose-system-test.yml file:
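
(Sketch of the added service only; the restexample and mongodb services, plus the networks and volumes sections, would match docker-compose.yml above.  The image name, the test directory path, and the extra IP address are assumptions.)

    # Sketch only -- just the service added on top of docker-compose.yml.
      systemtests:
        image: pyresttest:latest          # image name assumed
        volumes:
          - ./test:/test                  # mount the project's test directory
        networks:
          restnet:
            ipv4_address: 192.168.1.3     # address assumed
        # the image's entrypoint is assumed to invoke pyresttest, so the command
        # just supplies its arguments: the REST API address and the test config
        command: ["http://192.168.1.1:5000", "/test/system-test.yml"]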

This file is almost identical to the docker-compose.yml file (the one that launches the regular application).  This makes sense, because in order to run a system test, I need to run the application itself.  The only addition is the systemtests service.

This service takes the test directory from the project and runs the pyresttest command (this docker-compose file doesn’t show it, but you can see that if you inspect the pyresttest Dockerfile online).  The command here just passes arguments, which are the REST API’s address and a config file indicating what to test.

The result of running this systemtests service is console output showing the passes and failures of all the tests defined in the system-test.yml file.
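
(For a rough idea of what that config looks like, here is a sketch in pyresttest’s YAML format; the endpoint paths and body fields are placeholders, since I haven’t nailed down the real tests yet.)

    # Sketch only -- endpoint paths and fields are placeholders.
    ---
    - config:
        - testset: "Person API smoke tests"
    - test:
        - name: "List people"
        - url: "/people"
    - test:
        - name: "Create a person"
        - url: "/people"
        - method: "POST"
        - body: '{"first_name": "Ada", "last_name": "Lovelace"}'
        - headers: {'Content-Type': 'application/json'}
        - expected_status: [200, 201]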

Conclusion

There are two things you can learn from this post:

  • Tests are very important.  If you do your testing up front (using TDD and, a little later on, system testing), you don’t have the manual drudgery at the end that everyone likes to skip.
  • Docker is a very powerful tool for both launching and testing applications.

Next time we will look at how I used the unit testing container and screen to do TDD as I wrote this code.
