Put on your aprons, people – it’s recipe time. Today we’ll be cooking up a delicious batch of automated API tests. For this recipe, you’ll need a buttery Postman base, a squeeze from a random user data API, and a CI system to bake it all in (I used Azure DevOps). The result is a set of golden-brown integration tests that use realistic fake data and run in the cloud on a schedule of your choosing. Sound like the sort of recipe you’d like to master? Read on…
Testing APIs can be hard (to do well)
In my day job, I work for a company that is making an app (in fact, we’ve just launched it). Like a lot of apps, we glue the frontend to the backend with an API. This makes the API quite important. Certainly, the sort of thing that we’d prefer to work reliably.
And yet, APIs like ours are kinda awkward to test. With databases to the left of them and networks to the right, they’re stuck in the middle (with you 🎵). I mean, yes, if your API implements something like MVC, you could unit test your controller. And yes, you could mock all of your dependencies (have fun with that!), but really those dependencies are important too. Surely, for something as central to our application as this, we want to keep our testing as close to reality as possible.

[Image shamelessly nicked from @q-leap on Twitter. Thanks guys!]
So what are we left with? Making actual calls to our API and verifying that the responses are correct. For read-only queries, we can even do this against our production API, although in reality this tends to get reduced to simple uptime monitoring.
Today, however, we’re going to look at how we can perform calls that will cause changes to be written to the database, so we’re going to be targeting our staging environment. Another reason for targeting our staging (aka “testing” or “regression”) environment is that it means we can usually catch issues before they hit production.
Testing the user onboarding process
Writing tests for a lot of the API’s endpoints was trivial. For most endpoints, it’s possible to recycle the same request over and over and have it work every time without needing to change anything. These calls can go straight into a Postman collection, complete with tests written in JavaScript that are automatically executed when the request is made. For example:
```javascript
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response time is less than 200ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(200);
});

pm.test("Item not yet completed", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.completed).to.eql(false);
});
```
As they say here in Australia: too easy.
The difficult bits are what I like to think of as “single-use” requests: those where the act of making the request changes the state of the application in such a way that the same request cannot be made again. The example normally given is that of deletion, but the same effect also occurs during user registration – the same user details (e.g. email address) can’t be used more than once.
When I’ve come up against this in the past, I’ve tended to use timestamps and randomly generated hashes to keep things from colliding. But in this situation, I wanted to keep everything as organic as possible, which meant using realistic values for my fake users’ names, email addresses and so forth.
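For comparison, here’s roughly what that older approach looks like as a Postman pre-request script – a minimal sketch, with illustrative variable names and an example.com address:

```javascript
// Pre-request script: build collision-free (but obviously fake) user
// details from a timestamp plus a random suffix. Names are illustrative.
var suffix = Date.now().toString(36) + Math.random().toString(36).slice(2, 8);

pm.environment.set("username", "testuser_" + suffix);
pm.environment.set("email", "testuser_" + suffix + "@example.com");
```

It works, but an address like testuser_k2j4x9@example.com is never going to exercise your validation logic the way a real name will.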
Obtaining realistic fake user data
So here’s the problem: I need a steady stream of fake, but realistic user data to populate my test requests. Given that I’m running my tests in a tool designed to make calls to RESTful APIs, it sure would be neat if there were an API I could call that would provide me with some realistic fake user data…
As luck would have it, the venerable (since 2013!) randomuser.me does exactly that. Doing a simple GET on https://randomuser.me/api/ gives us a chonking great JSON object full of fake user data. Go try it out!
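The response looks something like this – heavily truncated, and with illustrative values rather than real output (every call returns different data):

```json
{
    "results": [
        {
            "name": { "title": "Ms", "first": "Jane", "last": "Doe" },
            "email": "jane.doe@example.com",
            "login": { "username": "purplebear123", "password": "hunter2" },
            "dob": { "date": "1985-03-14T09:21:33.000Z", "age": 38 }
        }
    ]
}
```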
OK, so we’ve got our user data… What now? Well, we have to be sneaky and pretend we’re writing tests, but with a few lines of JavaScript we can extract some values from the JSON response into some (Postman) environment variables:
```javascript
var jsonData = pm.response.json();

pm.environment.set("email", jsonData.results[0].email);
pm.environment.set("username", jsonData.results[0].login.username);
pm.environment.set("password", jsonData.results[0].login.password);

// we can do stuff to the data here to ensure it fits our needs
// for example, perhaps we want to ensure the user is over 18
var age = parseInt(jsonData.results[0].dob.age, 10);
if (age < 18) age = age + 18;
pm.environment.set("age", age);
```
And once we’ve got the data in variables, we’re on the home straight. We can now refer to these variables using Postman’s handlebars-esque syntax in the request itself. For example, it could be something like this:
```json
{
    "username": "{{username}}",
    "password": "{{password}}",
    "email": "{{email}}",
    "age": "{{age}}"
}
```
Awesome! Now, all we need to do is to make the two calls in sequence. Get the fake data, then use it in our test request. For this, we’ll use one of Postman’s killer features: the Collection Runner. As you might have guessed from its name, the Collection Runner will sequentially issue each of the requests in a collection (or a folder within a collection) and execute their tests.
Here I’m using httpbin.org to echo my requests instead of pointing it at our actual API, but you get the idea. And if you’d like to play along at home, I’ve shared the demo collection that I wrote for this post. You should be able to run it yourself by clicking on the button below.
(As an aside, this collection uses three free APIs: JSONPlaceholder for the basic example of tests, randomuser.me for the fake user data, and httpbin.org for echoing my requests. Isn’t it great that we can throw demos together using free APIs?)
In real life, we can (and do) expand this pattern to check for all sorts of things. For instance:
- Are our password requirements being enforced properly?
- Once registered, can we immediately log in?
- Do we return an error if someone tries to register with an existing email address?
- Does the email validation work as expected?
We also include some more basic checks, like whether the response code is as expected and whether the response comes back within a reasonable time. Your requirements will obviously be unique to you, so (as with all testing) take some time to think properly about what behaviour you should be testing. To give you a flavour, the duplicate-email check from the list above might look something like the sketch below.
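This assumes a second registration request that re-sends the same {{email}} value; the 409 status code and the error field are assumptions, so substitute whatever your API actually returns for a duplicate registration.

```javascript
// Tests tab of a second registration request that reuses {{email}}.
// The 409 Conflict status and the "error" field are assumptions —
// check what your own API returns for a duplicate email address.
pm.test("Duplicate email is rejected", function () {
    pm.response.to.have.status(409);
});

pm.test("Error message mentions the email", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.error).to.include("email");
});
```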
Dance little robot, dance!
Whilst it’s pretty great that we’ve got all these tests and we run them whenever we like, the whole point was to have this happen frequently and automatically. Ideally, we should be able to pretty much forget about it until it helpfully alerts us to something having broken. To achieve automation nirvana, we’re going to create a new pipeline in our CI system.

[Photo courtesy of BMW, apparently.]
But how are we going to run our collection in our CI system? The Collection Runner is a UI built in Electron – hardly something we’d want to have to coerce into playing nicely on a build agent. Luckily, the good people at Postman have also released a CLI tool called Newman which will happily run our collections.
The first step is to export our collection and (if needed) our environment file(s). These will need to be accessible to your CI system, so either include them in your main repo somewhere or create a new one for this purpose.
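Before involving the CI system at all, it’s worth a quick local sanity check – something like this, where the file names are whatever you exported:

```bash
# Install Newman globally, then run the exported collection against
# the exported environment file. File names here are placeholders.
npm install -g newman
newman run collection.json -e environment.json
```

If that passes locally, any failures in CI are likely to be about the pipeline rather than the collection.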
The next steps all happen in your CI system. We’re effectively creating a new pipeline that will install Newman and run it on our collection, optionally against a specified environment. Since I’m an Azure DevOps user, I’ll be focusing on that, but the principles should translate to pretty much any system.
- First, we need to install Newman. We’ll use npm for this, which in Azure DevOps means adding an npm task to the default agent job. We’ll specify that we want to run npm install with newman -g as our arguments. This means Newman will be installed globally.
- Next, we need to run Newman with the collection (and environment) files we exported from Postman. This means adding a “Command Line” task with the following command:
newman run "$(Build.SourcesDirectory)\YourPath\collection.json" -e $(Build.SourcesDirectory)\YourPath\environment.json --disable-unicode
Note the last argument in that command – although Newman produces some very pretty unicode output by default, it’s easier to work with plain old ASCII in Azure DevOps.
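I set this up using the classic (UI-based) editor, but if you’d rather use a YAML pipeline, a rough equivalent might look like this – the paths, pool, branch name and cron expression are all assumptions to adjust for your own setup:

```yaml
# azure-pipelines.yml — a sketch; paths, pool, branch and schedule are assumptions
trigger: none            # no CI builds; we only want the schedule

schedules:
- cron: "0 */2 * * *"    # every two hours (see the note on scheduling below)
  displayName: Scheduled API tests
  branches:
    include:
    - master
  always: true           # run even if nothing in the repo has changed

pool:
  vmImage: ubuntu-latest

steps:
- script: npm install -g newman
  displayName: Install Newman

- script: >
    newman run "$(Build.SourcesDirectory)/YourPath/collection.json"
    -e "$(Build.SourcesDirectory)/YourPath/environment.json"
    --disable-unicode
  displayName: Run Postman collection
```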
If you’re working in the classic Azure DevOps editor, the end result is a pipeline with just those two tasks: the npm install, followed by the command line step.
You’re now ready to run it. If everything goes well, you should get a results page with a big green tick.
All that’s left to do is to schedule the thing and make sure the right people get notified when it fails. You’ll want to pick a schedule that fits with your development processes, bearing in mind that this will occupy one of your build agents for a (pretty short) time. At work, we’ve found that running it every couple of hours is enough to give us an early warning of issues.
Oh, and if you were hoping there would actually be dancing robots, check out this video.
Final verdict
I hope you’ll agree that the end result is pretty tasty. We’ve got realistic data being thrown at our staging API on the regular and we get shouty emails whenever something breaks. Besides occasionally clearing out the staging DB — which we do anyway for other reasons — we don’t have to worry about ensuring that any state is reset. Once set up, it all Just Works™ with very little maintenance.
One of my favourite things about this setup, however, is that, by virtue of the data being so realistic, it tests things that it would never have occurred to me to test. A great example of this is unicode characters in email addresses. When a user like this one was sent back to us from randomuser.me, we discovered that our email validation was too strict and was rejecting valid addresses. What I love about this example is that it flushed out a problematic use case we hadn’t considered, without us having to troubleshoot an issue with a real user.
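If you wanted to probe this case deliberately rather than waiting for randomuser.me to surprise you, a sketch might look like the following – the address is illustrative, and the 200 status is an assumption (your API might return 201 on a successful registration):

```javascript
// In the pre-request script: deliberately swap in an email address
// containing non-ASCII characters (the address itself is made up).
pm.environment.set("email", "jörgen.håkansson@example.com");

// In the Tests tab of the registration request:
pm.test("Unicode email is accepted", function () {
    pm.response.to.have.status(200);
});
```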
All in all, I’m pretty happy with this setup, but in writing this post I’ve had to banish some feelings about “cheating”. You see, it feels a little bit like I’ve not had to do very much work for this – I’ve just cobbled together a load of other people’s tools. So I have to remind myself: sometimes the best engineering isn’t something you’ve built from scratch, but something you’ve assembled from top-quality ingredients. It’s the Unix philosophy in action – each tool does one thing well and plays nicely with others.
Comments

Teis: Awesome article dude! Would it be possible to place the API tests in a release pipeline, so that the tests are executed after the build has been released to the staging server?
@Teis – Yes, you could absolutely do that, and it would be a great way to automate some of the tests on an environment other than prod.