Node Reference - Integration Tests
Prerequisites
This article builds on the prior article: Node Reference - Validation.
Add Integration or Smoke Tests
So far, our testing strategy has consisted of only Unit Testing{:target="_blank"}, testing each module in isolation. While this type of testing covers most business logic, it does leave some gaps. It does not test that our modules interact correctly with each other. For example, it does not test if the parameters passed by one module are what is expected by another module. Unit tests also do not test that our environment is configured correctly or that required environment variables are set properly. They do not test that our application, as deployed, can reach the AWS services upon which it depends.
What we need is a suite of tests that test the integration between the components of our application and the components in our environment. To fulfill this, we are going to create a suite of tests called "Integration Tests". We can run these tests during local development, as well as immediately after each deployment, in order to test that the application components have been assembled correctly. In this respect, these tests may also be considered "Smoke Tests".
What exactly to test will, of course, vary from application to application, but a few Integration Testing rules of thumb can help guide us:
- We should test that the application is "up" and responding to requests. This tests that Koa{:target="_blank"} is configured correctly and started properly.
- We should hit real URLs (vs. mock URLs) to test that routing is working correctly.
- If we are dependent on another component (either a third party component in AWS or one within our organization), we should invoke that service at least once to ensure that we can reach it across the network and that our credentials are still valid.
- Our goal is not to test every business "requirement". That functionality should already be covered by unit tests. If business logic, such as validation or calculations, is not being tested by unit tests, then it is worth re-examining the design of the system to ensure that the functionality of each component is cohesive enough{:target="_blank"} to be unit tested.
To test our application, we need to make requests to it as if we are a client application.
In order to make these requests, it would help to have a simple HTTP client available.
NodeJS{:target="_blank"} does not support the fetch api{:target="_blank"} natively, so we can install node-fetch{:target="_blank"} by running the following npm command:
npm install --save-dev node-fetch
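node-fetch exposes the same promise-based API as the browser fetch function. As a quick, hypothetical sanity check (assuming something is listening on http://localhost:3000), usage looks like this:
const fetch = require('node-fetch');

// Hypothetical sanity check (not part of the project): request a URL and log the response.
async function check() {
    const response = await fetch('http://localhost:3000/hello');
    console.log(response.status, await response.json());
}

check().catch(console.error);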
We will also need a place to put our integration tests, so create an integration-tests
folder in the root of your project.
The simplest place to start testing is our "hello" endpoint mapped to the URL root because invoking this endpoint does not require authentication.
We can use Jasmine{:target="_blank"} to execute our Integration Tests just like our Unit Tests.
We simply need to configure it to look in a different place for these tests.
Start by creating a configuration file at integration-tests/jasmine.it.json
with these contents:
{
    "spec_files": [
        "integration-tests/**/*.it.js"
    ]
}
We can invoke Jasmine with this configuration by adding the following entry to the "scripts" section of your package.json:
"integration-test": "JASMINE_CONFIG_PATH=integration-tests/jasmine.it.json jasmine"
Create a dummy test file integration-tests/hello.it.js:
describe('hello', function() {
});
And run npm run integration-test
to ensure that Jasmine is locating tests properly.
If that works, you should see output like this:
Randomized with seed 78032
Started
No specs found
Finished in 0.002 seconds
Incomplete: No specs found
Next, replace the contents of integration-tests/hello.it.js
with the following test:
const fetch = require('node-fetch');

describe('hello', function() {
    beforeAll(async function() {
        this.baseURL = process.env.BASE_URL || 'http://localhost:3000/';
        this.response = await fetch(`${this.baseURL}hello`);
        this.responseBody = this.response.ok && await this.response.json();
        console.log('Response ', this.responseBody);
    });
    it('should return an ok status code', function() {
        expect(this.response.status).toEqual(200);
    });
    it('should return an object', function () {
        expect(this.responseBody).toEqual(jasmine.any(Object));
    });
    it('should return the correct message', function (){
        expect(this.responseBody.message).toEqual('hello');
    });
});
Ensure the BASE_URL environment variable is set, or start your local server with the required environment variables:
export PRODUCTS_TABLE_NAME=$(aws cloudformation describe-stacks \
--stack-name ProductService-DEV \
--query 'Stacks[0].Outputs[?OutputKey==`ProductsTable`].OutputValue' \
--output text)
export USER_POOL_ID=$(aws cloudformation describe-stacks \
--stack-name Cognito \
--query 'Stacks[0].Outputs[?OutputKey==`UserPoolId`].OutputValue' \
--output text)
export AWS_REGION="us-east-1"
npm start
Then run npm run integration-test
to ensure the endpoint works.
We now have a local suite of integration tests that we can run manually. In order for these to be effective as smoke tests, we need to run them automatically after each deployment of the application. To do that, we need to integrate these tests into the deploy process.
Because CodeBuild{:target="_blank"} can run arbitrary commands inside of a Docker container, we can leverage it to execute our tests as part of the deploy.
To start, we'll create a second BuildSpec{:target="_blank"} that executes our tests inside our Docker container.
Create an integration-tests/integration.buildspec.yml
file with these contents:
version: 0.2
env:
  variables: {}
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      # Load the url of the image we want to run
      - export RELEASE_IMAGE_URL=$(cat RELEASE_IMAGE_URL.txt)
  build:
    commands:
      - echo "About to exec $RELEASE_IMAGE_URL to $BASE_URL"
      - |
        docker run \
          -e BASE_URL \
          $RELEASE_IMAGE_URL npm run integration-test
We have to tell Docker explicitly what version of our Docker image to pull.
If we didn't, then we could end up pulling the image that was just built and not the one that was just deployed (if we have multiple commits in flight).
Currently, the only way to pass information from one CodePipeline step to another is via artifact files.
We could extract the "Image" property from our CloudFormation parameter files (e.g. parameters/dev.params.json), but doing so would be fragile and surprisingly complicated without installing additional tools.
Instead, we can create a text file that contains the URL of the Docker image we want to execute (RELEASE_IMAGE_URL.txt) by adding a line to our main buildspec.yml.
We also want to add our integration.buildspec.yml file to our build output artifacts so it is available during the integration test phase.
Modify the main buildspec.yml
file to look like this:
version: 0.2
env:
  variables: {}
phases:
  pre_build:
    commands:
      - export RELEASE_IMAGE_URL="$DOCKER_IMAGE_URL:$CODEBUILD_RESOLVED_SOURCE_VERSION"
  build:
    commands:
      - docker build --tag "$RELEASE_IMAGE_URL" .
      - sed --in-place='bak' --expression="s|RELEASE_IMAGE_URL|${RELEASE_IMAGE_URL}|" parameters/*.params.json
      - echo $RELEASE_IMAGE_URL > RELEASE_IMAGE_URL.txt
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - docker push "$RELEASE_IMAGE_URL"
artifacts:
  discard-paths: yes
  files:
    - "cloudformation.template.yml"
    - "RELEASE_IMAGE_URL.txt"
    - "parameters/*"
    - "integration-tests/integration.buildspec.yml"
Check this file into source control.
We need to add a second CodeBuild Project{:target="_blank"} to our pipeline.template.yml.
Unlike our Docker build project, this project needs to specify a BuildSpec{:target="_blank"} location that points to integration.buildspec.yml
since the location is not the default.
Add this as an additional resource in pipeline.template.yml:
IntegrationTest:
  Type: AWS::CodeBuild::Project
  DependsOn:
    - PipelineRole
  Properties:
    ServiceRole: !GetAtt PipelineRole.Arn
    Source:
      Type: CODEPIPELINE
      BuildSpec: integration.buildspec.yml
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/docker:17.09.0
      EnvironmentVariables:
        - Name: BASE_URL
          Value: 'https://products.example.com/'
    Artifacts:
      Type: CODEPIPELINE
Next we have to add a step to our "Deploy_DEV" stage of our pipeline.
Update pipeline.template.yml
to add this section into the "Actions" array of the "Deploy_DEV" stage:
- Name: IntegrationTest
  RunOrder: 2
  ActionTypeId:
    Category: Test
    Owner: AWS
    Provider: CodeBuild
    Version: 1
  InputArtifacts:
    - Name: buildResults
  Configuration:
    ProjectName: !Ref IntegrationTest
Notice that the "RunOrder" parameter is set to "2". This tells CodePipeline to run this action after the deployment (which has a "RunOrder" of "1"). If you have multiple actions that can occur in parallel, they can share the same RunOrder. If actions need to run sequentially (like here), simply give them increasing RunOrder values.
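For illustration only (the action names below are hypothetical and only the ordering-relevant fields are shown), a stage could mix sequential and parallel actions like this:
Actions:
  - Name: Deploy            # RunOrder 1 runs first
    RunOrder: 1
  - Name: IntegrationTest   # runs after the deploy completes
    RunOrder: 2
  - Name: LoadTest          # hypothetical: same RunOrder, runs in parallel with IntegrationTest
    RunOrder: 2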
Finally, verify your parameter files (like parameters/dev.params.json
) from Node Reference - CodePipeline look something like this:
{
    "Parameters": {
        "VpcId": "vpc-65a65***",
        "SubnetIds": "subnet-7d6d***,subnet-d897f***,subnet-ef42***,subnet-1fd8***",
        "Subdomain": "products",
        "BaseDomain": "example.com",
        "Image": "RELEASE_IMAGE_URL"
    }
}
Update the pipeline stack and you should see the new action run.
aws cloudformation deploy \
--stack-name=ProductService-Pipeline \
--template-file=pipeline.template.yml \
--capabilities CAPABILITY_IAM
Smoke testing secured endpoints
Testing our "hello" endpoint doesn't provide us a lot of value since AWS is already hitting that endpoint as part of the Application Load Balancer health checks{:target="_blank"}.
What we really need is the ability to exercise all of the routes in our application, including the secured routes. In order to do that, we will need to get credentials to the smoke tests in a secure way.
Any OpenID Connect compliant identity server (like AWS Cognito{:target="_blank"}) supports the concept of a "client credentials" flow.
In this flow, a client (a.k.a. "service account") has a client_id and a client_secret.
These values are presented directly to the identity provider and exchanged for a JWT Bearer token.
This token is structured the same way as a token obtained by an end user, so the backend service can validate and handle both consistently.
An important advantage of this structure is that the backend service never sees the client_secret, and the token it does see has an expiration date.
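To make the exchange concrete, here is a rough curl sketch of the request we will later make from JavaScript (assuming CLIENT_ID, CLIENT_SECRET and TOKEN_ENDPOINT are already set in your shell):
# Sketch: exchange the client credentials for a token; the JSON response contains an access_token field
curl -s -X POST "$TOKEN_ENDPOINT" \
  -H "Authorization: Basic $(echo -n "${CLIENT_ID}:${CLIENT_SECRET}" | base64 | tr -d '\n')" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials"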
Unfortunately, because a client_secret needs to be protected, we cannot simply check it into source control like our other CloudFormation template parameters.
We could store the client_secret in another service and pull it at runtime, but that would require access to that service to be secured, so we have only moved the problem.
Instead, if we encrypt the client_secret with a secure key, we can check the encrypted value into source control.
To do this, we need a secure encryption key, which is where the AWS Key Management Service{:target="_blank"} comes in.
(AWS Secrets Manager{:target="_blank"} is an alternative AWS service for storing secrets such as OAuth credentials.)
KMS{:target="_blank"} allows the generation and management of encryption keys.
Unlike generating keys with a tool like OpenSSL{:target="_blank"}, where you manage the keys yourself, the users of KMS never see the keys directly; AWS manages them for us.
To use KMS{:target="_blank"}, a user sends plaintext{:target="_blank"} to AWS and, if that user has the "kms:Encrypt" permission for the Key in IAM{:target="_blank"}, AWS responds with the encrypted ciphertext{:target="_blank"}.
AWS never stores the unencrypted or encrypted data.
When we need to decrypt the ciphertext{:target="_blank"} with the Key, the ciphertext is sent to AWS and, if the user has the "kms:Decrypt" permission, the plaintext{:target="_blank"} is returned.
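Before wiring this into the pipeline, here is a minimal, non-authoritative sketch of that encrypt/decrypt round trip using the AWS SDK for JavaScript (assuming the key alias we create below already exists and your credentials are permitted to use it):
const AWS = require('aws-sdk');

// Sketch only: encrypt a value with a KMS key alias and decrypt it again.
// Assumes the 'alias/ProductService-Pipeline-key' alias created later in this article.
async function roundTrip() {
    const kms = new AWS.KMS({region: 'us-east-1'});
    const encrypted = await kms.encrypt({
        KeyId: 'alias/ProductService-Pipeline-key',
        Plaintext: 'super-secret-value'
    }).promise();
    // encrypted.CiphertextBlob is a Buffer; this is the only thing we need to store.
    const decrypted = await kms.decrypt({
        CiphertextBlob: encrypted.CiphertextBlob
    }).promise();
    console.log(decrypted.Plaintext.toString()); // 'super-secret-value'
}

roundTrip().catch(console.error);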
We will first need to create KMS{:target="_blank"} Key and Alias resources in our pipeline.template.yml file.
PipelineKey:
  Type: 'AWS::KMS::Key'
  Properties:
    KeyPolicy:
      Version: '2012-10-17'
      Statement:
        - Sid: 'Allow administration of the key'
          Effect: 'Allow'
          Principal:
            AWS: !Ref PipelineAdminArn
          Action:
            - 'kms:*'
          Resource: '*'
        - Sid: 'Allow use of the key'
          Effect: 'Allow'
          Principal:
            AWS: !GetAtt PipelineRole.Arn
          Action:
            - 'kms:Decrypt'
          Resource: '*'
        - Sid: 'Allow Encryption by everyone in the account'
          Effect: 'Allow'
          Principal:
            AWS: '*'
          Action:
            - 'kms:Encrypt'
          Resource: '*'
          Condition:
            StringEquals:
              'kms:CallerAccount': !Ref 'AWS::AccountId'
PipelineKeyAlias:
  Type: 'AWS::KMS::Alias'
  Properties:
    AliasName: !Sub 'alias/${AWS::StackName}-key'
    TargetKeyId: !Ref PipelineKey
There is a safety check in place whereby the user who creates the key must be named in the policy with the ability to modify the key.
This prevents a user from creating a key that they themselves do not have access to modify or delete.
In order to support this, and because there is no "current user ARN" Pseudo Parameter{:target="_blank"}, we need to add a Parameter{:target="_blank"} to our pipeline.template.yml
file:
PipelineAdminArn:
  Type: String
  Description: |
    ARN of a user or role that can administrate this pipeline.
    This can be obtained by running 'aws sts get-caller-identity --query='Arn' --output=text'
Deploy this template (using the above aws command to fetch your ARN).
MY_ARN=$(aws sts get-caller-identity --query='Arn' --output=text)
echo $MY_ARN
aws cloudformation deploy \
--stack-name=ProductService-Pipeline \
--template-file=pipeline.template.yml \
--capabilities CAPABILITY_IAM \
--parameter-overrides \
PipelineAdminArn="${MY_ARN}"
We should now have a key available to us under the alias "alias/{stack-name}-key". A value can be encrypted by running the following on the command line:
USER_POOL_ID=$(aws cloudformation describe-stacks \
--stack-name Cognito \
--query 'Stacks[0].Outputs[?OutputKey==`UserPoolId`].OutputValue' \
--output text)
export CLIENT_ID=$(aws cognito-idp list-user-pool-clients \
--user-pool-id "$USER_POOL_ID" \
--max-results 1 \
--query 'UserPoolClients[0].ClientId' --output text)
CLIENT_SECRET=$(aws cognito-idp describe-user-pool-client --user-pool-id "$USER_POOL_ID" --client-id "$CLIENT_ID" --query 'UserPoolClient.ClientSecret' --output text)
export ENCRYPTED_CLIENT_SECRET=$(aws kms encrypt \
--key-id=alias/ProductService-Pipeline-key \
--plaintext="${CLIENT_SECRET}" \
--query=CiphertextBlob \
--output=text) && \
echo $ENCRYPTED_CLIENT_SECRET | base64 --decode > encrypted.txt && \
DECRYPTED_CLIENT_SECRET=$(aws kms decrypt --ciphertext-blob fileb://encrypted.txt --output text --query Plaintext | base64 --decode)
rm encrypted.txt
export AUTH_NAME="theproducts"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" && \
echo "CLIENT_ID: ${CLIENT_ID}" && \
echo "CLIENT_SECRET: ${CLIENT_SECRET}" && \
echo "ENCRYPTED_CLIENT_SECRET: ${ENCRYPTED_CLIENT_SECRET}" && \
echo "DECRYPTED_CLIENT_SECRET: ${DECRYPTED_CLIENT_SECRET}" && \
echo "TOKEN_ENDPOINT: https://${AUTH_NAME}.auth.us-east-1.amazoncognito.com/oauth2/token" && \
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
The aws kms encrypt{:target="_blank"} command outputs the encrypted value as a Base64 Encoded{:target="_blank"} string.
We can now add the CLIENT_ID, ENCRYPTED_CLIENT_SECRET and TOKEN_ENDPOINT "EnvironmentVariables" to our "IntegrationTest" CodeBuild Project in pipeline.template.yml:
IntegrationTest:
  Type: AWS::CodeBuild::Project
  DependsOn:
    - PipelineRole
  Properties:
    ServiceRole: !GetAtt PipelineRole.Arn
    Source:
      Type: CODEPIPELINE
      BuildSpec: integration.buildspec.yml
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/docker:17.09.0
      EnvironmentVariables:
        - Name: BASE_URL
          Value: 'https://products.example.com/'
        - Name: CLIENT_ID
          Value: '4c8mmtv4fsagkdkmvt2k209u3fui'
        - Name: ENCRYPTED_CLIENT_SECRET
          Value: 'AaHnF6f354wmQfF4YdC0DUg...2YC8MdBg+HGF4H83LcAJCfPCv87qg=='
        - Name: TOKEN_ENDPOINT
          Value: 'https://theproducts.auth.us-east-1.amazoncognito.com/oauth2/token'
    Artifacts:
      Type: CODEPIPELINE
Proceed to deploy the pipeline stack.
aws cloudformation deploy \
--stack-name=ProductService-Pipeline \
--template-file=pipeline.template.yml \
--capabilities CAPABILITY_IAM
Nothing will change about the testing since we aren't using these parameters yet, but deploying now proves we didn't make a syntax error. Note the sequence of events we followed; the order of operations is important:
- Deploy the stack to create the Key
- Encrypt the secret with the Key
- Deploy the stack again with the encrypted values
Alternatively, we could have stored these parameters in a different file in the code base and loaded them during our integration-test execution.
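For example, a hypothetical helper (the file name and shape here are illustrative, not part of this project) could merge a checked-in JSON file into the environment before the tests run:
// integration-tests/loadConfig.js (hypothetical)
// Merge non-secret settings from a checked-in config.json into process.env,
// letting real environment variables (e.g. those set by CodeBuild) take precedence.
const config = require('./config.json');

module.exports = function loadConfig() {
    Object.entries(config).forEach(([key, value]) => {
        process.env[key] = process.env[key] || value;
    });
};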
Next, we will need to pass these parameters from the CodeBuild environment into our Docker container in order to run integration tests against authenticated endpoints.
To do this, add more "-e" parameters to our docker run command{:target="_blank"} so that integration-tests/integration.buildspec.yml looks like this:
version: 0.2
env:
  variables: {}
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - export RELEASE_IMAGE_URL=$(cat RELEASE_IMAGE_URL.txt)
  build:
    commands:
      - echo "About to exec $RELEASE_IMAGE_URL to $BASE_URL"
      - |
        docker run \
          -e AWS_REGION=$AWS_DEFAULT_REGION \
          -e AWS_CONTAINER_CREDENTIALS_RELATIVE_URI \
          -e BASE_URL \
          -e CLIENT_ID \
          -e ENCRYPTED_CLIENT_SECRET \
          -e TOKEN_ENDPOINT \
          $RELEASE_IMAGE_URL npm run integration-test
Now that we have an endpoint to get a token, a client_id and an encrypted secret, we need a way to convert these into a valid authentication header to use in our requests.
We can create a utility module to fetch a token and return it.
Create an integration-tests/getAuthorizationHeader.js
file with the following contents:
const AWS = require('aws-sdk');
const fetch = require('node-fetch');

// This function uses the AWS KMS api to decrypt the ENCRYPTED_CLIENT_SECRET environment variable
async function getDecryptedClientSecret() {
    const kms = new AWS.KMS();
    const result = await kms.decrypt({
        CiphertextBlob: Buffer.from(process.env.ENCRYPTED_CLIENT_SECRET, 'base64')
    }).promise();
    return result.Plaintext.toString().trim();
}

// This function builds a 'Basic' auth header with the client_id/client_secret
// so that we can request a token from cognito
async function buildTokenAuthHeader() {
    const client_id = process.env.CLIENT_ID.trim();
    const client_secret = await getDecryptedClientSecret();
    const encodedClientCredentials = Buffer.from(`${client_id}:${client_secret}`).toString('base64');
    return `Basic ${encodedClientCredentials}`;
}

// Request a token from cognito and return a built header that can be used in integration tests.
module.exports = async function getAuthHeader() {
    const response = await fetch(process.env.TOKEN_ENDPOINT, {
        method: 'POST',
        body: 'grant_type=client_credentials',
        headers: {
            'Authorization': await buildTokenAuthHeader(),
            'Content-Type': 'application/x-www-form-urlencoded'
        }
    });
    const responseBody = await response.json();
    return `Bearer ${responseBody.access_token}`;
};
Finally, we are able to write an integration test that uses the bearer token to create a product.
Create an integration-tests/products.it.js
file with these contents:
const fetch = require('node-fetch');
const url = require('url');
const getAuthorizationHeader = require('./getAuthorizationHeader');

describe('/products', function() {
    describe('saving a product', function() {
        beforeAll(async function createNewProduct() {
            this.baseURL = process.env.BASE_URL || 'http://localhost:3000/';
            const authHeader = await getAuthorizationHeader();
            const product = {
                name: 'test product',
                imageURL: 'http://example.com/image.jpg'
            };
            console.log('posting', JSON.stringify(product));
            this.response = await fetch(url.resolve(this.baseURL, 'products'), {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                    'Authorization': authHeader
                },
                body: JSON.stringify(product)
            });
            this.responseBody = this.response.ok && await this.response.json();
            console.log('Response ', this.responseBody);
        });
        it('should return an ok status code', function() {
            expect(this.response.status).toEqual(200);
        });
        it('should return an object', function () {
            expect(this.responseBody).toEqual(jasmine.any(Object));
        });
        it('should assign a product id', function (){
            expect(this.responseBody.id).toBeDefined();
        });
        it('should return the name', function () {
            expect(this.responseBody.name).toEqual('test product');
        });
        it('should return the imageURL', function () {
            expect(this.responseBody.imageURL).toEqual('http://example.com/image.jpg');
        });
    });
});
Check in the changed files and we should be able to look at the Integration Test build log in the AWS Management Console and see the additional tests executed. We should also be able to run the tests against our local server:
export AWS_REGION=us-east-1
npm run integration-test
See the changes we made here{:target="_blank"}.
Table of Contents
- Introduction
- Unit Testing
- Koa
- Docker
- Cloudformation
- CodePipeline
- Fargate
- Application Load Balancer
- HTTPS/DNS
- Cognito
- Authentication
- DynamoDB
- Put Product
- Validation
- Smoke Testing (this post)
- Monitoring
- List Products
- Get Product
- Patch Product
- History Tracking
- Delete
- Change Events
- Conclusion
If you have questions or feedback on this series, contact the authors at nodereference@sourceallies.com.