Localstack (mocking AWS)

When it comes to production or production-adjacent work, I have a debilitating amount of fear. Early in my career I accidentally deleted an entire MySQL database containing about 90 customers' worth of data. Luckily I had a fifteen-minute-old backup; otherwise I probably would have been fired. The terror I felt as my boss screamed from the next room about the app not loading has stayed with me. The next day I made read-only credentials.

Since then, I don't mess around.

But at times this fear can prevent me from actually getting work done. At my current company our services live on AWS and I've found two tools that help ease this fear:

  1. Localstack
  2. CloudFormation

This post is about a simple way to get started with Localstack. Perhaps I'll discuss the other in a future post.

Setting up Localstack

Localstack is a Docker container that mocks AWS. I've been using it during development, and I greatly enjoy being able to develop and experiment without worrying about touching actual infrastructure. Working this way requires Docker.

Lets create a simple development environment with:

  1. Our application
  2. Postgres
  3. Localstack

Each of these will be run in their own docker containers.

Your app

You need a minimal setup to get your application code into a container. This will vary from application to application. Whatever you end up doing: give the container a good name.
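As one possible sketch: for a trivial Python app the setup could be as small as the following. The base image, working directory, and entry point here are all assumptions; substitute whatever your application actually needs.

```shell
# Write a bare-bones Dockerfile for the app container. Everything in
# it (python base image, main.py entry point) is illustrative only.
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /my-app
COPY . /my-app
CMD ["python", "main.py"]
EOF

# Then build it under a recognizable, per-user name so the run
# script below can refer to it:
#   docker build -t myapp-$USER .
```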

The rest

We don't actually need to build the other containers ourselves; their images are on Docker Hub. Instead, we need a script that runs our app and links everything together. I use the following.

You will of course need to tweak the instances of my-app to get this script to work. It's meant as an example, not a literal template.

#!/bin/bash

trap "docker stop postgres-$USER localstack-$USER" EXIT

docker run \
    --name localstack-$USER \
    -e SERVICES='s3,dynamodb' \
    -e HOSTNAME_EXTERNAL='localstack' \
    --rm -d localstack/localstack

docker run \
    --name postgres-$USER \
    -e POSTGRES_PASSWORD=1337h4ck35 \
    --rm -d postgres:9.6 \
    -c 'log_destination=stderr' \
    -c 'logging_collector=on' \
    -c 'log_directory=pg_log' \
    -c 'log_filename=postgresql-%Y-%m-%d_%H%M%S.log'

docker run \
    --name myapp-$USER \
    --link postgres-$USER:postgres \
    --link localstack-$USER:localstack \
    -v "$PWD:/my-app" \
    -e USER_HOME=$HOME \
    -e USER_NAME=$USER \
    -e AWS_DEFAULT_REGION=us-east-1 \
    -e AWS_ACCESS_KEY_ID=abc \
    -e AWS_SECRET_ACCESS_KEY=def \
    -e LOCALSTACK_HOST='localstack' \
    --rm -ti myapp-$USER

Then set execution permissions: $ chmod +x run-myapp-docker and start it up: $ cd my-app && ./run-myapp-docker.

The key thing is the use of --link, which ties our application container to the Postgres and Localstack containers. Though --link is considered deprecated and the Docker project recommends moving to Docker Compose, I personally have not found a need to switch. Your results may vary.
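If you do outgrow --link, the same setup translates roughly into a docker-compose.yml like the one below. This is an untested sketch, not a drop-in replacement; in Compose, each service name automatically becomes a hostname on a shared network, which is what --link was doing by hand.

```yaml
version: "3"
services:
  myapp:
    image: myapp
    volumes:
      - .:/my-app
    environment:
      AWS_DEFAULT_REGION: us-east-1
      AWS_ACCESS_KEY_ID: abc
      AWS_SECRET_ACCESS_KEY: def
      LOCALSTACK_HOST: localstack
    depends_on:
      - postgres
      - localstack
  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: 1337h4ck35
  localstack:
    image: localstack/localstack
    environment:
      SERVICES: s3,dynamodb
```

With this in place, docker compose up replaces the run script, and the postgres and localstack hostnames keep working unchanged.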

Within our app container, postgres is the hostname we use to reach our database, and localstack the hostname for AWS.
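Concretely, the app's database connection string would look something like this. The user, password, and port match the run script above; the database name is an assumption (the postgres image's default).

```shell
# Inside the myapp container, 'postgres' resolves to the linked DB
# container thanks to --link. The password matches the run script;
# the trailing 'postgres' database name is the image's default and
# an assumption here.
DATABASE_URL="postgres://postgres:1337h4ck35@postgres:5432/postgres"
echo "$DATABASE_URL"
```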

The final piece in getting this to work is setting the AWS endpoints. The Localstack repo has a list of the endpoints for each service, as well as general information on how to get started.
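As a sketch, pointing the AWS CLI at Localstack looks something like this. The port is an assumption: older Localstack releases exposed one port per service (e.g. 4572 for S3, 4569 for DynamoDB), while newer ones funnel everything through a single edge port, 4566 — check the README for your version.

```shell
# Build the endpoint from the hostname the run script injects.
LOCALSTACK_HOST="${LOCALSTACK_HOST:-localstack}"
S3_ENDPOINT="http://$LOCALSTACK_HOST:4566"

# Any CLI call can then be aimed at the mock instead of real AWS:
#   aws --endpoint-url "$S3_ENDPOINT" s3 mb s3://scratch-bucket
#   aws --endpoint-url "$S3_ENDPOINT" s3 ls
echo "$S3_ENDPOINT"
```

The dummy AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the run script are enough; Localstack does not validate credentials.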

Overall I find this a really nice way to start developing a service that will live in AWS, while keeping everything local and under simpler control.