Using AWS single sign-on within your Docker containers
Treat your containers like first-class AWS instances with SSO
If you deploy applications to Amazon AWS EC2 instances, you probably make a lot of use of Amazon-specific features. In my case, I happen to use Amazon’s Systems Manager to store configuration parameters for my applications.
When developing, I would also like to be able to run these programs locally in Docker containers. Although we could maintain two separate sets of configuration values, one for local development and one for deployment, we might as well put all our configuration into the AWS Parameter Store.
Let’s walk through how we can make this work.
Understanding an AWS single sign-on session
Amazon Web Services provides a command-line interface program, the AWS CLI. With this program, we can log on to an AWS session and link ourselves to our AWS account:
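The exact command depends on how your profiles are configured; as a sketch, using a hypothetical session name:

```shell
# Opens a browser window so you can authenticate with your identity provider.
# "my-sso-session" is a placeholder; use the session name you configure below.
aws sso login --sso-session my-sso-session
```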
My local computer is now connected to AWS, and whenever I contact it for services it will use my cached credentials. I have retrieved a token from the AWS server and stored it in my local profile; whenever the program runs again it checks to see if that token is still valid.
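That cached token lives as a JSON file under ~/.aws/sso/cache/ and carries an ISO-8601 expiresAt timestamp (AWS CLI v2). Here's a rough sketch of what the validity check amounts to, assuming that cache format:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def token_is_valid(cache_file: Path) -> bool:
    """Return True if the cached SSO token has not yet expired.

    Assumes the AWS CLI v2 cache format: a JSON object with an
    ISO-8601 `expiresAt` field such as "2024-01-01T12:00:00Z".
    """
    data = json.loads(cache_file.read_text())
    expires = datetime.fromisoformat(data["expiresAt"].replace("Z", "+00:00"))
    return expires > datetime.now(timezone.utc)
```

In practice you would point this at each file in ~/.aws/sso/cache/; the AWS CLI and SDKs do the equivalent check for you on every call.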
If I now launch Docker Desktop and run a program within a container, that program won’t be able to use my AWS credentials, because each container is effectively its own computer. We need some way to share our AWS session information with the container.
Sharing information from your computer to your container
Fortunately, we can do this using Docker volumes and environment variables. AWS stores all its session information in the .aws folder within our home folder. By default, the processes inside a container run as root and look for the same information in root’s home directory. So we can simply project our personal credentials into the container by mounting our ~/.aws folder into the container as /root/.aws. It’s worth marking this mount as read-only so the container can’t modify our credentials.
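Expressed as a plain docker run, the mount looks like this (the image name myapp is a placeholder):

```shell
# Share the host's AWS session folder with the container, read-only.
docker run --rm -v ~/.aws:/root/.aws:ro myapp
```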
In fact, volumes are so useful it’s surprising we don’t use them for everything. The problem is that sharing a volume between your computer and your container breaks the concept of an immutable container. Your container can change unexpectedly because the data on the volume can change at will.
It’s worth remembering here: Your container will now only work if your AWS SSO session volume is correctly shared and valid. It’s possible for a container that works perfectly today to stop working tomorrow because the AWS SSO session has expired. So be careful!
Once we have shared the session information, we need to tell the AWS tooling inside the container where to look. We’ll set the environment variable AWS_CONFIG_FILE to point to the shared config file, and then choose a session name that we can share between our computer and our container. Here’s what the resulting Docker Compose file looks like:
```yaml
myapp:
  build:
    dockerfile: myapp.Dockerfile
  volumes:
    - ~/.aws:/root/.aws:ro
  environment:
    - AWS_CONFIG_FILE=/root/.aws/config
    - AWS_SSO_SESSION=pm_local_session
```
The end result is that my container can share my login session. Programs that I run inside the container can read configuration variables from AWS Systems Manager — but only as long as I make sure my session name matches the one used by the container!
Launching an AWS SSO session
Fortunately, the AWS CLI makes it easy to start a new session. Choose a name that’s universal to your team: you’ll probably check your Docker Compose file into source control, and you don’t want the session name to be your personal username.
Follow these steps:
- Start a new PowerShell or Terminal window.
- Run aws configure sso-session
- When it asks you for the session name, choose the name you specified in your Docker Compose file.
- You can then specify your company’s AWS start page, region, and the necessary scopes for your credentials.
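The interaction looks roughly like this; the exact prompts vary by CLI version, and the start URL and region shown are placeholders:

```shell
$ aws configure sso-session
SSO session name: pm_local_session
SSO start URL [None]: https://example.awsapps.com/start
SSO region [None]: us-west-2
SSO registration scopes [sso:account:access]:
```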
Once you’ve created the session, your containers should have access.
Ted Spence heads engineering at ProjectManager.com and teaches at Bellevue College. If you’re interested in software engineering and business analysis, I’d love to hear from you on Mastodon or LinkedIn.