Mixing Docker and native DotNet applications for local development

Ted Spence
tedspence.com
Oct 22, 2023


How to write an application that can be run separately or inside a container

Perhaps you work in a company where developers are fond of microservices. In a microservice world, you might need literally dozens of separate small applications to be constantly running in order for your tech stack to be fully functional.

I’ve generally had some success launching infrastructure (databases, message queues, and proxy servers) via Docker. But I was also curious whether Docker would fit my particular use case: I wanted to run most of my microservices in Docker containers while keeping the one thing I was actively developing running in my IDE.

After a bit of experimentation, I found some approaches that worked well for me. So let me walk through what I learned while experimenting with an environment where I mixed containers with locally run software.

Curious about whether your developers could live inside and outside containers? (stuff.co.nz)

The benefits of mixed containers and local applications

My original goal for Docker was to simplify onboarding for new employees. Our team had a gigantic readme tutorial that explained “how to get set up.” It was complex and tedious, and working through it could take weeks even with senior developers trying to help you out. I fixed that by creating Docker Compose files and scripts for complex setup tasks like SQL Server.

The next challenge I faced was the complexity of our applications. To launch our tech stack, you had to start four programs separately, and that’s not counting another five or six smaller microservices that could mostly be ignored during day-to-day development.

A better alternative was to create a docker compose file that launches my entire site from top to bottom. I could easily bring up the whole stack at once to make sure I had a working system, then work on a single microservice and rebuild and relaunch just that one by itself.

Seems like a winning idea! So what are the challenges we’ll face?

Configuration files and the container networking model

When you run a tech stack locally on your computer, you will likely point every configuration file at 127.0.0.1. Talking to MongoDB? Your connection string probably points to 127.0.0.1:27017, which is the default Mongo port.

This approach works well even when local applications talk to Docker containers. If you run a MongoDB container named mongodb that publishes port 27017 to the host, you can talk to it at 127.0.0.1:27017.

But when you run the same application within a Docker container, it cannot see 127.0.0.1:27017.

Why is that?

Docker’s networking model treats each container as a fully separate machine with its own set of ports. Every container thinks of itself as 127.0.0.1, and it has a full, unused set of 65,535 ports available to any program that wants them. It can’t see your desktop computer’s 127.0.0.1 at all!

So if container A exposes a service on port 123 and container B wants to talk to it, how do they do that? Fortunately, Docker will automatically add all containers within the same docker-compose.yml file to a single network, and you can use container names as DNS names to communicate.

In this example docker compose file, you can create a MongoDB database, an API server that can talk to MongoDB via its container name, and a web server that can talk to the API server:
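
(The file below is a minimal sketch rather than my real stack; the image tag, build paths, and ports are illustrative placeholders.)

services:
  mongodb:
    image: mongo:7
    ports:
      # published to the host, so apps running in your IDE can still use 127.0.0.1:27017
      - "27017:27017"
  api:
    build: ./api
    depends_on:
      - mongodb
    # inside the compose network, this service reaches the database as mongodb:27017
  web:
    build: ./web
    depends_on:
      - api
    ports:
      - "8080:8080"
    # the web server reaches the API by its container name, e.g. http://api:5000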

Configuration files and environment variables in DotNet

Our next challenge is to set up configuration files that let our API and web servers run either within our IDE or within a Docker container. Before Docker, all my applications had configuration files that pointed to 127.0.0.1. But if I simply change them to use Docker container names, they won’t resolve when the application runs outside of Docker:

While on my local desktop, I can’t talk to my Docker containers via DNS names (author’s screenshot)

One option would be to configure your local DNS to resolve the names of all your Docker containers. Another is to use a configuration overlay: the DotNet ecosystem lets me use one configuration file as a default and layer another on top of it.

In this example, I have three layers:

  • A basic configuration file appsettings.json
  • An environment-specific configuration file, appsettings.{env}.json
  • And finally environment variables that are prefixed with OVERRIDE_
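
Wiring those three layers up in Program.cs looks roughly like this (a sketch; ASP.NET Core’s default builder already adds the first two layers, and the OVERRIDE_ prefix is my own convention rather than a framework default):

using Microsoft.Extensions.Configuration;

var builder = WebApplication.CreateBuilder(args);

// Layer 1: baseline settings shared by every environment
builder.Configuration.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);

// Layer 2: environment-specific file, e.g. appsettings.local.json or appsettings.docker.json,
// selected by the ASPNETCORE_ENVIRONMENT / DOTNET_ENVIRONMENT variable
builder.Configuration.AddJsonFile(
    $"appsettings.{builder.Environment.EnvironmentName}.json",
    optional: true,
    reloadOnChange: true);

// Layer 3: environment variables such as OVERRIDE_database__connectionString;
// the prefix is stripped and "__" becomes the ":" section separator
builder.Configuration.AddEnvironmentVariables(prefix: "OVERRIDE_");

var app = builder.Build();
app.Run();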

I can now have an appsettings.json file with basic non-environment specific values, an appsettings.local.json file for running the app within my IDE, and an appsettings.docker.json file for running the same application within Docker. Within the local file all server addresses would be 127.0.0.1, but within the docker file server addresses would use the names of their respective containers.
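
For example, the two environment-specific files might contain fragments like these (values are illustrative):

appsettings.local.json:
{ "database": { "connectionString": "127.0.0.1:27017" } }

appsettings.docker.json:
{ "database": { "connectionString": "mongodb:27017" } }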

After experimenting with this for a while, I found it hard to keep track of which configuration values had been changed. I decided that I liked the usability of the environment-variable overlay better. In my local json file I would have { "database": { "connectionString": "127.0.0.1" } }, but in my docker compose file I’d set an environment variable named OVERRIDE_database__connectionString containing the appropriate value for a server within the Docker stack.

My result looked like this:
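
(The fragment below is a sketch; the service name and connection value are illustrative.)

services:
  api:
    build: ./api
    depends_on:
      - mongodb
    environment:
      # overrides the "database:connectionString" value from appsettings.json
      OVERRIDE_database__connectionString: "mongodb:27017"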

The end result is that I can run my application locally using my regular configuration files, and whenever I want to, I can launch some or all of my microservices in Docker instead, with all of the Docker-side configuration values easy to view and edit in my docker-compose.yaml file.

The challenges of multiple docker compose files

If you’re unlucky, you may face an even trickier challenge: a world where a single Docker Compose file isn’t enough for your whole tech stack. In that world, you are faced with a less obvious problem: Docker gives each compose file its own separate network.

This means that even if you want a container on the pmcom_default network to talk to a container on the mongo_default network, Docker considers them separate infrastructure and won’t allow them to talk. Even if you give each docker compose file the same network name, out of the box Docker still won’t let them mix!

The first potential solution to this is to merge your docker compose files into a single unified one. This simplifies your work greatly and I recommend it.

Alternatively, if you’re stuck and you just want to test things out, you can use command line instructions to add existing containers to a cross-compose network:

# create a shared network that isn't owned by either compose file
docker network create merged-network
# attach one running container from each compose project to it
docker network connect merged-network composefileA-containerA
docker network connect merged-network composefileB-containerB

I found this technique surprisingly useful. I could keep two separate docker compose files side by side: one containing all the scripts I knew worked, and the other for tinkering with new application scripts. That way, if I told Docker to do a force rebuild, it wouldn’t rebuild everything; it would only rebuild the one application I needed.
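
For instance, with the experimental compose file kept alongside the stable one (the file and service names below are hypothetical), a targeted force rebuild might look like this:

docker compose -f docker-compose.experiments.yml build --no-cache my-new-service
docker compose -f docker-compose.experiments.yml up -d my-new-service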

The value of tinkering with Docker

So far I’ve found it extremely useful to migrate my applications into Docker while retaining the ability to run just one of them outside the tech stack. Since I often switch from one application to the next, this makes it easy to bounce back and forth between projects.

Ted Spence heads engineering at ProjectManager.com and teaches at Bellevue College. If you’re interested in software engineering and business analysis, I’d love to hear from you on Mastodon or LinkedIn.

