J Cole Morrison

Startup Engineering, former Techstars Hackstar and AWS Solutions Architect. Based out of Sacramento, California.

Practical and Professional Devops with AWS, Docker, and Node.js

Docker for a Fresh MySQL or MongoDB Instance in Any Project

Posted by J Cole Morrison on .

Docker for a Fresh MySQL or MongoDB Instance in Any Project

In this post I'm going to cover a very simple and powerful use case for Docker in a local development workflow: using Docker to create fresh, separate instances of MySQL and MongoDB for any project or app. I'm not going to spend time explaining what Docker is, why we should use it, etc. There are enough wacky definitions out there without me adding to them. Instead, let's apply it to an actual development problem.

A simple day-to-day bottleneck I face is switching projects that use a datastore. One project might be on MySQL 5.6, one on MySQL 5.7, one on MongoDB 3.0, one on MongoDB 3.4, one on some antiquated MySQL 5.5...sound familiar? If not, it will at some point.

Beyond just different versions of a database, there are also projects that use the same version of the database. So maybe we have 3 different projects that all use MySQL 5.7. Sure, it's more than doable to keep them all in one MySQL 5.7 instance and separate them out, but it's not very convenient. It'd be better to just have this workflow of...

a) begin new project

b) create new database of whatever version we want for said project

c) when coming back to said project, run a command and have the database available

d) be able to do (a) through (c) for any number of projects

We'll walk through doing this with both MySQL and MongoDB. Let's start with MySQL.

Before Anything Though, You Need Docker

Almost forgot to add this in. Installing Docker is required. Nowadays it's as simple as downloading and installing almost any other app. Just hover over Get Docker on the Docker site and select your preferred installation.

The Codebase

All the code for this can be found at this repository:


New and Separate MySQL Database for Any Project

The workflow here is as follows:

1. Go to the project directory of your desire. If you don't have one, create a new directory called example-mysql and cd into it.

This can be any directory of your desire, whether it's a wordpress instance, node.js app, rails app, etc. Basically, wherever you run the commands to get your app up and running.

2. Create a new file called docker-compose.yml in our example-mysql directory (or in your project directory of choice)

Input the following:

# Version 3 only recently came out, so we'll stick with 2 since it's
# tried and true at this point.  This would convert directly over to
# Version 3 anyway.
version: '2'

services:
  # Name of the service as Docker will reference.
  mysqlDb:

    # The image, change 5.7 to any of the supported docker versions.
    image: mysql:5.7

    # Required environment variables.  Creates a Database with a
    # root user and non-root user, both with passwords.
    # MYSQL_ROOT_PASSWORD defines the password of the root user
    # MYSQL_DATABASE names the DB
    # MYSQL_USER is the non-root user
    # MYSQL_PASSWORD is the non-root user password
    environment:
      MYSQL_ROOT_PASSWORD: "rootpwd"
      MYSQL_DATABASE: "devdb"
      MYSQL_USER: "devuser"
      MYSQL_PASSWORD: "devpwd"

    # What port do you want this MySQL instance to be available on?
    # The left hand number is what port it will be available on from your machine's
    # perspective.  The right hand number is the port that it's linking up to.
    # In this case we're saying "link our local 3306 to the docker container's 3306"
    # which works here, because docker MySQL exposes the MySQL DB to the container's
    # 3306 port.  If we wanted this available on port 3307 on our local machine
    # we'd change this to 3307:3306
    ports:
      - 3306:3306

    # We're using a named volume here that docker manages for us.  This is a special
    # place just for this particular dockerized MySQL instance.
    volumes:
      - devmysqldb:/var/lib/mysql

# If you use a named volume, you must also define it here.
volumes:
  devmysqldb:

This is the file used by Docker Compose to create docker containers from our specified images and settings. Docker Compose is just a tool in the Docker ecosystem that makes managing multiple containers much more pleasant. Yes, it's a part of Docker and not some 3rd party tool.

In a nutshell, this file is telling docker compose that we'd like it to create a container from the official MySQL 5.7 Docker image. It should use our defined environment variables to create the root and non-root user. It should store our data in a local volume on our machine instead of in the container itself.

For those newer to Docker: when we install Docker and sign up, we gain access to Docker Hub. This hub is full of official Docker images that we can freely pull down and create containers from. The official MySQL one allows us to provide it with the 4 above environment variables. If we do, it will take those, create the root and non-root users, and set up a database.

Let's start it up.

3. Open up your shell to this directory and simply run:

$ docker-compose up

And your MySQL instance will prep itself and fire up! What happens:

a) Docker Compose will docker pull down the MySQL image

b) Create a unique network for the MySQL container

c) Create a container from the MySQL image with our specified settings

d) for (b) and (c) give them names based on your directory name and service name

Watch the DB logs and give it a minute to warm up and become available. Once that's completed, the database is live on port 3306 and ready to be used! Now let's actually dive in and use it.
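To make "ready to be used" concrete, here's a tiny sketch of the connection URL a local app would hand to its MySQL driver, built from the credentials and port in our docker-compose.yml. The mysql_url helper here is just illustrative, not part of any library:

```python
# Build the URL a locally running app would use to reach the
# dockerized MySQL instance.  The values are the ones defined
# in docker-compose.yml above; mysql_url is an illustrative
# helper, not a real library function.
def mysql_url(user, password, host, port, database):
    return f"mysql://{user}:{password}@{host}:{port}/{database}"

url = mysql_url("devuser", "devpwd", "127.0.0.1", 3306, "devdb")
print(url)  # mysql://devuser:devpwd@127.0.0.1:3306/devdb
```

Any MySQL client or ORM pointed at 127.0.0.1:3306 with those credentials will hit the container rather than some system-wide install.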

4. Open up a new shell tab

Let's verify that our independent MySQL container (a container, by the way, is an "instance" of an image) is up by running the following to log in:

docker-compose exec mysqlDb mysql -u devuser -p

And enter the password we specified in the docker-compose.yml file - devpwd.

Breaking down this command:

  1. mysqlDb is the name of the service we defined in our docker-compose.yml
  2. mysql is the command we want to run
  3. -u devuser -p are the options we're passing to mysql

After running this we'll be prompted for our password that we defined. Input it and you'll be inside of the Db!

For access as the root user, run the following:

docker-compose exec mysqlDb mysql -p

Extending This Workflow

There's a few more commands we should know about in order to make our lives easier.

A) Run the DB in the Background

First off, head over to the shell tab with MySQL running and stop it with ctrl+c. Because of how we started it, as soon as we do so, the container will be killed and our DB will go down. That's fine though, since we probably don't want these logs up in the foreground all the time anyway.

Running the following command will start our MySQL container and "detach" it so that it's in the background:

docker-compose up -d

And now we can continue business as usual in our current shell.

B) Check the Logs of Background Containers

If we want to check the logs of containers in the background run the following:

docker-compose logs -f

This will show all the logs for our DB. Really, it will show all the logs combined for any services we have defined in our docker-compose.yml file, but since we only have the database, that's all we'll get.

C) Take Down Containers

So let's say we're done for the day. Just run:

docker-compose down

And it will remove your containers and the network.

Notes About This Workflow

A) Keep Our DB Data in A Local Directory Instead of a Named Volume

When we use a named volume, which in our case is called devmysqldb, docker-compose does the equivalent of a:

$ docker volume create <name of volume>

Think of these as a set of global data folders that we can use to store the data we want our containers to operate on locally. For example, we could have another container use this volume by doing:

$ docker run --name <some container> -v devmysqldb:/path/in/container <some image>

And we'd use the volume that we created for our mysqlDb above for this new container. A list of volumes can be found by running:

$ docker volume ls

We don't have to use named volumes though. We could just as easily create a directory anywhere, and use that. For example in our example-mysql directory, make a new directory called db (or anything you'd like). After that, change the volumes portion of our mysqlDb service in the docker-compose.yml file to be:

# .. rest of file above ^
    volumes:
      - ./db:/var/lib/mysql


And now it will store all the data in the directory ./db relative to our docker-compose.yml file! A huge benefit of using named volumes vs. directories like this is that we can more easily reference them and use them in other docker-compose workflows. We can still do that with defined directories like above as well...but relative paths are only fun for so long.

B) Docker Compose uses the current directory as its context

When we run the docker-compose command it looks for a docker-compose.yml file in the current directory and uses that as its context. This is what allows us to easily reference our mysqlDb service by just its name, instead of having to dig through docker ps and use its actual name.
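For the curious, those "actual names" follow a simple convention in the Compose tooling of this era (the one that reads version: '2' files): project name, service name, and an index. A rough Python sketch of that convention, hedged since the exact rules have shifted between Compose releases:

```python
import re

def compose_container_name(directory, service, index=1):
    # Compose's default project name is the directory name,
    # lowercased, with non-alphanumeric characters stripped out.
    project = re.sub(r"[^a-z0-9]", "", directory.lower())
    # Containers are then named <project>_<service>_<index>.
    return f"{project}_{service}_{index}"

print(compose_container_name("code", "mysqlDb"))           # code_mysqlDb_1
print(compose_container_name("example-mysql", "mysqlDb"))  # examplemysql_mysqlDb_1
```

That first result is exactly the kind of name (code_mysqlDb_1) you'd otherwise have to fish out of docker ps.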

C) Docker Compose isn't required

We could do all of the above without Docker Compose, but the docker run command would be something we'd need to input each and every time we wanted to start our DB up:

$ docker run --name some-mysql -p 3306:3306 -v testmysqldb:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=rootpwd -e MYSQL_DATABASE=devdb -e MYSQL_USER=devuser -e MYSQL_PASSWORD=devpwd -d mysql:5.7

After that, we'd need to remember the name every time, and docker start some-mysql each time we're ready to go. When we want to login to the container we'd do:

$ docker exec -it some-mysql mysql -u devuser -p

for the regular user


$ docker exec -it some-mysql mysql -p

for the root user.

After we're done with the container we'd just do a:

$ docker stop some-mysql && docker rm some-mysql

Not that any of it's difficult, but it's a lot more pleasant and version-controllable to keep all of this stuff in a file.

New and Separate MongoDB Database for Any Project

The above workflow can be repurposed for MongoDB as well. In addition to setting up the local MongoDB, we'll also dive into how to set up a simple root and non-root user instead of just leaving mongo wide open.

1. Go to the project directory of your desire. If you don't have one, create a new directory called example-mongo and cd into it.

2. Create a docker-compose.yml file inside of the example-mongo directory and input the following:

version: '2'

services:
  # Name our service will be known by
  mongoDb:

    # version of mongo we'll use
    image: mongo:3.4

    # local port mapping to container port.  Remember, this
    # doesn't have to be 27017, we could make this available
    # on port 3000 on our local machine via 3000:27017
    # if we really wanted to.
    ports:
      - 27017:27017

    # using a named volume
    volumes:
      - devmongo:/data/db

    # OPTIONAL: This enforces the need for authentication
    command: mongod --auth

# named volumes must also be defined here
volumes:
  devmongo:

Most of the above should look familiar. The only major difference here is that we're overriding the default command with our own:

command: mongod --auth

This just turns on authentication for our local MongoDB and is entirely optional. I'm including it just so that those who wish to use its functionality have a nice little how-to.

Just as before though, Docker Compose will docker pull down Mongo for us and up the containers and networks.

3. Run the following to up our MongoDb Container:

$ docker-compose up -d

We'll see our container go up. Optionally run:

$ docker-compose logs -f

To see where it is in the process of warming up.

4. Run the following command to log in to the container's mongo instance:

$ docker-compose exec mongoDb mongo

This runs the mongo command in our mongoDb service, and puts us into our MongoDb!

Now, if we didn't want to use --auth, we'd actually be done. The DB is now available on port 27017 and can accept connections from our locally developed apps. We literally don't need to do anything else. We could stop here and go about our business.

If we did stop here, we'd be able to freely connect to and use this MongoDB container. There'd be no concept of users or authorization to jump in and use it. And for local development, this is generally what you want.

However, since we've turned on --auth (for educational purposes), we get a one-time, free login to set up our admin user...after that, we won't be able to do anything. So we need to create an admin user.

5. In the mongo shell run:

use admin  

This switches us over to the Mongo Admin database.

6. Create a user:

db.createUser({
    user: "devadmin",
    pwd: "devadmin",
    roles: [ { role: "root", db: "admin" } ]
})

This creates a user called devadmin with root permissions.

7. Exit the mongo shell console

If we try to log back in now, without authenticating, it won't allow us to do anything other than switch between dbs. Let's use our new user.

8. Log back in as the admin:

$ docker-compose exec mongoDb mongo -u devadmin -p devadmin --authenticationDatabase "admin"

The --authenticationDatabase option just specifies which DB to use when authenticating. Had we made devadmin in a different database, we'd need to specify that one instead of admin.
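From an application's point of view, the same idea shows up as the authSource option in the MongoDB connection string. A minimal Python sketch using our devadmin credentials (mongo_url is an illustrative helper, not a driver API):

```python
def mongo_url(user, password, host, port, database, auth_db):
    # authSource tells the driver which database holds the user's
    # credentials -- "admin" here, since that's where we ran
    # db.createUser().
    return (f"mongodb://{user}:{password}@{host}:{port}/"
            f"{database}?authSource={auth_db}")

url = mongo_url("devadmin", "devadmin", "127.0.0.1", 27017, "test", "admin")
print(url)  # mongodb://devadmin:devadmin@127.0.0.1:27017/test?authSource=admin
```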

9. Make a new database

use test  

10. Insert a document

db.tests.insert({ name: 'doc' })  

and of course we can find them by:

db.tests.find()
Awesome. Now let's create a non-root user.

11. Switch back over to the Admin Database:

use admin  

12. Run the following to create a different user for just read/writes into the test database.

db.createUser({
    user: "testuser",
    pwd: "testuser",
    roles: [ { role: "readWrite", db: "test" } ]
})
13. Exit the mongo shell console

14. Log back in as the testuser:

$ docker-compose exec mongoDb mongo -u testuser -p testuser --authenticationDatabase "admin"

15. Run use test to use the test db

16. Insert a document

db.tests.insert({ name: 'doc2' })  

Woohoo! Now we have a root and non-root user to work with in our Docker MongoDB. Again, the --auth part is completely optional. If you just want a quick mongo instance up for local dev (which you don't really need auth for), just stop at step 4.

Extending This Workflow

The exact same commands as in the MySQL section apply here:

  1. docker-compose up -d runs mongo in the background
  2. docker-compose logs -f gets the logs
  3. docker-compose down removes the container and network

Notes About the Workflow

Once again, the same notes from the MySQL section also apply here.

  1. We can store our DB in a local folder of our choosing vs. named volume
  2. Docker compose uses the current directory as the context for its commands
  3. We can use raw docker run commands instead of docker-compose at the cost of convenience

Notes About Both

One major thing to remember: if we switch out the volume being used for another, all the data in our database is gone. Sure, we'll still be using the same docker container, but it won't be using the same data (and thus databases).

For example, in our docker-compose.yml, if we renamed the volume from devmongo to devmongotwo and booted up our mongo container, we wouldn't have access to any data we may have written while devmongo was our volume. We can simply switch back over to devmongo if we want to access that again.


In this post we covered setting up separate, independent MySQL and MongoDB databases via Docker. Having a clean database instance for each project frees up a lot of mind space, since clashing settings and versions become a non-issue. It's also incredibly convenient to have the databases up and ready for use with a few simple commands.

For a tl;dr mental framework - to leverage this with any database (or service for that matter):

  1. Create a docker-compose.yml file in the project of your desire
  2. In the docker-compose specify the DB Docker Image you'd like and any other services
  3. Expose the service of desire, in our case a database, to a port on your local machine
  4. Use it!
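That mental framework maps onto a compose file skeleton like this (a generic sketch, with angle-bracket placeholders for whatever official image, ports, and data path your datastore of choice uses):

```yaml
version: '2'

services:
  # (1) + (2): name the service and pick the official DB image/tag
  someDb:
    image: <image>:<tag>

    # (3): expose it on a local port of your choosing
    ports:
      - <local-port>:<container-port>

    # persist data in a named volume so it survives container removal
    volumes:
      - somedata:/path/where/the/image/stores/data

# named volumes must also be defined here
volumes:
  somedata:
```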

As usual, if you find any technical glitches or hiccups PLEASE leave a comment or hit me up on twitter or with a message!

Be sure to sign up for weekly updates!

More from the blog

Practical and Professional Devops with AWS, Docker and Node.js

Practical and Professional Devops with AWS, Docker and Node.js Video Series
  • Full Series with 80+ Videos
  • Zero to Everything Setup
  • Full Development Environment for Teams
  • Seamless Continuous Deployment Pipeline
  • Reusable CloudFormation Template
  • Service Oriented, Database Ready
  • Complete Conceptual Explanations