It’s been a few days since I started learning Docker. This Tweet pretty much sums up where I am now.
This course that I picked up on Udemy, “Docker Mastery: with Kubernetes +Swarm from a Docker Captain”, is about 19 hours long, but about half of that is dedicated to subject matter that, for me, right now, is not entirely relevant. That being the case, I’ve been messing around with the basics, and feel that I have at least a “hanging from a cliff by four out of ten fingers”-style grasp of what Docker is about, and how I might use it.
Virtual Concepts
Docker is to virtual machines what virtual machines are to physical servers. Starting with the largest investment, we have physical servers, which is how enterprise computing has been done for several decades. The upside to physical servers is that everything that the hardware provides is dedicated to the operational processes and that we own the entire ecosystem. The downside is that they are expensive to build and upgrade, and they take up…well…physical space and crank up our electric bills.
Then there are virtual servers. These are operating systems running inside operating systems, and we can run as many of them as we want…up to a point. Pros include compartmentalized instantiation (turn them on and off when we want, and they are their own individual sandboxes), portability, and ease of distribution. The cons include the fact that they have to share the resources of the physical servers they reside on, that they require an additional level of management knowledge, and that we give up the old benefit of knowing a server is dead because there’s smoke pouring out of it.
Now we have Docker. Whereas physical and even virtual servers allow us to dump as many applications and purposes into them as we need, Docker relies on the “one container, one concern” concept. Although it’s still virtualized, a Docker container focuses on a single application, like a web or database server. The packaged app is called an “image”: the files and instructions needed to run the app inside a “container”, which is Docker’s virtualized environment. Docker containers are not virtual machines, but they are small, portable, virtual environments that can be started, stopped, added, or removed entirely on demand. Rather than installing an app into a physical or virtual server, with all of the important files placed all over creation (Windows! Program Files! And the actual path you’ve specified!), Docker keeps the app’s files tucked away in a container, making clean-up a breeze!
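To make the image/container distinction concrete, here’s roughly what that lifecycle looks like from the command line (nginx is just an arbitrary example image, not something from my own workflow):

```sh
docker pull nginx                            # download the image from Docker Hub
docker run -d --name web1 -p 8080:80 nginx   # start a container from that image
docker stop web1                             # stop it on demand...
docker rm web1                               # ...and remove it entirely
```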
What’s It Good For?
This is what I’ve been asking myself. Originally I thought that maybe I could do my app development inside the container, get it working, and then just…push the container to a host and it would work without a lot of server-side configuration or environmental concerns. Being a virtual bucket, if it works on my machine it should work everywhere! And to some extent, this is true, but I’m realizing that, considering my workflow, Docker isn’t the panacea I had hoped it would be.
For my ASP.NET apps, I work in Visual Studio. I test with IIS Express, which comes with VS. Pressing F5 puts me into debug mode, my app compiles, and the app is served using this small web server. When I am ready to deploy the app, I use Visual Studio’s built-in publish feature and point it at the proper endpoint destination. Sites are served by a full-fledged IIS.
For the React apps, I use VSCode. Using the “create-react-app” bootstrap package, I get access to a bunch of different scripts that allow me to build and test live. When the app is ready to deploy, I run the “npm run build” command, the site is packed up and placed into a “build” directory which I deploy to the endpoint. Again, it’s served by IIS.
Right now, my workflows wouldn’t really benefit from shoe-horning Docker into the flow. Code is stored in Git, so all changes are tracked and branched into DEV, QA, and MAIN buckets for keeping changes organized. I am also the sole developer on 95% of the projects I work on, so any updates that need to be done are done by Yours Truly (i.e. very limited collaboration).
Yes, I could include Docker in my processes, but would it benefit me, or would it simply be because I can? Instead, I came up with a better plan for at least some of my work.
Frankenstein’s Docker
So one of the strengths of Docker is that there are official images up on Docker Hub that we can draw from. If I need to have a Mongo database, there’s an image for that, or an Apache server, or a WordPress installation. I can just use a single command and BAM! I have a container running That App.
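For example, each of these is a single command that pulls the official image (if it isn’t already local) and starts a container from it; the names and ports are just illustrative:

```sh
docker run -d --name some-mongo -p 27017:27017 mongo   # a Mongo database
docker run -d --name some-apache -p 8080:80 httpd      # an Apache web server
```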
But the real power comes from custom images. Images can be built from other images. They can be built on small Linux distros, and when they are, we get access to everything such a distro offers.
Consider a website with a database. What I might do under normal circumstances would be to create Docker containers for a web server and a database server, and then use a shared path on the host system to provide the source files to the web server. This would allow me to use my editor of choice to build and test the website. The problem here is that I’ve got two containers and a set of files external to either of them. With this configuration, there’s no way I can use it to distribute the app unless I hope that my host of choice is similar enough to my development environment that I won’t run into any show-stopping issues.
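A rough sketch of that three-piece setup might look like this; the container names, password, and the ./site path are placeholders, not anything from an actual project:

```sh
# A private network so the web and database containers can talk to each other
docker network create devnet

# The database container
docker run -d --name db --network devnet \
  -e MYSQL_ROOT_PASSWORD=devpassword mysql:8.0

# The web server container, with the source files bind-mounted from the host
# so they can still be edited with a local editor
docker run -d --name web --network devnet -p 8080:80 \
  -v "$(pwd)/site:/usr/local/apache2/htdocs" httpd:2.4
```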
Instead, I’d probably continue working with this three-legged stool of a solution simply to ensure the next phase works as expected. When my app is done and ready for distribution, I’d commit to the MAIN branch at GitHub, and then build a custom Docker image based on a Linux OS (roughly sketched after this list) which would:
- Install a web server.
- Install a database.
- Pull the MAIN branch of my GitHub repo.
- Execute whatever build commands need to be issued to get the code in a working state and in the proper web server directory within the container.
- Seed or pull database data to prime the application, if necessary.
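Something like the following Dockerfile is roughly what I have in mind; the base image, package choices, repo URL, build step, and seed script are all placeholders for illustration, not the actual project:

```dockerfile
FROM ubuntu:22.04
ARG DEBIAN_FRONTEND=noninteractive

# 1 & 2: install a web server and a database (Apache and MySQL as stand-ins)
RUN apt-get update && \
    apt-get install -y apache2 mysql-server git && \
    rm -rf /var/lib/apt/lists/*

# 3: pull the MAIN branch of the (hypothetical) GitHub repo
RUN git clone --branch main --depth 1 https://github.com/example/my-app.git /src

# 4: run whatever build commands the project needs, then put the output into
#    the web root (a plain copy stands in for the real build here)
RUN cp -r /src/. /var/www/html/

EXPOSE 80

# 5: seed the database on startup (seed.sql is a hypothetical file in the repo),
#    then keep Apache in the foreground; a real image would use a proper start script
CMD service mysql start && \
    mysql < /src/seed.sql ; \
    apachectl -D FOREGROUND
```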
I could run this image locally and test it, and assuming it all works as planned, spin up a container from it in a public hosting environment. One of my biggest problems historically is that I am a C# developer, and most hosting out there is Linux-based. With the new .NET Core (now just called .NET to confuse people) I have the ability to run on Linux, but I usually also need access to a database, which isn’t always available in hosting, or can be had for an additional cost. This is why this site runs on WordPress: it’s a one-button solution offered by my hosting provider.
With a container solution, I don’t have to worry about finding a host that uses a compatible web server, or about finding a decent database provider that I like working with. This is the kind of thing I believe Docker does well: I can work locally, test everything, and then use a script to replicate that exact environment elsewhere. That I can script all of the steps in creating the container, from pulling images from Docker Hub, to pulling branches from Git, to compiling code as needed, is a pretty big time- and effort-saver.
I’m still a ways from needing to focus on trying this out, as I don’t have an application that I need to deploy, but I have started by running a MySQL image in a local container and have connected to it using MySQL Workbench. Once I get that squared away, I’ll probably create another container, this time with Apache, and share its working directory with the host so I can continue to use my installed copy of VSCode to work on the website. When I have something to test (anything, really), I’ll take a stab at creating that “Frankenimage” of the OS, web and database servers, and source code, and we’ll see how things work out.
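For the curious, this is roughly what that first experiment looks like; the password and port mapping are placeholders rather than the actual values I’m using:

```sh
docker run -d --name mysql-dev \
  -e MYSQL_ROOT_PASSWORD=devpassword \
  -p 3306:3306 mysql:8.0

# MySQL Workbench on the host can then connect to 127.0.0.1:3306 as root.
# The Apache container would follow the same bind-mount pattern sketched
# earlier, so the site files stay editable in VSCode on the host.
```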
1 Comment
Nimgimli
September 20, 2021 - 3:27 PM
What I use Docker for is local Linux development of sites on a Windows system. So we have 150+ sites. I have images for about 30 of them on my local machine so when I need to work on one of them I spin it up. We put our wordpress themes and plugins, only, in git, so I can just push the themes from my local to the production server. But I’m kind of 1 level removed from the OS. As long as both my local and the production are running the same version of WordPress I feel pretty safe, though I HAVE run into issues now and then when the PHP versions didn’t match.
I’m using “old” Docker, pre-WSL, for my work stuff. I’m just now figuring out WSL2 Docker. I have some VS Code plugins that let me open files in the container via VS Code, but I’m still wrapping my head around that. It seems like it’ll be cool once I figure it out but I’m hesitant to mess with my actual work work-flow, if you know what I mean.