We faced a problem with our traditional LAMP servers. We hosted multiple sites on a single guest and kept hitting limits on the number of connections accumulating across all of them. We addressed this by adding a second Linux server, but as we continued to add sites and grow, it soon hit the same issue. These sites were mostly WordPress sites, with a few older legacy sites scattered about. We needed a way to handle the connection problem and also let us easily scale and load-balance in the future. I found that Docker could do this for us.
First off, I had to learn Docker and containers. You have to wrap your head around it, as it isn't like traditional virtualization, though it has similar concepts. The biggest hurdle for me was the idea that each container should run only one service. Containers are not, and should not be, fully virtualized guest servers. I made a few mistakes before that lesson sank in, but I worked it out eventually.
I decided to build three new virtual guests in our VMware cluster. One would be a new NFS file server to provide simple shared storage for the new Docker servers; I set that up and built the shares for the Docker hosts. The other two guests were to be the Docker/Web servers. I chose Ubuntu Server 16.04, since it was what I was most familiar with and what most of the documentation referenced. As far as documentation goes, I found a DigitalOcean article that was simple to follow, and before I knew it, I had a Docker server running!
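For anyone curious, the install boiled down to just a few commands. Here's a rough sketch of the steps that era of documentation walks through for Ubuntu 16.04, plus mounting the NFS share; the server name (nfs01) and export path are placeholders for our environment:

```bash
# Prerequisites, Docker's repository key, and the engine itself (Ubuntu 16.04)
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

# Mount the shared storage from the new NFS file server on each Docker host
# (nfs01:/export/docker is a placeholder for our file server and export)
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/docker-shares
sudo mount -t nfs nfs01:/export/docker /mnt/docker-shares
```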

The first step was to find the images I should be using. I played around with the base Ubuntu image to learn how to build and run containers. Once I was fairly comfortable with that, I found a way to run my own private registry so I could keep track of my own images. With the registry in place, I wanted to build a test WordPress site using a container. I found an existing image with PHP and Apache and decided to use that. Now I needed to set up a container running that image to host WordPress with all the plugins we use on our sites.
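The private registry itself turned out to be just another container. Something along these lines is all it takes; the image names and tags here are illustrative rather than our exact naming:

```bash
# Run a private registry as a container on the Docker host
docker run -d --name registry --restart=always -p 5000:5000 registry:2

# Pull the public PHP + Apache image, then tag and push it into the
# private registry (localhost:5000 is the registry started above)
docker pull php:7.0-apache
docker tag php:7.0-apache localhost:5000/php-apache:base
docker push localhost:5000/php-apache:base
```

One caveat: to push or pull from the second Docker host, the registry either needs TLS in front of it or has to be listed under the Docker daemon's insecure-registries setting.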

Pulling the image was easy enough, and I spun up a container to test a base WordPress install. That worked fine after I enabled a few Apache modules. I was then able to quickly install Ghostscript to enable one of the big plugin features, and it seemed to be working. I pulled over an existing site to test it by copying the content and the database. It worked! Now it was just a matter of waiting for the right moment to pull the trigger for real.
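Preparing that base image for WordPress amounted to only a handful of commands. Roughly, with illustrative container and tag names (wp-test, :wordpress):

```bash
# Throwaway container from the PHP + Apache image pushed earlier
docker run -d --name wp-test -p 8080:80 localhost:5000/php-apache:base

# Enable the Apache modules WordPress wants (pretty permalinks, cache headers)
docker exec wp-test a2enmod rewrite expires headers

# Add Ghostscript for the big plugin feature, plus the MySQL extension
# WordPress needs, then restart the container so the new modules load
docker exec wp-test apt-get update
docker exec wp-test apt-get install -y ghostscript
docker exec wp-test docker-php-ext-install mysqli
docker restart wp-test

# Freeze the result as a reusable image and push it to the registry
docker commit wp-test localhost:5000/php-apache:wordpress
docker push localhost:5000/php-apache:wordpress
```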

In the meantime, I created a Bash script to automatically deploy a new container so we could quickly stand up a new site. I set up the second Docker/Web server to essentially mirror the first, so we could point our F5 load balancer at each node and port.
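The deploy script is not much more than a parameterized docker run. A simplified sketch of the idea; the registry hostname, port scheme, and NFS paths are stand-ins for our real values:

```bash
#!/bin/bash
# deploy-site.sh <site-name> <port> -- stand up a new WordPress container
# serving its content from the shared NFS storage
SITE="$1"
PORT="$2"
IMAGE="registry.example.com:5000/php-apache:wordpress"   # placeholder registry

docker pull "$IMAGE"
docker run -d \
  --name "wp-${SITE}" \
  --restart=always \
  -p "${PORT}:80" \
  -v "/mnt/docker-shares/${SITE}:/var/www/html" \
  "$IMAGE"
```

Running the same script with the same port on both Docker/Web servers gives the F5 two identical nodes to balance across.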

When it came time to migrate the sites, we chose the legacy sites first, since they were the least used and would make a decent test without causing much disruption. We moved them over, and after a bit of tweaking they were working, except for sending email. The old framework they used required a local SMTP server, but I found an alternative in msmtp, which could stand in for a local mail server inside the container by handing messages off to our relay instead. After those sites were working, we decided to move the remaining big WordPress sites.
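Wiring in msmtp was mostly a matter of installing it in the image and pointing PHP's mail() at it. Roughly, inside the container (the relay hostname and from address are placeholders):

```bash
# Install msmtp and hand all outgoing mail to our relay
apt-get update && apt-get install -y msmtp

# Minimal msmtp config: no local queue, just forward to the relay
cat > /etc/msmtprc <<'EOF'
account default
host mail-relay.example.com
port 25
auth off
tls off
from www-data@example.com
EOF

# Tell PHP to use msmtp as its sendmail binary
echo 'sendmail_path = "/usr/bin/msmtp -t"' \
  > /usr/local/etc/php/conf.d/sendmail.ini
```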
The first one went quite well, since I had already tested it. We ran it on one Docker/Web server first, and we got instant feedback from users that it was much faster than it had been on the previous server! We then deployed it to the second Docker/Web server and load-balanced it. There were no issues whatsoever. Then we worked through the remaining sites and migrated them one at a time. Now all the sites run faster and more reliably, since two servers serve each site instead of one shared Apache instance serving multiple sites.
After all this, we decided we needed a way to keep these sites updated on a regular basis.
I wrote a new Bash script to automatically pull the image stored in our local registry, run updates, commit the result, and push it back into the registry with an updated label. To update the containers, I wrote another script that simply destroys the existing container and redeploys it from the new image in the registry.
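Both scripts are short. Here are sketches of the two, with a placeholder registry name and a date-based label; our real scripts differ only in the details:

```bash
#!/bin/bash
# update-image.sh -- pull the current image, apply OS updates inside a
# temporary container, then commit and push it back with a dated label
IMAGE="registry.example.com:5000/php-apache"   # placeholder registry/name
NEW_TAG="wordpress-$(date +%Y%m%d)"

docker pull "${IMAGE}:wordpress"
docker run -d --name img-update "${IMAGE}:wordpress" sleep infinity
docker exec img-update apt-get update
docker exec img-update apt-get -y upgrade
docker commit img-update "${IMAGE}:${NEW_TAG}"
docker rm -f img-update
docker push "${IMAGE}:${NEW_TAG}"
```

```bash
#!/bin/bash
# redeploy-site.sh <site-name> <port> <tag> -- destroy the running
# container and recreate it from the freshly updated image
SITE="$1"; PORT="$2"; TAG="$3"
IMAGE="registry.example.com:5000/php-apache:${TAG}"

docker pull "$IMAGE"
docker rm -f "wp-${SITE}"
docker run -d --name "wp-${SITE}" --restart=always \
  -p "${PORT}:80" \
  -v "/mnt/docker-shares/${SITE}:/var/www/html" \
  "$IMAGE"
```

Because the site content lives on the NFS share rather than inside the container, destroying and recreating a container this way is quick and safe.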
At some point we might even automate this update process on a schedule so we won't have to deal with it manually, but we'll also need some type of validation to make sure a site doesn't break after an upgrade. That, however, is a future project!
