So far on my Kubernetes journey I’ve only ever had one container per pod.
But I needed to run php-fpm fronted by nginx - with static assets served directly by nginx.
A lot of online examples skip this complexity by serving both php and static assets via Apache.
While it seemed complex at first, like a lot of Kubernetes it's fairly straightforward once you've made the leap.
Why Bother ?
php-fpm is optimised for serving PHP, keeping cached PHP code ready for the next request - meanwhile nginx is well optimised to serve the static files efficiently while handing off the PHP work.
Why Not Use Two Pods ?
There are arguments on both sides here.
Two pods would let you scale nginx and php-fpm independently - but a single pod keeps them co-located, so they can talk over localhost without exposing php-fpm to the rest of the cluster.
How Does It Work ?
It is as simple as the deployment.yaml file having two entries in the containers list.
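As a sketch, such a deployment might look like this - the names, labels, and image tags here are placeholders for illustration, not my actual setup:

```yaml
# Sketch of a two-container pod; names and images are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx
          image: myapp-nginx:latest   # built from the nginx target
          ports:
            - containerPort: 80
        - name: php-fpm
          image: myapp-fpm:latest     # built from the php-fpm target
          # no ports exposed beyond the pod - nginx reaches it on 127.0.0.1:9000
```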
In my case the containers are built from the same multi-stage Dockerfile - with different targets.
This ensures that my static files are from the same build - one target ending up in an nginx container and the other in php-fpm.
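A minimal multi-stage Dockerfile along those lines might look like this - the stage names, base images, and paths are illustrative assumptions, not my actual build:

```dockerfile
# Illustrative multi-stage build: both images come from the same build.
FROM composer:2 AS build
WORKDIR /app
COPY . .
RUN composer install --no-dev

# php-fpm target: the application code, no web server
FROM php:8-fpm AS fpm
WORKDIR /var/www/html
COPY --from=build /app /var/www/html

# nginx target: the same static assets, no PHP runtime
FROM nginx:stable AS web
COPY --from=build /app/public /usr/share/nginx/html
```

You would then build each image with `docker build --target fpm .` and `docker build --target web .` respectively.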
The two containers are in the same pod and share an IP address - which means they can communicate on 127.0.0.1 and I don't even need to expose the php-fpm service to any other pod. This seems great for security as the php-fpm process is a very trusting one and could easily be abused if over-exposed.
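On the nginx side this is just a fastcgi_pass pointing at loopback. A sketch, assuming php-fpm is listening on its default port 9000 and the document root path used above:

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.php index.html;

    # static assets served directly by nginx
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # PHP requests handed off to php-fpm in the same pod via loopback
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```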
They can also share files - this might be useful, but I already had the same files in each container so haven't investigated.
How To Connect To a Container
Usually when you do something like exec on a pod there is only one container and you connect to that.
If you have more than one container, exec will connect you to the first container.
To specify a container, add it as a parameter:
kubectl exec --stdin --tty mypod -c mycontainer -- /bin/sh
- add a second container in your deployment's containers list
- the two containers can communicate via the shared localhost IP
- add a container parameter to kubectl commands