Tangible Bytes

A Web Developer’s Blog

Kubernetes Multi Container Pod

So far on my Kubernetes journey I’ve only ever had one container per pod.

But I needed to run php-fpm fronted by nginx - with static assets served directly by nginx.

A lot of online examples skip this complexity by serving both php and static assets via Apache.

While it seemed complex at first - like a lot of Kubernetes, it’s fairly straightforward once you have made the leap.

Why Bother ?

I’ve seen PHP sites served by Apache really suffer under load. What happens is that a PHP process grabs a lot of memory doing some complex work - and then the same process gets busy serving static assets (images or JavaScript files), but the memory doesn’t get efficiently recycled.

php-fpm is optimised for serving PHP, keeping cached PHP code ready for the next request - meanwhile nginx is well optimised to serve static files efficiently while handing off the PHP work.

Why Not Use Two Pods ?

There are arguments on both sides here.

Two pods would mean you could scale each independently.

But I wanted to ensure both are properly in sync, so that the site’s minimised JavaScript files match the PHP that is being served - so realising these as a single unit makes sense.

How Does It Work ?

It is as simple as the deployment.yaml file having two entries in the containers list.
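A minimal sketch of what that looks like - the names, image references and port here are placeholders rather than my actual deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        # nginx serves the static assets and proxies PHP requests
        - name: nginx
          image: registry.example.com/mysite-nginx:latest
          ports:
            - containerPort: 80
        # php-fpm only needs to be reachable from nginx within the pod
        - name: php-fpm
          image: registry.example.com/mysite-php:latest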

In my case the containers are built from the same multi-stage Dockerfile - with different targets.

This ensures that my static files come from the same build - one target ending up in an nginx container and the other in a php-fpm container.
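Roughly, such a Dockerfile looks like this - the stage names and base images are illustrative, not my real build:

# shared build stage - composer install, asset compilation etc.
FROM php:8.2-fpm AS build
COPY . /var/www/html

# target for the php-fpm container
FROM php:8.2-fpm AS php
COPY --from=build /var/www/html /var/www/html

# target for the nginx container - the same built files
FROM nginx:1.25 AS nginx
COPY --from=build /var/www/html /var/www/html

Each image is then built from the same context, with docker build --target php and docker build --target nginx.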

The two containers are in the same pod and share an IP address - which means they can communicate on 127.0.0.1 and I don’t even need to expose the php-fpm service to any other pod. This seems great for security as the php-fpm process is a very trusting one and could easily be abused if over-exposed.
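On the nginx side that just means pointing fastcgi_pass at localhost - a sketch, with paths that are assumptions rather than my exact config:

server {
    listen 80;
    root /var/www/html/public;
    index index.php;

    # static assets served straight from disk by nginx
    location / {
        try_files $uri /index.php?$query_string;
    }

    # PHP requests handed to php-fpm in the same pod over localhost
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}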

They can also share files - and this might be useful, but I already had the same files on each container so haven’t investigated.
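If I did need it, I’d expect it to be an emptyDir volume mounted into both containers - untested on my part, so treat this as a guess:

      volumes:
        - name: shared-files
          emptyDir: {}
      containers:
        - name: nginx
          volumeMounts:
            - name: shared-files
              mountPath: /var/www/shared
        - name: php-fpm
          volumeMounts:
            - name: shared-files
              mountPath: /var/www/shared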

How To Connect To a Container

Usually when you do something like exec on a pod there is only one container and you connect to that.

If you have more than one container, exec will connect you to the first container by default.

To specify a container, add it with the -c flag:

kubectl exec --stdin --tty mypod -c mycontainer -- /bin/sh
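The same -c flag works on other pod-level commands - for example, tailing logs from just the nginx container (the pod and container names here are assumptions):

kubectl logs --follow mypod -c nginx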

Summary

  • add a second container to your deployment’s containers list
  • the two containers can communicate via the shared localhost IP
  • add a -c container parameter to kubectl commands