Laravel Route Caching With FrankenPHP and Kubernetes
I just had the most painful time getting domain-based routing in Laravel to work across dev/stage/prod according to env vars.
Everything worked fine in dev, only for me to get stuck with 404s when I deployed to Kubernetes.
What I figured out (eventually)
- I forgot that my Dockerfile contained an artisan route:cache line. This ran at image build time, before my env vars were populated, so the route cache was built with the wrong values (see the sketch after this list).
- artisan route:list doesn't show the cached routes, so everything looks like it all ought to work.
- Env data is cached too. It's better to read the env vars in config/app.php and use config() in the routes files than to try to read env vars directly in routes.
- Clearing caches didn’t fix anything
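For context, here is a minimal sketch of the kind of Dockerfile that causes this. It's simplified and hypothetical, not my actual file; the point is that route:cache runs at image build time, before Kubernetes has injected any runtime env vars.

# Hypothetical, simplified Dockerfile
FROM dunglas/frankenphp
WORKDIR /app
COPY . .
RUN composer install --no-dev --optimize-autoloader
# Runs at image build time, before the runtime env vars exist,
# so the cached routes get built from the wrong (or missing) values
RUN php artisan route:cache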
What fixed it
Set the relevant env vars in each environment's env files.
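For illustration (the values here are made up; the keys match the config below):

ROOT_DOMAIN=example.com
ADMIN_DOMAIN=admin.example.com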
Add to config/app.php
'root_domain' => env('ROOT_DOMAIN', 'test'),
'admin_domain' => env('ADMIN_DOMAIN', 'know.test'),
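Reading the env vars here (rather than calling env() directly in the routes files) matters because once the config is cached, env() no longer reads the .env file. A sketch of the anti-pattern, hypothetical and using the same variable as above:

// Anti-pattern: after config caching, env() here can return null,
// leaving the route bound to an empty domain
Route::domain(env('ADMIN_DOMAIN'))->get('/', [AuthenticatedSessionController::class, 'create']);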
In my routes files:

use App\Http\Controllers\Auth\AuthenticatedSessionController; // adjust to wherever your controller lives

// routes for my admin site like ..
Route::domain(config('app.admin_domain'))->get('/', [AuthenticatedSessionController::class, 'create'])
    ->name('login');

Route::domain('{site:domain}.' . config('app.root_domain'))->group(function () {
    // routes for public sites on a different subdomain here
});
NB: each site has a domain entry in the sites table, which lets routing look up which site each subdomain belongs to.
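For example, here is a minimal sketch of how the bound site arrives in a route handler. It assumes an Eloquent Site model with a domain column; the model name and view are my assumptions, not taken from the post.

use App\Models\Site; // assumed model backing the sites table

Route::domain('{site:domain}.' . config('app.root_domain'))->group(function () {
    // {site:domain} resolves the Site whose `domain` column matches the subdomain
    Route::get('/', function (Site $site) {
        return view('sites.home', ['site' => $site]);
    });
});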
Remove all caching steps from the Dockerfile
Instead I call a script from the entrypoint:
ENTRYPOINT ["/app/entrypoint.sh"]
#!/bin/bash
set -e
# Build Laravel's config and route caches now that the runtime env vars are present
php artisan optimize
# exec so the Octane/FrankenPHP process receives the container's stop signals
exec php artisan octane:frankenphp
This runs optimize at container startup, when all the env variables are present.
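In Kubernetes those variables come from the Deployment (or a ConfigMap/Secret). A rough, hypothetical container spec, just to show where the values live at runtime:

# Hypothetical Deployment snippet - names and values are illustrative
containers:
  - name: app
    image: registry.example.com/my-laravel-app:latest
    env:
      - name: ROOT_DOMAIN
        value: example.com
      - name: ADMIN_DOMAIN
        value: admin.example.com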
Key Takeaway
Caching can be painful, and artisan route:list not showing the cached routes made it much harder to debug.
Sometimes it's tricky to debug containers: you can't just change a file and restart, because restarting gives you a new instance and you lose the change.
But with my optimize step in the right place, it all worked OK.
I still need to tweak my routes - and probably add the admin domain explicitly to all admin routes. Otherwise I may end up with conflicts - or public users accidentally requesting pages from the admin site.