This post is raw. No grammar is checked.
When doing local development or just tinkering with this or that, quite often you want to expose something to the Internet.
When it’s just a single “something”, you can just forward port 80 from your router to your machine’s “something” port.
But this falls apart if you have two “somethings” you want to expose to the Internet. You can’t forward a single port 80 from your router to two different applications running on your machine, because one application will be running on, for example, port 8080 and the other on port 8081.
To solve this, you need a Reverse Proxy. Not a Forward Proxy. That is a different thing, often used in corporate networks to limit and secure outgoing employee traffic.
So… Reverse Proxy.
There are many solutions you can pick from: Nginx (which can act as a reverse proxy, but at its core is just a pure web server), Traefik, Envoy and others.
But today I will talk about HAProxy. A really capable Layer 4 proxy.
I will not go deep into the weeds about it in this post.
I just want to show how to run it locally on your machine as a rootless Podman Quadlet.
Personally I am using Ansible templates to render all the files and resources, but in this post I will show just the MINIMAL working files you need to get it running.
I will not cover how to configure TLS certificates. And I will not explain all the ins and outs of HAProxy configuration.
But we will enable the Dataplane API so you can dynamically tweak the configuration of your HAProxy instance.
We need to create 3 files in the ~/.config/containers/systemd directory:

- haproxy-configmap.yaml
- haproxy.kube
- haproxy.yaml

Let’s go with haproxy-configmap.yaml first:
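Below is a minimal sketch of what such a ConfigMap could look like. Everything in it is an assumption you should adjust to your setup: the ConfigMap name, the admin/adminpwd credentials, port 5555, the file paths, and the exact Dataplane API config schema (check the Dataplane API docs for your version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-configmap
data:
  haproxy.cfg: |
    global
      master-worker
      stats socket /var/lib/haproxy/haproxy.sock mode 660 level admin

    # Run the Dataplane API as a second process, managed by the HAProxy master
    program api
      command /usr/local/bin/dataplaneapi -f /etc/haproxy/dataplaneapi.yaml
      no option start-on-reload

    defaults
      mode http
      timeout connect 5s
      timeout client 30s
      timeout server 30s

    frontend main
      bind :8080
      bind :8443

  dataplaneapi.yaml: |
    config_version: 2
    dataplaneapi:
      host: 0.0.0.0
      port: 5555
      user:
        - name: admin
          insecure: true
          password: adminpwd
    haproxy:
      config_file: /etc/haproxy/haproxy.cfg
      haproxy_bin: /usr/local/sbin/haproxy
      reload:
        reload_cmd: kill -SIGUSR2 1
        restart_cmd: kill -SIGUSR2 1
```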
This is our minimal HAProxy configuration to get it running.
We have defined 2 sets of configuration which will be mounted as files in the container: the first one is haproxy.cfg and the second one is dataplaneapi.yaml.
The Dataplane API and HAProxy will run as 2 separate processes within a single container.
The Dataplane API can also be configured in HashiCorp Configuration Language (HCL), but we will use YAML here.
Next, we need to create the Systemd unit file haproxy.kube. It will be used by the Quadlet generator to generate a proper Systemd unit.
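A minimal sketch of the .kube unit, assuming the file names used in this post and the unprivileged host ports 8080/8443 (plus 5555 for the Dataplane API):

```ini
[Unit]
Description=Rootless HAProxy pod

[Kube]
Yaml=haproxy.yaml
ConfigMap=haproxy-configmap.yaml
# Unprivileged host ports: 8080/8443 for traffic, 5555 for the Dataplane API
PublishPort=8080:8080
PublishPort=8443:8443
PublishPort=5555:5555

[Service]
Restart=always

[Install]
WantedBy=default.target
```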
As you can see, we use unprivileged ports. And we point to the 2 other files there.
Next goes our almost-last file, haproxy.yaml, which is just a Podman Pod manifest.
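A minimal sketch of such a Pod manifest, assuming the pod is named haproxy with a container named server (so Podman names the container haproxy-server), and that the ConfigMap items are mounted via subPath:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: haproxy
spec:
  containers:
    - name: server
      image: localhost/your-namespace/haproxy:2.9.7-rootless
      securityContext:
        # Run as the unprivileged haproxy user and drop all capabilities
        runAsUser: 1000
        capabilities:
          drop:
            - ALL
      ports:
        - containerPort: 8080
        - containerPort: 8443
        - containerPort: 5555
      volumeMounts:
        # Mount the two ConfigMap items as individual files
        - name: config
          mountPath: /etc/haproxy/haproxy.cfg
          subPath: haproxy.cfg
        - name: config
          mountPath: /etc/haproxy/dataplaneapi.yaml
          subPath: dataplaneapi.yaml
  volumes:
    - name: config
      configMap:
        name: haproxy-configmap
```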
Nothing too fancy there. We are running the container as user 1000, which is haproxy. We drop all container capabilities. And we mount our ConfigMap items as files there.
But there is a single issue which you might have noticed.
We are using some kind of localhost/your-namespace/haproxy:2.9.7-rootless image.
Well… yeah… the official HAProxy image is not tailored to run rootless. So we need to build our own custom image to tweak some little things.
Don’t panic. It’s simple.
We need to create a simple Containerfile:
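A sketch of what such a Containerfile could look like. The base image here is an assumption: the haproxytech images bundle the dataplaneapi binary, while the official docker.io/library/haproxy image does not ship it. Also check that the haproxy user’s UID in your base image matches the runAsUser value in your Pod manifest:

```dockerfile
FROM docker.io/haproxytech/haproxy-debian:2.9.7

USER root

# Make sure the config/runtime directories exist and are owned by the
# unprivileged haproxy user, so it can read configs and write sockets
RUN mkdir -p /etc/haproxy /var/lib/haproxy /tmp/haproxy && \
    chown -R haproxy:haproxy /etc/haproxy /var/lib/haproxy /tmp/haproxy

USER haproxy
```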
That’s it.
your-namespace can be whatever you want; use your username if you are not sure. And of course you can bump the HAProxy version.
Now, from the directory where your Containerfile is located, execute: podman build -t localhost/your-namespace/haproxy:2.9.7-rootless .
Podman will pick up your Containerfile automatically. But you can pass the location of the Containerfile by using -f path/to/Containerfile.
As you can see, in our custom image we simply ensure that the config directories are present, and we set the permissions so that the haproxy user in the container can access these directories.
Use podman image ls | grep haproxy to check whether the image is there.
So, once our image is built, we can deploy our Podman Quadlet.
Simply run systemctl --user daemon-reload for Systemd to pick up our 3 new config/unit files, and then systemctl --user start haproxy.service.
This should start the HAProxy Pod.
podman pod ps should tell you that haproxy is running.
Basically, that’s it!
HAProxy should be functional, but not very useful, as it basically has no configuration.
To do the configuration… you have a few options.
- You can extend our ConfigMap with the config sections you want.
- You can use curl and the Dataplane API to “inject” the configs dynamically.
- You can use Terraform and the Dataplane API. Or Ansible.
I would recommend using just the Dataplane API and at least simple Bash scripts with a bunch of curl commands to do the configuration. Don’t make manual edits in the config. Even more: don’t make manual edits in the container itself.
To interact with the Dataplane API use:
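A hedged example, assuming the Dataplane API listens on port 5555 with user admin and password adminpwd (use whatever credentials and port you configured in dataplaneapi.yaml):

```shell
# List configured backends (credentials/port must match dataplaneapi.yaml)
curl -s -u admin:adminpwd \
  http://127.0.0.1:5555/v2/services/haproxy/configuration/backends

# Check the current config version (needed for write operations)
curl -s -u admin:adminpwd \
  http://127.0.0.1:5555/v2/services/haproxy/configuration/version
```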
Here we are “curling” into the local container over the local 127.0.0.1 IP and then just using regular REST endpoints to GET or POST config fragments as JSON.
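A sketch of such a POST; the backend name my_app and the balance algorithm are just examples:

```shell
# Create an empty backend named "my_app" against config version 1
curl -s -u admin:adminpwd \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"name": "my_app", "mode": "http", "balance": {"algorithm": "roundrobin"}}' \
  "http://127.0.0.1:5555/v2/services/haproxy/configuration/backends?version=1"
```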
For example, here we are creating a “skeleton” for some backend.
Pay attention to the ?version=1 at the end of the query. The Dataplane API will automatically increase the version number of the haproxy.cfg file every time you make some changes. This is so that you can roll back, and to protect against concurrent config updates if other team members update the config at the same time as you do.
So… in essence… that’s it. You should have a functional HAProxy.
Forward your router’s ports 80 and 443 to ports 8080 and 8443 on your HAProxy host.
When somebody hits your domain and gets redirected to your home’s static IP, your router will redirect that request to port 8080 or 8443. What happens from there… now is up to your HAProxy config.
To stop HAProxy, use systemctl --user stop haproxy.service.
To get into the container, use podman exec -it haproxy-server /bin/bash.
To get into the container as the root user, use podman exec -u 0 -it haproxy-server /bin/bash.
To see the logs: journalctl --user -xeu haproxy.service -f.
Things to improve:
- Systemd lingering, so that the Pod starts when you turn on your machine, using a dedicated user for that.
- TLS for the Dataplane API
- Securing Dataplane API and Stats user/s credentials
- Automated setup via Ansible
- Forward all http traffic to https
- Enable a Let’s Encrypt ACME path rule (so that you can use Certbot without the DNS method)
- Use crt-list to combine different TLS certificates
- SSH handling. In case you run your own Git server, you might want to proxy to multiple SSH targets.
So… before going deeper into the weeds… make sure you have a minimal working example to which you can always return when things go south.