David's Blog


from David Claeys

In a previous article we explained how you could deploy a .NET application with Docker. The content of this article will be applicable whether you use a .NET backend or not.

Possible pitfalls

A possible issue is that you only want to make your backend available to your front-end. This is quite nice since it significantly decreases the possible attack surface. But at first glance this is not possible, since the clients running the application wouldn't be able to perform any API call.

Or maybe, as per convention, you host all your backends at api.example.com/apiName while you want to give your front-end applications a more recognizable domain. If you've tried to just point your client requests to a different domain you've probably noticed the following problems :

– it's quite annoying to hardcode domains since these can change over time
– CORS won't let you do it

The solution

These problems can both be solved by building a Docker image. The proposed example is based on Node, but with some creativity you could adapt it to any front-end solution. To be clear : since we're using Node we can build with any framework based on it (like React or Angular).

We will split up the building process in two stages.

First build stage : Compiling

The first stage is intended to build our Node application. If you want to build an application that's not based on Node, this is where you would change the base image. If for some reason your build process requires multiple steps, this is the place where you would add them.

FROM node:22-alpine AS builder
# all subsequent commands will be performed in the /app directory
WORKDIR /app/
# copy all the source code into the current directory
COPY . .
# update the system, then install all dependencies and run the build
RUN apk update && apk upgrade --available && npm install \
    && npm run build

Second build stage : Hosting

The following stage is responsible for running an HTTP server (Nginx) that hosts the application and proxies requests to the backend.

The contents of this stage would be something like this :

FROM nginx:mainline-alpine
# define environment variables for later substitution
ENV API_PROTOCOL="https"
ENV API_HOST="localhost"
ENV API_PORT="80"
# change the working directory to the main nginx directory
WORKDIR /usr/share/nginx/html
# update the system and add dependencies
# default nginx configurations are also wiped out
RUN apk update && apk upgrade --available \
    && apk add envsubst \
    && rm -rf ./*
# copy the build output to the current folder
COPY --from=builder /app/build .
# add nginx configuration template file
COPY nginx.conf.template /etc/nginx/nginx.conf
# add script for variable substitution at runtime
COPY entrypoint.sh /docker-entrypoint.d/05-docker-entrypoint.sh
# set correct file permissions and remove files that are not needed
RUN chmod +x /docker-entrypoint.d/05-docker-entrypoint.sh \
    && apk del envsubst \
    && rm -rf /var/cache/apk/* \
    && rm -rf /etc/nginx/conf.d
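
Once the template and entrypoint files described below are in place, building and running the image could look something like this (a sketch; the image name and port mapping are hypothetical) :

docker build -t my-frontend .
# the API_* variables can be overridden per deployment, no rebuild needed
docker run -d -p 8080:80 \
  -e API_PROTOCOL=https -e API_HOST=api.example.com -e API_PORT=443 \
  my-frontend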

Nginx configuration overview

Let's take a look at a file we will call nginx.conf.template.

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    server {
        listen 80;
        
        location / {
            root /usr/share/nginx/html;
        }

        location /hubs {
            allow all;
            # App server url
            proxy_pass $API_PROTOCOL://$API_HOST:$API_PORT;

            # Configuration for WebSockets
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_cache off;
            proxy_cache_bypass $http_upgrade;

            # WebSockets were implemented after http/1.0
            proxy_http_version 1.1;

            # Configuration for ServerSentEvents
            proxy_buffering off;

            # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds
            proxy_read_timeout 100s;

            proxy_ssl_server_name off;
            proxy_ssl_verify off;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

Discussing locations

The first thing to note is that we define two locations : / (web location) and /hubs (proxy location). The / location will host the build output of our application, while the /hubs location receives the requests that will be proxied. In order for the web location to work, the build files must be present in the indicated root directory.

The reason we did not call the proxy location /api is that our front-end application uses SignalR to communicate with the backend. The configuration provided in this example enables features like WebSockets and long polling. However, you can tweak the example provided to meet your needs.

If you look deeper into the proxy configuration you will probably notice $API_PROTOCOL://$API_HOST:$API_PORT. If you tried this configuration directly in Nginx it would fail, pointing out that your configuration is incorrect.

Don't worry though : these are simply placeholders (that's the reason we've called this file a template) that will be replaced later on. Our front-end application can simply point API communication to /hubs/whatever and our proxy will take care of it.
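
To illustrate, with the container from the earlier sketch running, both locations can be exercised with curl (the /hubs sub-path below is hypothetical) :

# served from the build output in /usr/share/nginx/html
curl http://localhost:8080/
# proxied by Nginx to https://api.example.com:443
curl http://localhost:8080/hubs/chat/negotiate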

Variable substitutions

Let me ask you a question : when do you replace the placeholders with their final values ? If you do it at build time, each time a domain changes you'll be forced to rebuild. Or worse, if you host multiple instances this means you'll need to build a separate image for each instance. I think it's obvious this method is not desirable at all.

Instead of performing variable substitutions at build time, they should be performed at run time. Modifying the entry point of an existing Docker image can be quite tricky; luckily we won't need to. The nginx image provides a feature whereby any scripts you put into the /docker-entrypoint.d folder of the container will be run at startup time.

We will substitute the following variables : API_PROTOCOL, API_HOST and API_PORT. Let's have a look at our entrypoint.sh file :

#!/usr/bin/env sh
set -eu

echo "$(envsubst '${API_PROTOCOL},${API_HOST},${API_PORT}' < /etc/nginx/nginx.conf)" > /etc/nginx/nginx.conf
exec "$@"

This script is quite simple : it uses the envsubst command to read and substitute the contents of /etc/nginx/nginx.conf and afterwards writes the result back into the same file. So during our Docker image build we need to place our template file at /etc/nginx/nginx.conf, and at runtime this script will substitute the contents of the file with the provided environment variables.
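
If you want to see what the substitution does, you can reproduce it by hand; a minimal sketch, assuming envsubst (from gettext) is installed and the three variables are exported :

export API_PROTOCOL=https API_HOST=backend API_PORT=443
echo 'proxy_pass $API_PROTOCOL://$API_HOST:$API_PORT;' \
  | envsubst '${API_PROTOCOL},${API_HOST},${API_PORT}'
# prints : proxy_pass https://backend:443;
# nginx's own variables (like $http_upgrade) are left untouched
# because they are not in the list passed to envsubst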

Considerations and thoughts

In this example we used Nginx as our HTTP server, but you can use the server that best fits your use-case. If you choose to do so you will need to figure out how to set up a proxy on your own. To be honest, most common HTTP servers provide plentiful documentation, so it really shouldn't be a problem.

You might have noticed the use of envsubst. The placeholder substitution at runtime was one of the parts where I struggled most. For some reason it was quite tricky to get the values of the environment variables in a shell script and put them into the configuration file. The most annoying part is that you have to explicitly specify the variables you want to substitute. If you have a large amount of placeholders to replace this can become quite cumbersome.

 

from David Claeys

The problem

In a previous post I went through the process of setting up your own EPG provider with iptv-org/epg. That process is still valid but it has some important drawbacks. First of all, the setup process is quite lengthy, which may scare potential users away. Secondly, the installation process is performed directly on the host, which might be a dealbreaker if you like hosting applications through Docker.

The solution

Introduction

This is where one of my personal projects comes into play : epg-info-docker. The purpose of this repository is to take the code in iptv-org/epg and build a Docker image out of it.

If you want to take a look at it, the code is available through my git server or GitHub. You obviously can take this code and build it yourself, but this is not the most convenient option.

For your convenience images are made available at different registries :

– git.claeyscloud.com/david/epg-info
– ghcr.io/davidclaeysquinones/epg-info
– docker.io/davidquinonescl/epg-info

Each of these images is the same, so you can pick the image from where you prefer.

Setup

You can use this image in the following way :

version: '3.3'
services:
  epg:
    image: git.claeyscloud.com/david/epg-info:latest
    #image: ghcr.io/davidclaeysquinones/epg-info:latest
    #image: davidquinonescl/epg-info:latest
    volumes:
      # add a mapping in order to add the channels file
      - /docker/epg:/config
    ports:
      - 6080:3000
    environment:
      # specify the time zone for the server
      - TZ=Etc/UTC
      # uncomment the underlying line if you want to enable custom fixes
      #- ENABLE_FIXES=true
    restart: unless-stopped

In order to set up the program you need a channels.xml file. This file describes the providers and channels for which you want the program to generate EPG information. An example of the contents of this file looks like this :

<?xml version="1.0" encoding="UTF-8"?>
<channels>
 <channel site="movistarplus.es" lang="es" xmltv_id="24Horas.es" site_id="24H">24 Horas</channel>
</channels>

In the repo you can look for all available providers. Each provider has a list with its available channels.
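
Once the container is up you can check that guide data is being served; a quick sketch, assuming the image exposes the guide at /guide.xml like the upstream project does (6080 is the host port from the compose file above) :

# should return the generated XMLTV guide
curl http://localhost:6080/guide.xml | head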

And that's it ! You've just set up your own EPG provider.

 

from David Claeys

In recent years streaming services have gained a lot of popularity. However, for a multitude of reasons we sometimes might want to watch live TV.

Depending on the place you live, your ISP or cable provider might (or might not) provide some kind of app to watch TV on your mobile devices. However, some apps are crappy, others are limited in the channels you can watch and others might have a very limited feature set. For these reasons you might want to watch live TV on your own terms.

In this article we will look at how you would go about setting up live TV on your own infrastructure. In the end you'll be able to stream TV through the web and on mobile devices in a very convenient way.

In order to reach our end goal we will perform the following steps :

– Installing and setting up iptv-org/epg to acquire EPG data
– Installing and setting up Threadfin
– Installing and setting up Jellyfin

Disclaimer : this article assumes that you have some knowledge about the Linux network stack and Docker.

Setting up EPG

Getting schedules for the channels you want is quite essential in order to have a good experience. However, depending on the country where you live, getting EPG (Electronic Programme Guide) data can be very easy or almost impossible.

For example, if you live in Spain dobleM provides EPG information for almost any channel you can imagine.

However, if you live in Belgium getting decent EPG information is very challenging. I've looked through forums and haven't found any usable source.

Setting up your own EPG provider

So what do you do when there are no EPG sources available for your country or for a particular channel ?

This is where iptv-org/epg comes to the rescue.

Let's get through the necessary steps in order to set it up.

First of all you'll want a system with a static IP address. We will be using Ubuntu 22.04 to perform the setup process. As always, feel free to use any Linux flavor you like, but be aware that you might hit some roadblocks if you do so.

Updating and installing dependencies

First of all we want to make sure all our system dependencies are up to date, and then we will install the necessary dependencies.

sudo apt-get update \
  && sudo apt-get upgrade -y -q \
  && sudo apt-get install curl -y \
  && sudo apt-get install git -y

Installing Nodejs

In order to install the latest supported NodeJS version we will be using NodeSource. There are other ways to achieve the same, but this is the most convenient.

Note : At the moment NodeJS 22 is not compatible with the software we're installing.

curl -fsSL https://deb.nodesource.com/setup_21.x -o nodesource_setup.sh
sudo -E bash nodesource_setup.sh
sudo apt-get install -y nodejs

Once you've performed these steps the command node -v should return v21.x.x.

Installing iptv-org/epg

Now we can proceed to the actual installation of our EPG provider. First we will make a directory where we will perform the installation :

mkdir /bin/epg -p

Now we want to go into the directory we just made by typing cd /bin/epg

At this point we are ready to clone the git repository into our server.

git -C /bin clone --depth 1 -b master https://github.com/iptv-org/epg.git

Once the source code is on our machine we can install the necessary dependencies.

npm install

In order to serve our files over the network we also want to install an npm module called pm2 :

npm install pm2 -g

Now we will create two scripts that will enable us to start our EPG provider at startup.

start.sh :

#!/bin/bash

# serve the generated guide files over the network with pm2
pm2 --name epg start npm -- run serve
# grab EPG data now and refresh it every day at 00:00 and 12:00
npm run grab -- --channels=channels.xml --cron="0 0,12 * * *" --maxConnections=10 --days=14 --gzip

stop.sh :

#!/bin/bash

# remove the pm2 process that start.sh created (process id 0)
pm2 delete 0

To use these scripts we need to create our service file by typing nano /etc/systemd/system/epg.service and putting the following content in the file :

[Unit]
Description=Epg
After=network.target

[Service]
ExecStart=/bin/epg/start.sh
ExecStop=/bin/epg/stop.sh
WorkingDirectory=/bin/epg

[Install]
WantedBy=default.target 

As a last step we need to tell the system it should reload its services, and enable the new service so it starts at boot, as shown below.
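
Putting those last steps together, something like this should do (a sketch; it assumes the two scripts were saved in /bin/epg) :

# make both scripts executable, otherwise systemd can't run them
chmod +x /bin/epg/start.sh /bin/epg/stop.sh
# reload systemd and enable the service so it also starts at boot
systemctl daemon-reload
systemctl enable --now epg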

We've just completed the installation of our own EPG provider but in order to get actual EPG information we need to tell it which channels we want information for.

We do this by creating a file called channels.xml by typing nano channels.xml. An example of the contents of this file looks like this :

<?xml version="1.0" encoding="UTF-8"?>
<channels>
 <channel site="movistarplus.es" lang="es" xmltv_id="24Horas.es" site_id="24H">24 Horas</channel>
</channels>

The contents of this file depend on which providers and channels you want to use. In the repo you can look for all available providers. Each provider has a list with its available channels.
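
Before relying on the cron schedule you can run a one-off grab to verify that your channels.xml is picked up; a minimal sketch using the same flags as start.sh :

# grab a single day of data for a quick test
npm run grab -- --channels=channels.xml --days=1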

Be aware that not all providers are equal. For example, telenet.tv is rock solid but lacks program thumbnails for most channels. In contrast, pickx.be keeps breaking because of intentional API changes, but most programs have thumbnails.

Finding the right providers for the right channels is a process of trial and error and also depends on what you're willing to deal with.

These are some providers you could use :

This list is by no means exhaustive, and if you're looking for other countries you should check which providers are available.

Setting up Live Tv streams

The next piece of the puzzle is getting the streams for the channels you want. The options you have depend a lot on where you live and on your goals.

For example, in the US you could use an HD HomeRun. In some countries (like Spain) you could install a DVB-T2 decoder into your system and set up tvheadend to stream over the network. However, if you live in a country where open standards were purposely not adopted (like Belgium) your only option is to resort to an IPTV provider.

There are some IPTV lists available, like iptv-org/iptv or TDTChannels, that just list publicly available streams and are completely legal.

If you still choose to use an IPTV provider that infringes copyright, please be aware that depending on legislation you could be sanctioned just for being a customer. Also be aware that getting scammed while sourcing an IPTV provider is a real possibility. I don't want to encourage nor recommend sourcing an IPTV provider that infringes copyright. If you make that decision you do so under your own responsibility. Please be careful and try to minimize risks as much as possible.

Some pieces of software (like Jellyfin) offer a direct integration with the HD HomeRun. If you have such a device you can integrate it directly. However, I would recommend using Threadfin as an intermediate layer in order to manage EPG and channel numbering. If you're using an m3u stream from tvheadend or an IPTV provider you can't get around using this piece of software.

Installing Threadfin

This is how a Docker compose file for Threadfin would look without any additional precautions :

version: "3.5"
services:
  threadfin:
    image: fyb3roptik/threadfin
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TIME_ZONE}
    volumes:
      - ${THREADFIN_CONFIG_DIR}:/home/threadfin/conf
    ports:
      - 34400:34400
    restart: unless-stopped
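
After bringing the stack up, the web interface should be reachable on the mapped port (the /web path is how Threadfin normally exposes its UI; verify against its documentation) :

docker compose up -d
# then open http://<host>:34400/web/ in your browser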

If you would like to take some precautions, gluetun is a very good option. It is basically a Docker image that allows you to configure almost any VPN provider.

In the wiki you can find information about how to set up your particular VPN provider.

So if you would like to take precautions your compose file would look like this :

version: "3.5"
services:
  vpn:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    environment:
      - TZ=${TIME_ZONE}
      - VPN_SERVICE_PROVIDER=${YOUR_PROVIDER}
      ....
      # some provider specific variables
      ....
      - FIREWALL_OUTBOUND_SUBNETS=${YOUR_SUBNET}/24
    ports:
      - 34400:34400
    volumes:
      -  ${VPN_CONFIG_DIR}:/config
    restart: unless-stopped
  threadfin:
    image: fyb3roptik/threadfin
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TIME_ZONE}
    depends_on:
      - vpn
    network_mode: service:vpn
    volumes:
      - ${THREADFIN_CONFIG_DIR}:/home/threadfin/conf
    restart: unless-stopped
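
If you want to confirm that Threadfin's traffic really leaves through the VPN, you can compare the public IP seen from inside the vpn service's network namespace with your own. A sketch, assuming the gluetun container ended up named vpn :

# should print the VPN exit address, not your ISP address
docker run --rm --network "container:vpn" alpine:latest \
  wget -qO- https://ifconfig.me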

Setting up Threadfin

Once Threadfin is installed we need to set it up.

Basic settings

Threadfin settings page

Before we continue we want to open the settings page. We want to change the following things :

– EPG Source to XEPG
– Replace missing program images should be checked
– Stream Buffer to VLC

If you notice that your streams are stuttering you can experiment with increasing Buffer Size.

The Number of Tuners setting sets a system wide maximum number of streams. Choose a realistic number based on your needs and system performance. This setting can also be overridden at playlist level to a lower value.

If you're going to use TVHeadend the Ignore Filters setting will make things easier later on.

Playlist settings

Threadfin playlist settings

The first time you open this page you will be greeted by an empty page.

When you press on the new button you will be greeted by the following dialog.

New playlist dialog

Choose M3U if you're using a stream (IPTV or TvHeadend) or choose HDHomeRun if you're using that particular device.

Depending on your choice you will see one of these dialogs.

New playlist M3U playlist

New playlist HDHomeRun playlist

The M3U file or HDHomeRun IP fields are the most crucial part. Fill in the address of the M3U file or of your HDHomeRun device on your local network.

You also want to set the Tuner/Streams amount to a reasonable value. If you're using TVHeadend, a public IPTV list or an HDHomeRun this will be hardware constrained (number of tuners and general system performance). If you're using an IPTV provider this will be whatever their general policy permits.

XMLTV settings

Threadfin XMLTV settings

This page will also be empty when you open it up for the first time. In my opinion this is one of the strengths of Threadfin : regardless of where your EPG information comes from, you can mix and match different sources into the combination you like.

When you press on the new button you will be greeted by the following dialog.

New XMLTV dialog

You can give it whatever name and description you like. The XMLTV File field is the part that really matters. If you want to use a publicly available source you just fill in the corresponding URL according to their documentation. If you followed along and set up your own EPG provider the address will be <EPG IP ADDRESS>:3000/guide.xml.

Filter settings

If you plan to use TvHeadend and enabled the Ignore Filters setting you can skip this section.

Otherwise open this page; since we're getting started it will be empty. The general idea of this page is that in most cases IPTV lists contain hundreds if not thousands of streams. In order not to affect system performance and to keep things manageable, we need to choose the categories we'll want to map later on. Choosing a particular category doesn't mean we are forced to map all channels in it.

New filter dialog

Threadfin offers two different filter types : M3U and custom filters. The M3U type is pretty basic and limits itself to the group titles contained in the M3U file. The custom filter is powerful because it enables you to filter on specific patterns.

Now I need to be honest : at some point I tried to use custom filters but I didn't figure them out. I think that depending on playlist size it might take quite some time to process, since it needs to check a pattern against each stream in the playlist. However, that's just an assumption since I've not really used this feature. Feel free to try it out, but I won't go into any more depth since I'm not able to.

New M3U filter dialog

The field we want to look for is group title. This will make the chosen group title available in the mapping tab. You can have a look at the include/exclude settings if you want, but it's not strictly necessary.

Mapping settings

When opening the mappings page you won't be greeted by an empty list. Most probably you'll be greeted with a list of unmapped/inactive channels. You can make the distinction because of the red line on the left end of the table.

List of unmapped channels

Before activating a channel you should first assign it the number of your liking. You do this by typing the desired value in the text field.

To continue, click on the desired channel in order to open the map channel popup.

Map channel popup

The most important settings are :

– Active to activate the channel
– Channel name to edit the channel name
– Logo Url to assign the channel a logo
– Group title to group the channel to your liking
– XMLTV File in order to choose the XMLTV file you want to use
– XMLTV Channel to choose the right channel in the XMLTV file

Once you've chosen your desired settings click on the done button. Now there should also be a list with active/mapped channels. You can make the distinction because of the green line on the left end of the table.

List of mapped channels

Mapping all desired channels can be a repetitive task but as you'll see in the end the effort is worth it.

Note : in the next steps we'll be talking about setting up and installing Jellyfin. However, you can use Threadfin with any software that supports the HD HomeRun, since it functions as an emulation layer. Other software like Plex Media Server, Kodi and Emby enables you to do the same. However, Jellyfin is the only open source solution that enables this feature without any paid plan and on the server side (Kodi is a client application).

Installing Jellyfin

This is how a compose file for a Jellyfin installation looks :

version: "3.5"
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: ${PUID}:${PGID}
    ports:
      - 8096:8096
    volumes:
      - ${CONFIG_FOLDER}:/config
      - ${CACHE_FOLDER}:/cache
      - ${MOVIES_FOLDER}:/Movies
      - "${TV_SHOWS_FOLDER}:/Tv Shows"
      - ${RECORDINGS_FOLDER}:/recordings
    restart: unless-stopped
    environment:
      #use this variable if you want to access your Jellyfin server through a domain name
      - JELLYFIN_PublishedServerUrl=http://jellyfin.yourdomain.com

Once you deploy this compose file Jellyfin will be available through port 8096 or through the domain you've set up. Complete the setup wizard and set up your libraries.

After this, click on your user icon and open the administration panel.

Jellyfin admin panel

We want to go to the Live Tv section of the admin panel. Click on the + button under Tuner Device.

Add tuner dialog

Select HD Homerun as the Tuner Type and check the Allow hardware transcoding checkbox. Under Tuner IP Address you should type http://<THREADFIN IP ADDRESS>/. Once that's done click on the save button.

Last but not least click on the + button under TV Guide Data Providers and choose XMLTV.

Add XMLTV dialog

The only thing you need to do is type http://<THREADFIN IP ADDRESS>:34400/xmltv/threadfin.xml under File or URL. Click on the save button and you're all set. Jellyfin will need some time to gather all necessary information, but after a while live TV will be available.

Jellyfin is available through the web interface and different apps. The UI is pretty straightforward so we won't go into detail on this topic. You've just set up live TV on your server, on your terms.

 

from David Claeys

Since Microsoft started transitioning .NET into a cross-platform framework, they also started offering Docker images to package your applications. To be more specific, Microsoft lists their images and their intended purposes on Docker Hub.

I wanted to set myself a challenge and try to package a .NET API project into a Docker container. The purpose of this article isn't to tell you how to build an API project, since this topic is broadly covered on the web. I want to tell you about one of the roadblocks I ran into and how I managed to solve it.

If you want to get started the following tutorials could be useful :

– Containerize a .NET app
– Step By Step Dockerizing .NET Core API
– Smaller Docker Images for ASP.NET Core Apps

Slim Docker images

It's best practice to make the Docker images you publish as slim as possible. The main benefit is that consuming your image will take less space on the host. There are many ways to make your image slimmer, but one of the most effective is picking the right base image with the right tag.

For example, if we look at the tags for the ASP.NET Core Runtime we see among others the following sections : Linux amd64, Nano Server 2022 amd64, Windows Server Core 2022 amd64 and so on. If you want to make your Docker image multi-platform compatible (one of the main benefits of .NET and Docker) you should automatically discard the tags representing a Windows environment. First of all, Windows is probably not the most lightweight base OS to build your image on, but more importantly Windows Docker containers can't run on any system that isn't Windows based.

This limits our choice to Linux based images, but even there we have lots of choice. For example, at this moment in time we can choose among others between 8.0-bookworm-slim (Debian), 8.0-alpine-amd64 (Alpine) and 8.0-jammy (Ubuntu). Microsoft marks the Debian variant with the latest tag since this distribution is pretty lightweight and also quite widespread. However, if we want to take things up a notch we should go for Alpine, since this is a lightweight no-frills distribution.
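
If you want to see the difference for yourself, you can pull two runtime variants and compare their footprints (both tags exist at the time of writing, but check the tag list before relying on them) :

docker pull mcr.microsoft.com/dotnet/aspnet:8.0-bookworm-slim
docker pull mcr.microsoft.com/dotnet/aspnet:8.0-alpine
# compare the SIZE column of both images
docker images mcr.microsoft.com/dotnet/aspnet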

The roadblock

When publishing a .NET API it is served by Kestrel. When making an API it is recommended to use HTTPS for security reasons; furthermore, when making a production build it is even required.

When reading the documentation we see we should use the following commands :

– dotnet dev-certs https -ep $env:USERPROFILE\.aspnet\https\aspnetapp.pfx -p crypticpassword
– dotnet dev-certs https --trust

This is simple enough, so what's the problem then ? Well, the second of those commands is only supported on Windows based systems.

The solution

After a lot of trial and error I came to the following solution :

# Password for the certificate
ARG CERT_PASSWORD_ARG=SUPERSECRET
# this image contains the entire .NET SDK and is ideal for creating the build
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine-amd64 AS build-env
ARG CERT_PASSWORD_ARG
ENV CERT_PASSWORD=$CERT_PASSWORD_ARG
WORKDIR /App
COPY . ./
# Restore dependencies for your application
RUN dotnet restore
# Build your application
RUN dotnet publish test.csproj --no-restore --self-contained false -c Release -o out /p:UseAppHost=false 
# Make the directory for certificate export
RUN mkdir /config
# Generate certificate with specified password
RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password "$CERT_PASSWORD" --format PEM

# this image contains the ASP.NET Core and .NET runtimes and libraries 
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine-amd64
ARG CERT_PASSWORD_ARG
ENV CERT_PASSWORD=$CERT_PASSWORD_ARG
WORKDIR /App
# add dependency in system to setup certificates
RUN apk add ca-certificates 
# create directory to store certificate config
RUN mkdir /config 
# create necessary config directory
RUN mkdir -p /usr/local/share/ca-certificates/
# copy compiled files to runtime
COPY --from=build-env /App/out . 
# copy generated certificate
COPY --from=build-env /config /config
# Disable Big Brother
ENV DOTNET_CLI_TELEMETRY_OPTOUT=1
# Set the environment to production
ENV ASPNETCORE_ENVIRONMENT=Production
# Set the urls where Kestrel is going to listen
ENV ASPNETCORE_URLS=http://+:80;https://+:443
# location of the certificate file
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/usr/local/share/ca-certificates/aspnetapp.crt
# location of the certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/usr/local/share/ca-certificates/aspnetapp.key
# specify password in order to open certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=$CERT_PASSWORD
# copy certificate files to config directory
RUN cp /config/aspnetapp.pem $ASPNETCORE_Kestrel__Certificates__Default__Path 
RUN cp /config/aspnetapp.key $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
# set file permisions for certificate file
RUN chmod 755 $ASPNETCORE_Kestrel__Certificates__Default__Path 
RUN chmod +x $ASPNETCORE_Kestrel__Certificates__Default__Path
# add generated certificate to the trusted certificate list on the system
RUN cat $ASPNETCORE_Kestrel__Certificates__Default__Path >> /etc/ssl/certs/ca-certificates.crt
# set file permissions for key file
RUN chmod 755 $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
RUN chmod +x $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
# update the system trusted certificate store
RUN update-ca-certificates

ENTRYPOINT ["dotnet", "test.dll"]
EXPOSE 80 
EXPOSE 443

The above file is for demonstration purposes; in practice you shouldn't use consecutive RUN instructions, and you should update system dependencies and perform some cleanup. I've excluded those steps in order to focus on this article's subject.
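
For completeness, building and running the image could look like this (a sketch; the image name, password and port mappings are hypothetical) :

# pass your own certificate password instead of the hardcoded default
docker build --build-arg CERT_PASSWORD_ARG=MyStrongPassword -t my-dotnet-api .
docker run -d -p 8080:80 -p 8443:443 my-dotnet-api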

Deep dive

The first step I want to focus on is the following :

RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password "$CERT_PASSWORD" --format PEM

By default the command to generate certificates produces a certificate in the PFX format. While it is theoretically possible to use that format on Linux systems, it's an overly complicated mess. So in order to make things easier we tell the generator tool to use the PEM format. This way of using certificates is much better supported on Linux and much easier to set up. This command will generate two files : a certificate file and a key file. The key file is encrypted with the password specified in CERT_PASSWORD_ARG.

The next important part is :

# location of the certificate file
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/usr/local/share/ca-certificates/aspnetapp.crt
# location of the certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/usr/local/share/ca-certificates/aspnetapp.key
# specify password in order to open certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=$CERT_PASSWORD

These environment variables tell the Kestrel server where to look for the certificate files. The ASPNETCORE_Kestrel__Certificates__Default__Password variable is key : if it is not specified or correctly populated, Kestrel won't be able to use the certificate and will crash. This variable isn't anywhere to be found in Microsoft's documentation and I was only able to find it by looking at the .NET source code published on GitHub.

The next important part is :

RUN cat $ASPNETCORE_Kestrel__Certificates__Default__Path >> /etc/ssl/certs/ca-certificates.crt
RUN update-ca-certificates

This tells the system to trust the certificate we generated. If we didn't do that, Kestrel also wouldn't be able to run and would crash.

Security implications

Maybe the elephant in the room is that in this setup we are using a self-signed certificate to serve our application in a container. Many might be eager to discard this whole setup for that reason. But before doing that, hear me out.

To start with, it's bad practice to hardcode the certificate you'll deploy in production environments in code. So in fact your Docker image should always use a development certificate. Yes, this example also contains a hardcoded password at the beginning, but this shouldn't be an issue.

In theory we could use the ASPNETCORE_Kestrel__Certificates__Default__Path, ASPNETCORE_Kestrel__Certificates__Default__KeyPath and ASPNETCORE_Kestrel__Certificates__Default__Password environment variables to set up our production certificates at deployment. This would allow us to run the image in a container while developing and use a securely stored certificate at deployment. However, this solution is discouraged since Microsoft doesn't recommend directly exposing the Kestrel server in production environments.

This leads to what in my opinion is the preferable solution : using a proxy. You can set up IIS, Nginx, Apache, Traefik and so on with the certificate you want to use. Clients using the deployed application will have a secure connection and you don't need to deal with the complexities of setting up a “real” certificate at the image level.

Using Docker is amazing, and being able to use it with .NET even more. If you stumbled on the same roadblock I hope this article proved useful.

 

from David Claeys

You are outside your home but want to watch your favourite movie on your Plex server, or some VM crashed and you need access to your hypervisor.

In these cases external access to your network comes in handy. In this article we will learn how to set up external access with Wireguard.

Assumptions

  • You already have a working system with Docker installed
  • Your ISP provides an external IP (your internet connection is not behind CG-NAT)
  • You know how to expose ports on your firewall
  • You already have a domain

Setting up port forwarding

Before you start you should go into your router and forward the port of your liking to the system where we will later set up Wireguard. It's important that this system has a static IP, since otherwise you would need to update your port forwarding settings each time its IP changes.

An example of a routing table with port forwarding enabled

Setting up Wireguard

There are different options to setup Wireguard, the option I chose is called wireguard-ui. It is available as an easy to setup Docker image and offers a nice web interface.

This is an example compose file :

    version: "3"
    services:
      wg-ui:
        image: ngoduykhanh/wireguard-ui
        cap_add:
          - NET_ADMIN
          - SYS_MODULE
        environment:
          - WGUI_SERVER_LISTEN_PORT=${WGUI_SERVER_LISTEN_PORT}
          - WGUI_MANAGE_START=true
          - WGUI_MANAGE_RESTART=true
          - WGUI_SERVER_POST_UP_SCRIPT=iptables -A FORWARD -i wg0 -j ACCEPT;iptables -A FORWARD -o wg0 -j ACCEPT;iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE;iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
          - WGUI_SERVER_POST_DOWN_SCRIPT=iptables -D FORWARD -i wg0 -j ACCEPT;iptables -D FORWARD -o wg0 -j ACCEPT;iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE;iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
          - TZ=${TIME_ZONE}
        network_mode: bridge
        volumes:
          - ${WGUI_CONFIG_FOLDER}:/app/db
          - ${WG_CONFIG_FOLDER}:/etc/wireguard
    
        ports:
          - 5000:5000
          - ${WGUI_SERVER_LISTEN_PORT}:${WGUI_SERVER_LISTEN_PORT}/udp
        sysctls:
           - net.ipv4.conf.all.src_valid_mark=1
           - net.ipv4.ip_forward=1
        restart: unless-stopped  

And these are the variables for the compose file

    WGUI_CONFIG_FOLDER=/docker/wireguard/ui
    WG_CONFIG_FOLDER=/docker/wireguard/server
    #this should be the same port you exposed on your router
    WGUI_SERVER_LISTEN_PORT=60
    #choose the timezone you like
    TIME_ZONE=Europe/Madrid
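
With both files in place you can bring the stack up; a short sketch (the admin/admin default credentials come from the wireguard-ui documentation, verify them for your version) :

    docker compose up -d
    # open http://<host>:5000 and log in (default credentials : admin/admin)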

Notices

  • It's very important to make sure the WGUI_SERVER_POST_UP_SCRIPT and WGUI_SERVER_POST_DOWN_SCRIPT variables are correctly filled in. This is not mentioned in the documentation, but without them you won't be able to establish a remote connection.

  • The documentation suggests using host mode for networking; this might be useful for performance reasons. However, I didn't like losing network isolation and didn't have performance issues, so I preferred bridge mode.

  • The documentation mentions that you can set up SMTP to automatically send Wireguard credentials. I've only been able to do this through the SendGrid integration (SENDGRID_API_KEY).

The next step you should take is to change the default password. You can do this by clicking on the username and then changing the password on the form that appears.

Change password form

Setting up a domain

This step is not strictly necessary but is very recommendable. Wireguard-ui lets you auto discover your external IP, and this will work.

However, most residential internet connections have a dynamic IP address. This means that depending on your ISP, your external IP could change at any time without notice. Every time your external IP changes you would need to go into the settings and discover your new IP address (this could happen every couple of hours, days, months or years).

The issue with this is that your external IP could change without you noticing, and at the worst possible time you've lost your remote network access.

The solution to this problem is setting up dynamic DNS. Again there are multiple options to do this, but the solution I liked the most is called ddns-updater.

My compose file looks like this :

    services:
      ddns-updater:
        image: qmcgaw/ddns-updater
        ports:
          - 8000:8000/tcp
        volumes:
          - ${CONFIG_FOLDER}:/updater/data
        environment:
          - CONFIG=
          - PERIOD=5m
          - UPDATE_COOLDOWN_PERIOD=5m
          - PUBLICIP_FETCHERS=all
          - PUBLICIP_HTTP_PROVIDERS=all
          - PUBLICIPV4_HTTP_PROVIDERS=all
          - PUBLICIPV6_HTTP_PROVIDERS=all
          - PUBLICIP_DNS_PROVIDERS=all
          - PUBLICIP_DNS_TIMEOUT=3s
          - HTTP_TIMEOUT=10s
          - LISTENING_PORT=8000
          - ROOT_URL=/
          - BACKUP_PERIOD=0 # 0 to disable
          - BACKUP_DIRECTORY=/updater/data
          - LOG_LEVEL=info
          - LOG_CALLER=hidden
        restart: always

And my variables look like this :

    CONFIG_FOLDER=/docker/ddns-updater

The last thing we need to do is to make our config.json file in order to get our dynamic DNS working. This file should be located in your config folder, so in this case in /docker/ddns-updater.

This page provides all the available domain registrars and their configuration. Since I use Cloudflare my config file looks like this :

    {
      "settings": [
        {
          "provider": "cloudflare",
          // fill in your zone identifier
          "zone_identifier": "zone_identifier",
          // fill in your domain
          "domain": "wireguard.example.com",
          "host": "@",
          "ttl": 600,
          // fill in your token
          "token": "token",
          "ip_version": "ipv4"
        }
      ]
    }

Once you've done this you can verify everything works by opening the web interface at port 8000.

An example of the web interface

The last step is to fill in our domain in the Wireguard settings. You can do this in Global settings > Endpoint address.

Configuring Wireguard endpoint settings

Setting up clients

The wireguard-ui web interface is simple, but for the sake of completeness here is a short explanation of how to create clients.

Go to Wireguard Clients and click on the New Client button. You should give it a name, and if you want to send the Wireguard credentials through email later on you can fill that in. The really important detail is to fill in the network that needs remote access under Allowed IPs. Once everything is correctly filled in click on submit.

New client dialog

The last step is to go to Wireguard Server and click on the Apply config button. The Wireguard server will restart and load the new config. Now you're ready to add as many clients as you want and access your network from remote locations :)

 