<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>David&#39;s Blog Reader</title>
    <link>https://blog.claeyscloud.com</link>
    <description>Read the latest posts from David&#39;s Blog.</description>
    <pubDate>Tue, 05 May 2026 13:29:14 +0200</pubDate>
    <item>
      <title>How to host your code: methods and philosophy</title>
      <link>https://blog.claeyscloud.com/david/how-to-host-your-code-methods-and-philosophy</link>
      <description>&lt;![CDATA[When reading the title you might think that the answer is pretty obvious: you just put your code repositories in GitHub and that&#39;s it.&#xA;&#xA;However, when you think about what happened to PairDrop or spotizerr it becomes obvious that the answer is not so simple.&#xA;On one hand you want to put your code in a place that is easy to reach and where your project will have exposure.&#xA;On the other you don&#39;t want to rely on Big Tech to determine the future of your project.&#xA;One false positive from an AI tool or one malicious DMCA request and all your hard work can just disappear.&#xA;Unless your project has a big audience, nobody at Big Tech will listen to you and then it can take weeks or months until everything comes back to normal.&#xA;&#xA;Mitigating risks&#xA;&#xA;How do you mitigate this risk?&#xA;The answer is self-hosting, before you draw the conclusion that such a thing is not feasible hear me out!&#xA;&#xA;Everyone who is into self-hosting knows that it comes with its set of challenges.&#xA;&#xA;By example your own domain will never have the exposure of GitHub. So you might think that self-hosting your code will reduce the exposure and viability of your project.&#xA;Luckily such a thing as a push mirror exists! &#xA;This means the following : first you commit your code on your self-hosted repository and then automatically the code gets pushed to another git repository. 
The mirror repositories can be hosted on any platform you want, like GitHub.&#xA;This way you still have the exposure you want while your code is under your control.&#xA;&#xA;Another challenge associated with self-hosting is managing security.&#xA;If you don&#39;t want to take any risk you don&#39;t have to expose your self-hosted instance to the public.&#xA;You just put the software locally and setup a push mirror to a provider that&#39;s publicly available, job done.&#xA;Although with tools like pangolin exposing self-hosted services has become a breeze.&#xA;&#xA;Maybe the last challenge is choosing the right software but that&#39;s what we are for.&#xA;&#xA;Choosing the right software&#xA;&#xA;Basically there are two options : gitea and forgejo. Personally, I use gitea, so this article only will include examples for that software. However due to the actions of the company behind it I would recommend to have a look at forgejo if you&#39;re starting out. Someday I will make the switch, but for now I&#39;m holding out on that pending transition.&#xA;&#xA;Build actions&#xA;&#xA;Gitea provides a run agent that is very similar to GitHub actions. However there are some differences that need to be worked around.&#xA;&#xA;Install docker&#xA;&#xA;By default the gitea runner doesn&#39;t have Docker installed, in order to be able to do anything with Docker you need to install it yourself.&#xA;&#xA;This can be done in the following way :&#xA;&#xA;name: Install Docker&#xA;   run: |&#xA;     echo &#34;Checking docker installation&#34;&#xA;     if command -v docker &amp;  /dev/null; then&#xA;       echo &#34;Docker installation found&#34;&#xA;     else&#xA;       echo &#34;Docker installation not found. Docker will be installed&#34;&#xA;        curl -fsSL https://get.docker.com | sh&#xA;     fi&#xA;&#xA;Update docker hub description&#xA;&#xA;There is this action that enables you to automatically update descriptions on docker hub. 
However it requires some extra dependencies to be installed.&#xA;&#xA;name: Install npm dependencies&#xA;   run: |&#xA;     echo &#34;Installing fetch&#34;&#xA;     installnode=$false&#xA;     if ! command -v node &amp;  /dev/null; then&#xA;       echo &#34;No version of NodeJS detected&#34;&#xA;       installnode=true&#xA;     else&#xA;       nodeversion=$(node -v)&#xA;       nodeversion=${nodeversion:1} # Remove &#39;v&#39; at the beginning&#xA;       nodeversion=${nodeversion%\.} # Remove trailing &#34;.&#34;.&#xA;       nodeversion=${nodeversion%\.} # Remove trailing &#34;.&#34;.&#xA;       nodeversion=$(($nodeversion)) # Convert the NodeJS version number from a string to an integer.&#xA;       if [ $nodeversion -lt  24 ]; then&#xA;         echo &#34;node version : &#34; $nodeversion &#xA;         echo $&#34;removing outdated npm version&#34;&#xA;         installnode=true&#xA;         apt-get update&#xA;         apt-get remove nodejs npm&#xA;         apt-get purge nodejs&#xA;         rm -rf /usr/local/bin/npm &#xA;         rm -rf /usr/local/share/man/man1/node &#xA;         rm -rf /usr/local/lib/dtrace/node.d &#xA;         rm -rf ~/.npm &#xA;         rm -rf ~/.node-gyp &#xA;         rm -rf /opt/local/bin/node &#xA;         rm -rf opt/local/include/node &#xA;         rm -rf /opt/local/lib/nodemodules  &#xA;         rm -rf /usr/local/lib/node&#xA;         rm -rf /usr/local/include/node&#xA;         rm -rf /usr/local/bin/node&#xA;       fi&#xA;     fi&#xA;&#xA;     if $installnode; then&#xA;       NODEMAJOR=24&#xA;       echo &#34;Installing node ${NODEMAJOR}&#34;&#xA;       if test -f /etc/apt/keyrings/nodesource.gpg; then&#xA;         rm /etc/apt/keyrings/nodesource.gpg&#xA;       fi&#xA;       if test -f /etc/apt/sources.list.d/nodesource.list; then&#xA;         rm /etc/apt/sources.list.d/nodesource.list&#xA;       fi&#xA;       apt-get update&#xA;       apt-get install -y -q ca-certificates curl gnupg&#xA;       mkdir -p /etc/apt/keyrings&#xA;       curl -fsSL 
https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg&#xA;       echo &#34;deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node${NODEMAJOR}.x nodistro main&#34; | tee /etc/apt/sources.list.d/nodesource.list&#xA;       apt-get update&#xA;       apt-get install -y -q nodejs&#xA;       npm install npm --global&#xA;     fi&#xA;&#xA;     echo &#34;node version : &#34; $(node -v)&#xA;       &#xA;     package=&#39;node-fetch&#39;&#xA;     if [ npm list -g | grep -c $package -eq 0 ]; then&#xA;       npm install -g $package&#xA;     fi&#xA;name: Docker Hub Description&#xA;   uses: peter-evans/dockerhub-description@v5&#xA;   with:&#xA;     username: ${{ secrets.DOCKERHUBUSERNAME }}&#xA;     password: ${{ secrets.DOCKERHUBPASSWORD }}  &#xA;     repository: ${{ repostitory }}&#xA;`]]&gt;</description>
      <content:encoded><![CDATA[<p>When reading the title you might think that the answer is pretty obvious: you just put your code repositories in <a href="https://github.com/" rel="nofollow">GitHub</a> and that&#39;s it.</p>

<p>However, when you think about what happened to <a href="https://github.com/schlagmichdoch/PairDrop" rel="nofollow">PairDrop</a> or <a href="https://github.com/spotizerr-dev/spotizerr" rel="nofollow">spotizerr</a>, it becomes obvious that the answer is not so simple.
On one hand you want to put your code in a place that is easy to reach and where your project will get exposure.
On the other hand, you don&#39;t want to rely on Big Tech to determine the future of your project.
One false positive from an AI tool or one malicious DMCA request and all your hard work can simply disappear.
Unless your project has a big audience, nobody at Big Tech will listen to you, and it can take weeks or months until everything returns to normal.</p>

<h2 id="mitigating-risks">Mitigating risks</h2>

<p>How do you mitigate this risk?
The answer is self-hosting. Before you conclude that such a thing is not feasible, hear me out!</p>

<p>Everyone who is into self-hosting knows that it comes with its set of challenges.</p>

<p>For example, your own domain will never have the exposure of GitHub, so you might think that self-hosting your code will reduce the exposure and viability of your project.
Luckily, such a thing as a push mirror exists!
It works as follows: first you commit your code to your self-hosted repository, and the code then gets pushed automatically to another git repository. The mirror repositories can be hosted on any platform you want, such as GitHub.
This way you still get the exposure you want while your code stays under your control.</p>
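<p>Under the hood a push mirror is just an additional remote that receives every branch and tag. As a minimal sketch with plain git (the remote name <code>github</code> and the paths are made up, and a local bare repository stands in for the GitHub remote):</p>

<pre><code># stand-in for the GitHub repository; in practice you&#39;d use its HTTPS/SSH URL
git init --bare /tmp/mirror.git
# inside the repository on your self-hosted server:
git remote add --mirror=push github /tmp/mirror.git
git push github   # pushes all branches and tags to the mirror
</code></pre>

<p>In gitea and forgejo you don&#39;t have to script this yourself: a push mirror can be configured in the repository settings and is synchronised automatically after each commit.</p>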

<p>Another challenge associated with self-hosting is managing security.
If you don&#39;t want to take any risk, you don&#39;t have to expose your self-hosted instance to the public.
You just run the software locally and set up a push mirror to a publicly available provider, job done.
That said, with tools like <a href="https://github.com/fosrl/pangolin" rel="nofollow">pangolin</a>, exposing self-hosted services has become a breeze.</p>

<p>Maybe the last challenge is choosing the right software, but that&#39;s what we&#39;re here for.</p>

<h2 id="choosing-the-right-software">Choosing the right software</h2>

<p>Basically there are two options: <a href="https://github.com/go-gitea/gitea" rel="nofollow">gitea</a> and <a href="https://codeberg.org/forgejo/forgejo" rel="nofollow">forgejo</a>. Personally, I use gitea, so this article will only include examples for that software. However, due to the actions of the company behind it, I would recommend having a look at forgejo if you&#39;re starting out. Someday I will make the switch, but for now I&#39;m holding out on that pending transition.</p>

<h2 id="build-actions">Build actions</h2>

<p>Gitea provides a runner that is very similar to GitHub Actions. However, there are some differences that need to be worked around.</p>
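<p>For reference, workflow files live in <code>.gitea/workflows/</code> and use the same YAML syntax as GitHub Actions. A minimal skeleton (the workflow name, trigger and labels are illustrative) into which the steps below can be dropped:</p>

<pre><code class="language-yaml">name: build
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # the steps from this article (Install Docker, etc.) go here
</code></pre>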

<h3 id="install-docker">Install docker</h3>

<p>By default the gitea runner doesn&#39;t have Docker installed; in order to do anything with Docker you need to install it yourself.</p>

<p>This can be done in the following way:</p>

<pre><code class="language-yaml">- name: Install Docker
  run: |
    echo &#34;Checking docker installation&#34;
    if command -v docker &amp;&gt; /dev/null; then
      echo &#34;Docker installation found&#34;
    else
      echo &#34;Docker installation not found. Docker will be installed&#34;
      curl -fsSL https://get.docker.com | sh
    fi
</code></pre>

<h3 id="update-docker-hub-description">Update docker hub description</h3>

<p>There is <a href="https://github.com/peter-evans/dockerhub-description" rel="nofollow">an action</a> that enables you to automatically update repository descriptions on Docker Hub. However, it requires some extra dependencies to be installed.</p>

<pre><code class="language-yaml">- name: Install npm dependencies
  run: |
    echo &#34;Installing fetch&#34;
    install_node=false
    if ! command -v node &amp;&gt; /dev/null; then
      echo &#34;No version of NodeJS detected&#34;
      install_node=true
    else
      node_version=$(node -v)
      node_version=${node_version:1}   # Remove the leading &#39;v&#39;
      node_version=${node_version%\.*} # Remove the patch part
      node_version=${node_version%\.*} # Remove the minor part
      node_version=$(($node_version))  # Convert the major version from a string to an integer
      if [ $node_version -lt 24 ]; then
        echo &#34;node version: $node_version&#34;
        echo &#34;removing outdated NodeJS version&#34;
        install_node=true
        apt-get update
        apt-get remove -y nodejs npm
        apt-get purge -y nodejs
        rm -rf /usr/local/bin/npm
        rm -rf /usr/local/share/man/man1/node*
        rm -rf /usr/local/lib/dtrace/node.d
        rm -rf ~/.npm
        rm -rf ~/.node-gyp
        rm -rf /opt/local/bin/node
        rm -rf /opt/local/include/node
        rm -rf /opt/local/lib/node_modules
        rm -rf /usr/local/lib/node*
        rm -rf /usr/local/include/node*
        rm -rf /usr/local/bin/node*
      fi
    fi

    if $install_node; then
      NODE_MAJOR=24
      echo &#34;Installing node ${NODE_MAJOR}&#34;
      if test -f /etc/apt/keyrings/nodesource.gpg; then
        rm /etc/apt/keyrings/nodesource.gpg
      fi
      if test -f /etc/apt/sources.list.d/nodesource.list; then
        rm /etc/apt/sources.list.d/nodesource.list
      fi
      apt-get update
      apt-get install -y -q ca-certificates curl gnupg
      mkdir -p /etc/apt/keyrings
      curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
      echo &#34;deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_${NODE_MAJOR}.x nodistro main&#34; | tee /etc/apt/sources.list.d/nodesource.list
      apt-get update
      apt-get install -y -q nodejs
      npm install npm --global
    fi

    echo &#34;node version: $(node -v)&#34;

    package=&#39;node-fetch&#39;
    if [ $(npm list -g | grep -c $package) -eq 0 ]; then
      npm install -g $package
    fi
- name: Docker Hub Description
  uses: peter-evans/dockerhub-description@v5
  with:
    username: ${{ secrets.DOCKER_HUB_USERNAME }}
    password: ${{ secrets.DOCKER_HUB_PASSWORD }}
    repository: ${{ github.repository }}
</code></pre>
]]></content:encoded>
      <author>David Claeys</author>
      <guid>https://blog.claeyscloud.com/read/a/m528wh616c</guid>
      <pubDate>Thu, 19 Feb 2026 09:53:56 +0000</pubDate>
    </item>
    <item>
      <title>Deploy front-end applications with Docker</title>
      <link>https://blog.claeyscloud.com/david/deploy-front-end-applications-with-docker</link>
      <description>&lt;![CDATA[In a previous article we explained how you could deploy a .NET application with Docker.&#xA;The content of this article will be applicable whether you use a .NET backend or not.&#xA;&#xA;Possible pitfalls&#xA;&#xA;A possible issue is that you only want to make your backend available for use to your front-end.&#xA;This is quite nice since it significantly decreased the possible attack surface.&#xA;But at a first glance this is not possible since the clients running the application wouldn&#39;t be able to perform any API call.&#xA;&#xA;Or maybe as per convention you host all your backends at api.example.com/apiName while you want to give your front-end applications a more recognizable domain.&#xA;If you&#39;ve tried to just point your client requests to a different domain you&#39;ve probably noticed the following problems :&#xA;it&#39;s quite annoying to hardcode domains since these can change over time&#xA;CORS on won&#39;t let you do it&#xA;&#xA;The solution&#xA;&#xA;These problems can both be solved through building a Docker image.&#xA;The proposed example is based on Node but with some creativity you could tweak it with any front-end solution. To be clear since we&#39;re using Node we can build any framework based on it (like React or Angular).  &#xA;&#xA;We will split up the building process in two stages. &#xA;&#xA;First build stage : Compiling&#xA;&#xA;The first stage is intended to build or node application.&#xA;If you want to build an application that&#39;s not based on Node this is where you would change the base image. If for some reason your build process requires multiple steps this is the place where you would do it.&#xA;&#xA;FROM node:22-alpine AS builder&#xA;all subsequent commands will be performed in the /app directory&#xA;WORKDIR /app/&#xA;copy all the source code into the current directory&#xA;COPY . 
.&#xA;update the system ,after that install all dependencies and run build&#xA;RUN apk update &amp;&amp; apk upgrade --available &amp;&amp; npm install \&#xA;    &amp;&amp; npm run build&#xA;&#xA;Second build stage : Hosting&#xA;&#xA;The following stage will be responsible for running a http server (Nginx) hosting the application and also will proxy requests to the backend.&#xA;&#xA;The contents of this stage would be something like this :&#xA;&#xA;FROM nginx:mainline-alpine&#xA;define environment variables for later subsitution&#xA;ENV APIPROTOCOL=&#34;https&#34;&#xA;ENV APIHOST=&#34;localhost&#34;&#xA;ENV APIPORT=&#34;80&#34;&#xA;change the working directory to the main nginx directory&#xA;WORKDIR /usr/share/nginx/html&#xA;update and adding system dependencies&#xA;default nginx configurations are also wiped out&#xA;RUN apk update &amp;&amp; apk upgrade --available \&#xA;    &amp;&amp; apk add envsubst \&#xA;    &amp;&amp; rm -rf ./&#xA;copy the build output to the current folder&#xA;COPY --from=builder /app/build .&#xA;add nginx configuration template file&#xA;COPY nginx.conf.template /etc/nginx/nginx.conf&#xA;add script for variable substitution at runtime&#xA;COPY entrypoint.sh /docker-entrypoint.d/05-docker-entrypoint.sh&#xA;set correct file permissions and remove files that are not needed&#xA;RUN chmod +x /docker-entrypoint.d/05-docker-entrypoint.sh \&#xA;    &amp;&amp; apk del envsubst \&#xA;    &amp;&amp; rm -rf /var/cache/apk/ \&#xA;    &amp;&amp; rm -rf /etc/nginx/conf.d&#xA;&#xA;Nginx configuration overview&#xA;Let&#39;s take a look at a file we will call nginx.conf.template.&#xA;&#xA;user  nginx;&#xA;workerprocesses  auto;&#xA;&#xA;errorlog  /var/log/nginx/error.log notice;&#xA;pid        /var/run/nginx.pid;&#xA;&#xA;events {&#xA;    workerconnections  1024;&#xA;}&#xA;&#xA;http {&#xA;&#xA; map $httpupgrade $connectionupgrade {&#xA;        default upgrade;&#xA;        &#39;&#39;      close;&#xA;    }&#xA;&#xA;    include       /etc/nginx/mime.types;&#xA; 
   defaulttype  application/octet-stream;&#xA;&#xA;    logformat  main  &#39;$remoteaddr - $remoteuser [$timelocal] &#34;$request&#34; &#39;&#xA;                      &#39;$status $bodybytessent &#34;$httpreferer&#34; &#39;&#xA;                      &#39;&#34;$httpuseragent&#34; &#34;$httpxforwardedfor&#34;&#39;;&#xA;&#xA;    accesslog  /var/log/nginx/access.log  main;&#xA;&#xA;    sendfile        on;&#xA;        keepalivetimeout  65;&#xA;&#xA;     server{&#xA;        listen 80;&#xA;        &#xA;        location / {&#xA;            root /usr/share/nginx/html;&#xA;        }&#xA;&#xA;        location /hubs {&#xA;            allow all;&#xA;            # App server url&#xA;            proxypass $APIPROTOCOL://$APIHOST:$APIPORT;&#xA;&#xA;            # Configuration for WebSockets&#xA;            proxysetheader Upgrade $httpupgrade;&#xA;            proxysetheader Connection $connectionupgrade;&#xA;            proxycache off;&#xA;            proxycachebypass $httpupgrade;&#xA;&#xA;            # WebSockets were implemented after http/1.0&#xA;            proxyhttpversion 1.1;&#xA;&#xA;            # Configuration for ServerSentEvents&#xA;            proxybuffering off;&#xA;&#xA;            # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds&#xA;            proxyreadtimeout 100s;&#xA;&#xA;            proxysslservername off;&#xA;            proxysslverify off;&#xA;&#xA;            proxysetheader Host $host;&#xA;            proxysetheader X-Real-IP $remoteaddr;&#xA;            proxysetheader X-Forwarded-For $proxyaddxforwardedfor;&#xA;            proxysetheader X-Forwarded-Proto $scheme;&#xA;        }&#xA;    }&#xA;}&#xA;&#xA;Discussing locations&#xA;&#xA;The first thing to note is that we define two locations : / (web location) and  location /hubs (proxy location).&#xA;The / location will host the build output of our application, while the /hubs location is the location of the requests that will be proxied. 
It&#39;s important that in order for the web location to work the build files must be present in the indicated root directory.&#xA;&#xA;The reason that we did not call the proxy location /api is that our front-end application uses SignalR to communicate to the backend. The configuration provided in this example enables features like web sockets and long polling. However you can tweak the example provided to meet your needs.&#xA;&#xA;If you look deeper into the proxy configuration you probably will notice $APIPROTOCOL://$APIHOST:$APIPORT. If you would try this configuration directly in nginx it will fail pointing out your configuration is incorrect. &#xA;&#xA;Don&#39;t worry though since these are simply placeholders (that&#39;s the reason we&#39;ve called this file a template) that will be replaced later on. Our front-end application can simply point API communication to /hubs/whatever and our proxy will take care of it.&#xA;&#xA;Variable substitutions&#xA;&#xA;Let me ask you a question : When do you replace the placeholders with it&#39;s final value ?&#xA;If you do it at build time each time a domain changes you&#39;ll be forced to rebuild.&#xA;Or worse if you host multiple instances this means you&#39;ll need to build a separate image for each instance. I think it&#39;s obvious this method is not desirable at all.&#xA;&#xA;Instead of performing variable substitutions at build time they should be performed at run time.&#xA;Modifying the entry point of an existing Docker image can be quite tricky, luckily we won&#39;t need to.&#xA;The nginx image provides a feature that when you put scripts into the /docker-entrypoint.d  folder of the container it will run these scripts at startup time.    
&#xA;&#xA;We will substitute the following variables :  APIPROTOCOL, APIHOST and APIPORT.&#xA;Let&#39;s have a look at our entrypoint.sh file :&#xA;&#xA;!/usr/bin/env sh&#xA;set -eu&#xA;&#xA;echo &#34;$(envsubst &#39;${APIPROTOCOL},${APIHOST},${APIPORT}&#39;  /etc/nginx/nginx.conf)&#34;  /etc/nginx/nginx.conf&#xA;exec &#34;$@&#34;&#xA;&#xA;This script is quite easy, it uses the envsubst _ command in order to read and substitute the contents of   /etc/nginx/nginx.conf and writes them afterwards into the same file.&#xA;So during our docker image process we will need to locate our template file at /etc/nginx/nginx.conf and at runtime this script will substitute the contents of the file with the provided environment variables.&#xA;&#xA;Considerations and thoughts&#xA;&#xA;In this example we used Nginx as our http server, however you can use the server that best fits your use-case. However if you choose so you will need to figure out how to setup a proxy on your own.&#xA;To be honest most common http servers provide plentiful documentation, so it really shouldn&#39;t be a problem.&#xA;&#xA;You might have noticed the use of envsubst. The placeholder substitution at runtime has been one of the parts where I struggled most. For some reason it has been quite tricky to get the values of the environment variables in a bash script and putting them in the configuration file.&#xA;The most annoying part is that you specify the variables you want to substitute. If you have a large amount of placeholders to replace this can become quite cumbersome.&#xA;]]&gt;</description>
<content:encoded><![CDATA[<p>In a previous article we explained how you could deploy a .NET application with Docker.
The content of this article is applicable whether you use a .NET backend or not.</p>

<h2 id="possible-pitfalls">Possible pitfalls</h2>

<p>A possible issue is that you only want to make your backend available to your front-end.
This is quite nice since it significantly decreases the possible attack surface.
But at first glance this is not possible, since the clients running the application wouldn&#39;t be able to perform any API call.</p>

<p>Or maybe, as per convention, you host all your backends at <em>api.example.com/apiName</em> while you want to give your front-end applications a more recognizable domain.
If you&#39;ve tried to just point your client requests to a different domain, you&#39;ve probably noticed the following problems:
–  it&#39;s quite annoying to hardcode domains since these can change over time
–  CORS won&#39;t let you do it</p>

<h2 id="the-solution">The solution</h2>

<p>Both problems can be solved by building a Docker image.
The proposed example is based on <a href="https://nodejs.org/en/" rel="nofollow">Node</a>, but with some creativity you could adapt it to any front-end solution. To be clear, since we&#39;re using Node we can build with any framework based on it (like React or Angular).</p>

<p>We will split the build process into two stages.</p>

<h3 id="first-build-stage-compiling">First build stage : Compiling</h3>

<p>The first stage is intended to build our node application.
If you want to build an application that&#39;s not based on Node, this is where you would change the base image. If for some reason your build process requires multiple steps, this is also the place where you would add them.</p>

<pre><code>FROM node:22-alpine AS builder
# all subsequent commands will be performed in the /app directory
WORKDIR /app/
# copy all the source code into the current directory
COPY . .
# update the system, then install all dependencies and run the build
RUN apk update &amp;&amp; apk upgrade --available &amp;&amp; npm install \
    &amp;&amp; npm run build
</code></pre>
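<p>Note that <code>COPY . .</code> copies the entire build context, including a local <code>node_modules</code> folder or <code>.git</code> directory if present. A <code>.dockerignore</code> file next to the Dockerfile (a common convention, not something shown in the original setup) keeps the context small and the install reproducible:</p>

<pre><code>node_modules
build
.git
</code></pre>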

<h3 id="second-build-stage-hosting">Second build stage : Hosting</h3>

<p>The next stage is responsible for running an HTTP server (Nginx) that hosts the application and also proxies requests to the backend.</p>

<p>The contents of this stage would be something like this:</p>

<pre><code>FROM nginx:mainline-alpine
# define environment variables for later substitution
ENV API_PROTOCOL=&#34;https&#34;
ENV API_HOST=&#34;localhost&#34;
ENV API_PORT=&#34;80&#34;
# change the working directory to the main nginx directory
WORKDIR /usr/share/nginx/html
# update and add system dependencies
# default nginx configurations are also wiped out
RUN apk update &amp;&amp; apk upgrade --available \
    &amp;&amp; apk add envsubst \
    &amp;&amp; rm -rf ./*
# copy the build output to the current folder
COPY --from=builder /app/build .
# add nginx configuration template file
COPY nginx.conf.template /etc/nginx/nginx.conf
# add script for variable substitution at runtime
COPY entrypoint.sh /docker-entrypoint.d/05-docker-entrypoint.sh
# set correct file permissions and remove files that are not needed
RUN chmod +x /docker-entrypoint.d/05-docker-entrypoint.sh \
    &amp;&amp; apk del envsubst \
    &amp;&amp; rm -rf /var/cache/apk/* \
    &amp;&amp; rm -rf /etc/nginx/conf.d
</code></pre>

<h4 id="nginx-configuration-overview">Nginx configuration overview</h4>

<p>Let&#39;s take a look at a file we will call <code>nginx.conf.template</code>.</p>

<pre><code>user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {

    map $http_upgrade $connection_upgrade {
        default upgrade;
        &#39;&#39;      close;
    }

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  &#39;$remote_addr - $remote_user [$time_local] &#34;$request&#34; &#39;
                      &#39;$status $body_bytes_sent &#34;$http_referer&#34; &#39;
                      &#39;&#34;$http_user_agent&#34; &#34;$http_x_forwarded_for&#34;&#39;;

    access_log  /var/log/nginx/access.log  main;

    sendfile           on;
    keepalive_timeout  65;

    server {
        listen 80;
        
        location / {
            root /usr/share/nginx/html;
        }

        location /hubs {
            allow all;
            # App server url
            proxy_pass $API_PROTOCOL://$API_HOST:$API_PORT;

            # Configuration for WebSockets
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_cache off;
            proxy_cache_bypass $http_upgrade;

            # WebSockets were implemented after http/1.0
            proxy_http_version 1.1;

            # Configuration for ServerSentEvents
            proxy_buffering off;

            # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds
            proxy_read_timeout 100s;

            proxy_ssl_server_name off;
            proxy_ssl_verify off;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
</code></pre>

<h5 id="discussing-locations">Discussing locations</h5>

<p>The first thing to note is that we define two locations: <code>/</code> (the web location) and <code>/hubs</code> (the proxy location).
The <code>/</code> location hosts the build output of our application, while the <code>/hubs</code> location receives the requests that will be proxied. For the web location to work, the build files must be present in the indicated root directory.</p>

<p>The reason we did not call the proxy location <code>/api</code> is that our front-end application uses SignalR to communicate with the backend. The configuration provided in this example enables features like WebSockets and long polling. You can tweak the example to meet your needs.</p>

<p>If you look deeper into the proxy configuration, you will probably notice <code>$API_PROTOCOL://$API_HOST:$API_PORT</code>. If you tried this configuration directly in nginx, it would fail, pointing out that your configuration is incorrect.</p>

<p>Don&#39;t worry though: these are simply placeholders (that&#39;s the reason we&#39;ve called this file a template) that will be replaced later on. Our front-end application can simply point API communication to <code>/hubs/whatever</code> and our proxy will take care of it.</p>
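<p>This works because a relative path resolves against whatever origin served the front-end, so the proxy, not the client code, decides where the traffic ends up. A small illustration (the domain and hub path are hypothetical):</p>

<pre><code class="language-javascript">// A relative path like &#39;/hubs/chat&#39; resolves against the serving origin,
// so the same build works behind any domain the proxy is deployed on.
const origin = &#39;https://app.example.com&#39;; // hypothetical front-end domain
const hubUrl = new URL(&#39;/hubs/chat&#39;, origin).href;
console.log(hubUrl); // https://app.example.com/hubs/chat
</code></pre>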

<h5 id="variable-substitutions">Variable substitutions</h5>

<p>Let me ask you a question: when do you replace the placeholders with their final values?
If you do it at build time, you&#39;ll be forced to rebuild each time a domain changes.
Or worse, if you host multiple instances, you&#39;ll need to build a separate image for each instance. I think it&#39;s obvious this method is not desirable at all.</p>

<p>Instead of performing variable substitutions at build time, they should be performed at run time.
Modifying the entry point of an existing Docker image can be quite tricky; luckily, we won&#39;t need to.
The nginx image provides a feature whereby any scripts you put into the <code>/docker-entrypoint.d</code> folder of the container are run at startup time.</p>

<p>We will substitute the following variables: <em>API_PROTOCOL</em>, <em>API_HOST</em> and <em>API_PORT</em>.
Let&#39;s have a look at our <code>entrypoint.sh</code> file:</p>

<pre><code>#!/usr/bin/env sh
set -eu

echo &#34;$(envsubst &#39;${API_PROTOCOL},${API_HOST},${API_PORT}&#39; &lt; /etc/nginx/nginx.conf)&#34; &gt; /etc/nginx/nginx.conf
exec &#34;$@&#34;
</code></pre>

<p>This script is quite simple: it uses the <em>envsubst</em> command to read and substitute the contents of <code>/etc/nginx/nginx.conf</code>, and afterwards writes the result back into the same file.
So during our Docker image build we need to place our template file at <code>/etc/nginx/nginx.conf</code>, and at runtime this script will substitute the contents of the file with the provided environment variables.</p>

<h2 id="considerations-and-thoughts">Considerations and thoughts</h2>

<p>In this example we used Nginx as our HTTP server; however, you can use the server that best fits your use case. If you choose to do so, you will need to figure out how to set up a proxy on your own.
To be honest, most common HTTP servers provide plentiful documentation, so it really shouldn&#39;t be a problem.</p>

<p>You might have noticed the use of <code>envsubst</code>. The placeholder substitution at runtime was one of the parts I struggled with most. For some reason it was quite tricky to read the values of the environment variables in a shell script and put them into the configuration file.
The most annoying part is that you have to specify each variable you want to substitute. If you have a large number of placeholders to replace this can become quite cumbersome.</p>
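<p>The explicit list is not just busywork, though : without a SHELL-FORMAT argument <code>envsubst</code> replaces <em>every</em> variable reference it finds, which would clobber nginx&#39;s own runtime variables such as <code>$host</code>. A small sketch illustrating the difference (the variable names are just examples) :</p>

<pre><code>export API_HOST=backend

# a template mixing our placeholder with an nginx runtime variable
printf &#39;proxy_pass http://${API_HOST}; # client was $host\n&#39; &gt; /tmp/demo.conf

# only ${API_HOST} is listed, so $host survives untouched
envsubst &#39;${API_HOST}&#39; &lt; /tmp/demo.conf

# no list : every reference is substituted, and the unset $host becomes empty
envsubst &lt; /tmp/demo.conf
</code></pre>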
]]></content:encoded>
      <author>David Claeys</author>
      <guid>https://blog.claeyscloud.com/read/a/z2o44i5txf</guid>
      <pubDate>Fri, 11 Oct 2024 12:18:08 +0000</pubDate>
    </item>
    <item>
      <title>Epg, the easy way</title>
      <link>https://blog.claeyscloud.com/david/epg-the-easy-way</link>
      <description>&lt;![CDATA[The problem&#xA;In a previous post I went through the process of setting up your own epg provider with  iptiv-org/epg. That process is still valid but it has some important drawbacks.&#xA;First of all the setup process is quite lengthy, which may scare potential users away.&#xA;Secondly the installation process is performed directly on the host.&#xA;Which might be a dealbreaker if you like hosting applications through Docker.&#xA;&#xA;The solution&#xA;&#xA;Introduction&#xA;This is where one of my personal projects comes into place epg-info-docker.&#xA;The purpose of this repository is to take the code in iptiv-org/epg and to build a Docker image out of it.&#xA;&#xA;If you want to take a look at it, the code is available through my git server or github.&#xA;You obviously can take this code and build it yourself, but this is not the most convenient.&#xA;&#xA;For your convenience images are made available at different registries :&#xA;git.claeyscloud.com/david/epg-info&#xA;ghcr.io/davidclaeysquinones/epg-info&#xA;docker.io/davidquinonescl/epg-info&#xA;&#xA;Each of these images is the same, so you can pick the image from where you prefer.&#xA;&#xA;Setup&#xA;You can use this image in the following way :&#xA;&#xA;version: &#39;3.3&#39;&#xA;services:&#xA;  epg:&#xA;    image: git.claeyscloud.com/david/epg-info:latest&#xA;    #image: ghcr.io/davidclaeysquinones/epg-info:latest&#xA;    #image: davidquinonescl/epg-info:latest&#xA;    volumes:&#xA;      # add a mapping in order to add the channels file&#xA;      /docker/epg:/config&#xA;    ports:&#xA;      6080:3000&#xA;    environment:&#xA;      # specify the time zone for the server&#xA;      TZ=Etc/UTC&#xA;      # uncomment the underlying line if you want to enable custom fixes&#xA;      #- ENABLEFIXES=true&#xA;    restart: unless-stopped&#xA;&#xA;In order to setup the program you need a channels.xml file.&#xA;This files describes which providers and channels you want the program to generate 
epg information.&#xA;An example of the contents for this file looks like this :&#xA;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&#xA;channels&#xA; channel site=&#34;movistarplus.es&#34; lang=&#34;es&#34; xmltvid=&#34;24Horas.es&#34; site_id=&#34;24H&#34;24 Horas/channel&#xA;/channels&#xA;In the repo you can look for all available providers. Each provider has a list with it&#39;s available channels. &#xA;&#xA;And that&#39;s it ! You&#39;ve just setup your own epg provider.]]&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-problem">The problem</h2>

<p>In a previous post I went through the process of setting up your own EPG provider with <a href="https://github.com/iptv-org/epg" rel="nofollow">iptv-org/epg</a>. That process is still valid, but it has some important drawbacks.
First of all, the setup process is quite lengthy, which may scare potential users away.
Secondly, the installation is performed directly on the host,
which might be a dealbreaker if you prefer hosting applications through Docker.</p>

<h2 id="the-solution">The solution</h2>

<h3 id="introduction">Introduction</h3>

<p>This is where one of my personal projects comes into play : <a href="https://git.claeyscloud.com/david/epg-info-docker" rel="nofollow">epg-info-docker</a>.
The purpose of this repository is to take the code in <a href="https://github.com/iptv-org/epg" rel="nofollow">iptv-org/epg</a> and build a Docker image out of it.</p>

<p>If you want to take a look at it, the code is available through my <a href="https://git.claeyscloud.com/david/epg-info-docker" rel="nofollow">git server</a> or <a href="https://github.com/davidclaeysquinones/epg-info-docker" rel="nofollow">github</a>.
You can obviously take this code and build it yourself, but that is not the most convenient option.</p>

<p>For your convenience images are made available at different registries :
– git.claeyscloud.com/david/epg-info
– ghcr.io/davidclaeysquinones/epg-info
– docker.io/davidquinonescl/epg-info</p>

<p>These images are identical, so you can pull from whichever registry you prefer.</p>

<h3 id="setup">Setup</h3>

<p>You can use this image in the following way :</p>

<pre><code class="language-sh">version: &#39;3.3&#39;
services:
  epg:
    image: git.claeyscloud.com/david/epg-info:latest
    #image: ghcr.io/davidclaeysquinones/epg-info:latest
    #image: davidquinonescl/epg-info:latest
    volumes:
      # add a mapping in order to add the channels file
      - /docker/epg:/config
    ports:
      - 6080:3000
    environment:
      # specify the time zone for the server
      - TZ=Etc/UTC
      # uncomment the underlying line if you want to enable custom fixes
      #- ENABLE_FIXES=true
    restart: unless-stopped
</code></pre>

<p>In order to set up the program you need a channels.xml file.
This file describes the providers and channels for which you want the program to generate EPG information.
An example of the contents for this file looks like this :</p>

<pre><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt;
&lt;channels&gt;
 &lt;channel site=&#34;movistarplus.es&#34; lang=&#34;es&#34; xmltv_id=&#34;24Horas.es&#34; site_id=&#34;24H&#34;&gt;24 Horas&lt;/channel&gt;
&lt;/channels&gt;
</code></pre>

<p>In the <a href="https://github.com/iptv-org/epg/tree/master/sites" rel="nofollow">repo</a> you can look up all available providers. Each provider has a list with its available channels.</p>
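<p>Once the container is running with your channels.xml in place, you can verify that the guide is being generated. The path below assumes the image serves the same <code>guide.xml</code> endpoint as the upstream project, on the port mapped in the compose file above :</p>

<pre><code>docker compose up -d
# after the first grab completes, the guide should be available on the mapped port
curl -fsS http://localhost:6080/guide.xml | head
</code></pre>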

<p>And that&#39;s it! You&#39;ve just set up your own EPG provider.</p>
]]></content:encoded>
      <author>David Claeys</author>
      <guid>https://blog.claeyscloud.com/read/a/sx7825okev</guid>
      <pubDate>Wed, 09 Oct 2024 11:37:18 +0000</pubDate>
    </item>
    <item>
      <title>Watching Live TV on all your devices</title>
      <link>https://blog.claeyscloud.com/david/watching-live-tv-on-all-your-devices</link>
      <description>&lt;![CDATA[In recent years streaming services have gained a lot of popularity. However for a multiple of reasons sometimes we might want to watch Live TV.&#xA;&#xA;Depending on the place you live your ISP or cable provider might (or not) provide some kind of app to watch TV on your mobile devices. However some apps are crappy, other are limited in the channels you can watch or other might have a very limited feature set. For these reasons you might want to watch Live Tv on your own terms.&#xA;&#xA;In this article we will look at how you would go about setting up Live Tv on your own infrastructure.&#xA;In the end you&#39;ll be able to stream Tv through web, mobile devices in a very convenient way.&#xA;&#xA;In order to reach our end goal we will perform the following steps:&#xA;Installing and setting up iptiv-org/epg to acquire EPG data&#xA;Installing and setting up Threadfin&#xA;Installing and setting up Jellyfin&#xA;&#xA;Disclaimer :&#xA;This article&#39;s assumption is that you have some knowledge about the Linux network stack and Docker.&#xA;&#xA;Setting up EPG&#xA;&#xA;Getting schedules for the channels you want is quite essential in order to have a good experience.&#xA;However depending on the country where you live getting EPG (Electronic Programme Guide) can be very easy or almost impossible.&#xA;&#xA;By example if you live in Spain dobleM provides EPG information for almost any channel you can imagine.&#xA;&#xA;However if you live in Belgium getting decent EPG information is very challenging. I&#39;ve looked through forums and not found any source available.&#xA;&#xA;Setting up your own EPG provider&#xA;&#xA;So what do you do there are no EPG sources available for your country or for a particular channel ?&#xA;&#xA;This is where iptiv-org/epg comes to the rescue.&#xA;&#xA;Let&#39;s get through the necessary steps in order to set it up.&#xA;&#xA;First of all you&#39;ll want a system with a static IP address. 
We will be using Ubuntu 22.04 in order to perform the setup process. As always feel free to use any Linux flavor you like but be aware that you might get through some roadblocks (or not) if you do so.&#xA;&#xA;Updating and installing dependencies&#xA;First of all we want to make sure all our system dependencies are up to date and and we will install our necessary dependencies.&#xA;&#xA;sudo apt-get update \&#xA;  &amp;&amp; sudo apt-get upgrade -y -q \&#xA;  &amp;&amp; sudo apt-get install curl -y \&#xA;  &amp;&amp; sudo apt-get install git -y&#xA;Installing Nodejs&#xA;In order to install the latest supported NodeJs version we will be using NodeSource. There are other ways you could do the same but this is the most convenient way to do it.&#xA;&#xA;Note :&#xA;At the moment NodeJS 22 is not compatible with the software we&#39;re installing.&#xA;&#xA;curl -fsSL https://deb.nodesource.com/setup21.x -o nodesourcesetup.sh&#xA;sudo -E bash nodesourcesetup.sh&#xA;sudo apt-get install -y nodejs&#xA;Once you&#39;ve performed these steps the command `node -v` should return v21.x.x.&#xA;&#xA;Installing iptiv-org/epg&#xA;&#xA;Now we can proceed to the actual installation of our EPG provider.&#xA;First we will make a directory where we will perform the installation&#xA;&#xA;mkdir /bin/epg -p&#xA;Now we want to go into the directory we just made by typing `cd /bin/epg`&#xA;&#xA;At this point we are ready to clone the git repository into our server.&#xA;&#xA;git -C /bin clone --depth 1 -b master https://github.com/iptv-org/epg.git&#xA;&#xA;Once the source code is on our machine we can install the necessary dependencies.&#xA;&#xA;npm install&#xA;&#xA;In order to serve our files over the network we also want to install an npm module called pm2 &#xA;&#xA;npm install pm2 -g&#xA;&#xA;Now we will create two scripts that will enable us to start our EPG provider at startup.&#xA;start.sh :&#xA;!/bin/bash&#xA;&#xA;pm2 --name epg start npm -- run serve&#xA;npm run grab -- 
--channels=channels.xml --cron=&#34;0 0,12   &#34; --maxConnections=10 --days=14 --gzip&#xA;stop.sh :&#xA;!/bin/bash&#xA;&#xA;pm2 delete 0&#xA;To use these scripts we need to create our service file typing `nano /etc/systemd/system/epg.service`&#xA;Put the following content in the file :&#xA;[Unit]&#xA;Description=Epg&#xA;After=network.target&#xA;&#xA;[Service]&#xA;ExecStart=/bin/epg/start.sh&#xA;ExecStop=/bin/epg/stop.sh&#xA;WorkingDirectory=/bin/epg&#xA;&#xA;[Install]&#xA;WantedBy=default.target &#xA;As a last step we need to tell the system is should reload it&#39;s services by typing  `systemctl daemon-reload`.&#xA;&#xA;We&#39;ve just completed the installation of our own EPG provider but in order to get actual EPG information we need to tell it which channels we want information for.&#xA;&#xA;We do this by creating a file called channels.xml by typing `nano channels.xml`. &#xA;An example of the contents for this file looks like this :&#xA;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&#xA;channels&#xA; channel site=&#34;movistarplus.es&#34; lang=&#34;es&#34; xmltvid=&#34;24Horas.es&#34; siteid=&#34;24H&#34;24 Horas/channel&#xA;/channels&#xA;&#xA;The contents of this file depend on which providers and channels you want to use.&#xA;In the repo you can look for all available providers. Each provider has a list with it&#39;s available channels. &#xA;&#xA;Be aware that not all providers are equal. 
For example telenet.tv is rock solid but lacks program thumbnails for most channels.&#xA;And in contrast pickx.be keeps breaking because of intentional API changes but most programs have thumbnails.&#xA;&#xA;Finding the right providers for the right channels is a process of trial and error and also depends on what you&#39;re willing to deal with.&#xA;&#xA;These are some providers you could use :&#xA;&#xA;telenet.tv (Belgium)&#xA;pickx.be (Belgium)&#xA;movistarplus.es (Spain)&#xA;programacion-tv.elpais.com (Spain)&#xA;tvgids.nl (Netherlands)&#xA;tv24.co.uk (UK)&#xA;tvtv.us (US)&#xA;chaines-tv.orange.fr (France)&#xA;&#xA;This list is by any means extensive and if you&#39;re looking for other countries you should check which providers are available&#xA;&#xA;Setting up Live Tv streams&#xA;&#xA;The next piece of the puzzle is getting the streams for the channels you want. The options you have depend a lot on where you live and on your goals.&#xA;&#xA;For example in the US you could use a HD HomeRun.&#xA;In some countries (like Spain) you could install a DVB-T2 decoder into your system and setup tvheadend to stream over the network.&#xA;However if you live in countries where open standards were purposely not adopted (like Belgium) you&#39;re only option is to resort to an IPTV provider. &#xA;&#xA;There are some IPTV list available like iptv-org/iptv or TDTChannels that just list publicly available streams and that are completely legal. &#xA;&#xA;If you still choose to use an IPTV provider that infringes copyright please be aware that depending on legislation you could be sanctioned for just being a customer. Be also aware that getting scammed while sourcing an IPTV provider is a real possibility. I don&#39;t want to encourage neither recommend you to source an IPTV provider that infringes copyright. If you make that decision you do so under your own responsibility. Please be careful and try to minimize risks as much as possible.  
&#xA;&#xA;Some pieces of software (like Jellyfin) offer a direct integration to the HD HomeRun. If you have such a device you can directly integrate it. However I would recommend to use Threadfin as an intermediate layer in order to manage EPG and channel numbering. If you&#39;re using an m3u stream from tvheadend or an IPTV provider you can&#39;t get around using this piece of software.&#xA;&#xA;Installing Threadfin&#xA;This is how a Docker compose file would look like for Threadfin without any additional precaution :&#xA;version: &#34;3.5&#34;&#xA;services:&#xA;  threadfin:&#xA;    image: fyb3roptik/threadfin&#xA;    environment:&#xA;      PUID=${PUID}&#xA;      PGID=${PGID}&#xA;      TZ=${TIMEZONE}&#xA;    volumes:&#xA;      ${THREADFINCONFIGDIR}:/home/threadfin/conf&#xA;    ports:&#xA;      34400:34400&#xA;    restart: unless-stopped&#xA;If you would like to take some precaution gluetun is a very good option. This is basically a Docker image that allows you to configure almost any VPN provider.&#xA;&#xA;In the wiki you can find information about how to setup your particular VPN provider.&#xA;&#xA;So if you would like to take precautions your compose file would look like this :&#xA;version: &#34;3.5&#34;&#xA;services:&#xA;  vpn:&#xA;    image: qmcgaw/gluetun&#xA;    capadd:&#xA;      NETADMIN&#xA;    devices:&#xA;      /dev/net/tun:/dev/net/tun&#xA;    sysctls:&#xA;      net.ipv6.conf.all.disableipv6=0&#xA;    environment:&#xA;      TZ=${TIMEZONE}&#xA;      VPNSERVICEPROVIDER=${YOURPROVIDER}&#xA;      ....&#xA;      # some provider specific variavles&#xA;      ....&#xA;      FIREWALLOUTBOUNDSUBNETS=${YOURSUBNET}/24&#xA;    ports:&#xA;      34400:34400&#xA;    volumes:&#xA;      ${VPNCONFIGDIR}:/config&#xA;    restart: unless-stopped&#xA;  threadfin:&#xA;    image: fyb3roptik/threadfin&#xA;    environment:&#xA;      PUID=${PUID}&#xA;      PGID=${PGID}&#xA;      TZ=${TIMEZONE}&#xA;    dependson:&#xA;      vpn&#xA;    networkmode: service:vpn&#xA;    volumes:&#xA;  
       ${THREADFINCONFIGDIR}:/home/threadfin/conf&#xA;    restart: unless-stopped&#xA;Setting up Threadfin&#xA;Once Threadfin is installed we need to set it up.&#xA;&#xA;Basic settings&#xA;&#xA;Threadfin settings page&#xA;&#xA;Before we continue we want to open the settings page.&#xA;We want to change the following things : &#xA;`EPG Source` to XEPG&#xA;`Replace missing program images` should be checked&#xA;`Stream Buffer:` to VLC&#xA;&#xA;If you notice that your streams are stuttering you can experiment with increasing `Buffer Size`.&#xA;&#xA;The `Number of Tuners` setting sets a system wide maximum number of streams. Choose a realistic number based on your needs and system performance. This setting can also be overridden at playlist level to a lower value. &#xA;&#xA;If you&#39;re going to use TVHeadend the `Ignore Filters` setting will make things easier later on.&#xA;&#xA;Playlist settings&#xA;&#xA;Threadfin playlist settings&#xA;&#xA;The first time you open this page you will be greeted by an empty page.&#xA;&#xA;When you press on the new button you will be greeted by the following dialog.&#xA;New playlist dialog&#xA;&#xA;Choose `M3U if you&#39;re using an stream (IPTV or TvHeadend) or choose HdHomeRun` if you&#39;re using that particular device.&#xA;&#xA;Depending on your choice you will see once of these dialogs.&#xA;&#xA;New playlist M3U playlist&#xA;&#xA;New playlist HDHomeRun playlist&#xA;&#xA;The `M3U file or HDHomeRun IP` fields are the most crucial part. &#xA;Fill in the address to the M3U file or your HDHomeRun device on your local network.&#xA;&#xA;You also want to set the  `Tuner/Streams ` amount to a reasonable amount. If you&#39;re using TV Headend, a public IPTV list or HdHomeRun this will be hardware constrained (number or tuners and general system performance. 
If you&#39;re using a IPTV provider this will be whatever their general policy permits.&#xA;&#xA;XMLTV settings&#xA;&#xA;Threadfin XMLTV settings&#xA;&#xA;This page will also be empty when you open it up for the first time. In my opinion this is one of the strengths of Threadfin. Regardless of whether you have any EPG information you can mix and match different sources to the combination you like.  &#xA;&#xA;When you press on the new button you will be greeted by the following dialog.&#xA;New XMLTV dialog&#xA;&#xA;You can give it whatever name and description you like. The `XMLTV File field is the part that really matters. If you want to use a publicly available source you just fill in the corresponding URL according to their documentation. If you followed along and set up your own EPG provider the address will be  EPG IP ADDRESS:3000/guide.xml`.&#xA;&#xA;Filter settings&#xA;&#xA;If you plan to use TvHeadend and enabled the `Ignore Filters` setting you can skip this section. &#xA;&#xA;Otherwise open this page and since we&#39;re getting started it will be empty.&#xA;The general idea of this page is that in most cases IPTV lists contain hundreds if not thousands of streams. In order to not affect system performance and keep things manageable we need to choose the categories we&#39;ll want to map later on.  Choosing one particular category doesn&#39;t mean we are forced to map all channels in it. &#xA;&#xA;New filter dialog&#xA;&#xA;Threadfin offers two different filter types M3U and custom filters.&#xA;The M3U type is pretty basic and limits itself to the categories contained in group titles contained in the M3U file. The custom filter is powerful because it enables to make filters on specific patterns.&#xA; &#xA;Now I need to be honest, at some point I&#39;ve tried to use custom filters but I didn&#39;t figure it out. 
I think that depending on playlist size it might take quite some time to process since it needs to check for a pattern for each stream in the playlist. However that&#39;s just an assumption since I&#39;ve not really used this feature. Feel free to try it out but I won&#39;t go into any more dept since I&#39;m not able to.&#xA;&#xA;New M3U filter dialog&#xA;The field we want to look for is `group title`. This will make the chosen group title available in the mapping tab. You can have a look at the include/exclude settings if you want so but it&#39;s not strictly necessary.&#xA;&#xA;Mapping settings&#xA;&#xA;When opening the mappings page you won&#39;t be greeted by an empty list.&#xA;Most probably you&#39;ll be greeted with a list with unmapped/inactive channels.&#xA;You can make the distinction because of the red line on the left end of the table.&#xA;List of unmapped channels&#xA;&#xA;Before activating a channel you should first assign it the number of your liking. You do this by typing the desired value in the text field.&#xA;&#xA;In order to continue click on the desired channel in order to open the map channel popup.&#xA;&#xA;Map channel popup&#xA;&#xA;The most important settings are :&#xA;`Active` to activate the channel&#xA;`Channel name` to edit the channel name&#xA;`Logo Url` to assign the channel a logo&#xA;`Group title` to group the channel to your liking&#xA;`XMLTV File` in order to choose the XMLTV file you want to use&#xA;`XMLTV Channel` to choose the right channel in the XMLTV file&#xA;&#xA;Once you&#39;ve chosen your desired settings click on the done button.&#xA;Now there also should be a list with active/mapped channels.&#xA;You can make the distinction because of the green line on the left end of the table.&#xA;&#xA;List of mapped channels&#xA;&#xA;Mapping all desired channels can be a repetitive task but as you&#39;ll see in the end the effort is worth it.&#xA;&#xA;Note :&#xA;In the next steps we&#39;ll be talking about setting up and installing 
Jellyfin. However you can use Threadfin with any software that supports the HD HomeRun since it functions as an emulation layer. Other software of the likes of Plex Media Server, Kodi and Emby exist that enables you to do the same. However Jellyfin is the only open source solution that enables this feature without any paid plan and on the server side (Kodi is a client application).&#xA;&#xA;Installing Jellyfin&#xA;&#xA;This is how a compose file for a Jellyfin installation looks like :&#xA;version: &#34;3.5&#34;&#xA;services:&#xA;  jellyfin:&#xA;    image: jellyfin/jellyfin&#xA;    user: ${PUID}:${PGID}&#xA;    ports:&#xA;      8096:8096&#xA;    volumes:&#xA;      ${CONFIGFOLDER}:/config&#xA;      ${CACHEFOLDER}:/cache&#xA;      ${MOVIESFOLDER}:/Movies&#xA;      ${TVSHOWSFOLDER}:/Tv Shows&#xA;      ${RECORDINGSFOLDER}:/recordings:/recordings&#xA;    restart: unless-stopped&#xA;    dependson:&#xA;    environment:&#xA;      #use this variable if you want to access your Jellyfin server through a domain name&#xA;      JELLYFINPublishedServerUrl=http://jellyfin.yourdomain.com&#xA;&#xA;Once you deploy this compose file Jellyfin will be available through port 8096 or through the domain you&#39;ve set up. Complete the setup wizard and setup your libraries.  &#xA;&#xA;After this click on your user icon and open the administration panel&#xA;&#xA;Jellyfin admin panel&#xA;&#xA;We want to go to the Live Tv section of the admin panel.&#xA;Click on the + button under Tuner Device.&#xA;&#xA;Add tuner dialogl&#xA;&#xA;Select HD Homerun as the Tuner Type and check the Allow hardware transcoding checkbox.&#xA;Under Tuner IP Address you should type `http://THREADFIN IP ADDRESS/`. Once that&#39;s done click on the save button.&#xA;&#xA;Last but not least click on the + button under TV Guide Data Providers and choose XMLTV.&#xA;&#xA;Add XMLTV dialogl&#xA;&#xA;The only thing you need to do is type `http://THREADFIN IP ADDRESS:34400/xmltv/threadfin.xml` under File or URL*. 
Click on the save button and you&#39;re all set.&#xA;Jellyfin will need some time in order to gather all necessary information but after a while live tv will be available.&#xA;&#xA;Jellyfin is  available through the web interface and different apps. The UI is pretty straightforward so we won&#39;t go into detail on this topic. You&#39;ve just setup up live tv on your server on your terms.&#xA;&#xA;]]&gt;</description>
<content:encoded><![CDATA[<p>In recent years streaming services have gained a lot of popularity. However, for a multitude of reasons, we sometimes might want to watch live TV.</p>

<p>Depending on where you live, your ISP or cable provider might (or might not) provide some kind of app to watch TV on your mobile devices. However, some apps are crappy, others are limited in the channels you can watch, and others might have a very limited feature set. For these reasons you might want to watch live TV on your own terms.</p>

<p>In this article we will look at how you would go about setting up live TV on your own infrastructure.
In the end you&#39;ll be able to stream TV through the web and on mobile devices in a very convenient way.</p>

<p>In order to reach our end goal we will perform the following steps:
– Installing and setting up <a href="https://github.com/iptv-org/epg" rel="nofollow">iptv-org/epg</a> to acquire EPG data
– Installing and setting up <a href="https://github.com/Threadfin/Threadfin" rel="nofollow">Threadfin</a>
– Installing and setting up <a href="https://github.com/jellyfin/jellyfin" rel="nofollow">Jellyfin</a></p>

<p><em>Disclaimer :</em>
This article assumes that you have some knowledge of the Linux network stack and Docker.</p>

<h2 id="setting-up-epg">Setting up EPG</h2>

<p>Getting schedules for the channels you want is quite essential in order to have a good experience.
However, depending on the country you live in, getting EPG (Electronic Programme Guide) data can be very easy or almost impossible.</p>

<p>For example, if you live in Spain, <a href="https://github.com/davidmuma/EPG_dobleM" rel="nofollow">dobleM</a> provides EPG information for almost any channel you can imagine.</p>

<p>However, if you live in Belgium, getting decent EPG information is very challenging. I&#39;ve looked through forums and haven&#39;t found any source available.</p>

<h3 id="setting-up-your-own-epg-provider">Setting up your own EPG provider</h3>

<p>So what do you do when there are no EPG sources available for your country or for a particular channel?</p>

<p>This is where <a href="https://github.com/iptv-org/epg" rel="nofollow">iptv-org/epg</a> comes to the rescue.</p>

<p>Let&#39;s get through the necessary steps in order to set it up.</p>

<p>First of all you&#39;ll want a system with a static IP address. We will be using Ubuntu 22.04 to perform the setup process. As always, feel free to use any Linux flavor you like, but be aware that you might run into some roadblocks (or not) if you do so.</p>

<h4 id="updating-and-installing-dependencies">Updating and installing dependencies</h4>

<p>First we want to make sure all our system packages are up to date, and then we will install the necessary dependencies.</p>

<pre><code>sudo apt-get update \
  &amp;&amp; sudo apt-get upgrade -y -q \
  &amp;&amp; sudo apt-get install curl -y \
  &amp;&amp; sudo apt-get install git -y
</code></pre>

<h4 id="installing-nodejs">Installing Nodejs</h4>

<p>In order to install the latest supported NodeJs version we will be using <a href="https://github.com/nodesource/distributions" rel="nofollow">NodeSource</a>. There are other ways you could do the same but this is the most convenient way to do it.</p>

<p><em>Note :</em>
At the moment NodeJS 22 is not compatible with the software we&#39;re installing.</p>

<pre><code>curl -fsSL https://deb.nodesource.com/setup_21.x -o nodesource_setup.sh
sudo -E bash nodesource_setup.sh
sudo apt-get install -y nodejs
</code></pre>

<p>Once you&#39;ve performed these steps the command <code>node -v</code> should return v21.x.x.</p>

<h4 id="installing-iptiv-org-epg">Installing iptiv-org/epg</h4>

<p>Now we can proceed to the actual installation of our EPG provider.
First we will make a directory where we will perform the installation :</p>

<pre><code>mkdir /bin/epg -p
</code></pre>

<p>Now we want to go into the directory we just made by typing <code>cd /bin/epg</code></p>

<p>At this point we are ready to clone the git repository into our server.</p>

<pre><code>git -C /bin clone --depth 1 -b master https://github.com/iptv-org/epg.git
</code></pre>

<p>Once the source code is on our machine we can install the necessary dependencies.</p>

<pre><code>npm install
</code></pre>

<p>In order to serve our files over the network we also want to install an npm module called <a href="https://www.npmjs.com/package/pm2" rel="nofollow">pm2</a></p>

<pre><code>npm install pm2 -g
</code></pre>

<p>Now we will create two scripts that will enable us to start our EPG provider at startup.
<em>start.sh :</em></p>

<pre><code>#!/bin/bash

# serve the generated guide over the network under pm2
pm2 --name epg start npm -- run serve
# grab EPG data now and then twice a day (00:00 and 12:00), keeping 14 days of guide
npm run grab -- --channels=channels.xml --cron=&#34;0 0,12 * * *&#34; --maxConnections=10 --days=14 --gzip
</code></pre>

<p><em>stop.sh :</em></p>

<pre><code>#!/bin/bash

# stop the serve process started above (pm2 id 0; &#34;pm2 delete epg&#34; works too since we named it)
pm2 delete 0
</code></pre>

<p>To use these scripts we need to create our service file by typing <code>nano /etc/systemd/system/epg.service</code>.
Put the following content in the file :</p>

<pre><code>[Unit]
Description=Epg
After=network.target

[Service]
ExecStart=/bin/epg/start.sh
ExecStop=/bin/epg/stop.sh
WorkingDirectory=/bin/epg

[Install]
WantedBy=default.target 
</code></pre>

<p>As a last step we need to tell the system that it should reload its services by typing <code>systemctl daemon-reload</code>.</p>
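<p>Reloading the daemon only registers the unit. Since the unit&#39;s <code>ExecStart</code> and <code>ExecStop</code> point at our scripts, they also need to be executable, and the service still has to be enabled before it will start at boot. A quick sketch of those remaining steps :</p>

<pre><code># systemd can only run the scripts if they are executable
chmod +x /bin/epg/start.sh /bin/epg/stop.sh
# enable the unit so it starts at boot, and start it right away
sudo systemctl enable --now epg.service
</code></pre>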

<p>We&#39;ve just completed the installation of our own EPG provider but in order to get actual EPG information we need to tell it which channels we want information for.</p>

<p>We do this by creating a file called channels.xml by typing <code>nano channels.xml</code>.
An example of the contents for this file looks like this :</p>

<pre><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt;
&lt;channels&gt;
 &lt;channel site=&#34;movistarplus.es&#34; lang=&#34;es&#34; xmltv_id=&#34;24Horas.es&#34; site_id=&#34;24H&#34;&gt;24 Horas&lt;/channel&gt;
&lt;/channels&gt;
</code></pre>

<p>The contents of this file depend on which providers and channels you want to use.
In the <a href="https://github.com/iptv-org/epg/tree/master/sites" rel="nofollow">repo</a> you can look up all available providers. Each provider has a list with its available channels.</p>

<p>Be aware that not all providers are equal. For example <a href="https://github.com/iptv-org/epg/tree/master/sites/telenet.tv" rel="nofollow">telenet.tv</a> is rock solid but lacks program thumbnails for most channels.
And in contrast <a href="https://github.com/iptv-org/epg/tree/master/sites/pickx.be" rel="nofollow">pickx.be</a> keeps breaking because of intentional API changes but most programs have thumbnails.</p>

<p>Finding the right providers for the right channels is a process of trial and error and also depends on what you&#39;re willing to deal with.</p>

<p>These are some providers you could use :</p>
<ul><li><a href="https://github.com/iptv-org/epg/tree/master/sites/telenet.tv" rel="nofollow">telenet.tv</a> (Belgium)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/pickx.be" rel="nofollow">pickx.be</a> (Belgium)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/movistarplus.es" rel="nofollow">movistarplus.es</a> (Spain)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/programacion-tv.elpais.com" rel="nofollow">programacion-tv.elpais.com</a> (Spain)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/tvgids.nl" rel="nofollow">tvgids.nl</a> (Netherlands)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/tv24.co.uk" rel="nofollow">tv24.co.uk</a> (UK)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/tvtv.us" rel="nofollow">tvtv.us</a> (US)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/chaines-tv.orange.fr" rel="nofollow">chaines-tv.orange.fr</a> (France)</li></ul>

<p>This list is by no means exhaustive; if you&#39;re looking for other countries you should check which providers are available.</p>

<h2 id="setting-up-live-tv-streams">Setting up Live Tv streams</h2>

<p>The next piece of the puzzle is getting the streams for the channels you want. The options you have depend a lot on where you live and on your goals.</p>

<p>For example in the US you could use an <a href="https://www.silicondust.com/hdhomerun/" rel="nofollow">HD HomeRun</a>.
In some countries (like Spain) you could install a <a href="https://www.amazon.es/gp/product/B09KQM9NQ8/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;psc=1" rel="nofollow">DVB-T2 decoder</a> into your system and set up <a href="https://github.com/tvheadend/tvheadend" rel="nofollow">tvheadend</a> to stream over the network.
However if you live in a country where open standards were purposely not adopted (like Belgium) your only option is to resort to an IPTV provider.</p>

<p>There are some IPTV lists available, like <a href="https://github.com/iptv-org/iptv" rel="nofollow">iptv-org/iptv</a> or <a href="https://github.com/LaQuay/TDTChannels" rel="nofollow">TDTChannels</a>, that only contain publicly available streams and are completely legal.</p>

<p>If you still choose to use an IPTV provider that infringes copyright please be aware that depending on legislation you could be sanctioned for just being a customer. Also be aware that getting scammed while sourcing an IPTV provider is a real possibility. I neither encourage nor recommend sourcing an IPTV provider that infringes copyright. If you make that decision you do so under your own responsibility. Please be careful and try to minimize risks as much as possible.</p>

<p>Some pieces of software (like Jellyfin) offer a direct integration with the HD HomeRun. If you have such a device you can integrate it directly. However I would recommend using <a href="https://github.com/Threadfin/Threadfin" rel="nofollow">Threadfin</a> as an intermediate layer in order to manage EPG and channel numbering. If you&#39;re using an M3U stream from tvheadend or an IPTV provider you can&#39;t get around using this piece of software.</p>

<h4 id="installing-threadfin">Installing Threadfin</h4>

<p>This is how a Docker Compose file for Threadfin looks without any additional precautions :</p>

<pre><code>version: &#34;3.5&#34;
services:
  threadfin:
    image: fyb3roptik/threadfin
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TIME_ZONE}
    volumes:
      - ${THREADFIN_CONFIG_DIR}:/home/threadfin/conf
    ports:
      - 34400:34400
    restart: unless-stopped
</code></pre>
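<p>Compose resolves variables like <code>${PUID}</code> from the shell environment or from a <code>.env</code> file next to the compose file. The variable names below come from the compose file above; the values are placeholders you should adapt:</p>

```shell
# Example .env for the Threadfin compose file above.
# PUID/PGID should match the user that owns the config directory
# (`id -u` and `id -g` print them for the current user).
cat > .env <<'EOF'
PUID=1000
PGID=1000
TIME_ZONE=Europe/Brussels
THREADFIN_CONFIG_DIR=/opt/threadfin/conf
EOF

# Sanity check: every variable the compose file references is defined.
for var in PUID PGID TIME_ZONE THREADFIN_CONFIG_DIR; do
  grep -q "^${var}=" .env && echo "${var} ok"
done
```

<p>With the <code>.env</code> file in place, <code>docker compose up -d</code> starts Threadfin and its web UI becomes reachable on port 34400.</p>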

<p>If you would like to take some precaution <a href="https://github.com/qdm12/gluetun" rel="nofollow">gluetun</a> is a very good option. This is basically a Docker image that allows you to configure almost any VPN provider.</p>

<p>In the <a href="https://github.com/qdm12/gluetun-wiki/tree/main/setup/providers" rel="nofollow">wiki</a> you can find information about how to setup your particular VPN provider.</p>

<p>So if you would like to take precautions your compose file would look like this :</p>

<pre><code>version: &#34;3.5&#34;
services:
  vpn:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    environment:
      - TZ=${TIME_ZONE}
      - VPN_SERVICE_PROVIDER=${YOUR_PROVIDER}
      ....
      # some provider specific variables
      ....
      - FIREWALL_OUTBOUND_SUBNETS=${YOUR_SUBNET}/24
    ports:
      - 34400:34400
    volumes:
      - ${VPN_CONFIG_DIR}:/config
    restart: unless-stopped
  threadfin:
    image: fyb3roptik/threadfin
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TIME_ZONE}
    depends_on:
      - vpn
    network_mode: service:vpn
    volumes:
      - ${THREADFIN_CONFIG_DIR}:/home/threadfin/conf
    restart: unless-stopped
</code></pre>

<h4 id="setting-up-threadfin">Setting up Threadfin</h4>

<p>Once Threadfin is installed we need to set it up.</p>

<h5 id="basic-settings">Basic settings</h5>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/7Kc0y7N.png" alt="Threadfin settings page"></p>

<p>Before we continue we want to open the settings page.
We want to change the following things :
– <code>EPG Source</code> to XEPG
– <code>Replace missing program images</code> should be checked
– <code>Stream Buffer</code> to VLC</p>

<p>If you notice that your streams are stuttering you can experiment with increasing <code>Buffer Size</code>.</p>

<p>The <code>Number of Tuners</code> setting sets a system-wide maximum number of concurrent streams. Choose a realistic number based on your needs and system performance. This setting can also be overridden at playlist level with a lower value.</p>

<p>If you&#39;re going to use TVHeadend the <code>Ignore Filters</code> setting will make things easier later on.</p>

<h5 id="playlist-settings">Playlist settings</h5>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/v3hDyHE.png" alt="Threadfin playlist settings"></p>

<p>The first time you open this page you will be greeted by an empty page.</p>

<p>When you press on the new button you will be greeted by the following dialog.
<img src="https://images.claeyscloud.com/images/2024/11/27/XAquUSb.png" alt="New playlist dialog"></p>

<p>Choose <code>M3U</code> if you&#39;re using a stream (IPTV or TvHeadend) or choose <code>HdHomeRun</code> if you&#39;re using that particular device.</p>

<p>Depending on your choice you will see one of these dialogs.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/4Jt7Ijs.png" alt="New playlist M3U playlist"></p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/4Jt7Ijs92fe02bb0f5e18ba.png" alt="New playlist HDHomeRun playlist"></p>

<p>The <code>M3U</code> file or <code>HDHomeRun IP</code> fields are the most crucial part.
Fill in the address to the M3U file or your HDHomeRun device on your local network.</p>
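<p>In case you&#39;ve never looked inside one, an M3U playlist is just a text file that pairs channel metadata with stream URLs. A quick sketch with a made-up two-channel playlist (names and URLs are illustrative) shows the structure Threadfin parses, including the <code>group-title</code> attribute the filter page works with later :</p>

```shell
# A minimal, made-up M3U playlist: each #EXTINF line carries the
# channel metadata, the line after it is the stream URL.
cat > sample.m3u <<'EOF'
#EXTM3U
#EXTINF:-1 tvg-id="one.example" group-title="News",Channel One
http://example.com/streams/one.m3u8
#EXTINF:-1 tvg-id="two.example" group-title="Sports",Channel Two
http://example.com/streams/two.m3u8
EOF

# List the channel names (the text after the last comma on #EXTINF lines).
grep '^#EXTINF' sample.m3u | sed 's/.*,//'
```

<p>This prints <code>Channel One</code> and <code>Channel Two</code>.</p>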

<p>You also want to set the <code>Tuner/Streams</code> amount to a reasonable value. If you&#39;re using TV Headend, a public IPTV list or HdHomeRun this will be hardware constrained (number of tuners and general system performance). If you&#39;re using an IPTV provider this will be whatever their general policy permits.</p>

<h5 id="xmltv-settings">XMLTV settings</h5>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/EbfO6sn.png" alt="Threadfin XMLTV settings"></p>

<p>This page will also be empty when you open it up for the first time. In my opinion this is one of the strengths of Threadfin : no matter where your EPG information comes from, you can mix and match different sources into the combination you like.</p>

<p>When you press on the new button you will be greeted by the following dialog.
<img src="https://images.claeyscloud.com/images/2024/11/27/IUBUdWw.png" alt="New XMLTV dialog"></p>

<p>You can give it whatever name and description you like. The <code>XMLTV File</code> field is the part that really matters. If you want to use a publicly available source you just fill in the corresponding URL according to their documentation. If you followed along and set up your own EPG provider the address will be  <code>&lt;EPG IP ADDRESS&gt;:3000/guide.xml</code>.</p>
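<p>XMLTV itself is a plain XML format. A made-up, single-channel guide (ids and titles are illustrative) shows what Threadfin consumes from that URL :</p>

```shell
# A minimal, made-up XMLTV guide: <channel> declares a channel id,
# <programme> entries reference it and carry start/stop times.
cat > guide.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<tv>
  <channel id="one.example">
    <display-name>Channel One</display-name>
  </channel>
  <programme start="20240619200000 +0000" stop="20240619210000 +0000" channel="one.example">
    <title>Evening News</title>
  </programme>
</tv>
EOF

# The channel ids are what you later match against your playlist channels.
grep -o 'channel id="[^"]*"' guide.xml
```

<p>The channel ids printed here are what the mapping page offers you under <code>XMLTV Channel</code>.</p>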

<h5 id="filter-settings">Filter settings</h5>

<p>If you plan to use TvHeadend and enabled the <code>Ignore Filters</code> setting you can skip this section.</p>

<p>Otherwise open this page; since we&#39;re getting started it will be empty.
The general idea of this page is that in most cases IPTV lists contain hundreds if not thousands of streams. In order not to affect system performance and to keep things manageable we need to choose the categories we&#39;ll want to map later on. Choosing one particular category doesn&#39;t mean we are forced to map all channels in it.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/gvJY5hd.png" alt="New filter dialog"></p>

<p>Threadfin offers two different filter types : <em>M3U</em> and <em>custom filters</em>.
The M3U type is pretty basic and limits itself to the group titles contained in the M3U file. The custom filter is more powerful because it enables you to filter on specific patterns.</p>

<p>Now I need to be honest : at some point I tried to use custom filters but I couldn&#39;t figure them out. I think that depending on playlist size they might take quite some time to process, since a pattern needs to be checked against each stream in the playlist. However that&#39;s just an assumption since I&#39;ve not really used this feature. Feel free to try it out but I won&#39;t go into any more depth since I&#39;m not able to.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/kscBJ9A.png" alt="New M3U filter dialog">
The field we want to look for is <code>group title</code>. This will make the chosen group title available in the mapping tab. You can have a look at the include/exclude settings if you want, but it&#39;s not strictly necessary.</p>

<h5 id="mapping-settings">Mapping settings</h5>

<p>When opening the mappings page you won&#39;t be greeted by an empty list.
Most probably you&#39;ll be greeted with a list with unmapped/inactive channels.
You can tell them apart by the red line on the left end of the table.
<img src="https://images.claeyscloud.com/images/2024/11/27/Da5hQ8l.png" alt="List of unmapped channels"></p>

<p>Before activating a channel you should first assign it the number of your liking. You do this by typing the desired value in the text field.</p>

<p>In order to continue click on the desired channel in order to open the map channel popup.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/hixtHcJ.png" alt="Map channel popup"></p>

<p>The most important settings are :
– <code>Active</code> to activate the channel
– <code>Channel name</code> to edit the channel name
– <code>Logo Url</code> to assign the channel a logo
– <code>Group title</code> to group the channel to your liking
– <code>XMLTV File</code> in order to choose the XMLTV file you want to use
– <code>XMLTV Channel</code> to choose the right channel in the XMLTV file</p>

<p>Once you&#39;ve chosen your desired settings click on the <em>done</em> button.
There should now also be a list with active/mapped channels.
You can tell them apart by the green line on the left end of the table.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/xo3H74U.png" alt="List of mapped channels"></p>

<p>Mapping all desired channels can be a repetitive task but as you&#39;ll see in the end the effort is worth it.</p>

<p><em>Note :</em>
In the next steps we&#39;ll be talking about installing and setting up <a href="https://github.com/jellyfin/jellyfin" rel="nofollow">Jellyfin</a>. However you can use Threadfin with any software that supports the HD HomeRun, since it functions as an emulation layer. Software like <a href="https://www.plex.tv/es/media-server-downloads/" rel="nofollow">Plex Media Server</a>, <a href="https://kodi.tv/" rel="nofollow">Kodi</a> and <a href="https://emby.media/" rel="nofollow">Emby</a> enables you to do the same. However Jellyfin is the only open source solution that offers this feature without a paid plan and on the server side (Kodi is a client application).</p>

<h4 id="installing-jellyfin">Installing Jellyfin</h4>

<p>This is how a compose file for a Jellyfin installation looks :</p>

<pre><code>version: &#34;3.5&#34;
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: ${PUID}:${PGID}
    ports:
      - 8096:8096
    volumes:
      - ${CONFIG_FOLDER}:/config
      - ${CACHE_FOLDER}:/cache
      - ${MOVIES_FOLDER}:/Movies
      - "${TV_SHOWS_FOLDER}:/Tv Shows"
      - ${RECORDINGS_FOLDER}:/recordings
    restart: unless-stopped
    environment:
      # use this variable if you want to access your Jellyfin server through a domain name
      - JELLYFIN_PublishedServerUrl=http://jellyfin.yourdomain.com
</code></pre>
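<p>As with Threadfin, the compose file reads its variables from the environment or a <code>.env</code> file. One gotcha worth preparing for : if the host folders referenced by the volume mappings don&#39;t exist yet, Docker creates them owned by root. A small sketch (paths are placeholders; adapt them to your system) :</p>

```shell
# Example values for the variables the Jellyfin compose file references.
cat > .env <<'EOF'
PUID=1000
PGID=1000
CONFIG_FOLDER=./jellyfin/config
CACHE_FOLDER=./jellyfin/cache
MOVIES_FOLDER=./media/movies
TV_SHOWS_FOLDER=./media/tvshows
RECORDINGS_FOLDER=./media/recordings
EOF

# Create the host folders up front so Docker doesn't create them as root.
grep '_FOLDER=' .env | cut -d= -f2 | xargs mkdir -p
```

<p>Created this way, the folders belong to your own user and the <code>user: ${PUID}:${PGID}</code> line in the compose file can actually write to them.</p>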

<p>Once you deploy this compose file Jellyfin will be available through port 8096 or through the domain you&#39;ve set up. Complete the setup wizard and set up your libraries.</p>

<p>After this click on your user icon and open the <em>administration panel</em>.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/UN3a4JH.png" alt="Jellyfin admin panel"></p>

<p>We want to go to the <em>Live Tv</em> section of the admin panel.
Click on the + button under <em>Tuner Device</em>.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/IIOZ9CS.png" alt="Add tuner dialog"></p>

<p>Select HD Homerun as the <em>Tuner Type</em> and check the <em>Allow hardware transcoding</em> checkbox.
Under <em>Tuner IP Address</em> you should type <code>http://&lt;THREADFIN IP ADDRESS&gt;/</code>. Once that&#39;s done click on the save button.</p>

<p>Last but not least click on the + button under <em>TV Guide Data Providers</em> and choose XMLTV.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/uUAz4ST.png" alt="Add XMLTV dialog"></p>

<p>The only thing you need to do is type <code>http://&lt;THREADFIN IP ADDRESS&gt;:34400/xmltv/threadfin.xml</code> under <em>File or URL</em>. Click on the save button and you&#39;re all set.
Jellyfin will need some time to gather all the necessary information, but after a while live TV will be available.</p>

<p>Jellyfin is available through the web interface and through various apps. The UI is pretty straightforward so we won&#39;t go into detail on this topic. You&#39;ve just set up live TV on your server, on your terms.</p>
]]></content:encoded>
      <author>David Claeys</author>
      <guid>https://blog.claeyscloud.com/read/a/waeibkyeu9</guid>
      <pubDate>Wed, 19 Jun 2024 14:20:31 +0000</pubDate>
    </item>
    <item>
      <title>Deploying .NET containers in Docker</title>
      <link>https://blog.claeyscloud.com/david/deploying-net-containers-in-docker</link>
      <description>&lt;![CDATA[Since Microsoft started to transition .NET they also started offering Docker images to package your applications. To be more specific at Docker Hub Microsoft lists their images and intended purposes.&#xA;&#xA;I wanted to take myself up for a challenge and try to package a .NET API project into a Docker container.&#xA;The purpose of this article isn&#39;t to tell you how to build an API project since this topic is broadly covered on the web. I want to tell you one of the roadblocks I ran against and how I managed to solve it.&#xA;&#xA;If you want to get started the following tutorials could be useful :&#xA;Containerize a .NET app&#xA;Step By Step Dockerizing .NET Core API&#xA;Smaller Docker Images for ASP.NET Core Apps&#xA;&#xA;Slim Docker images&#xA;&#xA;It&#39;s best practice to make the Docker images you publish as slim as possible. &#xA;The main benefit of doing this is that consuming your image will take less space on your host if you do so.&#xA;There are many ways to make your image slimmer but one of the most effective ways is picking the right base image with the right tag.&#xA;&#xA;For example if we look at the tags for the ASP.NET Core Runtime we see among others the following sections : Linux amd64, Nano Server 2022 amd64 , Windows Server Core 2022 amd64 and so on.&#xA;If you want to make your Docker image multi platform compatible (one of the main benefits of .NET and Docker) you should automatically discard the tags representing a Windows environment.&#xA;First of all it&#39;s probably not the most lightweight base OS to build your image but more importantly Windows Docker containers can&#39;t run on any system that isn&#39;t Windows based.&#xA;&#xA;This limits our choice to Linux based images, but even there we have lots of choice.&#xA;By example at this moment in time we can choose among others between 8.0-bookworm-slim (Debian), 8.0-alpine-amd64 (Alpine) and 8.0-jammy (Ubuntu).&#xA;Microsoft marks the Debian variant 
with the latest tag since this distribution is pretty lightweight and also is quite widespread. However if we want to take things up a notch we should go for alpine since this is a lightweight no frills distribution.&#xA;&#xA;The roadblock&#xA;&#xA;When publishing a .NET API it is served by Kestrel.&#xA;When making an API it is recommended to use HTTPS for security reasons. Furthermore when making a production build it is even required.&#xA;&#xA;When reading the documentation we see we should use the following commands  :&#xA;dotnet dev-certs https -ep $env:USERPROFILE\.aspnet\https\aspnetapp.pfx -p crypticpassword&#xA;dotnet dev-certs https --trust&#xA;&#xA;This is simple enough, what&#39;s the problem then? Well the second of those command is only supported on Windows based systems. &#xA;&#xA;The solution&#xA;&#xA;After a lot of trial and error I came to the following solution :&#xA;&#xA;Password for the certificate&#xA;ARG CERTPASSWORDARG=SUPERSECRET&#xA;this image contains the entire .NET SDK and is ideal for creation the build&#xA;FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine-amd64 AS build-env&#xA;ARG CERTPASSWORDARG&#xA;ENV CERTPASSWORD=$CERTPASSWORDARG&#xA;WORKDIR /App&#xA;COPY . 
./&#xA;Restore dependencies for your application&#xA;RUN dotnet restore&#xA;Build your application&#xA;RUN dotnet publish test.csproj --no-restore --self-contained false -c Release -o out /p:UseAppHost=false &#xA;Make the directory for certificate export&#xA;RUN mkdir /config&#xA;Generate certificate with specified password&#xA;RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password &#34;$CERTPASSWORD&#34; --format PEM&#xA;&#xA;this image contains the ASP.NET Core and .NET runtimes and libraries &#xA;FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine-amd64&#xA;ARG CERTPASSWORDARG&#xA;ENV CERTPASSWORD=$CERTPASSWORDARG&#xA;WORKDIR /App&#xA;add dependency in system to setup certificates&#xA;RUN apk add ca-certificates &#xA;create directory to store certificate config&#xA;RUN mkdir /config &#xA;create necessary config directory&#xA;RUN mkdir -p /usr/local/share/ca-certificates/&#xA;copy compiled files to runtime&#xA;COPY --from=build-env /App/out . &#xA;copy generated certificate&#xA;COPY --from=build-env /config /config&#xA;Disable Big Brother&#xA;ENV DOTNETCLITELEMETRYOPTOUT=1&#xA;Set the environment to production&#xA;ENV ASPNETCOREENVIRONMENT=Production&#xA;Set the urls where Kestrel is going to listen&#xA;ENV ASPNETCOREURLS=http://+:80;https://+:443&#xA;location of the certificate file&#xA;ENV ASPNETCOREKestrelCertificatesDefaultPath=/usr/local/share/ca-certificates/aspnetapp.crt&#xA;location of the certificate key&#xA;ENV ASPNETCOREKestrelCertificatesDefault_KeyPath=/usr/local/share/ca-certificates/aspnetapp.key&#xA;specify password in order to open certificate key&#xA;ENV ASPNETCOREKestrelCertificatesDefault_Password=$CERTPASSWORD&#xA;copy certificate files to config directory&#xA;RUN cp /config/aspnetapp.pem $ASPNETCOREKestrelCertificatesDefaultPath &#xA;RUN cp /config/aspnetapp.key $ASPNETCOREKestrelCertificatesDefault_KeyPath&#xA;set file permisions for certificate file&#xA;RUN chmod 755 $ASPNETCOREKestrelCertificatesDefault_Path &#xA;RUN chmod 
+x $ASPNETCOREKestrelCertificatesDefault_Path&#xA;change file ownership for certificate file&#xA;add generated certificate to trusted certificate list on the system&#xA;RUN cat $ASPNETCOREKestrelCertificatesDefault_Path     /etc/ssl/certs/ca-certificates.crt&#xA;set file permissions for key file&#xA;RUN chmod 755 $ASPNETCOREKestrelCertificatesDefault_KeyPath&#xA;RUN chmod +x $ASPNETCOREKestrelCertificatesDefault_KeyPath&#xA;change file ownership for key file&#xA;RUN update-ca-certificates&#xA;&#xA;ENTRYPOINT [&#34;dotnet&#34;, &#34;test.dll&#34;]&#xA;EXPOSE 80 &#xA;EXPOSE 443&#xA;The above file is for demonstration purposes, in practice you shouldn&#39;t use consecutive RUN instructions, you should update system dependencies and perform some cleanup. I&#39;ve excluded those steps in order to focus on this article&#39;s subject.&#xA;&#xA;Deep dive&#xA;&#xA;The first step I want to focus on is the following : &#xA;RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password &#34;$CERTPASSWORD&#34; --format PEM&#xA;By default the command to generate certificates generates a certificate in the PFX format.&#xA;While it is theoretically possible to use that format on Linux systems it&#39;s an overly complicated mess. So in order to make things easier we tell the generator tool to use the PEM format. 
&#xA;This way of using certificates is much better supported in Linux and much easier to setup.&#xA;This command will generate two files : a certificate file and a key file.&#xA;The key file is encrypted with the password that is specified in CERTPASSWORDARG.&#xA;&#xA;The next important part is :&#xA;location of the certificate file&#xA;ENV ASPNETCOREKestrelCertificatesDefaultPath=/usr/local/share/ca-certificates/aspnetapp.crt&#xA;location of the certificate key&#xA;ENV ASPNETCOREKestrelCertificatesDefault_KeyPath=/usr/local/share/ca-certificates/aspnetapp.key&#xA;specify password in order to open certificate key&#xA;ENV ASPNETCOREKestrelCertificatesDefault_Password=$CERTPASSWORD&#xA;These environment variables tell the Kestrel server where it needs to look for the certificate files.&#xA;The ASPNETCOREKestrelCertificatesDefaultPassword is key, since if it is not specified or correctly populated Kestrel won&#39;t be able to use the certificate and will crash.&#xA;This variable isn&#39;t anywhere to be found on Microsoft&#39;s documentation and I only was able to find it looking at the .NET source code published on GitHub.&#xA;&#xA;The next important part is &#xA;&#xA;RUN cat $ASPNETCOREKestrelCertificatesDefault_Path     /etc/ssl/certs/ca-certificates.crt&#xA;RUN update-ca-certificates&#xA;This tells the system to trust the certificate we generated. If we wouldn&#39;t do that Kestrel also wouldn&#39;t be able to run and would crash.&#xA;&#xA;Security implications&#xA;&#xA;Maybe the elephant in the room is that in this setup we are using a self signed certificate in order to serve our application in a container. 
Many might be eager to discard this whole setup for this reason.&#xA;But before doing that hear me out.&#xA; &#xA;To start with, it&#39;s bad practice to hardcode the certificate you&#39;ll deploy in production environments in code.&#xA;So in fact your Docker image should always use a development certificate.&#xA;Yes, this example also contains a hardcode password at the beginning but this shouldn&#39;t be an issue.&#xA;&#xA;In theory we could use the ASPNETCOREKestrelCertificatesDefault_Path, ASPNETCOREKestrelCertificatesDefault_KeyPath and ASPNETCOREKestrelCertificatesDefault__Password environment variables in order to setup our production certificates at deployment.&#xA;This would allow us to run the image in a container while developing and use a securely stored certificated at deployment.&#xA;However this solution is discouraged since Microsoft doesn&#39;t recommend directly exposing the Kestrel server in Production environments.&#xA;&#xA;This leads to what in my opinion is the preferable solution : using a proxy.&#xA;You can setup IIS, Nginx, Apache, Traefik and so on, with the certificate you want to use.&#xA;Clients using the deployed application will have a secure connection and you don&#39;t need to deal with the complexities of setting up a &#34;real&#34; certificate at the image level.&#xA;&#xA;Using Docker is amazing, and being able to use it with .NET even more.&#xA;If you stumbled on the same roadblock I hope this article proved useful.]]&gt;</description>
<content:encoded><![CDATA[<p>Since Microsoft started transitioning .NET to a cross-platform framework they also started offering Docker images to package your applications. To be more specific, at <a href="https://hub.docker.com/_/microsoft-dotnet" rel="nofollow">Docker Hub</a> Microsoft lists their images and intended purposes.</p>

<p>I wanted to set myself a challenge and package a .NET API project into a Docker container.
The purpose of this article isn&#39;t to tell you how to build an API project, since that topic is broadly covered on the web. I want to tell you about one of the roadblocks I ran into and how I managed to solve it.</p>

<p>If you want to get started the following tutorials could be useful :
– <a href="https://learn.microsoft.com/en-us/dotnet/core/docker/build-container?tabs=windows&amp;pivots=dotnet-8-0" rel="nofollow">Containerize a .NET app</a>
– <a href="https://medium.com/@ersen/step-by-step-dockerizing-net-core-api-a2490752a3d2" rel="nofollow">Step By Step Dockerizing .NET Core API</a>
– <a href="https://itnext.io/smaller-docker-images-for-asp-net-core-apps-bee4a8fd1277" rel="nofollow">Smaller Docker Images for ASP.NET Core Apps</a></p>

<h2 id="slim-docker-images">Slim Docker images</h2>

<p>It&#39;s best practice to make the Docker images you publish as slim as possible.
The main benefit is that your image will take less space on the host of anyone consuming it.
There are many ways to make your image slimmer, but one of the most effective is picking the right base image with the right tag.</p>

<p>For example if we look at the tags for the <a href="https://hub.docker.com/_/microsoft-dotnet-aspnet/" rel="nofollow">ASP.NET Core Runtime</a> we see among others the following sections : <em>Linux amd64</em>, <em>Nano Server 2022 amd64</em> , <em>Windows Server Core 2022 amd64</em> and so on.
If you want to make your Docker image multi platform compatible (one of the main benefits of .NET and Docker) you should automatically discard the tags representing a Windows environment.
First of all it&#39;s probably not the most lightweight base OS to build your image but more importantly Windows Docker containers can&#39;t run on any system that isn&#39;t Windows based.</p>

<p>This limits our choice to Linux based images, but even there we have lots of choice.
For example, at this moment in time we can choose among others between 8.0-bookworm-slim (<a href="https://www.debian.org/releases/bookworm/" rel="nofollow">Debian</a>), 8.0-alpine-amd64 (<a href="https://www.alpinelinux.org/posts/Alpine-3.18.0-released.html" rel="nofollow">Alpine</a>) and 8.0-jammy (<a href="https://releases.ubuntu.com/jammy/" rel="nofollow">Ubuntu</a>).
Microsoft marks the Debian variant with the <code>latest</code> tag since this distribution is pretty lightweight and also quite widespread. However if we want to take things up a notch we should go for Alpine, since this is a lightweight no-frills distribution.</p>

<h2 id="the-roadblock">The roadblock</h2>

<p>When publishing a .NET API it is served by <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel?view=aspnetcore-8.0" rel="nofollow">Kestrel</a>.
When making an API it is recommended to use HTTPS for security reasons. Furthermore when making a production build it is even required.</p>

<p>When reading the <a href="https://learn.microsoft.com/en-us/dotnet/core/additional-tools/self-signed-certificates-guide#create-a-self-signed-certificate" rel="nofollow">documentation</a> we see we should use the following commands  :
– <code>dotnet dev-certs https -ep $env:USERPROFILE\.aspnet\https\aspnetapp.pfx -p crypticpassword</code>
– <code>dotnet dev-certs https --trust</code></p>

<p>This is simple enough, so what&#39;s the problem then? Well, the second of those commands is only supported on Windows based systems.</p>

<h2 id="the-solution">The solution</h2>

<p>After a lot of trial and error I came to the following solution :</p>

<pre><code># Password for the certificate
ARG CERT_PASSWORD_ARG=SUPERSECRET
# this image contains the entire .NET SDK and is ideal for creation the build
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine-amd64 AS build-env
ARG CERT_PASSWORD_ARG
ENV CERT_PASSWORD=$CERT_PASSWORD_ARG
WORKDIR /App
COPY . ./
# Restore dependencies for your application
RUN dotnet restore
# Build your application
RUN dotnet publish test.csproj --no-restore --self-contained false -c Release -o out /p:UseAppHost=false 
# Make the directory for certificate export
RUN mkdir /config
# Generate certificate with specified password
RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password &#34;$CERT_PASSWORD&#34; --format PEM

# this image contains the ASP.NET Core and .NET runtimes and libraries 
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine-amd64
ARG CERT_PASSWORD_ARG
ENV CERT_PASSWORD=$CERT_PASSWORD_ARG
WORKDIR /App
# add dependency in system to setup certificates
RUN apk add ca-certificates 
# create directory to store certificate config
RUN mkdir /config 
# create necessary config directory
RUN mkdir -p /usr/local/share/ca-certificates/
# copy compiled files to runtime
COPY --from=build-env /App/out . 
# copy generated certificate
COPY --from=build-env /config /config
# Disable Big Brother
ENV DOTNET_CLI_TELEMETRY_OPTOUT=1
# Set the environment to production
ENV ASPNETCORE_ENVIRONMENT=Production
# Set the urls where Kestrel is going to listen
ENV ASPNETCORE_URLS=http://+:80;https://+:443
# location of the certificate file
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/usr/local/share/ca-certificates/aspnetapp.crt
# location of the certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/usr/local/share/ca-certificates/aspnetapp.key
# specify password in order to open certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=$CERT_PASSWORD
# copy certificate files to config directory
RUN cp /config/aspnetapp.pem $ASPNETCORE_Kestrel__Certificates__Default__Path 
RUN cp /config/aspnetapp.key $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
# set file permisions for certificate file
RUN chmod 755 $ASPNETCORE_Kestrel__Certificates__Default__Path 
RUN chmod +x $ASPNETCORE_Kestrel__Certificates__Default__Path
# add generated certificate to trusted certificate list on the system
RUN cat $ASPNETCORE_Kestrel__Certificates__Default__Path &gt;&gt; /etc/ssl/certs/ca-certificates.crt
# set file permissions for key file
RUN chmod 755 $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
RUN chmod +x $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
# refresh the system trusted certificate store
RUN update-ca-certificates

ENTRYPOINT [&#34;dotnet&#34;, &#34;test.dll&#34;]
EXPOSE 80 
EXPOSE 443
</code></pre>

<p>The above file is for demonstration purposes, in practice you shouldn&#39;t use consecutive <code>RUN</code> instructions, you should update system dependencies and perform some cleanup. I&#39;ve excluded those steps in order to focus on this article&#39;s subject.</p>

<h3 id="deep-dive">Deep dive</h3>

<p>The first step I want to focus on is the following :</p>

<pre><code>RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password &#34;$CERT_PASSWORD&#34; --format PEM
</code></pre>

<p>By default the command to generate certificates generates a certificate in the <a href="https://learn.microsoft.com/en-us/windows-hardware/drivers/install/personal-information-exchange---pfx--files" rel="nofollow">PFX</a> format.
While it is theoretically possible to use that format on Linux systems it&#39;s an overly complicated mess. So in order to make things easier we tell the generator tool to use the PEM format.
This way of using certificates is much better supported on Linux and much easier to set up.
This command will generate two files : a certificate file and a key file.
The key file is encrypted with the password that is specified in <code>CERT_PASSWORD_ARG</code>.</p>
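<p>If you want to poke at such a certificate/key pair without the .NET SDK at hand, plain openssl can produce and inspect an analogous pair. This is a sketch of the same idea, not the output of <code>dotnet dev-certs</code> itself, and the password is the example value from the Dockerfile above :</p>

```shell
# Generate a throwaway self-signed certificate plus a password-protected
# key, analogous to what `dotnet dev-certs https --format PEM` exports.
openssl req -x509 -newkey rsa:2048 \
  -keyout aspnetapp.key -out aspnetapp.pem \
  -days 30 -subj "/CN=localhost" \
  -passout pass:SUPERSECRET

# Inspect the certificate: subject and validity window.
openssl x509 -in aspnetapp.pem -noout -subject -dates
```

<p>The key file starts with an <code>ENCRYPTED PRIVATE KEY</code> header, which is exactly why Kestrel needs the password to open it.</p>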

<p>The next important part is :</p>

<pre><code># location of the certificate file
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/usr/local/share/ca-certificates/aspnetapp.crt
# location of the certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/usr/local/share/ca-certificates/aspnetapp.key
# specify password in order to open certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=$CERT_PASSWORD
</code></pre>

<p>These environment variables tell the Kestrel server where it needs to look for the certificate files.
The <code>ASPNETCORE_Kestrel__Certificates__Default__Password</code> variable is key : if it is not specified or correctly populated, Kestrel won&#39;t be able to use the certificate and will crash.
This variable isn&#39;t anywhere to be found on Microsoft&#39;s documentation and I only was able to find it looking at the .NET source code published on GitHub.</p>
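<p>For what it&#39;s worth, the double underscores in these variable names are ASP.NET Core&#39;s standard convention for nested configuration keys, so the same settings could equivalently be expressed in <code>appsettings.json</code> (shown purely as an illustration of the mapping; in an image you&#39;d still set them via <code>ENV</code>):</p>

<pre><code>{
  &#34;Kestrel&#34;: {
    &#34;Certificates&#34;: {
      &#34;Default&#34;: {
        &#34;Path&#34;: &#34;/usr/local/share/ca-certificates/aspnetapp.crt&#34;,
        &#34;KeyPath&#34;: &#34;/usr/local/share/ca-certificates/aspnetapp.key&#34;,
        &#34;Password&#34;: &#34;your-certificate-password&#34;
      }
    }
  }
}
</code></pre>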

<p>The next important part is:</p>

<pre><code>RUN cat $ASPNETCORE_Kestrel__Certificates__Default__Path &gt;&gt; /etc/ssl/certs/ca-certificates.crt
RUN update-ca-certificates
</code></pre>

<p>This tells the system to trust the certificate we generated. Without this step Kestrel would likewise be unable to run and would crash.</p>
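<p>As an aside, appending directly to <code>ca-certificates.crt</code> works, but <code>update-ca-certificates</code> can do the heavy lifting on its own: it scans <code>/usr/local/share/ca-certificates/</code> for <code>.crt</code> files and regenerates the bundle from them. Since the certificate in this setup already lives in that directory, an equivalent variant would be:</p>

<pre><code># update-ca-certificates picks up any .crt file in /usr/local/share/ca-certificates/
# and rebuilds /etc/ssl/certs/ca-certificates.crt to include it
RUN update-ca-certificates
</code></pre>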

<h2 id="security-implications">Security implications</h2>

<p>Perhaps the elephant in the room is that this setup uses a self-signed certificate to serve our application in a container. Many might be eager to discard the whole setup for that reason alone.
But before doing so, hear me out.</p>

<p>To start with, it&#39;s bad practice to hardcode the certificate you&#39;ll deploy to production into your code.
So in fact your Docker image should always use a development certificate.
Yes, this example also contains a hardcoded password at the beginning, but that shouldn&#39;t be an issue.</p>

<p>In theory we could use the <code>ASPNETCORE_Kestrel__Certificates__Default__Path</code>, <code>ASPNETCORE_Kestrel__Certificates__Default__KeyPath</code> and <code>ASPNETCORE_Kestrel__Certificates__Default__Password</code> environment variables to set up our production certificates at deployment.
This would allow us to run the image in a container while developing and use a securely stored certificate at deployment.
However, this solution is discouraged, since Microsoft doesn&#39;t recommend directly exposing the Kestrel server in production environments.</p>
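<p>For completeness, here is roughly what that deployment-time override could look like in a Compose file, assuming the production certificate is mounted from outside the image (all names and paths here are hypothetical):</p>

<pre><code>services:
  app:
    image: myapp:latest            # hypothetical image name
    ports:
      - &#34;8443:443&#34;
    environment:
      # point Kestrel at the mounted production certificate instead of the baked-in dev one
      ASPNETCORE_Kestrel__Certificates__Default__Path: /certs/production.crt
      ASPNETCORE_Kestrel__Certificates__Default__KeyPath: /certs/production.key
      ASPNETCORE_Kestrel__Certificates__Default__Password: ${CERT_PASSWORD}
    volumes:
      - ./certs:/certs:ro          # certificate files kept outside the image
</code></pre>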

<p>This leads to what is, in my opinion, the preferable solution: using a proxy.
You can set up <a href="https://learn.microsoft.com/en-us/iis/get-started/introduction-to-iis/iis-web-server-overview" rel="nofollow">IIS</a>, <a href="https://www.nginx.com/" rel="nofollow">Nginx</a>, <a href="https://httpd.apache.org/" rel="nofollow">Apache</a>, <a href="https://traefik.io/traefik/" rel="nofollow">Traefik</a> and so on with the certificate you want to use.
Clients using the deployed application get a secure connection, and you don&#39;t need to deal with the complexities of setting up a “real” certificate at the image level.</p>
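<p>A minimal sketch of such a proxy with Nginx, assuming the &#34;real&#34; certificate lives on the proxy host and the proxy can reach the application container over an internal network (hostname, container name and paths are illustrative):</p>

<pre><code>server {
    listen 443 ssl;
    server_name app.example.com;               # hypothetical hostname

    # the &#34;real&#34; certificate lives on the proxy, not in the image
    ssl_certificate     /etc/nginx/certs/app.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;

    location / {
        # forward traffic to the application container over the internal network
        proxy_pass http://app:80;              # &#34;app&#34; = hypothetical container name
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
</code></pre>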

<p>Using Docker is amazing, and being able to use it with .NET even more so.
If you stumbled on the same roadblock, I hope this article proved useful.</p>
]]></content:encoded>
      <author>David Claeys</author>
      <guid>https://blog.claeyscloud.com/read/a/1vwj510d2q</guid>
      <pubDate>Tue, 23 Apr 2024 06:59:36 +0000</pubDate>
    </item>
  </channel>
</rss>