<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>David Claeys</title>
    <link>https://blog.claeyscloud.com/david/</link>
    <description></description>
    <pubDate>Sun, 17 May 2026 07:06:05 +0200</pubDate>
    <item>
      <title>How to host your code: methods and philosophy</title>
      <link>https://blog.claeyscloud.com/david/how-to-host-your-code-methods-and-philosophy</link>
      <description>&lt;![CDATA[When reading the title you might think that the answer is pretty obvious: you just put your code repositories on GitHub and that&#39;s it. However, what happened to PairDrop or spotizerr shows that the answer is not so simple: this article looks at self-hosting your code with gitea and mirroring it to platforms like GitHub.]]&gt;</description>
      <content:encoded><![CDATA[<p>When reading the title you might think that the answer is pretty obvious: you just put your code repositories on <a href="https://github.com/" rel="nofollow">GitHub</a> and that&#39;s it.</p>

<p>However, when you think about what happened to <a href="https://github.com/schlagmichdoch/PairDrop" rel="nofollow">PairDrop</a> or <a href="https://github.com/spotizerr-dev/spotizerr" rel="nofollow">spotizerr</a>, it becomes obvious that the answer is not so simple.
On one hand, you want to put your code in a place that is easy to reach and where your project will get exposure.
On the other, you don&#39;t want to rely on Big Tech to determine the future of your project.
One false positive from an AI tool or one malicious DMCA request and all your hard work can simply disappear.
Unless your project has a big audience, nobody at Big Tech will listen to you, and it can take weeks or months until everything returns to normal.</p>

<h2 id="mitigating-risks">Mitigating risks</h2>

<p>How do you mitigate this risk?
The answer is self-hosting. Before you conclude that such a thing is not feasible, hear me out!</p>

<p>Everyone who is into self-hosting knows that it comes with its own set of challenges.</p>

<p>For example, your own domain will never have the exposure of GitHub, so you might think that self-hosting your code will reduce the exposure and viability of your project.
Luckily, such a thing as a push mirror exists!
It works as follows: first you commit your code to your self-hosted repository, and the code then gets pushed automatically to another git repository. The mirror repositories can be hosted on any platform you want, such as GitHub.
This way you still get the exposure you want while your code stays under your control.</p>
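<p>As a sketch of how a push mirror can be configured programmatically: Gitea exposes a <code>push_mirrors</code> endpoint in its REST API (available since Gitea 1.17). The instance URL, repository name and tokens below are hypothetical placeholders; the command is printed rather than executed so you can review it first.</p>

<pre><code class="language-shell">#!/usr/bin/env sh
# Hypothetical values - replace them with your own instance, repository and tokens.
GITEA_URL="https://git.example.com"
REPO="david/my-project"
TOKEN="GITEA_API_TOKEN_HERE"

# Payload describing the mirror: where to push, credentials, and how often to sync.
PAYLOAD='{"remote_address":"https://github.com/david/my-project.git","remote_username":"david","remote_password":"GITHUB_TOKEN_HERE","interval":"8h0m0s","sync_on_commit":true}'

# Print the API call that would register the push mirror (drop the echo to run it).
echo curl -X POST "$GITEA_URL/api/v1/repos/$REPO/push_mirrors" \
  -H "Authorization: token $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
</code></pre>

<p>With <code>sync_on_commit</code> enabled, every push to the self-hosted repository is forwarded to the mirror right away.</p>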

<p>Another challenge associated with self-hosting is managing security.
If you don&#39;t want to take any risk, you don&#39;t have to expose your self-hosted instance to the public.
You just run the software locally and set up a push mirror to a publicly available provider, job done.
That said, with tools like <a href="https://github.com/fosrl/pangolin" rel="nofollow">pangolin</a>, exposing self-hosted services has become a breeze.</p>

<p>Maybe the last challenge is choosing the right software, but that&#39;s what we&#39;re here for.</p>

<h2 id="choosing-the-right-software">Choosing the right software</h2>

<p>Basically there are two options: <a href="https://github.com/go-gitea/gitea" rel="nofollow">gitea</a> and <a href="https://codeberg.org/forgejo/forgejo" rel="nofollow">forgejo</a>. Personally, I use gitea, so this article will only include examples for that software. However, due to the actions of the company behind it, I would recommend having a look at forgejo if you&#39;re starting out. Someday I will make the switch, but for now I&#39;m holding off on that transition.</p>

<h2 id="build-actions">Build actions</h2>

<p>Gitea provides a runner agent that is very similar to GitHub Actions. However, there are some differences that need to be worked around.</p>
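<p>For reference, Gitea&#39;s runner (act_runner) can itself be deployed as a container. A minimal sketch with Docker Compose follows; the instance URL and registration token are placeholders, and the token can be generated in the Gitea admin panel.</p>

<pre><code class="language-yaml">version: "3.8"
services:
  runner:
    image: gitea/act_runner:latest
    environment:
      # both values are placeholders for your own instance
      GITEA_INSTANCE_URL: "https://git.example.com"
      GITEA_RUNNER_REGISTRATION_TOKEN: "REGISTRATION_TOKEN_HERE"
    volumes:
      # the runner talks to the host Docker daemon to spawn job containers
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
</code></pre>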

<h3 id="install-docker">Install docker</h3>

<p>By default the gitea runner doesn&#39;t have Docker installed, so in order to do anything with Docker you need to install it yourself.</p>

<p>This can be done in the following way:</p>

<pre><code class="language-yaml">- name: Install Docker
  run: |
    echo &#34;Checking docker installation&#34;
    if command -v docker &amp;&gt; /dev/null; then
      echo &#34;Docker installation found&#34;
    else
      echo &#34;Docker installation not found. Docker will be installed&#34;
      curl -fsSL https://get.docker.com | sh
    fi
</code></pre>

<h3 id="update-docker-hub-description">Update docker hub description</h3>

<p>There is <a href="https://github.com/peter-evans/dockerhub-description" rel="nofollow">this</a> action that enables you to automatically update repository descriptions on Docker Hub. However, it requires some extra dependencies to be installed.</p>

<pre><code class="language-yaml">- name: Install npm dependencies
  run: |
    echo &#34;Installing fetch&#34;
    install_node=false
    if ! command -v node &amp;&gt; /dev/null; then
      echo &#34;No version of NodeJS detected&#34;
      install_node=true
    else
      node_version=$(node -v)
      node_version=${node_version:1}    # Remove the leading &#39;v&#39;
      node_version=${node_version%%.*}  # Keep only the major version number
      node_version=$(($node_version))   # Convert the version string to an integer
      if [ $node_version -lt 24 ]; then
        echo &#34;node version : $node_version&#34;
        echo &#34;removing outdated npm version&#34;
        install_node=true
        apt-get update
        apt-get remove -y nodejs npm
        apt-get purge -y nodejs
        rm -rf /usr/local/bin/npm
        rm -rf /usr/local/share/man/man1/node*
        rm -rf /usr/local/lib/dtrace/node.d
        rm -rf ~/.npm
        rm -rf ~/.node-gyp
        rm -rf /opt/local/bin/node
        rm -rf /opt/local/include/node
        rm -rf /opt/local/lib/node_modules
        rm -rf /usr/local/lib/node*
        rm -rf /usr/local/include/node*
        rm -rf /usr/local/bin/node*
      fi
    fi

    if $install_node; then
      NODE_MAJOR=24
      echo &#34;Installing node ${NODE_MAJOR}&#34;
      if test -f /etc/apt/keyrings/nodesource.gpg; then
        rm /etc/apt/keyrings/nodesource.gpg
      fi
      if test -f /etc/apt/sources.list.d/nodesource.list; then
        rm /etc/apt/sources.list.d/nodesource.list
      fi
      apt-get update
      apt-get install -y -q ca-certificates curl gnupg
      mkdir -p /etc/apt/keyrings
      curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
      echo &#34;deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_${NODE_MAJOR}.x nodistro main&#34; | tee /etc/apt/sources.list.d/nodesource.list
      apt-get update
      apt-get install -y -q nodejs
      npm install npm --global
    fi

    echo &#34;node version : $(node -v)&#34;

    package=&#39;node-fetch&#39;
    if [ $(npm list -g | grep -c $package) -eq 0 ]; then
      npm install -g $package
    fi
- name: Docker Hub Description
  uses: peter-evans/dockerhub-description@v5
  with:
    username: ${{ secrets.DOCKER_HUB_USERNAME }}
    password: ${{ secrets.DOCKER_HUB_PASSWORD }}
    repository: ${{ github.repository }}
</code></pre>
]]></content:encoded>
      <guid>https://blog.claeyscloud.com/david/how-to-host-your-code-methods-and-philosophy</guid>
      <pubDate>Thu, 19 Feb 2026 09:53:56 +0000</pubDate>
    </item>
    <item>
      <title>Deploy front-end applications with Docker</title>
      <link>https://blog.claeyscloud.com/david/deploy-front-end-applications-with-docker</link>
      <description>&lt;![CDATA[In a previous article we explained how you could deploy a .NET application with Docker. This article shows how to deploy front-end applications with Docker and Nginx, proxying API requests to the backend and substituting configuration placeholders at runtime.]]&gt;</description>
      <content:encoded><![CDATA[<p>In a previous article we explained how you can deploy a .NET application with Docker.
The content of this article applies whether you use a .NET backend or not.</p>

<h2 id="possible-pitfalls">Possible pitfalls</h2>

<p>A possible issue is that you want to make your backend available only to your front-end.
This is quite nice since it significantly decreases the possible attack surface.
But at first glance this is not possible, since the clients running the application wouldn&#39;t be able to perform any API call.</p>

<p>Or maybe, as per convention, you host all your backends at <em>api.example.com/apiName</em> while you want to give your front-end applications a more recognizable domain.
If you&#39;ve tried to just point your client requests to a different domain, you&#39;ve probably noticed the following problems:</p>

<ul>
<li>it&#39;s quite annoying to hardcode domains, since these can change over time</li>
<li>CORS won&#39;t let you do it</li>
</ul>

<h2 id="the-solution">The solution</h2>

<p>Both problems can be solved by building a Docker image.
The proposed example is based on <a href="https://nodejs.org/en/" rel="nofollow">Node</a>, but with some creativity you could adapt it to any front-end solution. To be clear, since we&#39;re using Node we can build any framework based on it (like React or Angular).</p>

<p>We will split the building process into two stages.</p>

<h3 id="first-build-stage-compiling">First build stage : Compiling</h3>

<p>The first stage is intended to build our Node application.
If you want to build an application that&#39;s not based on Node, this is where you would change the base image. If for some reason your build process requires multiple steps, this is the place where you would add them.</p>

<pre><code>FROM node:22-alpine AS builder
# all subsequent commands will be performed in the /app directory
WORKDIR /app/
# copy all the source code into the current directory
COPY . .
# update the system, then install all dependencies and run the build
RUN apk update &amp;&amp; apk upgrade --available &amp;&amp; npm install \
    &amp;&amp; npm run build
</code></pre>

<h3 id="second-build-stage-hosting">Second build stage : Hosting</h3>

<p>The following stage is responsible for running an HTTP server (Nginx) that hosts the application and also proxies requests to the backend.</p>

<p>The contents of this stage would look something like this:</p>

<pre><code>FROM nginx:mainline-alpine
# define environment variables for later substitution
ENV API_PROTOCOL=&#34;https&#34;
ENV API_HOST=&#34;localhost&#34;
ENV API_PORT=&#34;80&#34;
# change the working directory to the main nginx directory
WORKDIR /usr/share/nginx/html
# update and add system dependencies
# default nginx configurations are also wiped out
RUN apk update &amp;&amp; apk upgrade --available \
    &amp;&amp; apk add envsubst \
    &amp;&amp; rm -rf ./*
# copy the build output to the current folder
COPY --from=builder /app/build .
# add nginx configuration template file
COPY nginx.conf.template /etc/nginx/nginx.conf
# add script for variable substitution at runtime
COPY entrypoint.sh /docker-entrypoint.d/05-docker-entrypoint.sh
# set correct file permissions and remove files that are not needed
RUN chmod +x /docker-entrypoint.d/05-docker-entrypoint.sh \
    &amp;&amp; apk del envsubst \
    &amp;&amp; rm -rf /var/cache/apk/* \
    &amp;&amp; rm -rf /etc/nginx/conf.d
</code></pre>
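<p>Since the API variables are ordinary environment variables, the same image can be deployed against different backends without rebuilding. A hypothetical invocation (the image name, host and ports are placeholders):</p>

<pre><code class="language-shell">docker run -d \
  -e API_PROTOCOL=https \
  -e API_HOST=api.example.com \
  -e API_PORT=443 \
  -p 8080:80 \
  my-frontend:latest
</code></pre>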

<h4 id="nginx-configuration-overview">Nginx configuration overview</h4>

<p>Let&#39;s take a look at a file we will call <code>nginx.conf.template</code>.</p>

<pre><code>user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {

    map $http_upgrade $connection_upgrade {
        default upgrade;
        &#39;&#39;      close;
    }

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  &#39;$remote_addr - $remote_user [$time_local] &#34;$request&#34; &#39;
                      &#39;$status $body_bytes_sent &#34;$http_referer&#34; &#39;
                      &#39;&#34;$http_user_agent&#34; &#34;$http_x_forwarded_for&#34;&#39;;

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    server {
        listen 80;
        
        location / {
            root /usr/share/nginx/html;
        }

        location /hubs {
            allow all;
            # App server url
            proxy_pass $API_PROTOCOL://$API_HOST:$API_PORT;

            # Configuration for WebSockets
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_cache off;
            proxy_cache_bypass $http_upgrade;

            # WebSockets were implemented after http/1.0
            proxy_http_version 1.1;

            # Configuration for ServerSentEvents
            proxy_buffering off;

            # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds
            proxy_read_timeout 100s;

            proxy_ssl_server_name off;
            proxy_ssl_verify off;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
</code></pre>

<h5 id="discussing-locations">Discussing locations</h5>

<p>The first thing to note is that we define two locations: <code>/</code> (the web location) and <code>/hubs</code> (the proxy location).
The <code>/</code> location will host the build output of our application, while the <code>/hubs</code> location receives the requests that will be proxied. Note that for the web location to work, the build files must be present in the indicated root directory.</p>

<p>The reason we did not call the proxy location <code>/api</code> is that our front-end application uses SignalR to communicate with the backend. The configuration provided in this example enables features like WebSockets and long polling, but you can tweak it to meet your needs.</p>

<p>If you look deeper into the proxy configuration, you will probably notice <code>$API_PROTOCOL://$API_HOST:$API_PORT</code>. If you tried this configuration directly in nginx, it would fail, pointing out that your configuration is incorrect.</p>

<p>Don&#39;t worry though: these are simply placeholders (that&#39;s the reason we&#39;ve called this file a template) that will be replaced later on. Our front-end application can simply point API communication to <code>/hubs/whatever</code> and our proxy will take care of it.</p>

<h5 id="variable-substitutions">Variable substitutions</h5>

<p>Let me ask you a question: when do you replace the placeholders with their final values?
If you do it at build time, you&#39;ll be forced to rebuild each time a domain changes.
Or worse, if you host multiple instances, you&#39;ll need to build a separate image for each instance. I think it&#39;s obvious this method is not desirable at all.</p>

<p>Instead of performing variable substitutions at build time, they should be performed at run time.
Modifying the entry point of an existing Docker image can be quite tricky; luckily, we won&#39;t need to.
The nginx image provides a feature whereby any scripts you put into the <code>/docker-entrypoint.d</code> folder of the container are run at startup time.</p>

<p>We will substitute the following variables: <code>API_PROTOCOL</code>, <code>API_HOST</code> and <code>API_PORT</code>.
Let&#39;s have a look at our <code>entrypoint.sh</code> file:</p>

<pre><code>#!/usr/bin/env sh
set -eu

echo &#34;$(envsubst &#39;${API_PROTOCOL},${API_HOST},${API_PORT}&#39; &lt; /etc/nginx/nginx.conf)&#34; &gt; /etc/nginx/nginx.conf
exec &#34;$@&#34;
</code></pre>

<p>This script is quite simple: it uses the <code>envsubst</code> command to read the contents of <code>/etc/nginx/nginx.conf</code>, substitute the listed variables, and write the result back to the same file.
So during our Docker image build we need to place our template file at <code>/etc/nginx/nginx.conf</code>, and at runtime this script will substitute the contents of the file with the provided environment variables.</p>

<h2 id="considerations-and-thoughts">Considerations and thoughts</h2>

<p>In this example we used Nginx as our HTTP server, but you can use whichever server best fits your use case. If you do so, you will need to figure out how to set up a proxy on your own.
To be honest, most common HTTP servers provide plentiful documentation, so it really shouldn&#39;t be a problem.</p>

<p>You might have noticed the use of <code>envsubst</code>. The placeholder substitution at runtime has been one of the parts where I struggled most. For some reason it was quite tricky to read the values of the environment variables in a shell script and put them into the configuration file.
The most annoying part is that you have to list the variables you want to substitute. This is necessary because an nginx configuration uses <code>$</code> variables of its own (like <code>$host</code> or <code>$uri</code>) that an unrestricted <code>envsubst</code> would blank out, but if you have a large amount of placeholders to replace it can become quite cumbersome.</p>
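<p>One way around listing every placeholder by hand is a naming convention. As a sketch (the <code>API_</code> prefix here is an assumption of this example; pick whatever prefix your placeholders share), the list can be built dynamically :</p>

<pre><code># sketch: build the envsubst list from every variable starting with API_
vars=$(env | grep &#39;^API_&#39; | cut -d= -f1 | sed &#39;s/.*/${&amp;}/&#39; | tr &#39;\n&#39; &#39;,&#39;)
echo &#34;$(envsubst &#34;$vars&#34; &lt; /etc/nginx/nginx.conf)&#34; &gt; /etc/nginx/nginx.conf
</code></pre>

<p>Since only the prefixed variables are passed to <code>envsubst</code>, nginx&#39;s own <code>$</code> variables are left untouched.</p>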
]]></content:encoded>
      <guid>https://blog.claeyscloud.com/david/deploy-front-end-applications-with-docker</guid>
      <pubDate>Fri, 11 Oct 2024 12:18:08 +0000</pubDate>
    </item>
    <item>
      <title>Epg, the easy way</title>
      <link>https://blog.claeyscloud.com/david/epg-the-easy-way</link>
      <description>&lt;![CDATA[The problem&#xA;In a previous post I went through the process of setting up your own epg provider with  iptiv-org/epg. That process is still valid but it has some important drawbacks.&#xA;First of all the setup process is quite lengthy, which may scare potential users away.&#xA;Secondly the installation process is performed directly on the host.&#xA;Which might be a dealbreaker if you like hosting applications through Docker.&#xA;&#xA;The solution&#xA;&#xA;Introduction&#xA;This is where one of my personal projects comes into place epg-info-docker.&#xA;The purpose of this repository is to take the code in iptiv-org/epg and to build a Docker image out of it.&#xA;&#xA;If you want to take a look at it, the code is available through my git server or github.&#xA;You obviously can take this code and build it yourself, but this is not the most convenient.&#xA;&#xA;For your convenience images are made available at different registries :&#xA;git.claeyscloud.com/david/epg-info&#xA;ghcr.io/davidclaeysquinones/epg-info&#xA;docker.io/davidquinonescl/epg-info&#xA;&#xA;Each of these images is the same, so you can pick the image from where you prefer.&#xA;&#xA;Setup&#xA;You can use this image in the following way :&#xA;&#xA;version: &#39;3.3&#39;&#xA;services:&#xA;  epg:&#xA;    image: git.claeyscloud.com/david/epg-info:latest&#xA;    #image: ghcr.io/davidclaeysquinones/epg-info:latest&#xA;    #image: davidquinonescl/epg-info:latest&#xA;    volumes:&#xA;      # add a mapping in order to add the channels file&#xA;      /docker/epg:/config&#xA;    ports:&#xA;      6080:3000&#xA;    environment:&#xA;      # specify the time zone for the server&#xA;      TZ=Etc/UTC&#xA;      # uncomment the underlying line if you want to enable custom fixes&#xA;      #- ENABLEFIXES=true&#xA;    restart: unless-stopped&#xA;&#xA;In order to setup the program you need a channels.xml file.&#xA;This files describes which providers and channels you want the program to generate 
epg information.&#xA;An example of the contents for this file looks like this :&#xA;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&#xA;channels&#xA; channel site=&#34;movistarplus.es&#34; lang=&#34;es&#34; xmltvid=&#34;24Horas.es&#34; site_id=&#34;24H&#34;24 Horas/channel&#xA;/channels&#xA;In the repo you can look for all available providers. Each provider has a list with it&#39;s available channels. &#xA;&#xA;And that&#39;s it ! You&#39;ve just setup your own epg provider.]]&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-problem">The problem</h2>

<p>In a previous post I went through the process of setting up your own epg provider with <a href="https://github.com/iptv-org/epg" rel="nofollow">iptv-org/epg</a>. That process is still valid but it has some important drawbacks.
First of all the setup process is quite lengthy, which may scare potential users away.
Secondly the installation process is performed directly on the host, which might be a dealbreaker if you like hosting applications through Docker.</p>

<h2 id="the-solution">The solution</h2>

<h3 id="introduction">Introduction</h3>

<p>This is where one of my personal projects comes into play: <a href="https://git.claeyscloud.com/david/epg-info-docker" rel="nofollow">epg-info-docker</a>.
The purpose of this repository is to take the code in <a href="https://github.com/iptv-org/epg" rel="nofollow">iptv-org/epg</a> and build a Docker image out of it.</p>

<p>If you want to take a look at it, the code is available through my <a href="https://git.claeyscloud.com/david/epg-info-docker" rel="nofollow">git server</a> or <a href="https://github.com/davidclaeysquinones/epg-info-docker" rel="nofollow">GitHub</a>.
You can obviously take this code and build it yourself, but that is not the most convenient option.</p>

<p>For your convenience images are made available at different registries :
– git.claeyscloud.com/david/epg-info
– ghcr.io/davidclaeysquinones/epg-info
– docker.io/davidquinonescl/epg-info</p>

<p>Each of these images is the same, so you can pick the image from where you prefer.</p>

<h3 id="setup">Setup</h3>

<p>You can use this image in the following way :</p>

<pre><code class="language-yaml">version: &#39;3.3&#39;
services:
  epg:
    image: git.claeyscloud.com/david/epg-info:latest
    #image: ghcr.io/davidclaeysquinones/epg-info:latest
    #image: davidquinonescl/epg-info:latest
    volumes:
      # add a mapping in order to add the channels file
      - /docker/epg:/config
    ports:
      - 6080:3000
    environment:
      # specify the time zone for the server
      - TZ=Etc/UTC
      # uncomment the underlying line if you want to enable custom fixes
      #- ENABLE_FIXES=true
    restart: unless-stopped
</code></pre>

<p>In order to set up the program you need a <code>channels.xml</code> file.
This file describes the providers and channels you want the program to generate epg information for.
An example of the contents for this file looks like this :</p>

<pre><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt;
&lt;channels&gt;
 &lt;channel site=&#34;movistarplus.es&#34; lang=&#34;es&#34; xmltv_id=&#34;24Horas.es&#34; site_id=&#34;24H&#34;&gt;24 Horas&lt;/channel&gt;
&lt;/channels&gt;
</code></pre>

<p>In the <a href="https://github.com/iptv-org/epg/tree/master/sites" rel="nofollow">repo</a> you can look up all available providers. Each provider has a list of its available channels.</p>

<p>And that&#39;s it! You&#39;ve just set up your own epg provider.</p>
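<p>To verify everything works you can fetch the generated guide. This assumes the port mapping from the compose file above (6080) and that the image serves the guide at <code>/guide.xml</code>, as in the default setup :</p>

<pre><code>curl -s http://localhost:6080/guide.xml | head
</code></pre>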
]]></content:encoded>
      <guid>https://blog.claeyscloud.com/david/epg-the-easy-way</guid>
      <pubDate>Wed, 09 Oct 2024 11:37:18 +0000</pubDate>
    </item>
    <item>
      <title>Watching Live TV on all your devices</title>
      <link>https://blog.claeyscloud.com/david/watching-live-tv-on-all-your-devices</link>
      <description>&lt;![CDATA[In recent years streaming services have gained a lot of popularity. However for a multiple of reasons sometimes we might want to watch Live TV.&#xA;&#xA;Depending on the place you live your ISP or cable provider might (or not) provide some kind of app to watch TV on your mobile devices. However some apps are crappy, other are limited in the channels you can watch or other might have a very limited feature set. For these reasons you might want to watch Live Tv on your own terms.&#xA;&#xA;In this article we will look at how you would go about setting up Live Tv on your own infrastructure.&#xA;In the end you&#39;ll be able to stream Tv through web, mobile devices in a very convenient way.&#xA;&#xA;In order to reach our end goal we will perform the following steps:&#xA;Installing and setting up iptiv-org/epg to acquire EPG data&#xA;Installing and setting up Threadfin&#xA;Installing and setting up Jellyfin&#xA;&#xA;Disclaimer :&#xA;This article&#39;s assumption is that you have some knowledge about the Linux network stack and Docker.&#xA;&#xA;Setting up EPG&#xA;&#xA;Getting schedules for the channels you want is quite essential in order to have a good experience.&#xA;However depending on the country where you live getting EPG (Electronic Programme Guide) can be very easy or almost impossible.&#xA;&#xA;By example if you live in Spain dobleM provides EPG information for almost any channel you can imagine.&#xA;&#xA;However if you live in Belgium getting decent EPG information is very challenging. I&#39;ve looked through forums and not found any source available.&#xA;&#xA;Setting up your own EPG provider&#xA;&#xA;So what do you do there are no EPG sources available for your country or for a particular channel ?&#xA;&#xA;This is where iptiv-org/epg comes to the rescue.&#xA;&#xA;Let&#39;s get through the necessary steps in order to set it up.&#xA;&#xA;First of all you&#39;ll want a system with a static IP address. 
We will be using Ubuntu 22.04 in order to perform the setup process. As always feel free to use any Linux flavor you like but be aware that you might get through some roadblocks (or not) if you do so.&#xA;&#xA;Updating and installing dependencies&#xA;First of all we want to make sure all our system dependencies are up to date and and we will install our necessary dependencies.&#xA;&#xA;sudo apt-get update \&#xA;  &amp;&amp; sudo apt-get upgrade -y -q \&#xA;  &amp;&amp; sudo apt-get install curl -y \&#xA;  &amp;&amp; sudo apt-get install git -y&#xA;Installing Nodejs&#xA;In order to install the latest supported NodeJs version we will be using NodeSource. There are other ways you could do the same but this is the most convenient way to do it.&#xA;&#xA;Note :&#xA;At the moment NodeJS 22 is not compatible with the software we&#39;re installing.&#xA;&#xA;curl -fsSL https://deb.nodesource.com/setup21.x -o nodesourcesetup.sh&#xA;sudo -E bash nodesourcesetup.sh&#xA;sudo apt-get install -y nodejs&#xA;Once you&#39;ve performed these steps the command `node -v` should return v21.x.x.&#xA;&#xA;Installing iptiv-org/epg&#xA;&#xA;Now we can proceed to the actual installation of our EPG provider.&#xA;First we will make a directory where we will perform the installation&#xA;&#xA;mkdir /bin/epg -p&#xA;Now we want to go into the directory we just made by typing `cd /bin/epg`&#xA;&#xA;At this point we are ready to clone the git repository into our server.&#xA;&#xA;git -C /bin clone --depth 1 -b master https://github.com/iptv-org/epg.git&#xA;&#xA;Once the source code is on our machine we can install the necessary dependencies.&#xA;&#xA;npm install&#xA;&#xA;In order to serve our files over the network we also want to install an npm module called pm2 &#xA;&#xA;npm install pm2 -g&#xA;&#xA;Now we will create two scripts that will enable us to start our EPG provider at startup.&#xA;start.sh :&#xA;!/bin/bash&#xA;&#xA;pm2 --name epg start npm -- run serve&#xA;npm run grab -- 
--channels=channels.xml --cron=&#34;0 0,12   &#34; --maxConnections=10 --days=14 --gzip&#xA;stop.sh :&#xA;!/bin/bash&#xA;&#xA;pm2 delete 0&#xA;To use these scripts we need to create our service file typing `nano /etc/systemd/system/epg.service`&#xA;Put the following content in the file :&#xA;[Unit]&#xA;Description=Epg&#xA;After=network.target&#xA;&#xA;[Service]&#xA;ExecStart=/bin/epg/start.sh&#xA;ExecStop=/bin/epg/stop.sh&#xA;WorkingDirectory=/bin/epg&#xA;&#xA;[Install]&#xA;WantedBy=default.target &#xA;As a last step we need to tell the system is should reload it&#39;s services by typing  `systemctl daemon-reload`.&#xA;&#xA;We&#39;ve just completed the installation of our own EPG provider but in order to get actual EPG information we need to tell it which channels we want information for.&#xA;&#xA;We do this by creating a file called channels.xml by typing `nano channels.xml`. &#xA;An example of the contents for this file looks like this :&#xA;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&#xA;channels&#xA; channel site=&#34;movistarplus.es&#34; lang=&#34;es&#34; xmltvid=&#34;24Horas.es&#34; siteid=&#34;24H&#34;24 Horas/channel&#xA;/channels&#xA;&#xA;The contents of this file depend on which providers and channels you want to use.&#xA;In the repo you can look for all available providers. Each provider has a list with it&#39;s available channels. &#xA;&#xA;Be aware that not all providers are equal. 
For example telenet.tv is rock solid but lacks program thumbnails for most channels.&#xA;And in contrast pickx.be keeps breaking because of intentional API changes but most programs have thumbnails.&#xA;&#xA;Finding the right providers for the right channels is a process of trial and error and also depends on what you&#39;re willing to deal with.&#xA;&#xA;These are some providers you could use :&#xA;&#xA;telenet.tv (Belgium)&#xA;pickx.be (Belgium)&#xA;movistarplus.es (Spain)&#xA;programacion-tv.elpais.com (Spain)&#xA;tvgids.nl (Netherlands)&#xA;tv24.co.uk (UK)&#xA;tvtv.us (US)&#xA;chaines-tv.orange.fr (France)&#xA;&#xA;This list is by any means extensive and if you&#39;re looking for other countries you should check which providers are available&#xA;&#xA;Setting up Live Tv streams&#xA;&#xA;The next piece of the puzzle is getting the streams for the channels you want. The options you have depend a lot on where you live and on your goals.&#xA;&#xA;For example in the US you could use a HD HomeRun.&#xA;In some countries (like Spain) you could install a DVB-T2 decoder into your system and setup tvheadend to stream over the network.&#xA;However if you live in countries where open standards were purposely not adopted (like Belgium) you&#39;re only option is to resort to an IPTV provider. &#xA;&#xA;There are some IPTV list available like iptv-org/iptv or TDTChannels that just list publicly available streams and that are completely legal. &#xA;&#xA;If you still choose to use an IPTV provider that infringes copyright please be aware that depending on legislation you could be sanctioned for just being a customer. Be also aware that getting scammed while sourcing an IPTV provider is a real possibility. I don&#39;t want to encourage neither recommend you to source an IPTV provider that infringes copyright. If you make that decision you do so under your own responsibility. Please be careful and try to minimize risks as much as possible.  
&#xA;&#xA;Some pieces of software (like Jellyfin) offer a direct integration to the HD HomeRun. If you have such a device you can directly integrate it. However I would recommend to use Threadfin as an intermediate layer in order to manage EPG and channel numbering. If you&#39;re using an m3u stream from tvheadend or an IPTV provider you can&#39;t get around using this piece of software.&#xA;&#xA;Installing Threadfin&#xA;This is how a Docker compose file would look like for Threadfin without any additional precaution :&#xA;version: &#34;3.5&#34;&#xA;services:&#xA;  threadfin:&#xA;    image: fyb3roptik/threadfin&#xA;    environment:&#xA;      PUID=${PUID}&#xA;      PGID=${PGID}&#xA;      TZ=${TIMEZONE}&#xA;    volumes:&#xA;      ${THREADFINCONFIGDIR}:/home/threadfin/conf&#xA;    ports:&#xA;      34400:34400&#xA;    restart: unless-stopped&#xA;If you would like to take some precaution gluetun is a very good option. This is basically a Docker image that allows you to configure almost any VPN provider.&#xA;&#xA;In the wiki you can find information about how to setup your particular VPN provider.&#xA;&#xA;So if you would like to take precautions your compose file would look like this :&#xA;version: &#34;3.5&#34;&#xA;services:&#xA;  vpn:&#xA;    image: qmcgaw/gluetun&#xA;    capadd:&#xA;      NETADMIN&#xA;    devices:&#xA;      /dev/net/tun:/dev/net/tun&#xA;    sysctls:&#xA;      net.ipv6.conf.all.disableipv6=0&#xA;    environment:&#xA;      TZ=${TIMEZONE}&#xA;      VPNSERVICEPROVIDER=${YOURPROVIDER}&#xA;      ....&#xA;      # some provider specific variavles&#xA;      ....&#xA;      FIREWALLOUTBOUNDSUBNETS=${YOURSUBNET}/24&#xA;    ports:&#xA;      34400:34400&#xA;    volumes:&#xA;      ${VPNCONFIGDIR}:/config&#xA;    restart: unless-stopped&#xA;  threadfin:&#xA;    image: fyb3roptik/threadfin&#xA;    environment:&#xA;      PUID=${PUID}&#xA;      PGID=${PGID}&#xA;      TZ=${TIMEZONE}&#xA;    dependson:&#xA;      vpn&#xA;    networkmode: service:vpn&#xA;    volumes:&#xA;  
       ${THREADFINCONFIGDIR}:/home/threadfin/conf&#xA;    restart: unless-stopped&#xA;Setting up Threadfin&#xA;Once Threadfin is installed we need to set it up.&#xA;&#xA;Basic settings&#xA;&#xA;Threadfin settings page&#xA;&#xA;Before we continue we want to open the settings page.&#xA;We want to change the following things : &#xA;`EPG Source` to XEPG&#xA;`Replace missing program images` should be checked&#xA;`Stream Buffer:` to VLC&#xA;&#xA;If you notice that your streams are stuttering you can experiment with increasing `Buffer Size`.&#xA;&#xA;The `Number of Tuners` setting sets a system wide maximum number of streams. Choose a realistic number based on your needs and system performance. This setting can also be overridden at playlist level to a lower value. &#xA;&#xA;If you&#39;re going to use TVHeadend the `Ignore Filters` setting will make things easier later on.&#xA;&#xA;Playlist settings&#xA;&#xA;Threadfin playlist settings&#xA;&#xA;The first time you open this page you will be greeted by an empty page.&#xA;&#xA;When you press on the new button you will be greeted by the following dialog.&#xA;New playlist dialog&#xA;&#xA;Choose `M3U if you&#39;re using an stream (IPTV or TvHeadend) or choose HdHomeRun` if you&#39;re using that particular device.&#xA;&#xA;Depending on your choice you will see once of these dialogs.&#xA;&#xA;New playlist M3U playlist&#xA;&#xA;New playlist HDHomeRun playlist&#xA;&#xA;The `M3U file or HDHomeRun IP` fields are the most crucial part. &#xA;Fill in the address to the M3U file or your HDHomeRun device on your local network.&#xA;&#xA;You also want to set the  `Tuner/Streams ` amount to a reasonable amount. If you&#39;re using TV Headend, a public IPTV list or HdHomeRun this will be hardware constrained (number or tuners and general system performance. 
If you&#39;re using a IPTV provider this will be whatever their general policy permits.&#xA;&#xA;XMLTV settings&#xA;&#xA;Threadfin XMLTV settings&#xA;&#xA;This page will also be empty when you open it up for the first time. In my opinion this is one of the strengths of Threadfin. Regardless of whether you have any EPG information you can mix and match different sources to the combination you like.  &#xA;&#xA;When you press on the new button you will be greeted by the following dialog.&#xA;New XMLTV dialog&#xA;&#xA;You can give it whatever name and description you like. The `XMLTV File field is the part that really matters. If you want to use a publicly available source you just fill in the corresponding URL according to their documentation. If you followed along and set up your own EPG provider the address will be  EPG IP ADDRESS:3000/guide.xml`.&#xA;&#xA;Filter settings&#xA;&#xA;If you plan to use TvHeadend and enabled the `Ignore Filters` setting you can skip this section. &#xA;&#xA;Otherwise open this page and since we&#39;re getting started it will be empty.&#xA;The general idea of this page is that in most cases IPTV lists contain hundreds if not thousands of streams. In order to not affect system performance and keep things manageable we need to choose the categories we&#39;ll want to map later on.  Choosing one particular category doesn&#39;t mean we are forced to map all channels in it. &#xA;&#xA;New filter dialog&#xA;&#xA;Threadfin offers two different filter types M3U and custom filters.&#xA;The M3U type is pretty basic and limits itself to the categories contained in group titles contained in the M3U file. The custom filter is powerful because it enables to make filters on specific patterns.&#xA; &#xA;Now I need to be honest, at some point I&#39;ve tried to use custom filters but I didn&#39;t figure it out. 
I think that depending on playlist size it might take quite some time to process since it needs to check for a pattern for each stream in the playlist. However that&#39;s just an assumption since I&#39;ve not really used this feature. Feel free to try it out but I won&#39;t go into any more dept since I&#39;m not able to.&#xA;&#xA;New M3U filter dialog&#xA;The field we want to look for is `group title`. This will make the chosen group title available in the mapping tab. You can have a look at the include/exclude settings if you want so but it&#39;s not strictly necessary.&#xA;&#xA;Mapping settings&#xA;&#xA;When opening the mappings page you won&#39;t be greeted by an empty list.&#xA;Most probably you&#39;ll be greeted with a list with unmapped/inactive channels.&#xA;You can make the distinction because of the red line on the left end of the table.&#xA;List of unmapped channels&#xA;&#xA;Before activating a channel you should first assign it the number of your liking. You do this by typing the desired value in the text field.&#xA;&#xA;In order to continue click on the desired channel in order to open the map channel popup.&#xA;&#xA;Map channel popup&#xA;&#xA;The most important settings are :&#xA;`Active` to activate the channel&#xA;`Channel name` to edit the channel name&#xA;`Logo Url` to assign the channel a logo&#xA;`Group title` to group the channel to your liking&#xA;`XMLTV File` in order to choose the XMLTV file you want to use&#xA;`XMLTV Channel` to choose the right channel in the XMLTV file&#xA;&#xA;Once you&#39;ve chosen your desired settings click on the done button.&#xA;Now there also should be a list with active/mapped channels.&#xA;You can make the distinction because of the green line on the left end of the table.&#xA;&#xA;List of mapped channels&#xA;&#xA;Mapping all desired channels can be a repetitive task but as you&#39;ll see in the end the effort is worth it.&#xA;&#xA;Note :&#xA;In the next steps we&#39;ll be talking about setting up and installing 
Jellyfin. However you can use Threadfin with any software that supports the HD HomeRun since it functions as an emulation layer. Other software of the likes of Plex Media Server, Kodi and Emby exist that enables you to do the same. However Jellyfin is the only open source solution that enables this feature without any paid plan and on the server side (Kodi is a client application).&#xA;&#xA;Installing Jellyfin&#xA;&#xA;This is how a compose file for a Jellyfin installation looks like :&#xA;version: &#34;3.5&#34;&#xA;services:&#xA;  jellyfin:&#xA;    image: jellyfin/jellyfin&#xA;    user: ${PUID}:${PGID}&#xA;    ports:&#xA;      8096:8096&#xA;    volumes:&#xA;      ${CONFIGFOLDER}:/config&#xA;      ${CACHEFOLDER}:/cache&#xA;      ${MOVIESFOLDER}:/Movies&#xA;      ${TVSHOWSFOLDER}:/Tv Shows&#xA;      ${RECORDINGSFOLDER}:/recordings:/recordings&#xA;    restart: unless-stopped&#xA;    dependson:&#xA;    environment:&#xA;      #use this variable if you want to access your Jellyfin server through a domain name&#xA;      JELLYFINPublishedServerUrl=http://jellyfin.yourdomain.com&#xA;&#xA;Once you deploy this compose file Jellyfin will be available through port 8096 or through the domain you&#39;ve set up. Complete the setup wizard and setup your libraries.  &#xA;&#xA;After this click on your user icon and open the administration panel&#xA;&#xA;Jellyfin admin panel&#xA;&#xA;We want to go to the Live Tv section of the admin panel.&#xA;Click on the + button under Tuner Device.&#xA;&#xA;Add tuner dialogl&#xA;&#xA;Select HD Homerun as the Tuner Type and check the Allow hardware transcoding checkbox.&#xA;Under Tuner IP Address you should type `http://THREADFIN IP ADDRESS/`. Once that&#39;s done click on the save button.&#xA;&#xA;Last but not least click on the + button under TV Guide Data Providers and choose XMLTV.&#xA;&#xA;Add XMLTV dialogl&#xA;&#xA;The only thing you need to do is type `http://THREADFIN IP ADDRESS:34400/xmltv/threadfin.xml` under File or URL*. 
Click on the save button and you&#39;re all set.&#xA;Jellyfin will need some time in order to gather all necessary information but after a while live tv will be available.&#xA;&#xA;Jellyfin is  available through the web interface and different apps. The UI is pretty straightforward so we won&#39;t go into detail on this topic. You&#39;ve just setup up live tv on your server on your terms.&#xA;&#xA;]]&gt;</description>
<content:encoded><![CDATA[<p>In recent years streaming services have gained a lot of popularity. However, for a multitude of reasons, we sometimes might want to watch Live TV.</p>

<p>Depending on where you live, your ISP or cable provider might (or might not) provide some kind of app to watch TV on your mobile devices. However, some apps are crappy, others are limited in the channels you can watch and others might have a very limited feature set. For these reasons you might want to watch Live TV on your own terms.</p>

<p>In this article we will look at how you would go about setting up Live TV on your own infrastructure.
In the end you&#39;ll be able to stream TV through the web and on mobile devices in a very convenient way.</p>

<p>In order to reach our end goal we will perform the following steps:
– Installing and setting up <a href="https://github.com/iptv-org/epg" rel="nofollow">iptv-org/epg</a> to acquire EPG data
– Installing and setting up <a href="https://github.com/Threadfin/Threadfin" rel="nofollow">Threadfin</a>
– Installing and setting up <a href="https://github.com/jellyfin/jellyfin" rel="nofollow">Jellyfin</a></p>

<p><em>Disclaimer :</em>
This article assumes that you have some knowledge of the Linux network stack and Docker.</p>

<h2 id="setting-up-epg">Setting up EPG</h2>

<p>Getting schedules for the channels you want is quite essential in order to have a good experience.
However, depending on the country where you live, getting EPG (Electronic Programme Guide) information can be very easy or almost impossible.</p>

<p>For example, if you live in Spain <a href="https://github.com/davidmuma/EPG_dobleM" rel="nofollow">dobleM</a> provides EPG information for almost any channel you can imagine.</p>

<p>However if you live in Belgium getting decent EPG information is very challenging. I&#39;ve looked through forums and haven&#39;t found any available source.</p>

<h3 id="setting-up-your-own-epg-provider">Setting up your own EPG provider</h3>

<p>So what do you do when there are no EPG sources available for your country or for a particular channel?</p>

<p>This is where <a href="https://github.com/iptv-org/epg" rel="nofollow">iptv-org/epg</a> comes to the rescue.</p>

<p>Let&#39;s get through the necessary steps in order to set it up.</p>

<p>First of all you&#39;ll want a system with a static IP address. We will be using Ubuntu 22.04 to perform the setup process. As always, feel free to use any Linux flavor you like, but be aware that you might run into some roadblocks if you do so.</p>

<h4 id="updating-and-installing-dependencies">Updating and installing dependencies</h4>

<p>First of all we want to make sure all our system packages are up to date, and then we will install the necessary dependencies.</p>

<pre><code>sudo apt-get update \
  &amp;&amp; sudo apt-get upgrade -y -q \
  &amp;&amp; sudo apt-get install curl -y \
  &amp;&amp; sudo apt-get install git -y
</code></pre>

<h4 id="installing-nodejs">Installing Node.js</h4>

<p>In order to install the latest supported Node.js version we will be using <a href="https://github.com/nodesource/distributions" rel="nofollow">NodeSource</a>. There are other ways you could do the same, but this is the most convenient.</p>

<p><em>Note :</em>
At the moment Node.js 22 is not compatible with the software we&#39;re installing.</p>

<pre><code>curl -fsSL https://deb.nodesource.com/setup_21.x -o nodesource_setup.sh
sudo -E bash nodesource_setup.sh
sudo apt-get install -y nodejs
</code></pre>

<p>Once you&#39;ve performed these steps the command <code>node -v</code> should return v21.x.x.</p>

<h4 id="installing-iptiv-org-epg">Installing iptv-org/epg</h4>

<p>Now we can proceed to the actual installation of our EPG provider.
First we will make a directory where we will perform the installation :</p>

<pre><code>mkdir /bin/epg -p
</code></pre>

<p>Now we want to go into the directory we just made by typing <code>cd /bin/epg</code>.</p>

<p>At this point we are ready to clone the git repository into our server.</p>

<pre><code>git -C /bin clone --depth 1 -b master https://github.com/iptv-org/epg.git
</code></pre>

<p>Once the source code is on our machine we can install the necessary dependencies.</p>

<pre><code>npm install
</code></pre>

<p>In order to serve our files over the network we also want to install an npm module called <a href="https://www.npmjs.com/package/pm2" rel="nofollow">pm2</a></p>

<pre><code>npm install pm2 -g
</code></pre>

<p>Now we will create two scripts that will enable us to start our EPG provider at startup.
<em>start.sh :</em></p>

<pre><code>#!/bin/bash

# keep the guide file server running under pm2
pm2 --name epg start npm -- run serve
# grab EPG data now and re-grab at 00:00 and 12:00 every day
npm run grab -- --channels=channels.xml --cron=&#34;0 0,12 * * *&#34; --maxConnections=10 --days=14 --gzip
</code></pre>

<p><em>stop.sh :</em></p>

<pre><code>#!/bin/bash

pm2 delete 0
</code></pre>

<p>To use these scripts we need to create our service file by typing <code>nano /etc/systemd/system/epg.service</code>.
Put the following content in the file :</p>

<pre><code>[Unit]
Description=Epg
After=network.target

[Service]
ExecStart=/bin/epg/start.sh
ExecStop=/bin/epg/stop.sh
WorkingDirectory=/bin/epg

[Install]
WantedBy=default.target 
</code></pre>

<p>As a last step we need to tell the system it should reload its services by typing <code>systemctl daemon-reload</code>.</p>
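<p>One detail the steps above leave implicit: the scripts need to be executable, and the service has to be enabled before it will run at boot. Assuming the paths used above :</p>

<pre><code>chmod +x /bin/epg/start.sh /bin/epg/stop.sh
systemctl daemon-reload
# enable the service at boot and start it right away
systemctl enable --now epg.service
</code></pre>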

<p>We&#39;ve just completed the installation of our own EPG provider but in order to get actual EPG information we need to tell it which channels we want information for.</p>

<p>We do this by creating a file called channels.xml by typing <code>nano channels.xml</code>.
An example of the contents for this file looks like this :</p>

<pre><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt;
&lt;channels&gt;
 &lt;channel site=&#34;movistarplus.es&#34; lang=&#34;es&#34; xmltv_id=&#34;24Horas.es&#34; site_id=&#34;24H&#34;&gt;24 Horas&lt;/channel&gt;
&lt;/channels&gt;
</code></pre>

<p>The contents of this file depend on which providers and channels you want to use.
In the <a href="https://github.com/iptv-org/epg/tree/master/sites" rel="nofollow">repo</a> you can look up all available providers. Each provider has a list of its available channels.</p>

<p>Be aware that not all providers are equal. For example <a href="https://github.com/iptv-org/epg/tree/master/sites/telenet.tv" rel="nofollow">telenet.tv</a> is rock solid but lacks program thumbnails for most channels.
And in contrast <a href="https://github.com/iptv-org/epg/tree/master/sites/pickx.be" rel="nofollow">pickx.be</a> keeps breaking because of intentional API changes but most programs have thumbnails.</p>

<p>Finding the right providers for the right channels is a process of trial and error and also depends on what you&#39;re willing to deal with.</p>

<p>These are some providers you could use :</p>
<ul><li><a href="https://github.com/iptv-org/epg/tree/master/sites/telenet.tv" rel="nofollow">telenet.tv</a> (Belgium)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/pickx.be" rel="nofollow">pickx.be</a> (Belgium)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/movistarplus.es" rel="nofollow">movistarplus.es</a> (Spain)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/programacion-tv.elpais.com" rel="nofollow">programacion-tv.elpais.com</a> (Spain)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/tvgids.nl" rel="nofollow">tvgids.nl</a> (Netherlands)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/tv24.co.uk" rel="nofollow">tv24.co.uk</a> (UK)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/tvtv.us" rel="nofollow">tvtv.us</a> (US)</li>
<li><a href="https://github.com/iptv-org/epg/tree/master/sites/chaines-tv.orange.fr" rel="nofollow">chaines-tv.orange.fr</a> (France)</li></ul>

<p>This list is by no means exhaustive; if you&#39;re looking for other countries you should check which providers are available.</p>

<h2 id="setting-up-live-tv-streams">Setting up Live TV streams</h2>

<p>The next piece of the puzzle is getting the streams for the channels you want. The options you have depend a lot on where you live and on your goals.</p>

<p>For example in the US you could use an <a href="https://www.silicondust.com/hdhomerun/" rel="nofollow">HD HomeRun</a>.
In some countries (like Spain) you could install a <a href="https://www.amazon.es/gp/product/B09KQM9NQ8/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;psc=1" rel="nofollow">DVB-T2 decoder</a> into your system and set up <a href="https://github.com/tvheadend/tvheadend" rel="nofollow">tvheadend</a> to stream over the network.
However if you live in a country where open standards were purposely not adopted (like Belgium) your only option is to resort to an IPTV provider.</p>

<p>There are some IPTV lists available, like <a href="https://github.com/iptv-org/iptv" rel="nofollow">iptv-org/iptv</a> or <a href="https://github.com/LaQuay/TDTChannels" rel="nofollow">TDTChannels</a>, that just list publicly available streams and are completely legal.</p>

<p>If you still choose to use an IPTV provider that infringes copyright, please be aware that depending on legislation you could be sanctioned just for being a customer. Also be aware that getting scammed while sourcing an IPTV provider is a real possibility. I don&#39;t want to encourage nor recommend sourcing an IPTV provider that infringes copyright. If you make that decision you do so at your own risk. Please be careful and try to minimize risks as much as possible.</p>

<p>Some pieces of software (like Jellyfin) offer a direct integration with the HD HomeRun. If you have such a device you can integrate it directly. However I would recommend using <a href="https://github.com/Threadfin/Threadfin" rel="nofollow">Threadfin</a> as an intermediate layer in order to manage EPG and channel numbering. If you&#39;re using an m3u stream from tvheadend or an IPTV provider you can&#39;t get around using this piece of software.</p>

<h4 id="installing-threadfin">Installing Threadfin</h4>

<p>This is what a Docker Compose file for Threadfin would look like without any additional precautions:</p>

<pre><code>version: &#34;3.5&#34;
services:
  threadfin:
    image: fyb3roptik/threadfin
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TIME_ZONE}
    volumes:
      - ${THREADFIN_CONFIG_DIR}:/home/threadfin/conf
    ports:
      - 34400:34400
    restart: unless-stopped
</code></pre>

<p>If you would like to take some precautions, <a href="https://github.com/qdm12/gluetun" rel="nofollow">gluetun</a> is a very good option. This is basically a Docker image that lets you route a container&#39;s traffic through almost any VPN provider.</p>

<p>In the <a href="https://github.com/qdm12/gluetun-wiki/tree/main/setup/providers" rel="nofollow">wiki</a> you can find information about how to setup your particular VPN provider.</p>

<p>So if you would like to take precautions your compose file would look like this :</p>

<pre><code>version: &#34;3.5&#34;
services:
  vpn:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    environment:
      - TZ=${TIME_ZONE}
      - VPN_SERVICE_PROVIDER=${YOUR_PROVIDER}
      ....
      # some provider-specific variables
      ....
      - FIREWALL_OUTBOUND_SUBNETS=${YOUR_SUBNET}/24
    ports:
      - 34400:34400
    volumes:
      - ${VPN_CONFIG_DIR}:/config
    restart: unless-stopped
  threadfin:
    image: fyb3roptik/threadfin
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TIME_ZONE}
    depends_on:
      - vpn
    network_mode: service:vpn
    volumes:
      - ${THREADFIN_CONFIG_DIR}:/home/threadfin/conf
    restart: unless-stopped
</code></pre>

<h4 id="setting-up-threadfin">Setting up Threadfin</h4>

<p>Once Threadfin is installed we need to set it up.</p>

<h5 id="basic-settings">Basic settings</h5>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/7Kc0y7N.png" alt="Threadfin settings page"></p>

<p>Before we continue we want to open the settings page.
We want to change the following things :
– <code>EPG Source</code> to XEPG
– <code>Replace missing program images</code> should be checked
– <code>Stream Buffer:</code> to VLC</p>

<p>If you notice that your streams are stuttering you can experiment with increasing <code>Buffer Size</code>.</p>

<p>The <code>Number of Tuners</code> setting sets a system-wide maximum number of streams. Choose a realistic number based on your needs and system performance. This setting can also be overridden at playlist level with a lower value.</p>

<p>If you&#39;re going to use TVHeadend the <code>Ignore Filters</code> setting will make things easier later on.</p>

<h5 id="playlist-settings">Playlist settings</h5>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/v3hDyHE.png" alt="Threadfin playlist settings"></p>

<p>The first time you open this page you will be greeted by an empty page.</p>

<p>When you press on the new button you will be greeted by the following dialog.
<img src="https://images.claeyscloud.com/images/2024/11/27/XAquUSb.png" alt="New playlist dialog"></p>

<p>Choose <code>M3U</code> if you&#39;re using a stream (IPTV or TvHeadend) or choose <code>HdHomeRun</code> if you&#39;re using that particular device.</p>

<p>Depending on your choice you will see one of these dialogs.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/4Jt7Ijs.png" alt="New playlist M3U playlist"></p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/4Jt7Ijs92fe02bb0f5e18ba.png" alt="New playlist HDHomeRun playlist"></p>

<p>The <code>M3U</code> file or <code>HDHomeRun IP</code> fields are the most crucial part.
Fill in the address to the M3U file or your HDHomeRun device on your local network.</p>
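<p>Before pointing Threadfin at an M3U address it can be useful to sanity-check the playlist yourself. The sketch below uses a tiny stand-in playlist; with a real provider you would first download the file, for example with <code>curl -s "$M3U_URL" -o playlist.m3u</code>:</p>

```shell
# Stand-in playlist for illustration; a real one comes from your provider.
cat > /tmp/playlist.m3u <<'EOF'
#EXTM3U
#EXTINF:-1 tvg-id="one.be" group-title="Belgium",Channel One
http://example.com/stream1.m3u8
#EXTINF:-1 tvg-id="two.be" group-title="Belgium",Channel Two
http://example.com/stream2.m3u8
EOF
# A valid playlist starts with the #EXTM3U header...
head -n 1 /tmp/playlist.m3u
# ...and announces each stream with an #EXTINF line.
grep -c '^#EXTINF' /tmp/playlist.m3u
```

<p>If the header is missing or the stream count is zero, Threadfin won&#39;t have anything useful to import.</p>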

<p>You also want to set the <code>Tuner/Streams</code> amount to a reasonable value. If you&#39;re using TvHeadend, a public IPTV list or an HdHomeRun this will be hardware constrained (number of tuners and general system performance). If you&#39;re using an IPTV provider this will be whatever their general policy permits.</p>

<h5 id="xmltv-settings">XMLTV settings</h5>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/EbfO6sn.png" alt="Threadfin XMLTV settings"></p>

<p>This page will also be empty when you open it up for the first time. In my opinion this is one of the strengths of Threadfin. Regardless of where your EPG information comes from, you can mix and match different sources into the combination you like.</p>

<p>When you press on the new button you will be greeted by the following dialog.
<img src="https://images.claeyscloud.com/images/2024/11/27/IUBUdWw.png" alt="New XMLTV dialog"></p>

<p>You can give it whatever name and description you like. The <code>XMLTV File</code> field is the part that really matters. If you want to use a publicly available source you just fill in the corresponding URL according to their documentation. If you followed along and set up your own EPG provider the address will be  <code>&lt;EPG IP ADDRESS&gt;:3000/guide.xml</code>.</p>
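<p>If you want to check that an XMLTV source actually carries guide data before wiring it up, you can peek at the file. A minimal sketch against a stand-in <code>guide.xml</code> (in practice you would fetch your real guide first, e.g. with curl):</p>

```shell
# Stand-in guide for illustration; a real file comes from your EPG source.
cat > /tmp/guide.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<tv>
  <channel id="one.be"><display-name>Channel One</display-name></channel>
  <programme start="20240619140000 +0000" stop="20240619150000 +0000" channel="one.be">
    <title>Some show</title>
  </programme>
</tv>
EOF
# A usable XMLTV file declares channels and programmes; count both.
grep -c '<channel ' /tmp/guide.xml
grep -c '<programme ' /tmp/guide.xml
```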

<h5 id="filter-settings">Filter settings</h5>

<p>If you plan to use TvHeadend and enabled the <code>Ignore Filters</code> setting you can skip this section.</p>

<p>Otherwise open this page; since we&#39;re getting started it will be empty.
The general idea of this page is that in most cases IPTV lists contain hundreds if not thousands of streams. In order not to affect system performance and to keep things manageable we need to choose the categories we&#39;ll want to map later on. Choosing one particular category doesn&#39;t mean we are forced to map all channels in it.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/gvJY5hd.png" alt="New filter dialog"></p>

<p>Threadfin offers two different filter types: <em>M3U</em> and <em>custom filters</em>.
The M3U type is pretty basic and limits itself to the group titles contained in the M3U file. The custom filter type is more powerful because it enables you to filter on specific patterns.</p>

<p>Now I need to be honest: at some point I tried to use custom filters but I didn&#39;t figure them out. I think that depending on playlist size they might take quite some time to process, since a pattern needs to be checked against each stream in the playlist. However that&#39;s just an assumption since I&#39;ve not really used this feature. Feel free to try it out, but I won&#39;t go into any more depth since I&#39;m not able to.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/kscBJ9A.png" alt="New M3U filter dialog">
The field we want to look for is <code>group title</code>. This will make the chosen group title available in the mapping tab. You can have a look at the include/exclude settings if you want so but it&#39;s not strictly necessary.</p>
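<p>The group titles this filter type works with are plain <code>group-title</code> attributes inside the playlist, so you can list the groups a playlist offers before creating any filters. A sketch with a stand-in file:</p>

```shell
# Stand-in playlist with a few group titles; real lists carry many more.
cat > /tmp/groups.m3u <<'EOF'
#EXTM3U
#EXTINF:-1 group-title="Belgium",Channel One
http://example.com/1.m3u8
#EXTINF:-1 group-title="Spain",Canal Uno
http://example.com/2.m3u8
#EXTINF:-1 group-title="Belgium",Channel Two
http://example.com/3.m3u8
EOF
# Extract every group-title attribute and keep the distinct values.
grep -o 'group-title="[^"]*"' /tmp/groups.m3u | cut -d'"' -f2 | sort -u
```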

<h5 id="mapping-settings">Mapping settings</h5>

<p>When opening the mappings page you won&#39;t be greeted by an empty list.
Most probably you&#39;ll see a list of unmapped/inactive channels.
You can recognize them by the red line at the left end of the table.
<img src="https://images.claeyscloud.com/images/2024/11/27/Da5hQ8l.png" alt="List of unmapped channels"></p>

<p>Before activating a channel you should first assign it the number of your liking. You do this by typing the desired value in the text field.</p>

<p>To continue, click on the desired channel in order to open the map channel popup.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/hixtHcJ.png" alt="Map channel popup"></p>

<p>The most important settings are :
– <code>Active</code> to activate the channel
– <code>Channel name</code> to edit the channel name
– <code>Logo Url</code> to assign the channel a logo
– <code>Group title</code> to group the channel to your liking
– <code>XMLTV File</code> in order to choose the XMLTV file you want to use
– <code>XMLTV Channel</code> to choose the right channel in the XMLTV file</p>

<p>Once you&#39;ve chosen your desired settings click on the <em>done</em> button.
Now there should also be a list of active/mapped channels.
You can recognize them by the green line at the left end of the table.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/xo3H74U.png" alt="List of mapped channels"></p>

<p>Mapping all desired channels can be a repetitive task but as you&#39;ll see in the end the effort is worth it.</p>

<p><em>Note:</em>
In the next steps we&#39;ll be talking about installing and setting up <a href="https://github.com/jellyfin/jellyfin" rel="nofollow">Jellyfin</a>. However you can use Threadfin with any software that supports the HD HomeRun, since it functions as an emulation layer. Other software, such as <a href="https://www.plex.tv/es/media-server-downloads/" rel="nofollow">Plex Media Server</a>, <a href="https://kodi.tv/" rel="nofollow">Kodi</a> and <a href="https://emby.media/" rel="nofollow">Emby</a>, can do the same. However Jellyfin is the only open source solution that enables this feature without any paid plan and on the server side (Kodi is a client application).</p>

<h4 id="installing-jellyfin">Installing Jellyfin</h4>

<p>This is what a compose file for a Jellyfin installation looks like:</p>

<pre><code>version: &#34;3.5&#34;
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: ${PUID}:${PGID}
    ports:
      - 8096:8096
    volumes:
      - ${CONFIG_FOLDER}:/config
      - ${CACHE_FOLDER}:/cache
      - ${MOVIES_FOLDER}:/Movies
      - ${TV_SHOWS_FOLDER}:/Tv Shows
      - ${RECORDINGS_FOLDER}:/recordings
    restart: unless-stopped
    environment:
      #use this variable if you want to access your Jellyfin server through a domain name
      - JELLYFIN_PublishedServerUrl=http://jellyfin.yourdomain.com
</code></pre>

<p>Once you deploy this compose file Jellyfin will be available through port 8096 or through the domain you&#39;ve set up. Complete the setup wizard and set up your libraries.</p>

<p>After this click on your user icon and open the <em>administration panel</em>.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/UN3a4JH.png" alt="Jellyfin admin panel"></p>

<p>We want to go to the <em>Live TV</em> section of the admin panel.
Click on the + button under <em>Tuner Device</em>.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/IIOZ9CS.png" alt="Add tuner dialog"></p>

<p>Select HD Homerun as the <em>Tuner Type</em> and check the <em>Allow hardware transcoding</em> checkbox.
Under <em>Tuner IP Address</em> you should type <code>http://&lt;THREADFIN IP ADDRESS&gt;/</code>. Once that&#39;s done click on the save button.</p>

<p>Last but not least click on the + button under <em>TV Guide Data Providers</em> and choose XMLTV.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/uUAz4ST.png" alt="Add XMLTV dialog"></p>

<p>The only thing you need to do is type <code>http://&lt;THREADFIN IP ADDRESS&gt;:34400/xmltv/threadfin.xml</code> under <em>File or URL</em>. Click on the save button and you&#39;re all set.
Jellyfin will need some time in order to gather all necessary information but after a while live tv will be available.</p>

<p>Jellyfin is available through the web interface and different apps. The UI is pretty straightforward so we won&#39;t go into detail on this topic. You&#39;ve just set up live TV on your server, on your terms.</p>
]]></content:encoded>
      <guid>https://blog.claeyscloud.com/david/watching-live-tv-on-all-your-devices</guid>
      <pubDate>Wed, 19 Jun 2024 14:20:31 +0000</pubDate>
    </item>
    <item>
      <title>Deploying .NET containers in Docker</title>
      <link>https://blog.claeyscloud.com/david/deploying-net-containers-in-docker</link>
      <description>&lt;![CDATA[Since Microsoft started to transition .NET they also started offering Docker images to package your applications. To be more specific at Docker Hub Microsoft lists their images and intended purposes.&#xA;&#xA;I wanted to take myself up for a challenge and try to package a .NET API project into a Docker container.&#xA;The purpose of this article isn&#39;t to tell you how to build an API project since this topic is broadly covered on the web. I want to tell you one of the roadblocks I ran against and how I managed to solve it.&#xA;&#xA;If you want to get started the following tutorials could be useful :&#xA;Containerize a .NET app&#xA;Step By Step Dockerizing .NET Core API&#xA;Smaller Docker Images for ASP.NET Core Apps&#xA;&#xA;Slim Docker images&#xA;&#xA;It&#39;s best practice to make the Docker images you publish as slim as possible. &#xA;The main benefit of doing this is that consuming your image will take less space on your host if you do so.&#xA;There are many ways to make your image slimmer but one of the most effective ways is picking the right base image with the right tag.&#xA;&#xA;For example if we look at the tags for the ASP.NET Core Runtime we see among others the following sections : Linux amd64, Nano Server 2022 amd64 , Windows Server Core 2022 amd64 and so on.&#xA;If you want to make your Docker image multi platform compatible (one of the main benefits of .NET and Docker) you should automatically discard the tags representing a Windows environment.&#xA;First of all it&#39;s probably not the most lightweight base OS to build your image but more importantly Windows Docker containers can&#39;t run on any system that isn&#39;t Windows based.&#xA;&#xA;This limits our choice to Linux based images, but even there we have lots of choice.&#xA;By example at this moment in time we can choose among others between 8.0-bookworm-slim (Debian), 8.0-alpine-amd64 (Alpine) and 8.0-jammy (Ubuntu).&#xA;Microsoft marks the Debian variant 
with the latest tag since this distribution is pretty lightweight and also is quite widespread. However if we want to take things up a notch we should go for alpine since this is a lightweight no frills distribution.&#xA;&#xA;The roadblock&#xA;&#xA;When publishing a .NET API it is served by Kestrel.&#xA;When making an API it is recommended to use HTTPS for security reasons. Furthermore when making a production build it is even required.&#xA;&#xA;When reading the documentation we see we should use the following commands  :&#xA;dotnet dev-certs https -ep $env:USERPROFILE\.aspnet\https\aspnetapp.pfx -p crypticpassword&#xA;dotnet dev-certs https --trust&#xA;&#xA;This is simple enough, what&#39;s the problem then? Well the second of those command is only supported on Windows based systems. &#xA;&#xA;The solution&#xA;&#xA;After a lot of trial and error I came to the following solution :&#xA;&#xA;Password for the certificate&#xA;ARG CERTPASSWORDARG=SUPERSECRET&#xA;this image contains the entire .NET SDK and is ideal for creation the build&#xA;FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine-amd64 AS build-env&#xA;ARG CERTPASSWORDARG&#xA;ENV CERTPASSWORD=$CERTPASSWORDARG&#xA;WORKDIR /App&#xA;COPY . 
./&#xA;Restore dependencies for your application&#xA;RUN dotnet restore&#xA;Build your application&#xA;RUN dotnet publish test.csproj --no-restore --self-contained false -c Release -o out /p:UseAppHost=false &#xA;Make the directory for certificate export&#xA;RUN mkdir /config&#xA;Generate certificate with specified password&#xA;RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password &#34;$CERTPASSWORD&#34; --format PEM&#xA;&#xA;this image contains the ASP.NET Core and .NET runtimes and libraries &#xA;FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine-amd64&#xA;ARG CERTPASSWORDARG&#xA;ENV CERTPASSWORD=$CERTPASSWORDARG&#xA;WORKDIR /App&#xA;add dependency in system to setup certificates&#xA;RUN apk add ca-certificates &#xA;create directory to store certificate config&#xA;RUN mkdir /config &#xA;create necessary config directory&#xA;RUN mkdir -p /usr/local/share/ca-certificates/&#xA;copy compiled files to runtime&#xA;COPY --from=build-env /App/out . &#xA;copy generated certificate&#xA;COPY --from=build-env /config /config&#xA;Disable Big Brother&#xA;ENV DOTNETCLITELEMETRYOPTOUT=1&#xA;Set the environment to production&#xA;ENV ASPNETCOREENVIRONMENT=Production&#xA;Set the urls where Kestrel is going to listen&#xA;ENV ASPNETCOREURLS=http://+:80;https://+:443&#xA;location of the certificate file&#xA;ENV ASPNETCOREKestrelCertificatesDefaultPath=/usr/local/share/ca-certificates/aspnetapp.crt&#xA;location of the certificate key&#xA;ENV ASPNETCOREKestrelCertificatesDefault_KeyPath=/usr/local/share/ca-certificates/aspnetapp.key&#xA;specify password in order to open certificate key&#xA;ENV ASPNETCOREKestrelCertificatesDefault_Password=$CERTPASSWORD&#xA;copy certificate files to config directory&#xA;RUN cp /config/aspnetapp.pem $ASPNETCOREKestrelCertificatesDefaultPath &#xA;RUN cp /config/aspnetapp.key $ASPNETCOREKestrelCertificatesDefault_KeyPath&#xA;set file permisions for certificate file&#xA;RUN chmod 755 $ASPNETCOREKestrelCertificatesDefault_Path &#xA;RUN chmod 
+x $ASPNETCOREKestrelCertificatesDefault_Path&#xA;change file ownership for certificate file&#xA;add generated certificate to trusted certificate list on the system&#xA;RUN cat $ASPNETCOREKestrelCertificatesDefault_Path     /etc/ssl/certs/ca-certificates.crt&#xA;set file permissions for key file&#xA;RUN chmod 755 $ASPNETCOREKestrelCertificatesDefault_KeyPath&#xA;RUN chmod +x $ASPNETCOREKestrelCertificatesDefault_KeyPath&#xA;change file ownership for key file&#xA;RUN update-ca-certificates&#xA;&#xA;ENTRYPOINT [&#34;dotnet&#34;, &#34;test.dll&#34;]&#xA;EXPOSE 80 &#xA;EXPOSE 443&#xA;The above file is for demonstration purposes, in practice you shouldn&#39;t use consecutive RUN instructions, you should update system dependencies and perform some cleanup. I&#39;ve excluded those steps in order to focus on this article&#39;s subject.&#xA;&#xA;Deep dive&#xA;&#xA;The first step I want to focus on is the following : &#xA;RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password &#34;$CERTPASSWORD&#34; --format PEM&#xA;By default the command to generate certificates generates a certificate in the PFX format.&#xA;While it is theoretically possible to use that format on Linux systems it&#39;s an overly complicated mess. So in order to make things easier we tell the generator tool to use the PEM format. 
&#xA;This way of using certificates is much better supported in Linux and much easier to setup.&#xA;This command will generate two files : a certificate file and a key file.&#xA;The key file is encrypted with the password that is specified in CERTPASSWORDARG.&#xA;&#xA;The next important part is :&#xA;location of the certificate file&#xA;ENV ASPNETCOREKestrelCertificatesDefaultPath=/usr/local/share/ca-certificates/aspnetapp.crt&#xA;location of the certificate key&#xA;ENV ASPNETCOREKestrelCertificatesDefault_KeyPath=/usr/local/share/ca-certificates/aspnetapp.key&#xA;specify password in order to open certificate key&#xA;ENV ASPNETCOREKestrelCertificatesDefault_Password=$CERTPASSWORD&#xA;These environment variables tell the Kestrel server where it needs to look for the certificate files.&#xA;The ASPNETCOREKestrelCertificatesDefaultPassword is key, since if it is not specified or correctly populated Kestrel won&#39;t be able to use the certificate and will crash.&#xA;This variable isn&#39;t anywhere to be found on Microsoft&#39;s documentation and I only was able to find it looking at the .NET source code published on GitHub.&#xA;&#xA;The next important part is &#xA;&#xA;RUN cat $ASPNETCOREKestrelCertificatesDefault_Path     /etc/ssl/certs/ca-certificates.crt&#xA;RUN update-ca-certificates&#xA;This tells the system to trust the certificate we generated. If we wouldn&#39;t do that Kestrel also wouldn&#39;t be able to run and would crash.&#xA;&#xA;Security implications&#xA;&#xA;Maybe the elephant in the room is that in this setup we are using a self signed certificate in order to serve our application in a container. 
Many might be eager to discard this whole setup for this reason.&#xA;But before doing that hear me out.&#xA; &#xA;To start with, it&#39;s bad practice to hardcode the certificate you&#39;ll deploy in production environments in code.&#xA;So in fact your Docker image should always use a development certificate.&#xA;Yes, this example also contains a hardcode password at the beginning but this shouldn&#39;t be an issue.&#xA;&#xA;In theory we could use the ASPNETCOREKestrelCertificatesDefault_Path, ASPNETCOREKestrelCertificatesDefault_KeyPath and ASPNETCOREKestrelCertificatesDefault__Password environment variables in order to setup our production certificates at deployment.&#xA;This would allow us to run the image in a container while developing and use a securely stored certificated at deployment.&#xA;However this solution is discouraged since Microsoft doesn&#39;t recommend directly exposing the Kestrel server in Production environments.&#xA;&#xA;This leads to what in my opinion is the preferable solution : using a proxy.&#xA;You can setup IIS, Nginx, Apache, Traefik and so on, with the certificate you want to use.&#xA;Clients using the deployed application will have a secure connection and you don&#39;t need to deal with the complexities of setting up a &#34;real&#34; certificate at the image level.&#xA;&#xA;Using Docker is amazing, and being able to use it with .NET even more.&#xA;If you stumbled on the same roadblock I hope this article proved useful.]]&gt;</description>
<content:encoded><![CDATA[<p>Since Microsoft started to transition .NET into a cross-platform framework they also started offering Docker images to package your applications. To be more specific, at <a href="https://hub.docker.com/_/microsoft-dotnet" rel="nofollow">Docker Hub</a> Microsoft lists their images and intended purposes.</p>

<p>I wanted to set myself a challenge and try to package a .NET API project into a Docker container.
The purpose of this article isn&#39;t to tell you how to build an API project, since this topic is broadly covered on the web. I want to tell you about one of the roadblocks I ran into and how I managed to solve it.</p>

<p>If you want to get started the following tutorials could be useful :
– <a href="https://learn.microsoft.com/en-us/dotnet/core/docker/build-container?tabs=windows&amp;pivots=dotnet-8-0" rel="nofollow">Containerize a .NET app</a>
– <a href="https://medium.com/@ersen/step-by-step-dockerizing-net-core-api-a2490752a3d2" rel="nofollow">Step By Step Dockerizing .NET Core API</a>
– <a href="https://itnext.io/smaller-docker-images-for-asp-net-core-apps-bee4a8fd1277" rel="nofollow">Smaller Docker Images for ASP.NET Core Apps</a></p>

<h2 id="slim-docker-images">Slim Docker images</h2>

<p>It&#39;s best practice to make the Docker images you publish as slim as possible.
The main benefit is that your image takes up less space on the hosts that pull it.
There are many ways to make your image slimmer, but one of the most effective is picking the right base image with the right tag.</p>

<p>For example if we look at the tags for the <a href="https://hub.docker.com/_/microsoft-dotnet-aspnet/" rel="nofollow">ASP.NET Core Runtime</a> we see among others the following sections : <em>Linux amd64</em>, <em>Nano Server 2022 amd64</em> , <em>Windows Server Core 2022 amd64</em> and so on.
If you want to make your Docker image multi platform compatible (one of the main benefits of .NET and Docker) you should automatically discard the tags representing a Windows environment.
First of all it&#39;s probably not the most lightweight base OS to build your image but more importantly Windows Docker containers can&#39;t run on any system that isn&#39;t Windows based.</p>

<p>This limits our choice to Linux based images, but even there we have lots of choice.
For example, at this moment in time we can choose among others between 8.0-bookworm-slim (<a href="https://www.debian.org/releases/bookworm/" rel="nofollow">Debian</a>), 8.0-alpine-amd64 (<a href="https://www.alpinelinux.org/posts/Alpine-3.18.0-released.html" rel="nofollow">Alpine</a>) and 8.0-jammy (<a href="https://releases.ubuntu.com/jammy/" rel="nofollow">Ubuntu</a>).
Microsoft marks the Debian variant with the <code>latest</code> tag since this distribution is pretty lightweight and also quite widespread. However if we want to take things up a notch we should go for Alpine, a lightweight no-frills distribution.</p>

<h2 id="the-roadblock">The roadblock</h2>

<p>When publishing a .NET API it is served by <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel?view=aspnetcore-8.0" rel="nofollow">Kestrel</a>.
When making an API it is recommended to use HTTPS for security reasons. Furthermore when making a production build it is even required.</p>

<p>When reading the <a href="https://learn.microsoft.com/en-us/dotnet/core/additional-tools/self-signed-certificates-guide#create-a-self-signed-certificate" rel="nofollow">documentation</a> we see we should use the following commands  :
– <code>dotnet dev-certs https -ep $env:USERPROFILE\.aspnet\https\aspnetapp.pfx -p crypticpassword</code>
– <code>dotnet dev-certs https --trust</code></p>

<p>This is simple enough, so what&#39;s the problem then? Well, the second of those commands is not supported on Linux based systems.</p>

<h2 id="the-solution">The solution</h2>

<p>After a lot of trial and error I came to the following solution :</p>

<pre><code># Password for the certificate
ARG CERT_PASSWORD_ARG=SUPERSECRET
# this image contains the entire .NET SDK and is ideal for creation the build
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine-amd64 AS build-env
ARG CERT_PASSWORD_ARG
ENV CERT_PASSWORD=$CERT_PASSWORD_ARG
WORKDIR /App
COPY . ./
# Restore dependencies for your application
RUN dotnet restore
# Build your application
RUN dotnet publish test.csproj --no-restore --self-contained false -c Release -o out /p:UseAppHost=false 
# Make the directory for certificate export
RUN mkdir /config
# Generate certificate with specified password
RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password &#34;$CERT_PASSWORD&#34; --format PEM

# this image contains the ASP.NET Core and .NET runtimes and libraries 
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine-amd64
ARG CERT_PASSWORD_ARG
ENV CERT_PASSWORD=$CERT_PASSWORD_ARG
WORKDIR /App
# add dependency in system to setup certificates
RUN apk add ca-certificates 
# create directory to store certificate config
RUN mkdir /config 
# create necessary config directory
RUN mkdir -p /usr/local/share/ca-certificates/
# copy compiled files to runtime
COPY --from=build-env /App/out . 
# copy generated certificate
COPY --from=build-env /config /config
# Disable Big Brother
ENV DOTNET_CLI_TELEMETRY_OPTOUT=1
# Set the environment to production
ENV ASPNETCORE_ENVIRONMENT=Production
# Set the urls where Kestrel is going to listen
ENV ASPNETCORE_URLS=http://+:80;https://+:443
# location of the certificate file
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/usr/local/share/ca-certificates/aspnetapp.crt
# location of the certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/usr/local/share/ca-certificates/aspnetapp.key
# specify password in order to open certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=$CERT_PASSWORD
# copy certificate files to config directory
RUN cp /config/aspnetapp.pem $ASPNETCORE_Kestrel__Certificates__Default__Path 
RUN cp /config/aspnetapp.key $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
# set file permisions for certificate file
RUN chmod 755 $ASPNETCORE_Kestrel__Certificates__Default__Path 
RUN chmod +x $ASPNETCORE_Kestrel__Certificates__Default__Path
# add generated certificate to trusted certificate list on the system
RUN cat $ASPNETCORE_Kestrel__Certificates__Default__Path &gt;&gt; /etc/ssl/certs/ca-certificates.crt
# set file permissions for key file
RUN chmod 755 $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
RUN chmod +x $ASPNETCORE_Kestrel__Certificates__Default__KeyPath
# refresh the system trusted certificate store
RUN update-ca-certificates

ENTRYPOINT [&#34;dotnet&#34;, &#34;test.dll&#34;]
EXPOSE 80 
EXPOSE 443
</code></pre>

<p>The above file is for demonstration purposes, in practice you shouldn&#39;t use consecutive <code>RUN</code> instructions, you should update system dependencies and perform some cleanup. I&#39;ve excluded those steps in order to focus on this article&#39;s subject.</p>
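<p>For illustration, this is how the package and directory steps at the start of the runtime stage could be collapsed into a single layer; a sketch, not a drop-in replacement for the file above:</p>

```dockerfile
# One layer instead of three: install the certificate tooling without
# keeping the apk index cache, and create both directories in one go.
RUN apk add --no-cache ca-certificates \
 && mkdir -p /config /usr/local/share/ca-certificates/
```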

<h3 id="deep-dive">Deep dive</h3>

<p>The first step I want to focus on is the following :</p>

<pre><code>RUN dotnet dev-certs https --export-path /config/aspnetapp.pem --password &#34;$CERT_PASSWORD&#34; --format PEM
</code></pre>

<p>By default the command to generate certificates generates a certificate in the <a href="https://learn.microsoft.com/en-us/windows-hardware/drivers/install/personal-information-exchange---pfx--files" rel="nofollow">PFX</a> format.
While it is theoretically possible to use that format on Linux systems it&#39;s an overly complicated mess. So in order to make things easier we tell the generator tool to use the PEM format.
This way of using certificates is much better supported on Linux and much easier to set up.
This command will generate two files : a certificate file and a key file.
The key file is encrypted with the password that is specified in <code>CERT_PASSWORD_ARG</code>.</p>
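<p>You can reproduce and inspect such a PEM pair without the .NET SDK. The sketch below uses openssl (assumed to be installed) to create a throwaway certificate and password-protected key, just to illustrate what the dev-certs export looks like:</p>

```shell
# Generate a throwaway PEM certificate plus an encrypted key, as a stand-in
# for the pair that 'dotnet dev-certs https --format PEM' exports.
openssl req -x509 -newkey rsa:2048 -days 1 -subj "/CN=localhost" \
  -keyout /tmp/aspnetapp.key -out /tmp/aspnetapp.pem \
  -passout pass:SUPERSECRET 2>/dev/null
# The certificate itself is readable without any password...
openssl x509 -in /tmp/aspnetapp.pem -noout -subject
# ...but the key only opens when the right password is supplied.
openssl rsa -in /tmp/aspnetapp.key -noout -passin pass:SUPERSECRET 2>/dev/null \
  && echo "key opens with the password"
```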

<p>The next important part is :</p>

<pre><code># location of the certificate file
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/usr/local/share/ca-certificates/aspnetapp.crt
# location of the certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/usr/local/share/ca-certificates/aspnetapp.key
# specify password in order to open certificate key
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=$CERT_PASSWORD
</code></pre>

<p>These environment variables tell the Kestrel server where it needs to look for the certificate files.
The <code>ASPNETCORE_Kestrel__Certificates__Default__Password</code> variable is key: if it is not specified or correctly populated, Kestrel won&#39;t be able to use the certificate and will crash.
This variable isn&#39;t anywhere to be found in Microsoft&#39;s documentation; I was only able to find it by looking at the .NET source code published on GitHub.</p>

<p>The next important part is</p>

<pre><code>RUN cat $ASPNETCORE_Kestrel__Certificates__Default__Path &gt;&gt; /etc/ssl/certs/ca-certificates.crt
RUN update-ca-certificates
</code></pre>

<p>This tells the system to trust the certificate we generated. If we wouldn&#39;t do that Kestrel also wouldn&#39;t be able to run and would crash.</p>

<h2 id="security-implications">Security implications</h2>

<p>Maybe the elephant in the room is that in this setup we are using a self signed certificate in order to serve our application in a container. Many might be eager to discard this whole setup for this reason.
But before doing that hear me out.</p>

<p>To start with, it&#39;s bad practice to hardcode the certificate you&#39;ll deploy in production environments in code.
So in fact your Docker image should always use a development certificate.
Yes, this example also contains a hardcode password at the beginning but this shouldn&#39;t be an issue.</p>

<p>In theory we could use the <code>ASPNETCORE_Kestrel__Certificates__Default__Path</code>, <code>ASPNETCORE_Kestrel__Certificates__Default__KeyPath</code> and <code>ASPNETCORE_Kestrel__Certificates__Default__Password</code> environment variables in order to setup our production certificates at deployment.
This would allow us to run the image in a container while developing and use a securely stored certificated at deployment.
However this solution is discouraged since Microsoft doesn&#39;t recommend directly exposing the Kestrel server in Production environments.</p>

<p>This leads to what in my opinion is the preferable solution : using a proxy.
You can setup <a href="https://learn.microsoft.com/en-us/iis/get-started/introduction-to-iis/iis-web-server-overview" rel="nofollow">IIS</a>, <a href="https://www.nginx.com/" rel="nofollow">Nginx</a>, <a href="https://httpd.apache.org/" rel="nofollow">Apache</a>, <a href="https://traefik.io/traefik/" rel="nofollow">Traefik</a> and so on, with the certificate you want to use.
Clients using the deployed application will have a secure connection and you don&#39;t need to deal with the complexities of setting up a “real” certificate at the image level.</p>

<p>Using Docker is amazing, and being able to use it with .NET even more.
If you stumbled on the same roadblock I hope this article proved useful.</p>
]]></content:encoded>
      <guid>https://blog.claeyscloud.com/david/deploying-net-containers-in-docker</guid>
      <pubDate>Tue, 23 Apr 2024 06:59:36 +0000</pubDate>
    </item>
    <item>
      <title>Setup remote acces to your network</title>
      <link>https://blog.claeyscloud.com/david/setup-remote-acces-to-your-network</link>
      <description>&lt;![CDATA[You are outside your home, but want to watch your favourite movie on your Plex server, or some VM crashed and you need access to your hypervisor.&#xA;&#xA;In these cases external access to your network comes in handy, in this artivle we will learn how to setup external access with Wireguard.&#xA;&#xA;Assumptions&#xA;&#xA;You already have a working system with Docker installed&#xA;Your ISP provides an external IP (your internet connection is not behind CG-NAT)&#xA;You know how to expose ports on your firewall&#xA;You already have a domain&#xA;&#xA;Setting up port forwarding&#xA;&#xA;Before you start you should go into your router and forward the port of your liking to the system where later on we will setup Wireguard.&#xA;It&#39;s important that this system has a static ip, since otherwise you would need to update your port forwarding settings each time your ip changes.&#xA;&#xA;An example of a routing table with port forwarding enabledAn example of a routing table with port forwarding enabled&#xA;&#xA;Setting up Wireguard&#xA;&#xA;There are different options to setup Wireguard, the option I chose is called wireguard-ui. 
It is available as an easy to setup Docker image and offers a nice web interface.&#xA;&#xA;This is an example compose file :&#xA;&#xA;    version: &#34;3&#34;&#xA;    services:&#xA;      wg-ui:&#xA;        image: ngoduykhanh/wireguard-ui&#xA;        capadd:&#xA;          NETADMIN&#xA;          SYSMODULE&#xA;        environment:&#xA;          WGUISERVERLISTENPORT=${WGUISERVERLISTENPORT}&#xA;          WGUIMANAGESTART=true&#xA;          WGUIMANAGERESTART=true&#xA;          WGUISERVERPOSTUPSCRIPT=iptables -A FORWARD -i wg0 -j ACCEPT;iptables -A FORWARD -o wg0 -j ACCEPT;iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE;iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE&#xA;          WGUISERVERPOSTDOWNSCRIPT=iptables -D FORWARD -i wg0 -j ACCEPT;iptables -D FORWARD -o wg0 -j ACCEPT;iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE;iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE&#xA;          TZ=${TIMEZONE}&#xA;        networkmode: bridge&#xA;        volumes:&#xA;          ${WGUICONFIGFOLDER}:/app/db&#xA;          ${WGCONFIGFOLDER}:/etc/wireguard&#xA;    &#xA;        ports:&#xA;          5000:5000&#xA;          ${WGUISERVERLISTENPORT}:${WGUISERVERLISTENPORT}/udp&#xA;        sysctls:&#xA;           net.ipv4.conf.all.srcvalidmark=1&#xA;           net.ipv4.ipforward=1&#xA;        restart: unless-stopped  &#xA;And these are the variables for the compose file&#xA;&#xA;    WGUICONFIGFOLDER=/docker/wireguard/ui&#xA;    WGCONFIGFOLDER=/docker/wireguard/server&#xA;    #this should be the same port you exposed on your router&#xA;    WGUISERVERLISTENPORT=60&#xA;    #choose the timezone you like&#xA;    TIMEZONE=Europe/Madrid&#xA;&#xA;Notices&#xA;&#xA;It&#39;s very important you make sure the WGUISERVERPOSTUPSCRIPT and WGUISERVERPOSTDOWNSCRIPT variables are correctly filled in.&#xA;It&#39;s not mentioned in the documentation but without them you&#39;ll not be able to establish a remote connection.&#xA;&#xA;The documentiation suggests using host mode for networking, this 
might be usefull for performance reasons. However I didnΓÇÖt like to lose network isolation and didn&#39;t have performance issues, so I preferred bridge mode.&#xA;&#xA;The documentation mentions that you can setup SMTP to automatically send Wireguard credentials, I&#39;ve only been able to do this through the SendGrid integration (SENDGRIDAPIKEY)&#xA;&#xA;The next step you should take is to change the default password.&#xA;You can do this by clicking on the username and then changing the password on the form that appears.&#xA;&#xA;Change password formChange password form&#xA;&#xA;Setting up a domain&#xA;&#xA;This step is not strictly necessary but I very recomendable. Wireguard-ui lets you auto discover your external IP, and this will work. &#xA;&#xA;However most residential internet connections have an dynamic ip address, this means that depending on your ISP your external IP could change at any time without notice. Everytime your external IP changes you would need to go into the settings and discover your new IP address (This could happen every couple of hours, days , months or years).&#xA;&#xA;The issue with this is that your external IP could change without you noticing and at the worst time possible you&#39;ve lost your remote network access.&#xA;&#xA;The solution to this problem is setting up dynamic DNS. Again there are multiple options to do this, but the solution I liked the most is called ddns-updater. 
&#xA;&#xA;My compose file looks like this :&#xA;&#xA;    services:&#xA;      ddns-updater:&#xA;        image: qmcgaw/ddns-updater&#xA;        ports:&#xA;          8000:8000/tcp&#xA;        volumes:&#xA;          ${CONFIGFOLDER}:/updater/data&#xA;        environment:&#xA;          CONFIG=&#xA;          PERIOD=5m&#xA;          UPDATECOOLDOWNPERIOD=5m&#xA;          PUBLICIPFETCHERS=all&#xA;          PUBLICIPHTTPPROVIDERS=all&#xA;          PUBLICIPV4HTTPPROVIDERS=all&#xA;          PUBLICIPV6HTTPPROVIDERS=all&#xA;          PUBLICIPDNSPROVIDERS=all&#xA;          PUBLICIPDNSTIMEOUT=3s&#xA;          HTTPTIMEOUT=10s&#xA;          LISTENINGPORT=8000&#xA;          ROOTURL=/&#xA;          BACKUPPERIOD=0 # 0 to disable&#xA;          BACKUPDIRECTORY=/updater/data&#xA;          LOGLEVEL=info&#xA;          LOGCALLER=hidden&#xA;        restart: always&#xA;&#xA;And my variables look like this :&#xA;&#xA;    CONFIGFOLDER=/docker/ddns-updater&#xA;&#xA;The last thing we need to do is to make our config.json file in order to get our dynamic DNS working. This file should be located in your config folder, so in this case in /docker/ddns-updater&#xA;&#xA;This page provides all the available domain registerers and their configuration. 
Since I use cloudflare my config file looks like this :&#xA;&#xA;    {&#xA;      &#34;settings&#34;: [&#xA;        {&#xA;          &#34;provider&#34;: &#34;cloudflare&#34;,&#xA;          // fill in your zone identifier&#xA;          &#34;zoneidentifier&#34;: &#34;zoneidentifier&#34;,&#xA;          // fill in your domain&#xA;          &#34;domain&#34;: &#34;wireguard.example.com&#34;,&#xA;          &#34;host&#34;: &#34;@&#34;,&#xA;          &#34;ttl&#34;: 600,&#xA;          // fill in your token&#xA;          &#34;token&#34;: &#34;token&#34;,&#xA;          &#34;ipversion&#34;: &#34;ipv4&#34;&#xA;        }&#xA;      ]&#xA;    }&#xA;&#xA;Once you&#39;ve done this you can verify everything works by opening the web interface at port 8000.&#xA;&#xA;An example of the web interfaceAn example of the web interface&#xA;&#xA;The last step is to fill in our domain in the wireguard settings. You can do this in Global settings   Endpoint address &#xA;&#xA;Configuring Wireguard end point settingsConfiguring Wireguard end point settings&#xA;&#xA;Setting up clients&#xA;&#xA;The wireguard-ui web interface is simple but for the sake of completeness here is a short explanation on how to create clients.&#xA;&#xA;Go to Wireguard Clients and click on the New Client button.&#xA;You should give it a name and if you want to later on send the wireguard credentials through email you can fill it in.&#xA;The really important detail is to fill in the network that needs remote access under allowed IP&#39;s. Once everything is correctly filled in click on submit. &#xA;&#xA;New client dialogNew client dialog&#xA;&#xA;The last step would be to go to Wireguard Server and to click on the Apply config button. Now the wireguard server will restart and load the new config. &#xA;Now you&#39;re ready to add as many clients as you want and access your network from remote locations :)&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>You are outside your home, but want to watch your favourite movie on your Plex server, or some VM crashed and you need access to your hypervisor.</p>

<p>In these cases external access to your network comes in handy, in this artivle we will learn how to setup external access with Wireguard.</p>

<h3 id="assumptions">Assumptions</h3>
<ul><li>You already have a working system with Docker installed</li>
<li>Your ISP provides an external IP (your internet connection is not behind CG-NAT)</li>
<li>You know how to expose ports on your firewall</li>
<li>You already have a domain</li></ul>

<h2 id="setting-up-port-forwarding">Setting up port forwarding</h2>

<p>Before you start you should go into your router and forward the port of your liking to the system where later on we will setup Wireguard.
It&#39;s important that this system has a static ip, since otherwise you would need to update your port forwarding settings each time your ip changes.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_5Cwbl3N4AiE3CAB4krwWgw.png" alt="An example of a routing table with port forwarding enabled"><em>An example of a routing table with port forwarding enabled</em></p>

<h2 id="setting-up-wireguard">Setting up Wireguard</h2>

<p>There are different options to setup Wireguard, the option I chose is called <a href="https://github.com/ngoduykhanh/wireguard-ui" rel="nofollow">wireguard-ui</a>. It is available as an easy to setup Docker image and offers a nice web interface.</p>

<p>This is an example compose file :</p>

<pre><code>    version: &#34;3&#34;
    services:
      wg-ui:
        image: ngoduykhanh/wireguard-ui
        cap_add:
          - NET_ADMIN
          - SYS_MODULE
        environment:
          - WGUI_SERVER_LISTEN_PORT=${WGUI_SERVER_LISTEN_PORT}
          - WGUI_MANAGE_START=true
          - WGUI_MANAGE_RESTART=true
          - WGUI_SERVER_POST_UP_SCRIPT=iptables -A FORWARD -i wg0 -j ACCEPT;iptables -A FORWARD -o wg0 -j ACCEPT;iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE;iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
          - WGUI_SERVER_POST_DOWN_SCRIPT=iptables -D FORWARD -i wg0 -j ACCEPT;iptables -D FORWARD -o wg0 -j ACCEPT;iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE;iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
          - TZ=${TIME_ZONE}
        network_mode: bridge
        volumes:
          - ${WGUI_CONFIG_FOLDER}:/app/db
          - ${WG_CONFIG_FOLDER}:/etc/wireguard
    
        ports:
          - 5000:5000
          - ${WGUI_SERVER_LISTEN_PORT}:${WGUI_SERVER_LISTEN_PORT}/udp
        sysctls:
           - net.ipv4.conf.all.src_valid_mark=1
           - net.ipv4.ip_forward=1
        restart: unless-stopped  
</code></pre>

<p>And these are the variables for the compose file</p>

<pre><code>    WGUI_CONFIG_FOLDER=/docker/wireguard/ui
    WG_CONFIG_FOLDER=/docker/wireguard/server
    #this should be the same port you exposed on your router
    WGUI_SERVER_LISTEN_PORT=60
    #choose the timezone you like
    TIME_ZONE=Europe/Madrid
</code></pre>

<p><strong>Notices</strong></p>
<ul><li><p>It&#39;s very important you make sure the <code>WGUI_SERVER_POST_UP_SCRIPT</code> and <code>WGUI_SERVER_POST_DOWN_SCRIPT</code> variables are correctly filled in.
It&#39;s not mentioned in the documentation but without them you&#39;ll not be able to establish a remote connection.</p></li>

<li><p>The <a href="https://github.com/ngoduykhanh/wireguard-ui/blob/master/examples/docker-compose/system.yml" rel="nofollow">documentiation </a>suggests using host mode for networking, this might be usefull for performance reasons. However I didnΓÇÖt like to lose network isolation and didn&#39;t have performance issues, so I preferred bridge mode.</p></li>

<li><p>The documentation mentions that you can setup SMTP to automatically send Wireguard credentials, I&#39;ve only been able to do this through the SendGrid integration (<em><code>SENDGRID_API_KEY</code></em>)</p></li></ul>

<p>The next step you should take is to change the default password.
You can do this by clicking on the username and then changing the password on the form that appears.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_bMzbsITOMKXFVR1aVg70Lg.png" alt="Change password form"><em>Change password form</em></p>

<h2 id="setting-up-a-domain">Setting up a domain</h2>

<p>This step is not strictly necessary but I very recomendable. Wireguard-ui lets you auto discover your external IP, and this will work.</p>

<p>However most residential internet connections have an dynamic ip address, this means that depending on your ISP your external IP could change at any time without notice. Everytime your external IP changes you would need to go into the settings and discover your new IP address (This could happen every couple of hours, days , months or years).</p>

<p>The issue with this is that your external IP could change without you noticing and at the worst time possible you&#39;ve lost your remote network access.</p>

<p>The solution to this problem is setting up dynamic DNS. Again there are multiple options to do this, but the solution I liked the most is called <a href="https://github.com/qdm12/ddns-updater" rel="nofollow">ddns-updater</a>.</p>

<p>My compose file looks like this :</p>

<pre><code>    services:
      ddns-updater:
        image: qmcgaw/ddns-updater
        ports:
          - 8000:8000/tcp
        volumes:
          - ${CONFIG_FOLDER}:/updater/data
        environment:
          - CONFIG=
          - PERIOD=5m
          - UPDATE_COOLDOWN_PERIOD=5m
          - PUBLICIP_FETCHERS=all
          - PUBLICIP_HTTP_PROVIDERS=all
          - PUBLICIPV4_HTTP_PROVIDERS=all
          - PUBLICIPV6_HTTP_PROVIDERS=all
          - PUBLICIP_DNS_PROVIDERS=all
          - PUBLICIP_DNS_TIMEOUT=3s
          - HTTP_TIMEOUT=10s
          - LISTENING_PORT=8000
          - ROOT_URL=/
          - BACKUP_PERIOD=0 # 0 to disable
          - BACKUP_DIRECTORY=/updater/data
          - LOG_LEVEL=info
          - LOG_CALLER=hidden
        restart: always
</code></pre>

<p>And my variables look like this :</p>

<pre><code>    CONFIG_FOLDER=/docker/ddns-updater
</code></pre>

<p>The last thing we need to do is to make our config.json file in order to get our dynamic DNS working. This file should be located in your config folder, so in this case in /docker/ddns-updater</p>

<p>This <a href="https://github.com/qdm12/ddns-updater/tree/master/docs" rel="nofollow">page </a>provides all the available domain registerers and their configuration. Since I use cloudflare my config file looks like this :</p>

<pre><code>    {
      &#34;settings&#34;: [
        {
          &#34;provider&#34;: &#34;cloudflare&#34;,
          // fill in your zone identifier
          &#34;zone_identifier&#34;: &#34;zone_identifier&#34;,
          // fill in your domain
          &#34;domain&#34;: &#34;wireguard.example.com&#34;,
          &#34;host&#34;: &#34;@&#34;,
          &#34;ttl&#34;: 600,
          // fill in your token
          &#34;token&#34;: &#34;token&#34;,
          &#34;ip_version&#34;: &#34;ipv4&#34;
        }
      ]
    }
</code></pre>

<p>Once you&#39;ve done this you can verify everything works by opening the web interface at port 8000.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_jcq2-cqJyWXPWil46OHTxA.png" alt="An example of the web interface"><em>An example of the web interface</em></p>

<p>The last step is to fill in our domain in the wireguard settings. You can do this in Global settings &gt; Endpoint address</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_auWk7zKd-YbmbJB1s9BY6w.png" alt="Configuring Wireguard end point settings"><em>Configuring Wireguard end point settings</em></p>

<h2 id="setting-up-clients">Setting up clients</h2>

<p>The wireguard-ui web interface is simple but for the sake of completeness here is a short explanation on how to create clients.</p>

<p>Go to Wireguard Clients and click on the New Client button.
You should give it a name and if you want to later on send the wireguard credentials through email you can fill it in.
The really important detail is to fill in the network that needs remote access under allowed IP&#39;s. Once everything is correctly filled in click on submit.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_sLKvcIZo-up6dKtPI3xfqA.png" alt="New client dialog"><em>New client dialog</em></p>

<p>The last step would be to go to Wireguard Server and to click on the Apply config button. Now the wireguard server will restart and load the new config.
Now you&#39;re ready to add as many clients as you want and access your network from remote locations :)</p>
]]></content:encoded>
      <guid>https://blog.claeyscloud.com/david/setup-remote-acces-to-your-network</guid>
      <pubDate>Thu, 28 Sep 2023 08:05:22 +0000</pubDate>
    </item>
    <item>
      <title>ScanservJs : make your own scan server</title>
      <link>https://blog.claeyscloud.com/david/scanservjs-a-make-your-own-scan-server</link>
      <description>&lt;![CDATA[Printers and scanners…, I have a love/hate relationship with them. They are very handy when they work, but sometimes they cause a lot of struggle.&#xA;&#xA;I recently set myself up for a challenge. Since a couple of months I own an HP Envy Pro 6442, a printer/scanner combo. While not horrible, the experience using this device as a scanner leaves a lot to be desired. By default, you are limited to use the Windows scanner utility or to installing HP’s app on your phone. I thought the experience could be much better if somehow you could scan images through a webpage.&#xA;&#xA;Admittedly, this device in particular provides a web interface that allows scanning from the web browser. But this interface is not really user friendly and requires authentication (which is rather burdensome for home use). So I looked up the web for a solution and through sheer luck I stumbled upon ScanservJs. The purpose of this article is to guide you through the setup for this particular device but it can be done on other devices.&#xA;&#xA;Edit :&#xA;Since I wrote the article I&#39;ve also set everything up with an HP Envy Inspire 7200. I&#39;ve expanded the examples to adapt a bit more to this model.&#xA;&#xA;ScanserveJs has been recently updated to version 3.&#xA;The new version has some breaking changes, the article has been updated to accomodate for these changes. If you already have it installed pull the latest image and take a look at the directory mappings.&#xA;&#xA;Disclaimer&#xA;&#xA;This is not by any means an entry-level tutorial. 
I omit some details that will not be easy to figure out if you have no prior experience.&#xA;&#xA;These are the details I skipped (as far as I&#39;m aware) :&#xA;&#xA;Setting up the scanner over the network (docs for HP Envy Pro 6442)&#xA;Accessing the scanner web UI&#xA;Setting up the server OS&#xA;Assigning your server a static IP to your server so that users can access it (and specifying a domain name for your server)&#xA;Testing this setup with other other devices than mine (other models from HP or any other manufacturer)&#xA;&#xA;Setting up the scanner&#xA;&#xA;Based on my experience I recommend assigning a static IP address. You can achieve this from your router making a static DHCP assignment. If you aren&#39;t able to do this you can do this from the scanner web interface.&#xA;&#xA;Under Network   Wireless   Network Address (IPv4) you can choose Manual IP and choose the IP address you wish.&#xA;&#xA;Setting up the server&#xA;&#xA;Since the scanner is setup over the network, it doesn&#39;t need to be physically connected to the server. You can setup the server on bare metal or a hypervisor such as Proxmox VE, XCP-ng or even Hyper-V. I personally used Virtual Machine Manager on my Synology NAS to make a VM. If you have a QNAP NAS you could also use Virtualization Station. Since this detail is not really relevant for our purposes I won&#39;t go into details. Use whatever suits your needs.&#xA;&#xA;For my OS I used Ubuntu 22.04 Server. You might be able to use other flavors of LINUX but no success is guaranteed. First of all we want to make sure our OS has the latest updates.&#xA;&#xA;Note&#xA;I did some testing on Ubuntu 22.10 and the install failed due to an unavailable dependency (python3-pyqt4).&#xA;&#xA;    sudo apt-get update&#xA;    sudo apt-get upgrade&#xA;&#xA;Installing HP drivers&#xA;&#xA;This part of the guide depends mainly on the device manufacturer of your scanner. The following steps are only applicable if you own a HP scanner/printer. 
If you have another one you&#39;ll need to figure this out on your own.&#xA;&#xA;Edit :&#xA;The current version of the hplip software is 3.23.12.&#xA;&#xA;On my server package updates broke my existing installation.&#xA;I had to install the latest driver version in order to get everything working again. Needless to say you need to be very careful when applying updates with your package manager, since it won&#39;t automatically update your hplip installation.&#xA;&#xA;Drivers for other manufacturers&#xA;Epson &#xA;Canon&#xA;Brother&#xA;Samsung&#xA;&#xA;Luckily HP provides a LINUX driver for their devices. Although they just worked fine the documentation and installation process is a bit flaky.&#xA;&#xA;First of all go to https://developers.hp.com/hp-linux-imaging-and-printing/gethplip and select your server’s distro. If you followed along the distro we&#39;re using is ubuntu.&#xA;&#xA;Hp driver downloader pageHp driver downloader page&#xA;&#xA;Then just click the download button and the download should start through SourceForge.&#xA;&#xA;Driver download startingDriver download starting&#xA;&#xA;Once the download is completed you should get a file called hplip-3.22.10.run. Please note that the version might change over time as well the download source.&#xA;&#xA;Before we attempt to install the driver we need to install some required packages on the server. 
You can find the required packages on https://developers.hp.com/hp-linux-imaging-and-printing/install/manual/distros/ubuntu.&#xA;&#xA;HP requirements documentationHP requirements documentation&#xA;&#xA;sudo apt-get install --assume-yes libcups2 cups libcups2-dev cups-bsd cups-client avahi-utils libavahi-client-dev libavahi-core-dev libavahi-common-dev libcupsimage2-dev libdbus-1-dev build-essential gtk2-engines-pixbuf ghostscript openssl libjpeg-dev libatk-adaptor libgail-common libsnmp-dev snmp-mibs-downloader libtool libtool-bin libusb-1.0-0-dev libusb-0.1-4 wget policykit-1 policykit-1-gnome automake1.11 python3-dbus.mainloop.pyqt5 python3-reportlab python3-notify2 python3-pyqt5 python3-dbus python3-gi python3-lxml python3-dev python3-pil python-is-python3 libsane libsane-dev sane-utils xsane -yq&#xA;&#xA;This command should do the trick but please be aware that this could change over time. Check the documentation to make sure you have all the requirements.&#xA;&#xA;Now it&#39;s time to install the driver on the server. At this stage I stumbled upon a hurdle. I was trying to get the installer file into my server but since I had a headless server I couldn&#39;t figure out how to get the installer file into my server. Finally I settled with downloading it on my Windows machine and transferring the file to my server through sftp. 
I used a program called FileZilla to do this.&#xA;&#xA;I made a new directory called HP where I put this file.&#xA;&#xA;    mkdir hp&#xA;&#xA;Then I went into this directory.&#xA;&#xA;    cd hp&#xA;&#xA;Once you are in the directory you can proceed with this command&#xA;(Please be aware of the version number).&#xA;&#xA;    sh hplip-3.22.10.run&#xA;&#xA;Screenshot of the driver installation processScreenshot of the driver installation process&#xA;&#xA;This is the point where you need a lot of patience because depending on your system this step might take a while.&#xA;&#xA;Once the installation is completed you can proceed setting up the scanner. You can do this with the following command. Please be aware that you will need to adapt the IP address depending on whatever you set up previously.&#xA;&#xA;    sudo hp-setup -i 192.168.0.70&#xA;&#xA;Once this is completed you can check if everything is working with this command&#xA;&#xA;    scanimage -L&#xA;&#xA;This command should give the following output. If you don&#39;t see anything at this point your driver is not working.&#xA;&#xA;Example of outputExample of output&#xA;&#xA;Installing docker&#xA;&#xA;The most easy way to get the ScanservJs software working is through docker. I performed the install with the following command. 
Please be aware that this might change over time and read the documentation on https://docs.docker.com/engine/install/ubuntu/.&#xA;&#xA;    sudo apt-get install \&#xA;        ca-certificates \&#xA;        curl \&#xA;        gnupg \&#xA;        lsb-release&#xA;    sudo mkdir -p /etc/apt/keyrings&#xA;    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg&#xA;    echo \&#xA;      &#34;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \&#xA;      $(lsbrelease -cs) stable&#34; | sudo tee /etc/apt/sources.list.d/docker.list   /dev/null&#xA;    sudo apt-get update&#xA;    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin&#xA;&#xA;Screenshot of the official Docker documentationScreenshot of the official Docker documentation&#xA;&#xA;Setup the server environment for the docker container&#xA;&#xA;Before spinning up the container there are a couple of things we want to do. &#xA;First make a folder containing all our docker configurations. Be aware that you should change your user and user groups to yours in my example. It&#39;s also possible to use other folders to store the configuration, if you do so you also will need to adapt the container setup later on.&#xA;&#xA;    sudo mkdir /docker&#xA;    sudo chown sane:sane /docker&#xA;&#xA;Now make a folder containing the container we are about to spin up.&#xA;&#xA;    mkdir -p /docker/sane/config&#xA;    mkdir -p /docker/sane/images&#xA;&#xA;Now make a file containing the configuration.&#xA;&#xA;    nano /docker/sane/config/config.local.js&#xA;&#xA;Paste the following contents into the file. This example is based on the documentation on github.&#xA;&#xA;The device id is derived from the AIRSCANDEVICES environment variable of the docker container. The device name can be whatever you want.&#xA;&#xA;If you look carefully you can see I specified the resolutions for my scanner. 
In case you have a different scanner than mine, read the documentation to figure out how to change them for yours.

    /* eslint-disable no-unused-vars */

    module.exports = {
      afterDevices(devices) {
        const deviceNames = {
          /*
            'device id': 'device name'
          */
          'airscan:e0:Hp Envy Pro 6442': 'Hp Envy Pro 6442'
        };

        /*
          replace the id in the filter
        */
        devices
          .filter(d => d.id == 'airscan:e0:Hp Envy Pro 6442')
          .forEach(device => {
            device.features['--resolution'].default = 400;
            device.features['--resolution'].options = [100, 150, 200, 300, 400, 600];
            /*
              Disable batch modes if they are not available on your printer
            */
            device.settings.batchMode.options = ['none', 'manual'];
            /*
              Specify the default pipeline
            */
            device.settings.pipeline.default = ['PNG'];
          });

        devices
          .filter(d => d.id in deviceNames)
          .forEach(d => d.name = deviceNames[d.id]);
      }
    };

Scantopl pipeline for uploading to Paperless-ng

With this additional configuration you can automatically upload scanned documents to Paperless.

    afterConfig(config) {
      const pipelines = [
        {
          extension: 'pdf',
          description: 'Paperless',
          commands: [
            'convert @- -quality 100 tmp-%04d.png &amp;&amp; ls tmp-*.png',
            'convert @- scan-0000.pdf',
            'ls scan-*.*'
          ],
          afterAction: 'Rename for paperless'
        }
      ];

      config.pipelines.splice(0, 0, ...pipelines);
    },
    actions: [
      {
        name: 'Rename for paperless',
        async execute(fileInfo) {
          return await Process.spawn(`mv '${fileInfo.fullname}' '${fileInfo.path}/pl_${fileInfo.name}'`);
        }
      }
    ]

Spin up the docker container

Now it's finally time to set up our awesome scan server through Docker.

You should read the documentation on GitHub about this topic if you have any issues.

In the example command you should change SANED_NET_HOSTS and AIRSCAN_DEVICES to fit the settings of your scanner. If you change the device id in the AIRSCAN_DEVICES variable you'll need to adjust the provided configuration file.

I also deviated a bit from the suggested configuration, changing the web port mapping. In my opinion it's better to use the default web port, so that users can type the address of the server without specifying any port.

Docker command

    docker run -d \
      -e SANED_NET_HOSTS="192.168.0.70" \
      -e AIRSCAN_DEVICES='"Hp Envy Pro 6442" = "http://192.168.0.70/eSCL"' \
      -p 80:8080 \
      -v /var/run/dbus:/var/run/dbus \
      -v /docker/sane/config:/app/config \
      -v /docker/sane/images:/app/data/output \
      --restart unless-stopped \
      --name scanservjs-container \
      --privileged sbs20/scanservjs:latest

Docker compose

    version: "3"
    services:
      scanservjs:
        image: sbs20/scanservjs:latest
        privileged: true
        environment:
          - UID=${UID}
          - GID=${GID}
          - SANED_NET_HOSTS=${SANED_NET_HOSTS}
          - AIRSCAN_DEVICES=${AIRSCAN_DEVICES}
        volumes:
          - /docker/sane/images:/app/data/output
          - /docker/sane/config:/app/config
          - /var/run/dbus:/var/run/dbus
        ports:
          - ${WEB_PORT}:8080
        restart: unless-stopped
      scantopl:
        image: ghcr.io/celedhrim/scantopl:master
        environment:
          - PLURL=http://paperless.instance
          - PLTOKEN=paperless_token
        volumes:
          - /docker/sane/images:/output

Once you execute this command, you've set up your own scan server!
The web interface is just awesome.

Screenshot of the scan server

Remarks

You could set up the scan server with multiple devices. If you have scanners from different manufacturers (or maybe different models) you will need to figure out the driver situation for each of them.

I also noted some issues with my particular scanner. When trying to scan at certain resolutions my scanner would crash (that's why I restricted them in the configuration file). If you have a different device than mine you should make sure to test this.

Another issue I found involves the combination of scanner source and batch selection. For example, if you choose the flatbed with automatic batch selection the scanner crashes.
Choosing the ADF source with manual batch selection has the same effect. This is the only hurdle I wasn't able to overcome. It's not very user friendly, since every time it happens the scanner needs to be restarted. If you figure this one out, please let me know how you did it.

Further suggestions

Network scanner

Do you want to make your scans available over the network?
Then you need to map /docker/sane/images to a network share.
I won't provide detailed instructions since this write-up is already long enough, but I'll give you some hints.
(extra documentation)

    sudo apt install smbclient -yq
    sudo apt install cifs-utils
    sudo nano /etc/fstab

    #add this line to the fstab file
    //fileserver/Scans /docker/sane/images/ cifs username=guest,iocharset=utf8,file_mode=0777,dir_mode=0777

    sudo reboot

Setup print server

This is another write-up on its own and it might not be related to this topic.
But since you went to the trouble of setting up a scan server, you could also set up CUPS to have your own print server, which, by the way, is much easier than what we just did.

Setting up Portainer

Since we set up Docker, we could also install Portainer to get a nice management interface for Docker. You can find the official setup guide on this page.

If you have any remarks or suggestions please let me know …
]]&gt;</description>
      <content:encoded><![CDATA[<p>Printers and scanners…, I have a love/hate relationship with them. They are very handy when they work, but sometimes they cause a lot of struggle.</p>

<p>I recently set myself a challenge. For a couple of months now I have owned an HP Envy Pro 6442, a printer/scanner combo. While not horrible, the experience of using this device as a scanner leaves a lot to be desired. By default, you are limited to using the Windows scanner utility or installing HP’s app on your phone. I thought the experience could be much better if you could somehow scan images through a webpage.</p>

<p>Admittedly, this particular device provides a web interface that allows scanning from the browser. But that interface is not really user friendly and requires authentication (which is rather burdensome for home use). So I searched the web for a solution and, through sheer luck, stumbled upon <a href="https://github.com/sbs20/scanservjs" rel="nofollow">ScanservJs</a>. The purpose of this article is to guide you through the setup for this particular device, but the same can be done on other devices.</p>

<p><em>Edit:</em>
Since I wrote the article I&#39;ve also set everything up with an HP Envy Inspire 7200. I&#39;ve expanded the examples to cover this model as well.</p>

<p>ScanservJs has recently been updated to version 3. The new version has some breaking changes, so the article has been updated to accommodate them. If you already have it installed, pull the latest image and take a look at the directory mappings.</p>

<h2 id="disclaimer">Disclaimer</h2>

<p>This is not by any means an entry-level tutorial. I omit some details that may not be easy to figure out without prior experience.</p>

<p>These are the details I skipped (as far as I&#39;m aware):</p>
<ul><li>Setting up the scanner over the network (<a href="https://support.hp.com/us-en/document/ish_1780623-1698506-16" rel="nofollow">docs</a> for HP Envy Pro 6442)</li>
<li>Accessing the scanner web UI</li>
<li>Setting up the server OS</li>
<li>Assigning a static IP to your server so that users can access it (and specifying a domain name for your server)</li>
<li>Testing this setup with devices other than mine (other models from HP or any other manufacturer)</li></ul>

<h2 id="setting-up-the-scanner">Setting up the scanner</h2>

<p>Based on my experience I recommend assigning the scanner a static IP address. You can achieve this from your router by making a static DHCP reservation. If you aren&#39;t able to do that, you can set the address from the scanner&#39;s web interface.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_EEbyrAADhPxEoUtjo8LQjQ.png" alt=""></p>

<p>Under <strong>Network &gt; Wireless &gt; Network Address (IPv4)</strong> you can select <strong>Manual IP</strong> and enter the IP address you want.</p>

<h2 id="setting-up-the-server"><strong>Setting up the server</strong></h2>

<p>Since the scanner is set up over the network, it doesn&#39;t need to be physically connected to the server. You can set up the server on bare metal or on a hypervisor such as <a href="https://www.proxmox.com/en/proxmox-ve" rel="nofollow">Proxmox VE</a>, <a href="https://xcp-ng.org/" rel="nofollow">XCP-ng</a> or even <a href="https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/" rel="nofollow">Hyper-V</a>. I personally used <a href="https://www.synology.com/en-global/dsm/feature/virtual_machine_manager" rel="nofollow">Virtual Machine Manager</a> on my Synology NAS to make a VM. If you have a QNAP NAS you could also use <a href="https://www.qnap.com/en-me/software/virtualization-station" rel="nofollow">Virtualization Station</a>. Since this detail is not really relevant for our purposes I won&#39;t go into it further. Use whatever suits your needs.</p>

<p>For my OS I used <a href="https://ubuntu.com/download/server" rel="nofollow">Ubuntu 22.04 Server</a>. You might be able to use other flavors of Linux, but success isn&#39;t guaranteed. First of all we want to make sure our OS has the latest updates.</p>

<p><em>Note:</em>
I did some testing on Ubuntu 22.10 and the install failed due to an unavailable dependency (python3-pyqt4).</p>

<pre><code>    sudo apt-get update
    sudo apt-get upgrade
</code></pre>

<h3 id="installing-hp-drivers">Installing HP drivers</h3>

<p>This part of the guide depends on your scanner&#39;s manufacturer. The following steps only apply if you own an HP scanner/printer; if you have another brand, you&#39;ll need to figure this part out on your own.</p>

<p><em>Edit:</em>
The current version of the hplip software is 3.23.12.</p>

<p>On my server, package updates broke my existing installation. I had to install the latest driver version to get everything working again. Needless to say, be careful when applying updates with your package manager, since it won&#39;t automatically update your hplip installation.</p>

<p><em>Drivers for other manufacturers</em>
<a href="https://epson.com/Support/wa00821" rel="nofollow">Epson </a>
<a href="https://www.canon-europe.com/support/consumer_products/operating_system_information/#linux" rel="nofollow">Canon</a>
<a href="https://help.brother-usa.com/app/answers/detail/a_id/52188/~/install-drivers-%28deb-or-rpm%29-using-the-driver-install-tool---linux" rel="nofollow">Brother</a>
<a href="https://www.bchemnet.com/suldr/" rel="nofollow">Samsung</a></p>

<p>Luckily HP provides Linux drivers for their devices. Although the drivers themselves worked just fine, the documentation and installation process are a bit flaky.</p>

<p>First of all, go to <a href="https://developers.hp.com/hp-linux-imaging-and-printing/gethplip" rel="nofollow">https://developers.hp.com/hp-linux-imaging-and-printing/gethplip</a> and select your server’s distro. If you followed along, the distro we&#39;re using is Ubuntu.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_C-dgg8_kvKL4ZZSoiN0zBw.png" alt="Hp driver downloader page"><em>Hp driver downloader page</em></p>

<p>Then just click the download button and the download should start through SourceForge.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_1HACkx4NEQnzbd8hZmQOjg.png" alt="Driver download starting"><em>Driver download starting</em></p>

<p>Once the download is completed you should get a file called <em>hplip-3.22.10.run</em>. Please note that the version might change over time, as well as the download source.</p>

<p>Before we attempt to install the driver we need to install some required packages on the server. You can find the required packages on <a href="https://developers.hp.com/hp-linux-imaging-and-printing/install/manual/distros/ubuntu" rel="nofollow">https://developers.hp.com/hp-linux-imaging-and-printing/install/manual/distros/ubuntu</a>.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_jAIiaIqWb8QWuQP7DFUCMA.png" alt="HP requirements documentation"><em>HP requirements documentation</em></p>

<pre><code>sudo apt-get install --assume-yes libcups2 cups libcups2-dev cups-bsd cups-client avahi-utils libavahi-client-dev libavahi-core-dev libavahi-common-dev libcupsimage2-dev libdbus-1-dev build-essential gtk2-engines-pixbuf ghostscript openssl libjpeg-dev libatk-adaptor libgail-common libsnmp-dev snmp-mibs-downloader libtool libtool-bin libusb-1.0-0-dev libusb-0.1-4 wget policykit-1 policykit-1-gnome automake1.11 python3-dbus.mainloop.pyqt5 python3-reportlab python3-notify2 python3-pyqt5 python3-dbus python3-gi python3-lxml python3-dev python3-pil python-is-python3 libsane libsane-dev sane-utils xsane -yq
</code></pre>

<p>This command should do the trick but please be aware that this could change over time. Check the documentation to make sure you have all the requirements.</p>

<p>Now it&#39;s time to install the driver on the server. At this stage I stumbled upon a hurdle: my server is headless, so I couldn&#39;t figure out how to download the installer file directly onto it. I finally settled on downloading it on my Windows machine and transferring the file to my server through SFTP, using a program called FileZilla.</p>

<p>I made a new directory called <em>hp</em> where I put this file.</p>

<pre><code>    mkdir hp
</code></pre>

<p>Then I went into this directory.</p>

<pre><code>    cd hp
</code></pre>

<p>Once you are in the directory you can proceed with this command
(Please be aware of the version number).</p>

<pre><code>    sh hplip-3.22.10.run
</code></pre>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_pMUGko-1vateMhCqpMVwkw.png" alt="Screenshot of the driver installation process"><em>Screenshot of the driver installation process</em></p>

<p>This is the point where you need a lot of patience because depending on your system this step might take a while.</p>

<p>Once the installation is completed you can proceed setting up the scanner. You can do this with the following command. Please be aware that you will need to adapt the IP address depending on whatever you set up previously.</p>

<pre><code>    sudo hp-setup -i 192.168.0.70
</code></pre>

<p>Once this is completed you can check if everything is working with this command</p>

<pre><code>    scanimage -L
</code></pre>

<p>This command should give the following output. If you don&#39;t see anything at this point your driver is not working.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_uqfQ16agWWvk8reKCU_9fg.png" alt="Example of output"><em>Example of output</em></p>

<h3 id="installing-docker">Installing docker</h3>

<p>The easiest way to get the <a href="https://github.com/sbs20/scanservjs" rel="nofollow">ScanservJs</a> software working is through Docker. I performed the install with the following commands. Please be aware that these might change over time; read the documentation on <a href="https://docs.docker.com/engine/install/ubuntu/" rel="nofollow">https://docs.docker.com/engine/install/ubuntu/</a>.</p>

<pre><code>    sudo apt-get install \
        ca-certificates \
        curl \
        gnupg \
        lsb-release
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    echo \
      &#34;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable&#34; | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
</code></pre>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_zsYuLDRvwzmYMu2qX5THKQ.png" alt="Screenshot of the official Docker documentation"><em>Screenshot of the official Docker documentation</em></p>

<h3 id="setup-the-server-environment-for-the-docker-container">Setup the server environment for the docker container</h3>

<p>Before spinning up the container there are a couple of things we want to do.
First, make a folder to hold all our Docker configurations. Be aware that in my example you should change the user and group to your own. It&#39;s also possible to store the configuration in other folders; if you do so, you will also need to adapt the container setup later on.</p>

<pre><code>    sudo mkdir /docker
    sudo chown sane:sane /docker
</code></pre>

<p>Now make the folders for the container we are about to spin up.</p>

<pre><code>    mkdir -p /docker/sane/config
    mkdir -p /docker/sane/images
</code></pre>

<p>Now make a file containing the configuration.</p>

<pre><code>    nano /docker/sane/config/config.local.js
</code></pre>

<p>Paste the following contents into the file. This example is based on the documentation on <a href="https://github.com/sbs20/scanservjs/blob/master/docs/10-configuration.md#example-file" rel="nofollow">github</a>.</p>

<p>The device id is derived from the <em>AIRSCAN_DEVICES</em> environment variable of the docker container. The device name can be whatever you want.</p>

<p>If you look carefully you can see I specified the resolutions for my scanner. In case you have a different scanner, read the documentation to figure out how to change them for yours.</p>

<pre><code>    /* eslint-disable no-unused-vars */

    module.exports = {
      afterDevices(devices) {
        const deviceNames = {
          /*
            &#39;device id&#39;: &#39;device name&#39;
          */
          &#39;airscan:e0:Hp Envy Pro 6442&#39;: &#39;Hp Envy Pro 6442&#39;
        };

        /*
          replace the id in the filter
        */
        devices
          .filter(d =&gt; d.id == &#39;airscan:e0:Hp Envy Pro 6442&#39;)
          .forEach(device =&gt; {
            device.features[&#39;--resolution&#39;].default = 400;
            device.features[&#39;--resolution&#39;].options = [100, 150, 200, 300, 400, 600];
            /*
              Disable batch modes if they are not available on your printer
            */
            device.settings.batchMode.options = [&#39;none&#39;, &#39;manual&#39;];
            /*
              Specify the default pipeline
            */
            device.settings.pipeline.default = [&#39;PNG&#39;];
          });

        devices
          .filter(d =&gt; d.id in deviceNames)
          .forEach(d =&gt; d.name = deviceNames[d.id]);
      }
    };
</code></pre>
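<p>To make the link between the environment variable and this config explicit, here is a small sketch of how the device id used in the filter above is composed: it is the name you chose in <em>AIRSCAN_DEVICES</em>, prefixed by the sane-airscan backend. The <em>e0</em> device index is an assumption based on my setup and may differ on yours.</p>

<pre><code>    # Hypothetical sketch: composing the airscan device id used in config.local.js.
    # "e0" is the device index from my setup (an assumption) - compare with scanimage -L.
    NAME="Hp Envy Pro 6442"
    DEVICE_ID="airscan:e0:${NAME}"
    echo "${DEVICE_ID}"
</code></pre>

<p>This prints <em>airscan:e0:Hp Envy Pro 6442</em>, which is exactly the id the configuration file filters on.</p>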

<p><strong>Scantopl pipeline for uploading to Paperless-ng</strong></p>

<p>With this additional configuration you can automatically upload scanned documents to Paperless. The snippet goes into the same <em>config.local.js</em>, alongside <em>afterDevices</em>.</p>

<pre><code>    afterConfig(config) {
      const pipelines = [
        {
          extension: &#39;pdf&#39;,
          description: &#39;Paperless&#39;,
          commands: [
            &#39;convert @- -quality 100 tmp-%04d.png &amp;&amp; ls tmp-*.png&#39;,
            &#39;convert @- scan-0000.pdf&#39;,
            &#39;ls scan-*.*&#39;
          ],
          afterAction: &#39;Rename for paperless&#39;
        }
      ];

      config.pipelines.splice(0, 0, ...pipelines);
    },
    actions: [
      {
        name: &#39;Rename for paperless&#39;,
        async execute(fileInfo) {
          return await Process.spawn(`mv &#39;${fileInfo.fullname}&#39; &#39;${fileInfo.path}/pl_${fileInfo.name}&#39;`);
        }
      }
    ]
</code></pre>

<h3 id="spin-up-the-docker-container">Spin up the docker container</h3>

<p>Now it&#39;s finally time to set up our awesome scan server through Docker.</p>

<p>You should read the documentation on <a href="https://github.com/sbs20/scanservjs/blob/master/docs/02-docker.md" rel="nofollow">github</a> about this topic if you have any issues.</p>

<p>In the example command you should change <em>SANED_NET_HOSTS</em> and <em>AIRSCAN_DEVICES</em> to fit the settings of your scanner. If you change the device id in the <em>AIRSCAN_DEVICES</em> variable you&#39;ll need to adjust the provided configuration file.</p>

<p>I also deviated a bit from the suggested configuration changing the web port mapping. In my opinion it&#39;s better to use the default web port, so that users can type the address of the server without specifying any port.</p>

<p>Docker command</p>

<pre><code>    docker run -d \
      -e SANED_NET_HOSTS=&#34;192.168.0.70&#34; \
      -e AIRSCAN_DEVICES=&#39;&#34;Hp Envy Pro 6442&#34; = &#34;http://192.168.0.70/eSCL&#34;&#39; \
      -p 80:8080 \
      -v /var/run/dbus:/var/run/dbus \
      -v /docker/sane/config:/app/config \
      -v /docker/sane/images:/app/data/output \
      --restart unless-stopped \
      --name scanservjs-container \
      --privileged sbs20/scanservjs:latest
</code></pre>

<p>Docker compose</p>

<pre><code>    version: &#34;3&#34;
    services:
      scanservjs:
        image: sbs20/scanservjs:latest
        privileged: true
        environment:
          - UID=${UID}
          - GID=${GID}
          - SANED_NET_HOSTS=${SANED_NET_HOSTS}
          - AIRSCAN_DEVICES=${AIRSCAN_DEVICES}
        volumes:
          - /docker/sane/images:/app/data/output
          - /docker/sane/config:/app/config
          - /var/run/dbus:/var/run/dbus
        ports:
          - ${WEB_PORT}:8080
        restart: unless-stopped
      scantopl:
        image: ghcr.io/celedhrim/scantopl:master
        environment:
          - PLURL=http://paperless.instance
          - PLTOKEN=paperless_token
        volumes:
          - /docker/sane/images:/output
</code></pre>
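<p>The compose file above relies on variables such as <em>${WEB_PORT}</em>, which Docker Compose reads from an <em>.env</em> file next to the compose file. A minimal sketch; all values are placeholders from my setup, and the quoting of <em>AIRSCAN_DEVICES</em> may need adjusting for your Compose version:</p>

<pre><code>    # .env - example values, adjust to your own setup
    UID=1000
    GID=1000
    WEB_PORT=80
    SANED_NET_HOSTS=192.168.0.70
    AIRSCAN_DEVICES='"Hp Envy Pro 6442" = "http://192.168.0.70/eSCL"'
</code></pre>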

<p>Once you execute this command, you&#39;ve set up your own scan server!
The web interface is just awesome.</p>

<p><img src="https://images.claeyscloud.com/images/2024/11/27/1_9syP6JpIVOlGaZH_aExeNA.png" alt="Screenshot of the scan server"><em>Screenshot of the scan server</em></p>

<h2 id="remarks">Remarks</h2>

<p>You could set up the scan server with multiple devices. If you have scanners from different manufacturers (or maybe different models) you will need to figure out the driver situation for each of them.</p>

<p>I also noted some issues with my particular scanner. When trying to scan at certain resolutions my scanner would crash (that&#39;s why I restricted them in the configuration file). If you have a different device than mine you should make sure to test this.</p>

<p>Another issue I found involves the combination of scanner source and batch selection. For example, if you choose the flatbed with automatic batch selection the scanner crashes.
Choosing the ADF source with manual batch selection has the same effect. This is the only hurdle I wasn&#39;t able to overcome, and it&#39;s not very user friendly, since every time it happens the scanner needs to be restarted. If you figure this one out, please let me know how you did it.</p>

<h2 id="further-suggestions">Further suggestions</h2>

<h3 id="network-scanner">Network scanner</h3>

<p>Do you want to make your scans available over the network?
Then you need to map <em>/docker/sane/images</em> to a network share.
I won&#39;t provide detailed instructions since this write-up is already long enough, but I&#39;ll give you some hints (<a href="https://linuxhint.com/mount-smb-shares-ubuntu/" rel="nofollow">extra documentation</a>).</p>

<pre><code>    sudo apt install smbclient -yq
    sudo apt install cifs-utils
    sudo nano /etc/fstab
    
    #add this line to the fstab file
    //fileserver/Scans /docker/sane/images/ cifs username=guest,iocharset=utf8,file_mode=0777,dir_mode=0777
    

    sudo reboot
</code></pre>
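<p>A side note on the fstab line above: it embeds the username directly. If your share requires a password, a credentials file keeps it out of the world-readable <em>/etc/fstab</em>. A sketch under assumed paths and values:</p>

<pre><code>    # /root/.smbcredentials (protect it with: chmod 600) - assumed location
    username=guest
    password=yourpassword

    # matching fstab line
    //fileserver/Scans /docker/sane/images/ cifs credentials=/root/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777
</code></pre>

<p>After editing fstab, running <em>sudo mount -a</em> applies the change without a reboot.</p>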

<h3 id="setup-print-server">Setup print server</h3>

<p>This is another write-up on its own and it might not be related to this topic.
But since you went to the trouble of setting up a scan server, you could also set up <a href="http://www.cups.org/" rel="nofollow">CUPS</a> to have your own print server, which, by the way, is much easier than what we just did.</p>

<h3 id="setting-up-portainer">Setting up Portainer</h3>

<p>Since we set up Docker, we could also install <a href="https://www.portainer.io/" rel="nofollow">Portainer</a> to get a nice management interface for Docker. You can find the official setup guide on this <a href="https://docs.portainer.io/start/install/server/docker/linux" rel="nofollow">page</a>.</p>
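<p>If you prefer to keep everything in one place, Portainer can also be run from a compose file. A minimal sketch based on the official guide; double-check the linked page, since image tags and ports may change:</p>

<pre><code>    services:
      portainer:
        image: portainer/portainer-ce:latest
        ports:
          - 9443:9443
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - portainer_data:/data
        restart: unless-stopped
    volumes:
      portainer_data:
</code></pre>

<p>Portainer then serves its web UI on <em>https://your-server:9443</em>.</p>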

<p>If you have any remarks or suggestions please let me know …</p>
]]></content:encoded>
      <guid>https://blog.claeyscloud.com/david/scanservjs-a-make-your-own-scan-server</guid>
      <pubDate>Thu, 28 Sep 2023 07:27:08 +0000</pubDate>
    </item>
  </channel>
</rss>