Migrating containers to Universal Base Images

Going back to an article I wrote a while ago, Container security in the Cloud. A state of uncertainty., I talked about the problems we can run into if we don't take good care of our containers. We need a patch strategy for our containers, and we need to reduce the attack surface as much as possible to avert possible disasters.

In that article, I talked about migrating to distroless images. Still, after careful thought and some experience dealing with multiple complex systems, I realized that we need something with an even smaller attack surface.

After working for a while with RHEL UBI images, I got to the point where their minimal images are good enough for operational work and an excellent starting point for securing other systems. PowerShell 7 runs on anything that has the right packages, so running PowerShell in a RHEL UBI image is entirely doable.

The first thing you need to do is analyze exactly what you're running, and then set up a migration plan.

My suggestion would be to start by building the base images you require and then copy in your PS files along with the modules they need to run.

Let's start with the base PowerShell image. In the Dockerfile below, you can see how to install PowerShell in a ubi-minimal image.


# NOTE: the base image reference was stripped from the original post;
# Red Hat's ubi-minimal image is what the article describes
FROM registry.access.redhat.com/ubi8/ubi-minimal

# Define ENVs for Localization/Globalization
ENV PSModuleAnalysisCachePath=/var/cache/microsoft/powershell/PSModuleAnalysisCache/ModuleAnalysisCache

# NOTE: the repo URL was stripped from the original post; Microsoft's RHEL 8
# package repo is the usual source for the powershell package
RUN curl https://packages.microsoft.com/config/rhel/8/prod.repo | tee /etc/yum.repos.d/microsoft.repo && microdnf update -y

RUN microdnf install -y powershell

# Initialize the module analysis cache so the first pwsh run doesn't pay that cost
RUN pwsh \
    -NoLogo \
    -NoProfile \
    -Command " \
    \$ErrorActionPreference = 'Stop' ; \
    \$ProgressPreference = 'SilentlyContinue' ; \
    while(!(Test-Path -Path \$env:PSModuleAnalysisCachePath)) {  \
    Write-Host "'Waiting for \$env:PSModuleAnalysisCachePath'" ; \
    Start-Sleep -Seconds 6 ; } \
    "

CMD [ "pwsh" ]

As you can see, the process is pretty simple. The UBI image uses microdnf to install packages, and the last RUN command initializes the module analysis cache. Take the file from above, slap it in a docker build, and voila, you have a PowerShell 7 base image that can run all your .ps1 scripts.
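Building and smoke-testing the image is then a couple of commands; the image name below is just an example, not something from the original post:

```shell
# build the base image from the Dockerfile above (tag name is illustrative)
docker build -t powershell-ubi:latest .

# quick smoke test: print the PowerShell version from inside the container
docker run --rm powershell-ubi:latest pwsh -NoLogo -Command '$PSVersionTable.PSVersion'
```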

You can create another base image that contains the modules you require. In my case, I have a modules image that I build monthly, with the Az, Microsoft.Graph, and other modules installed. I do this because installing those modules takes too much time, and it's not worth always having the latest bits.

# NOTE: the image name was stripped from the original post; build FROM the
# PowerShell base image created earlier (name is illustrative)
FROM powershell-ubi:latest AS build

RUN pwsh \
  -NoLogo \
  -NoProfile \
  -Command " \
  Set-PSRepository PSGallery -InstallationPolicy Trusted; \
  Save-Module -Name PSWriteHTML -Path modules/ -Confirm:\$False -Force;"

RUN pwsh \
  -NoLogo \
  -NoProfile \
  -Command " \
  Set-PSRepository PSGallery -InstallationPolicy Trusted; \
  Save-Module -Name CosmosDB -Path modules/ -Confirm:\$False -Force;"

RUN pwsh \
  -NoLogo \
  -NoProfile \
  -Command " \
  Set-PSRepository PSGallery -InstallationPolicy Trusted; \
  Save-Module -Name Microsoft.Graph -Path modules/ -Confirm:\$False -Force;"

RUN pwsh \
  -NoLogo \
  -NoProfile \
  -Command " \
  Set-PSRepository PSGallery -InstallationPolicy Trusted; \
  Save-Module -Name Az -Path modules/ -Confirm:\$False -Force;"

You can say it's not the best Dockerfile you've ever seen, but it works, and in the end, I don't care how many layers these module images have. When I need a new module installed, I copy lines 3-8, paste them at the bottom, modify them, and commit to Git.
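If the layer count ever does start to bother you, the repeated RUN blocks can be collapsed into a single loop. This is just a sketch reusing the module list from above:

```dockerfile
# one RUN keeps everything in a single layer; the module list matches the example above
RUN pwsh \
  -NoLogo \
  -NoProfile \
  -Command " \
  Set-PSRepository PSGallery -InstallationPolicy Trusted; \
  'PSWriteHTML','CosmosDB','Microsoft.Graph','Az' | ForEach-Object { \
  Save-Module -Name \$_ -Path modules/ -Confirm:\$False -Force };"
```

The trade-off is cache invalidation: with one big RUN, adding a single module rebuilds everything, while the one-RUN-per-module layout only rebuilds the layers below the change, which is exactly why the copy-paste approach rebuilds faster in practice.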

The end build that ends up in the automation flow looks something like this:

# NOTE: the image names were stripped from the original post; the first stage is
# the modules image built above, the final stage is the PowerShell base image
FROM powershell-ubi-modules:latest AS build
FROM powershell-ubi:latest

COPY --from=build modules/ /root/.local/share/powershell/Modules

COPY . .

A simple image that contains the bits I care about, while Kubernetes supplies the run command. This example works perfectly for scripts that need to run as cronjobs: you can have everything in one image and then set up the cronjob with the correct run command.
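A CronJob manifest for such an image could look like the sketch below; every name, schedule, and path here is an example, not something from the original setup:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report                # example name
spec:
  schedule: "0 2 * * *"               # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: report
              image: myregistry/powershell-scripts:latest   # the image built above
              command: ["pwsh", "-NoLogo", "-File", "/scripts/report.ps1"]
          restartPolicy: OnFailure
```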

What about Azure Functions?

We can use almost the same approach for Azure Functions. The process is a bit different, but the result is the same, at least for PowerShell functions :)
Building the Azure Functions PowerShell runtime is a two-part process.

The main part is building the runtime image and then copying everything over to the UBI image.

In a nutshell, the Azure Functions runtime requires ASP.NET Core, and there are UBI images for that as well, so we can adjust Microsoft's Dockerfile to fit our needs.

Later edit: I have updated the Dockerfile so that it has the latest bits.

# NOTE: the build-stage image name was stripped from the original post;
# Microsoft builds the Functions host with the .NET SDK image (tag is an assumption)
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS runtime-image

# HOST_VERSION must be supplied at build time (e.g. --build-arg HOST_VERSION=4.x.x)
ARG HOST_VERSION

ENV PublishWithAspNetCoreTargetManifest=false

# Build WebJobs.Script.WebHost from source
# (the clone URL was stripped from the original post; this is the public host repo)
RUN BUILD_NUMBER=$(echo ${HOST_VERSION} | cut -d'.' -f 3) && \
    git clone --branch v${HOST_VERSION} https://github.com/Azure/azure-functions-host /src/azure-functions-host && \
    cd /src/azure-functions-host && \
    HOST_COMMIT=$(git rev-list -1 HEAD) && \
    dotnet publish -v q /p:BuildNumber=$BUILD_NUMBER /p:CommitHash=$HOST_COMMIT src/WebJobs.Script.WebHost/WebJobs.Script.WebHost.csproj -c Release --output /azure-functions-host --runtime linux-x64 && \
    rm -rf /azure-functions-host/workers/powershell/7 && \
    mv /azure-functions-host/workers /workers && mkdir /azure-functions-host/workers && \
    rm -rf /root/.local /root/.nuget /src

# Install extension bundles
# (the download URLs were stripped from the original post; the Functions CDN
# is the usual source, and the bundle versions must be supplied as build args)
ARG EXTENSION_BUNDLE_VERSION_V2
ARG EXTENSION_BUNDLE_VERSION_V3
RUN apt-get update && \
    apt-get install -y gnupg wget unzip && \
    EXTENSION_BUNDLE_FILENAME_V2=Microsoft.Azure.Functions.ExtensionBundle.${EXTENSION_BUNDLE_VERSION_V2} && \
    wget https://functionscdn.azureedge.net/public/ExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/${EXTENSION_BUNDLE_VERSION_V2}/${EXTENSION_BUNDLE_FILENAME_V2}.zip -O /${EXTENSION_BUNDLE_FILENAME_V2}.zip && \
    mkdir -p /FuncExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/$EXTENSION_BUNDLE_VERSION_V2 && \
    unzip /${EXTENSION_BUNDLE_FILENAME_V2}.zip -d /FuncExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/$EXTENSION_BUNDLE_VERSION_V2 && \
    EXTENSION_BUNDLE_FILENAME_V3=Microsoft.Azure.Functions.ExtensionBundle.${EXTENSION_BUNDLE_VERSION_V3} && \
    wget https://functionscdn.azureedge.net/public/ExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/${EXTENSION_BUNDLE_VERSION_V3}/${EXTENSION_BUNDLE_FILENAME_V3}.zip -O /${EXTENSION_BUNDLE_FILENAME_V3}.zip && \
    mkdir -p /FuncExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/$EXTENSION_BUNDLE_VERSION_V3 && \
    unzip /${EXTENSION_BUNDLE_FILENAME_V3}.zip -d /FuncExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/$EXTENSION_BUNDLE_VERSION_V3 && \
    find /FuncExtensionBundles/ -type f -exec chmod 644 {} \;


# NOTE: the final-stage image name was stripped from the original post; an
# ASP.NET Core-capable UBI image goes here, e.g. Red Hat's .NET runtime image
FROM registry.access.redhat.com/ubi8/dotnet-60-runtime

# set runtime env variables
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    HOME=/home \
    FUNCTIONS_WORKER_RUNTIME=powershell
# copy bundles, host runtime and powershell worker from the build image
COPY --from=runtime-image ["/azure-functions-host", "/azure-functions-host"]
COPY --from=runtime-image ["/FuncExtensionBundles", "/FuncExtensionBundles"]
COPY --from=runtime-image ["/workers/powershell", "/azure-functions-host/workers/powershell"]

CMD [ "/azure-functions-host/Microsoft.Azure.WebJobs.Script.WebHost" ]

You will end up with a tagged image that runs the UBI .NET runtime with PowerShell scripts. Integrating the image is quite simple and well documented on Microsoft Docs, so I won't go over it here.

So why bother? Well, as you can see, the process is not complicated, and it reduces the attack footprint, the image footprint, and even memory usage, which gives you higher density in your clusters, ACI instances, or App Service plans.

That being said, have a good one!