Azure Labs – What is it and use cases

As a trainer, I always have a set of prerequisites when I’m about to deliver a training. Usually those prerequisites are sent weeks in advance, but most of the time, if not always, the participants don’t have them installed. What I keep in my back pocket is an ARM template with two or three predefined images, which I mass-deploy before a training and then provide access to the participants so we avoid this hassle.

The reality is that this approach is complicated. My images are created with Packer within an Azure DevOps pipeline, and while it’s all fun and geeky to do everything yourself, you don’t always have time to update the packages, you forget VMs running, and so on.

I was stoked when Microsoft came out with a new Azure feature called Lab Services, which opened up the possibility of doing everything I just mentioned in a simple, secure setting.

This offering is similar to DevTest Labs, but it provides a new portal which Lab creators and Lab participants can access without much hassle.

So how can we use it? 

Creating and using Azure Lab Services is pretty simple, as shown below.

First, create a Lab Services account: go to the Azure portal, type in Lab Services, and create it in a resource group.

After you’ve created the lab account, your next step is to add yourself and/or other people to the Lab Creator RBAC role via the IAM blade, because even if you’re an Owner, you will not be able to use the labs otherwise. Once that’s done, you can proceed to https://labs.azure.com
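If you prefer the CLI, the role assignment can be sketched with the Azure CLI; the user and resource group names below are placeholders for your own environment:

```shell
# Assign the built-in "Lab Creator" role to a user, scoped to the
# resource group holding the lab account (names are placeholders).
az role assignment create \
  --assignee "trainer@contoso.com" \
  --role "Lab Creator" \
  --resource-group "rg-labservices"
```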

At first look, the lab portal is pretty simple. If it’s newly created, you will be prompted to create a new lab.

Step by step process

If you want to create a new lab, go to the new lab icon in the upper left corner, type in a name, and set the maximum number of VMs for the lab. Don’t worry, the number you set there is not permanent and you can change it later if required.

After you press save, you will be presented with the next screen, where you can select which virtual machine image you want to use for your template. A number of images are presented in that list, but if you want to expand it, you have to go to the Labs resource in the Azure portal and select Marketplace images from the Policies tab, where you have the option of enabling other types of images.

Once you select the image you want and press next, you will be prompted with the next screen, where you input the username and password for the template VM and all the VMs that will be created from it.

After you press create, the template will be created, and you’re going to have to wait a while for it to complete 🙂

Next up is the configuration phase, where you connect to the template VM, do your setup, and then mark the lab configuration as complete.

The next screen is a review screen, where you can either publish the lab or save it for later.

The publishing phase takes a while, so this is the time to get a donut or hit that Netflix show 🙂

What does it look like?

Once the lab is done, it will pop up on the main screen, where, if you’re a lab creator, you will have the option of customizing some settings for the lab, like:

  • Re-configuring the template
  • Republishing the lab
  • Setting up an on/off schedule for the VMs
  • Configuring a restricted user list for the lab, or making it public to anyone with the registration link

Caveats 

One of the minor caveats of the solution is that participants are required to log in using either an MSA or a work account. I call it minor because most of the time the participants have one, but when you’re running public hands-on labs, workshops, and similar settings, you cannot expect all of the participants to have one.

The solution to this problem is Azure B2C. You create an Azure B2C tenant, link it to your Azure subscription, create B2C accounts, and add them to the lab service. That’s the best solution for these kinds of cases because, first, you don’t deal with e-mail accounts or any other PII, and second, you have complete control over the user accounts.

Another issue I found is that if you’re a Lab Creator on multiple labs with the same account, the portal will not prompt you for which lab you want, so I’m waiting for a fix on that.

As a final note, this is an excellent offering for me, and I will be using it heavily for my training sessions and workshops.

Azure DevOps and “VSProfessional” licenses

This is something I encountered at a client, and I figured I should write it down because it took a while to find the solution, and the only answer came via a support ticket to Microsoft.

A while back, when Azure DevOps was called VSTS or Visual Studio Online, you had the possibility to link the tenant to your Azure subscription for billing purposes. This allowed you to purchase basic use-right licenses for the platform, and it even allowed you to purchase Visual Studio Professional licenses, which licensed the users’ VS Pro installations via the platform.

Azure view

The problem I faced with this customer was that they were in exactly this position, and suddenly their VS Pro licenses started to expire and stopped working. We tried to figure out what the problem was and why it didn’t work, but unfortunately we hit a dead end and had to open a support ticket for assistance while investigating in parallel.

We knew that Visual Studio monthly licenses were located in the marketplace – https://marketplace.visualstudio.com/items?itemName=ms.vs-professional-monthly – but we didn’t understand the correlation between one and the other.

On a hunch, we purchased a few VS Pro Monthly licenses for some users to test a theory, and luckily it worked, but we still didn’t have an answer as to why the issue existed.

The answer came from the support person on the Microsoft end, who provided an awesome explanation of why the problem existed and how to fix it.

The problem was that licensing users via the Azure portal was deprecated a while ago, and Microsoft didn’t have a solution for seamless migration to the new licensing model, so they allowed it to keep working for existing customers while removing the capability from the portal.

The licenses that appeared on the billing invoice were called “VSPRO – Monthly”, which coincidentally matches the name of the VS Pro licenses from the marketplace. In reality, the licenses you could get from the Azure portal were “Professional” licenses tied to the old VSOnline model, which were allowed to work in parallel until they died by themselves.

Basically, the old Professional license allowed you to run Visual Studio Professional and be a licensed user in VSTS / Azure DevOps, but because it was deprecated, updates and newer versions of Visual Studio (starting from 2017 and going to 2019) simply could no longer parse the licensing info assigned to the user’s work account, and the installations ended up in Extended Trial mode.

The solution was to simply purchase the licenses from the marketplace, assign them to all the “Professional” users, and after a day or two remove the old offering from the Azure portal.

After doing the whole operation, everything licensed correctly and the issue was solved.

Signing out. Have a good one!

AKS – Working with GPUs

I’ve discussed AKS before, but recently I have been doing a lot of production deployments of AKS, and the most recent one was with NVIDIA GPUs.

This blog post will take you through my learnings from a deployment of this type, because some things are not as simple as they look.

The first problems come after deploying the cluster. More often than not, the NVIDIA driver doesn’t get installed and you cannot deploy any type of GPU-constrained resources. The solution is basically to install the NVIDIA device plugin as a DaemonSet and go from there, but that also depends on the AKS version.

For example, if your AKS cluster is running version 1.10 or 1.11, then the NVIDIA device plugin must be 1.10 or 1.11 as well, or whatever matches your version, located here
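A minimal sketch of such a DaemonSet, following the shape of the upstream NVIDIA device plugin manifest (treat the exact image tag and fields as assumptions to verify against your cluster version):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nvidia-device-plugin-daemonset
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: nvidia-device-plugin-ds
  template:
    metadata:
      labels:
        name: nvidia-device-plugin-ds
    spec:
      containers:
      - name: nvidia-device-plugin-ctr
        # Tag must match your Kubernetes version, e.g. 1.10 / 1.11
        image: nvidia/k8s-device-plugin:1.11
        securityContext:
          privileged: true
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
```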

The code snip from above creates a DaemonSet that installs the NVIDIA device plugin on every node provisioned in your cluster, so for three nodes you will have three NVIDIA pods.

The problem can appear when you upgrade your cluster. You go to Azure and upgrade the cluster, and guess what: you forgot to update the YAML file, and everything that relies on those GPUs dies on you.

The best example I can give is the TensorFlow Serving container, which crashed with a very “informative” error that the NVIDIA version was wrong.

Another problem that appears is monitoring. How can I monitor GPU usage? What tools should I use?

Here you have a good solution that can be deployed via Helm. If you do a helm search for prometheus-operator, you will find the best solution to monitor your cluster and your GPU 🙂

The prometheus-operator chart comes with Prometheus, Grafana and Alertmanager, but out of the box you will not get the GPU metrics required for monitoring, because of an error in the Helm chart which scrapes the cAdvisor metrics over HTTPS; the solution is to set the exporter’s HTTPS option to false.
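With the prometheus-operator chart, to the best of my knowledge this is the kubelet ServiceMonitor setting; double-check the value path against your chart version:

```yaml
# values.yaml override for the prometheus-operator chart
kubelet:
  serviceMonitor:
    # Scrape kubelet/cAdvisor metrics over HTTP instead of HTTPS
    https: false
```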

Then import the dashboard required to monitor your GPUs, which you can find here: https://grafana.com/dashboards/8769/revisions, and set it up as a ConfigMap.

In most cases, you will want to monitor your cluster from outside, and for that you will need to install / upgrade the prometheus-operator chart with the grafana.ingress.enabled value set to true and grafana.ingress.hosts={domain.tld}
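A hedged example of the install/upgrade command; the release name, namespace and domain are placeholders:

```shell
# Install or upgrade prometheus-operator with the cAdvisor HTTPS
# workaround and Grafana exposed through an ingress (names are examples).
helm upgrade --install monitoring stable/prometheus-operator \
  --namespace monitoring \
  --set kubelet.serviceMonitor.https=false \
  --set grafana.ingress.enabled=true \
  --set "grafana.ingress.hosts={grafana.domain.tld}"
```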

Next in line, you have to deploy the actual containers that use the GPU. As a rule, a container cannot use part of a GPU, only a whole GPU, so tread carefully when you’re sizing your cluster, because you can only scale horizontally as of now.

When you’re defining the pod, add the following snip to the container spec:
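For example, requesting one whole GPU in the container spec looks like this (nvidia.com/gpu is the resource name exposed by the device plugin):

```yaml
resources:
  limits:
    nvidia.com/gpu: 1
```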

The end result would look like this deployment:
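As an illustrative sketch, a GPU deployment could look like the following; the image, port and names are placeholders, not the exact manifest from my deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tf-serving-gpu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tf-serving-gpu
  template:
    metadata:
      labels:
        app: tf-serving-gpu
    spec:
      containers:
      - name: tf-serving
        # Illustrative image; use the one that matches your workload
        image: tensorflow/serving:latest-gpu
        ports:
        - containerPort: 8501
        resources:
          limits:
            # One whole GPU per container; fractions are not possible
            nvidia.com/gpu: 1
```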

What happens if everything blows up and nothing is working?

In some rare cases, the NVIDIA driver may blow up your data nodes. Yes, that happened to me, and I needed to solve it.

The manifestation looks like this: the ingress controller works randomly, cluster resources show as evicted, the NVIDIA device plugin restarts frequently, and your GPU containers are stuck in Pending.

The way to fix it is to first delete the pods in Evicted / Error status by running this command:
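One way to sweep them, sketched as a pipeline over kubectl’s default output columns:

```shell
# List all pods, keep the ones in Evicted or Error state,
# and issue a delete for each (namespace is column 1, name is column 2).
kubectl get pods --all-namespaces | \
  grep -E 'Evicted|Error' | \
  awk '{print "kubectl delete pod " $2 " -n " $1}' | \
  sh
```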

Then restart all the data nodes from Azure. You can find them in the resource group called MC_&lt;ClusterRG&gt;_&lt;ClusterName&gt;_&lt;Region&gt;
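Restarting the nodes can also be sketched with the Azure CLI; the resource group name below is an example of the MC_ naming pattern:

```shell
# Restart every VM in the cluster's node resource group
# (resource group name is an example; substitute your own).
az vm restart --ids $(az vm list \
  --resource-group "MC_myRG_myCluster_westeurope" \
  --query "[].id" -o tsv)
```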

That being said, it’s fun times to run AKS in production 🙂

Signing out.

Azure Firewall – What is it and how to use it

There’s no shortage of solutions when it comes to NGFWs in the cloud, but they all come at a hefty price, with a steep learning curve, and require continuous maintenance from the ops teams. We have solutions from Barracuda, FortiGate, Check Point, Cisco and so on, but in the end they are Linux virtual machines running some third-party software, with or without built-in HA. Azure Firewall is here to provide another option that can solve some of the issues that come with NVAs deployed in the cloud… but not all of them.

Let’s start off with what Azure Firewall can and cannot do at this moment:

Azure Firewall is:

  • A stateful firewall as a service
  • Highly available out of the box
  • Capable of FQDN filtering
  • Able to use FQDN tags – at the time of writing, there is support for Windows Update, ASE and Azure Backup
  • Able to filter network traffic with custom rules
  • Capable of outbound SNAT
  • Capable of inbound DNAT
  • A central place to create, enforce, and log application and network connectivity policies across Azure subscriptions and VNETs

Azure Firewall is NOT:

  • An Intrusion Prevention System (IPS)
  • An Intrusion Detection System (IDS)

If you compare Azure Firewall with any NGFW solution from the marketplace, you will see that it lacks a lot of features and might not appear to solve any of today’s issues, but stay a while and listen 🙂

Think of this: the current third-party firewalls started in the on-premises world as physical appliances and then slowly evolved into virtual appliances, so most (not all) of them have features that are useless in the cloud (and that you pay for). Another thing is that you have to manage them end to end and even back them up. They are not a managed service that you license from a provider and just consume; they are full-blown IaaS machines, and the list can go on.

What is Azure Firewall for?

Azure Firewall is a cloud-native stateful firewalling service that is not deployed as a VM. It’s a security service fully managed by Microsoft that scales automatically and requires no maintenance from the user (hence the fully managed part); the only thing you need to do is configure it correctly.

At the time of writing this post, Azure Firewall blocks all inbound/outbound traffic by default, with the possibility to allow IP addresses, FQDNs or CIDR blocks. It deploys a UDR in the VNET it creates to redirect the 0.0.0.0/0 traffic through it, just like an NVA, and it also plugs into Azure Monitor. I suspect it will plug into Traffic Analytics and ASC as well, because that makes sense in the long term.
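As a sketch, allowing outbound traffic to a single FQDN looks something like this with the Azure CLI; the firewall was in preview at the time, so the extension and flags may differ, and all names here are placeholders:

```shell
# Requires the azure-firewall CLI extension; names are placeholders.
az extension add --name azure-firewall

az network firewall application-rule create \
  --resource-group "rg-network" \
  --firewall-name "fw-hub" \
  --collection-name "App-Allow" \
  --name "Allow-GitHub" \
  --priority 100 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses "10.0.0.0/24" \
  --target-fqdns "github.com"
```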

Deploying an Azure Firewall is pretty simple and doesn’t require too much configuration; a reference architecture looks something like this:

Azure Firewall Ref Architecture ; Source MS Docs

Best practices around Azure Firewall show that it should be deployed in a hub & spoke architecture, where you place your core / shared services in the hub and have spokes that connect through it. The main reason for this is that the entry price is 780 EUR per scaling unit. The way I see it, combining it with NSGs, App Gateway WAF and other services like DDoS Protection Standard adds more value for the enterprise client than anything else.

Ref Architecture; MS Ignite

Finally, I would like to add that, from my point of view, Azure Firewall is still a work in progress, but a very welcome addition to the cloud security offering that Microsoft provides in Azure.

Resetting RHEL Root PW with Azure Serial Console

Oh Snap.

Has this problem ever happened to you? If yes, then you know that the way to solve it is by booting the distro into single-user mode. But how do you do that in Azure? Serial Console to the rescue!

Usually this is easily solvable using Run Command or the Reset Password blade, but in this case imagine that they don’t work. This is the case for SAP deployments using the RHEL VMs: you cannot do anything if you’ve lost access, and if the VM crashes, it’s even worse.

Nope, No SYSRQ for you.

You need to get to GRUB so you can boot the VM in single-user mode. The problem here is that the VM boots too fast for the serial console to connect and press the ESC key at the magic moment.

So what can you do?

The solution to that problem is to stop the VM without de-allocating it. This means that the VM on the Hyper-V host in the backend is not deleted but preserved, so you can have the serial console on standby to have a chance at that magic moment. How do you know which state you’re in? Check figs 1 and 2.

Fig.1 This is where you have to be.
Fig.2. If you’re here, repeat the first step.

Once you’ve gotten to the screen where the VM is starting, this is what you need to watch for before mashing the ESC key:

Once you’ve managed to enter GRUB, you’re home free to reset the password using the steps below. Press e in the Serial Console to edit the first OS line.

  • Go to the kernel line, which starts with linux16
  • Add rd.break to the end of the line, which will break the boot cycle. If SELinux is enabled, add rd.break enforcing=0 instead
  • Exit GRUB and boot with the rd.break option by pressing ctrl+x
  • During this boot, the VM will go into Emergency Mode, where you have to remount the system root read-write using the “mount -o remount,rw /sysroot” command
  • This drops you into single-user mode, where you have to type chroot /sysroot to switch into the sysroot jail and then reset the password for the root user with passwd
  • Edit the sshd_config file (“nano /etc/ssh/sshd_config”, or use your preferred editor) to enable root access over the Serial Console by setting PermitRootLogin yes
  • Once you’re done, reboot the VM and you’ve got root access
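The in-VM part of the steps above can be condensed as follows; the sed edit replaces the manual nano step, and the autorelabel touch is standard RHEL practice for SELinux systems (an addition on my part, not from the steps above):

```shell
# From the rd.break emergency shell:
mount -o remount,rw /sysroot      # make the real root writable
chroot /sysroot                   # switch into the installed system
passwd root                       # set a new root password
# Allow root login over SSH / Serial Console (revert this later!)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# If SELinux is enabled, relabel the filesystem on next boot
touch /.autorelabel
exit
reboot -f
```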

GIF from Azure Docs – Grub editing representation

After you’re done resetting all the passwords and installing all the agents so you’re not confronted with this again, set PermitRootLogin no back and you’re golden 🙂

Have a good one!
