Moving to the cloud is “easy”; managing it is another ordeal.
I’m going to start with a disclaimer: this post focuses on achieving governance and security in Azure. This doesn’t mean that what I write here cannot apply to AWS or Google Cloud; they have their own technologies that can help you achieve full governance.
When they were first offered a long time ago, Azure SQL Databases always had a mystery attached to them regarding which performance tier you should use.
A Database Throughput Unit (DTU) describes the relative capacity of a performance SKU in the Basic, Standard, and Premium tiers. DTUs are a blended measure of CPU, memory, and I/O reads and writes. When you want to increase the “power” of a database, you simply allocate more DTUs to it; a 100 DTU database is considerably more powerful than a 50 DTU one.
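As a rough sketch of what that scaling operation looks like in practice, you can change the service objective of an Azure SQL Database with a single T-SQL statement (the database name here is hypothetical; `S3` is the 100 DTU Standard objective):

```sql
-- Scale the hypothetical database "MyAppDb" to the Standard S3 tier (100 DTUs).
-- The operation runs asynchronously; the database stays online while it completes.
ALTER DATABASE [MyAppDb]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');
```

The same change can also be made from the Azure portal or the Azure CLI; the T-SQL form is handy when you want to script tier changes.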
KEDA (Kubernetes-based Event-Driven Autoscaling) is an open-source project built by Microsoft in collaboration with Red Hat, which provides event-driven autoscaling for containers running on AKS (Azure Kubernetes Service), EKS (Elastic Kubernetes Service), GKE (Google Kubernetes Engine), or on-premises Kubernetes clusters 😉
KEDA allows for fine-grained autoscaling (including to/from zero) for event-driven Kubernetes workloads. KEDA serves as a Kubernetes metrics server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition. Most of the time, we scale systems (manually or automatically) based on some metric crossing a threshold.
For example: if CPU > 60% for 5 minutes, scale the app service out to a second instance. The problem is that by the time the trigger has fired and the scale-out has completed, the burst of traffic/events has often already passed. KEDA, on the other hand, exposes rich event data, such as the length of an Azure Queue, to the Horizontal Pod Autoscaler so that it can drive the scale-out for us. Once one or more pods have been deployed to meet the event demand, the events (or messages) are consumed directly from the source, which in our example is the Azure Queue.
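The queue-length rule described above maps onto KEDA’s custom resource definition. A minimal sketch of a `ScaledObject` using the azure-queue trigger might look like this (the deployment name, queue name, and environment variable are hypothetical placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
spec:
  scaleTargetRef:
    name: queue-consumer        # hypothetical Deployment that processes messages
  minReplicaCount: 0            # scale all the way down to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders               # hypothetical Azure Storage Queue
        queueLength: "5"                # target messages per replica
        connectionFromEnv: STORAGE_CONNECTION   # env var holding the connection string
```

With this in place, KEDA feeds the queue length to the Horizontal Pod Autoscaler, which adds or removes `queue-consumer` pods to keep roughly five messages per replica.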
Running a single container in the cloud is quite easy these days, but what about multi-container web apps?
In this post, we will find out about the multi-container feature in Azure Web Apps and how we can leverage it.
Getting started with multi-container Azure Web Apps is actually quite easy; the documentation is a good starting point. The real problems start when we want to run it in production.
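To make the feature concrete: Azure Web Apps accepts a Docker Compose file (a supported subset of the format) to describe the containers. A minimal sketch, with hypothetical image names, could look like this:

```yaml
# docker-compose.yml — hypothetical two-container web app:
# a web frontend plus a Redis cache sidecar.
version: '3.3'
services:
  web:
    image: mydockerhubuser/webfrontend:latest   # hypothetical image
    ports:
      - "8080:80"    # only one service can be exposed to web traffic
  redis:
    image: redis:alpine
```

You would then point the Web App at this file when creating or configuring it; the containers run together and can reach each other by service name, much like a local `docker-compose up`.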
Ever heard of a jump server or bastion server? No? Well then, this post is for you.
Before we dive into what Azure Bastion is, we should understand how things work right now with regular jump servers or bastion hosts.
A jump server/bastion host is a virtual machine or server that sits inside a network, whose purpose is to allow remote access to servers and services without assigning public IPs to them and thus exposing them to the internet.
Access to that jump server can be granted in numerous ways, but the most common are:
Public endpoint with access-control authentication, e.g., Cloudflare Access rules
Public endpoint with a Just-In-Time access solution
Remote Desktop Gateway access with AD Authentication
The list goes on; the idea is that the endpoint used to access the production network must be as secure as possible, because it is being exposed in one way or another.