Today marks the end of the Azure Academy pilot. I had fun talking with students about Azure, automation, PowerShell and DevOps.
For those who don't know what Azure Academy is: it's a school for training students and people who want a chance to start a new chapter in their professional career, using Azure as their starting point.
My involvement in Azure Academy is teaching IaaS, networking, best practices in Azure, infrastructure design patterns, automation with ARM templates and PowerShell, and infrastructure as code with CI/CD pipelines.
This was an amazing experience for me because I got to see first-hand what passionate people look like, and what determination and the will to change and evolve look like.
It was also a pleasure to see how people can change their mindsets in order to understand what it means to be in the Azure world, and work hard to get to the point of actually changing their lives.
If we look at Azure's statistics, we will see that most of the virtual machines deployed there are running Linux. There are good reasons to run applications on Linux, and I'm not going to cover them in this article. Today I will be talking about running Linux web applications in Azure's App Service offering.
You may or may not know that Azure App Services run on IIS, so in a nutshell, when you spin up an App Service and deploy a web application, you're deploying that code to a shared IIS worker process, with one application pool per website. You're not provisioning a dedicated virtual machine to host your web app; instead, you're receiving space on an existing VM that hosts your application.
The main problem with App Services was that you could only host applications on IIS, which limited your options. You can run PHP or Java applications on IIS, but they won't be as performant as you would expect. Microsoft solved the problem by introducing containers for web apps: you spin up a Linux App Service, and from there you deploy your application on a prebuilt stack (Apache-based) or in a container built by you.
Why containers, you might ask?
Containers have been around for a long time, and they allow you to consistently run your application in any environment without having to say "it works on my machine". The API that was chosen to create, deploy and run containers is Docker. Most people call these containers Docker containers, but in reality, Docker is just the API that allows you to create and manage them. The main idea is that if you create a container that runs your web application without problems, then you can take that container and deploy it anywhere, because the code and all your dependencies are held in the image.
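To make that concrete, here is a minimal, hypothetical Dockerfile that packages a Node.js web app and its dependencies into a single image (the base image, file names and port are illustrative assumptions, not from any specific project):

```dockerfile
# Hypothetical example: package a Node.js web app and its dependencies into one image.
FROM node:8-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package.json .
RUN npm install

# Copy the application code itself.
COPY . .

# App Service for Linux forwards traffic on 80/443, so the app listens on 80 here.
EXPOSE 80
CMD ["node", "server.js"]
```

Anyone who pulls the resulting image gets the exact same runtime, dependencies and code, which is what makes "deploy it anywhere" possible.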
So how do I publish my container to a Linux App Service?
Taking your image and pushing it to a Linux App Service is very simple. You first have to push your container image to a public/private registry like Azure Container Registry or Docker Hub, then you just create a Web App for Containers and reference your image.
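As a sketch, the registry part of that flow looks like this from a PowerShell prompt (the registry and image names below are made up; replace them with your own, and note that this assumes you already built the image locally with `docker build`):

```powershell
# Hypothetical names - replace with your own registry and image.
$Registry = "myregistry.azurecr.io"
$Image    = "mywebapp:v1"

# Log in to the registry (you will be prompted for credentials),
# tag the locally built image with the registry name, and push it.
docker login $Registry
docker tag $Image "$Registry/$Image"
docker push "$Registry/$Image"
```

Once the image is in the registry, you create the Web App for Containers (from the portal or the CLI) and point it at `$Registry/$Image`.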
How do I deploy my code in a consistent manner?
Creating images in a centralized, consistent manner is quite different from working alone on your laptop. Web App for Containers integrates with most of the tools that you would use to deploy your code to a regular Windows App Service.
There are a couple of ways of pushing your code to the Linux App Service:
1. You can create your container image and push it to a repository like Azure Container Registry or Docker Hub.
2. You can use a CI/CD engine like VSTS to create your image and push it to the registry.
3. You just upload your files via FTP and be done with it 🙂
Down below is a demo flow of how you would push an image to the Web App for Containers service.
Now if you’re used to App Services running IIS, there are some limitations that you should be aware of.
1. Containers are stateless by nature – if you need persistence, you need to leverage Blob storage, use another service like Redis Cache, or leverage the feature that mounts the /home directory to an Azure Files share. The latter will degrade your performance a lot, so tread carefully.
2. You only get ports 80/443 so if you need a custom port for your web application then App Services will not allow it.
3. You don't have WebJobs
4. You cannot do VNET integration
5. You cannot do Authorization with Azure AD
These are just some of the limitations that you should be aware of. Some features that you get in a regular App Service will eventually pop up in the Linux ones, but until then, you need to work with what you have 🙂
That being said, take a look at Web App for Containers, play around with it and see what you can come up with.
Another year, another ITCamp. This year was the 7th edition of ITCamp, and it gets better each year. This year we had 5 tracks, 40 speakers, over 40 sessions and 500+ participants!
I had fun both on the conference staff and as a speaker. It's always challenging to do technical and logistical work at a conference that you're also speaking at.
When it comes to the logistical part, it was a challenge and a pleasure to make sure that everything went well for the speakers and our participants. Recording the sessions was a challenge, like every year, and having all the speakers leave with a smile on their faces at the end, promising to come back next year with even more exciting content, was the best feeling in the world when it comes to ITCamp.
To give you an idea about the technical part, my presentation at ITCamp was "Testing your PowerShell Code with Pester", where I showed how to use Pester to test your PowerShell code.
My session's description:
Infrastructure as code is growing more popular; system administrators and devs have started writing more and more sophisticated systems code and scripts.
Testing code is something that devs have been doing for a long time while system administrators just started adopting the idea. With the growing popularity of PowerShell, more and more system administrators and devs began to write PowerShell code for provisioning and configuring infrastructure either on-premises or in the cloud, but the biggest problem was that there was no useful framework to test that code when a breaking change occurred.
This is the concept of "I ran it, and it worked" – but did it, now?
Pester is a unit testing framework for PowerShell. It provides a few simple-to-use keywords that let you create tests for your scripts. Pester implements a test drive to isolate your test files, and it can replace almost any command in PowerShell with your implementation. This makes it an excellent framework for both Black-box and White-box testing.
In this presentation, you will learn what Pester is, how you can use Pester as your daily driver when you're writing scripts, and how you can use Pester to make your life better when change happens.
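To give a taste of what that looks like, here's a minimal Pester test in Pester 4 syntax; the function under test is a made-up example, not something from the session itself:

```powershell
# Hypothetical function under test.
function Get-Greeting {
    param([string]$Name)
    if ([string]::IsNullOrEmpty($Name)) { throw "Name is required" }
    "Hello, $Name!"
}

# Describe groups related tests, It defines a single test case,
# and Should performs the assertion.
Describe 'Get-Greeting' {
    It 'greets the given name' {
        Get-Greeting -Name 'Azure' | Should -Be 'Hello, Azure!'
    }
    It 'throws when no name is given' {
        { Get-Greeting -Name '' } | Should -Throw
    }
}
```

You run the tests with `Invoke-Pester`; when a later change breaks `Get-Greeting`, the failing test tells you immediately instead of "I ran it, and it worked".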
We also spoke about one of my favourite testing phases: "stupidity testing". We all know that, as much as we want otherwise, there will be that one person who will not go through the process the way we intend and will find a different path that breaks whatever it is we have built. For these people, there is a process that I propose we all use: the stupidity test. Why make your life hard when you can know that everything will go as intended?
Let me know if you want to hear more about this, and I hope to see you at next year's ITCamp. I will not be missing the fun and the opportunity to meet such great minds and learn from them.
For the 5th year in a row, ITCamp Community is organising Global Azure Boot Camp (https://global.azurebootcamp.net/).
This is a global event that takes place in over 159 locations around the world. Like last year, Cluj-Napoca is hosting a GABC and appears on the Azure map.
On April 22nd, you are invited to join this event, which will feature three 90-minute workshops, each part theoretical and part practical, so we advise you to bring a laptop 🙂
I will be speaking at the event, and my workshop is about ARM templates 🙂
The event will start at 9:00 AM and will finish at around 2:00 PM.
Here are the event workshops:
Azure Functions (Radu Vunvulea)
What are Azure Functions? AWS Lambda from Azure – that's the fastest way to describe them. During this workshop, we will take on a challenge to create a system that can process and analyze data without VMs or other compute units; we will use only Azure Functions for it. Sounds interesting? Then let's meet at 09:30 and find out how you can do this.
Machine learning for mere mortals with Azure ML (Silviu Niculita)
Machine learning has been leveraged to radically change many industry verticals. The problem is that the learning curve has always been very steep: exotic languages, complex tools, little or no documentation. But innovative cloud-based ML platforms are changing that and democratizing access. During this session, you will learn the basics of machine learning, and you will see a demo of how you can build a prediction model using real-world data, evaluate several different algorithms and modeling strategies, then deploy the finished model as a scalable RESTful API within minutes.
ARM Templates, how to create them, and use them in your CD pipeline (Florin Loghiade)
Azure has an excellent API that lets you automate the creation of even the most complex environments using a single JSON document. Those documents are called ARM templates, and they can be used to create, manage and even refresh any type of resource available in Azure. Using ARM templates and PowerShell combined with a CI/CD tool like VSTS, TeamCity or Jenkins, you can automate the build and deployment of the most complex applications out there. In this hands-on lab, you will learn about the benefits of using Azure Resource Manager templates, when and how to use PowerShell in the CI/CD pipeline, and what it takes to create ARM templates.
Here is the meetup link and I hope to see you at the next Global Azure BootCamp!
In an earlier blog post, I talked about what a managed disk is and why you should use one, and in this post, we will cover how easy it is to move your existing virtual machines from the storage account model to the managed disk model.
The first thing that you need to do is to plan for failure. Yes, you heard that right. You need to be prepared for things to go wrong so you can have a plan to recover.
Planning and taking action is simple in this case: you need a recent backup of the VM, and you need to plan for downtime. The conversion process from regular storage disks to managed disks is not an online operation, so your VMs need a reboot. If you have VMs in availability sets, this is quite simple, as you're going to take them down one at a time.
Another thing you need to check is extensions: if the VM has any, all of them should have a succeeded status, otherwise the conversion will fail.
After you have a plan to recover in case of a failure, it’s time to convert the VMs.
The VM I have created for this example is pretty simple. It’s a single VM which has an OS Disk and a Data Disk in a Storage Account.
The conversion process.
In order to convert the VM, you will need to turn to PowerShell so you can run some Azure cmdlets.
The first thing you need to make sure is that you have the latest version of the AzureRM PowerShell cmdlets otherwise this will not work.
Install-Module AzureRM -Verbose -Force
Update-Module AzureRM -Verbose -Force
Once your AzureRM cmdlets are installed/updated, log in to your Azure subscription.
Use the following PowerShell block to convert your VM.
$ResourceGroupName = "Convert-VM"
$VMName = "convert-vm"
$SubscriptionID = "123456-123456-123456-123456"
Select-AzureRmSubscription -SubscriptionId $SubscriptionID
Stop-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName -Force
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $ResourceGroupName -VMName $VMName
Let's see what the code block above does:
Select-AzureRmSubscription – this selects the subscription in which you're going to perform the conversion
Stop-AzureRmVM – as the name says, this will stop the VM (remember the planning phase)
ConvertTo-AzureRmVMManagedDisk – this cmdlet will convert the VM from regular storage to managed disks and will start it up afterwards
If you have multiple VMs in an availability set then the first thing you need to do is to convert the Availability Set to support Managed Disks. You can do that with the code down below.
$ResourceGroupName = "Convert-VM"
$AVSetName = "Convert-AVS"
$AVSet = Get-AzureRmAvailabilitySet -ResourceGroupName $ResourceGroupName -Name $AVSetName
Update-AzureRmAvailabilitySet -AvailabilitySet $AVSet -Sku Aligned
Update-AzureRmAvailabilitySet – this command converts the availability set to support managed disks. It will not disrupt the VMs in that availability set, so you can run it without issues.
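Once the availability set is "Aligned", you convert the VMs in it one at a time. A sketch of that loop, reusing the resource group and availability set names from the example above:

```powershell
$ResourceGroupName = "Convert-VM"
$AVSet = Get-AzureRmAvailabilitySet -ResourceGroupName $ResourceGroupName -Name "Convert-AVS"

# Convert each VM referenced by the availability set, one at a time,
# so the set keeps serving traffic during the conversion.
foreach ($VMRef in $AVSet.VirtualMachinesReferences) {
    $VM = Get-AzureRmVM -ResourceGroupName $ResourceGroupName |
          Where-Object { $_.Id -eq $VMRef.Id }
    Stop-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VM.Name -Force
    ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $ResourceGroupName -VMName $VM.Name
}
```

Each VM reboots as part of its own conversion, which is exactly the one-at-a-time downtime we planned for earlier.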
Once the conversion process is done, the VMs boot up and you're golden. Just verify that everything is OK and delete the old storage account.
That's the whole process. If you encounter any errors during the conversion phase, just run the cmdlet again to unblock the process. Errors can happen due to a transient issue on Azure's side, and all it needs is a re-run.
Have a good one!
In my last blog post, I talked about why we should stop using regular storage accounts for our IaaS VMs and use managed disks instead. In today's blog post, I will talk about how you can modify your existing ARM templates that deploy your VMs to use managed disks from now on.
Let’s take a look at a regular storage account based ARM Template:
## Resource block: the storage account declaration
## VM storage profile – OS disk URI:
"uri": "[concat(reference(concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2015-06-15').primaryEndpoints.blob, variables('vmStorageAccountContainerName'),'/',variables('OSDiskName'),'.vhd')]"
## VM storage profile – data disk URI:
"uri": "[concat(reference(concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2015-06-15').primaryEndpoints.blob, variables('vmStorageAccountContainerName'),'/',variables('dataDisk1VhdName'),'.vhd')]"
We have a resources block where we specify a storage account, and we use that resource to create an OS Disk and a Data Disk for the particular VM.
If we want to add more disks, we copy-paste what's inside the dataDisks array a couple of times, modify the LUN and the name, and we're happy.
Converting the template to the managed disk format is pretty easy. You first need to reference Compute API version 2016-04-30-preview or later in the template (do note that we're not modifying the storage API, and never use -preview versions in your production templates!).
Then you change the storage profile to reference managed disks, as shown in the code snippet below:
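A managed-disk storage profile might look like the following sketch; the image reference and disk sizes are illustrative, and the key change is that the vhd/uri entries are replaced by managedDisk blocks:

```json
"storageProfile": {
  "imageReference": {
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "2016-Datacenter",
    "version": "latest"
  },
  "osDisk": {
    "createOption": "FromImage",
    "managedDisk": { "storageAccountType": "Standard_LRS" }
  },
  "dataDisks": [
    {
      "lun": 0,
      "diskSizeGB": 128,
      "createOption": "Empty",
      "managedDisk": { "storageAccountType": "Standard_LRS" }
    }
  ]
}
```

Notice that the storage account resource, the container name and the .vhd URIs are simply gone; Azure places the disks for you.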
Hard? I don't think so; Microsoft made it very easy to convert existing templates to use managed disks. You basically remove some code from the template 🙂
This sample is for single VMs. If you have multiple VMs in an availability set, you also need to add a "managed" property to the availability set block, as shown below:
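A sketch of the availability set resource with that property (fault/update domain counts are illustrative; note that later, non-preview API versions express the same thing as "sku": { "name": "Aligned" } instead of the "managed" flag):

```json
{
  "apiVersion": "2016-04-30-preview",
  "type": "Microsoft.Compute/availabilitySets",
  "name": "[variables('availabilitySetName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "platformFaultDomainCount": 2,
    "platformUpdateDomainCount": 5,
    "managed": true
  }
}
```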
Hope this was useful. If you have any comments, write them down below 🙂
Have a good one!
Every service that you use in Azure uses storage. We want everything we create in Azure to be persistent, because if it were temporary, we would have a problem. When you create a virtual machine, you need to create one or two storage accounts where the VM's disks and diagnostic data will sit. This worked well for a while, but at scale it becomes an issue when you're talking about high availability, disaster recovery or even disk maintenance.
To solve the problems from above and all other VM storage-related problems, Microsoft announced a new type of offering called “Managed Disks.”
Why were storage accounts a problem for Virtual Machines?
For starters: using the ARM model, you would create VMs in a resource group and select one single storage account where all those VM disks would sit. The problem is that when you provision a storage account, you're basically tying it to a storage stamp (cluster), so you get limited performance and scalability. Storage in Azure has the biggest SLA compared to other services, but it's not 100%, and mistakes/issues do happen. Saving all your OS and data disks in one storage account means having all your eggs in one basket, which you should know by now is a bad thing 🙂
To make matters worse, each storage account has a limited amount of IOPS and can hold a limited number of disks, e.g., 30,000 IOPS for standard storage and 50,000 IOPS for premium storage. With ten premium storage disks, you would hit the storage account limits.
Usually, people created 5, 10 or 20 VMs per storage account, and then they had performance issues because, at peak, those VMs were consuming all the IOPS of that storage account. Performance wasn't the only problem; the other one was availability. You would create a couple of VMs, place them in an availability set for that 99.95% SLA, and then a storage issue would happen and all of them would go down. Why? Because your storage account was created in a single storage cluster, which had a problem.
The solution to most of the performance/availability problems was to architect your VM deployments in such a way as to benefit from multiple storage accounts. You would create, for example, 5 storage accounts and use them in a mesh, so that if one of them went down you would still be online, but that added more work for the IT admin, who had to manage more than one storage account.
Another problem was security. You would have all your eggs in one basket and no easy way to restrict access to that storage account. If you allowed access to the storage account, anybody would be able to download what's in it; you had no way to assign permissions in the granular form that ARM allows. The other element is that storage accounts have access keys, and anybody who has those keys has access to all the contents inside.
How does Managed Disks solve all of those problems?
The idea of Managed Disks is to simplify IaaS VM management and security by removing storage accounts from the equation. When you want to create a VM with a managed disk, you specify the type (Standard or Premium) and Azure does all the work for you. No more architecting and managing storage accounts, no more scalability issues, and best of all, each managed disk is a first-class ARM resource in the Azure portal, which grants you the ability to apply RBAC rules to it.
By using Managed Disks, you would get some excellent benefits like:
1. Independent resource management from a security and operation perspective
2. No single point of failure
3. Copy disks instantly inside the same region.
4. Share images without storage account copy operations.
This is just the tip of the iceberg.
Now, from a price standpoint, things changed a bit. With regular standard storage accounts (not premium), you were billed on the pay-as-you-go model, meaning that if I used 1 GB then I paid for 1 GB, even though I had provisioned a 1 TB disk for my VM. Managed disks don't have the same billing model: with them, you pay a fixed amount for each disk you provision. If you provision ten 1 TB disks, then you pay ten times the 1 TB disk rate. If you already use premium disks, then you know this pricing model well. You won't pay exactly the same price as you would for a premium disk, but it will be a fixed price depending on the size of the disk.
From a business standpoint, this makes much more sense because you can actually price VMs much more easily: you have one VM and N disks that cost X. No more storage transactions or any other "hidden" costs.
I won't reference any prices because they change, and this would become outdated information faster than I press submit 🙂
The size model is almost identical to the premium disk offering: you have S4, S6, S10, S20 and S30 standard disks, and for reference, I have attached a screenshot with standard and premium disk performance and limits.
As you can see, HDD storage is marked with an S (Standard) and SSD storage is marked as before with P (Premium). So if I want a 1 TB HDD storage disk, then I would go for an S30 disk. That simple.
Talking about simplicity, creating a VM with a managed disk is as easy as 1, 2, 3. When you create a VM from the portal, there is a step where you configure settings such as networking, and in that step, you will be asked if you want to use managed disks. If you press yes, then that's it 🙂
From a high availability standpoint, you are no longer obligated to put all your eggs in one basket or do storage account design to achieve high availability in case of an underlying storage issue. When you create a managed disk, you provision that much space on a storage stamp, and when you create another one, it will go to a different stamp, thus giving you multiple fault domains.
This can be controlled by using availability sets. When you specify an availability set for your deployment, the fault domains for your VMs will also be applied to your Managed Disks.
There is one catch, though: Managed Disks are LRS (locally redundant storage), which means that if you need GRS (geo-redundant storage), then you're still stuck with storage accounts.
I will end this right here, as I don't want to turn a single blog post into a wall of text. I suggest that you give managed disks a try and see what you get 🙂
Have a good one!
A friend of mine was having issues connecting to a couple of VMs that he had provisioned using an ARM template. The template worked perfectly until he added the JSON block for an NSG.
Every time he started a deployment with the NSG block in the ARM template, he couldn't connect to the VMs in any way. The fun fact was that even deleting the NSG didn't solve the issue, so he had to recreate the whole environment from scratch, and trust me, that took a while 🙂
So what was the problem, you may ask?
He was using the source tag "Internet", which for some reason (I still haven't figured this one out) killed the connectivity to the VMs on both sides (private and public IPs), and funnily enough, the logs didn't show anything.
If you encounter a problem like this one, double-check that your NSG blocks use sourceAddressPrefix: "*" rather than sourceAddressPrefix: "Internet".
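For reference, a working RDP rule would look like the following sketch (the rule name and priority are illustrative; the "Allow RDP Connections" description matches the original template's rule):

```json
{
  "name": "Allow-RDP",
  "properties": {
    "description": "Allow RDP Connections",
    "protocol": "Tcp",
    "sourcePortRange": "*",
    "destinationPortRange": "3389",
    "sourceAddressPrefix": "*",
    "destinationAddressPrefix": "*",
    "access": "Allow",
    "priority": 100,
    "direction": "Inbound"
  }
}
```

The problematic variant was identical except for "sourceAddressPrefix": "Internet".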
So far I haven't been able to reproduce it, and I'm still looking into what's causing the issue with that particular ARM template, but if you encounter something similar, give this a try and let me know your findings 🙂
Have a good one!
Happy New Year!
For me, this is a great start to the year, as I've just received an e-mail from Microsoft announcing that I've been awarded the Microsoft Most Valuable Professional award in the Microsoft Azure category!
On this occasion, I would like to thank my good friends and colleagues, Tudor Damian and Mihai Tataran, for supporting me in achieving this goal.
A great start to the year – I hope the next one comes with the same awesome news 😀
With that being said, happy new year again, and as always have a great one!
LE: Added pics!
I've been working on migrating a customer's on-premises e-mail solution to Office 365 so they could benefit from all the goodness that Office 365 offers, and we encountered some issues that we couldn't find in the official documentation. By reading the migration documentation – IMAP Migration Documentation – we thought we had planned for every worst-case scenario, but Murphy's law struck and we faced some dreadful issues.
In this blog post, I will write about what I encountered during an IMAP migration of an on-premises Zimbra e-mail solution, and what you should consider if you ever do an IMAP migration of a non-documented e-mail solution.