✍🏻 Industry use cases of Azure Kubernetes Service

What is Kubernetes?
Kubernetes is open-source orchestration software for deploying, managing and scaling containers. Modern applications are increasingly built from containers: microservices packaged together with their dependencies and configurations. Kubernetes runs and manages those containers at scale.
How Kubernetes works
As applications grow to span multiple containers deployed across multiple servers, operating them becomes more complex. To manage this complexity, Kubernetes provides an open-source API that controls how and where those containers run.
Kubernetes orchestrates clusters of virtual machines and schedules containers to run on those virtual machines based on their available compute resources and the resource requirements of each container. Containers are grouped into pods, the basic operational unit of Kubernetes, and those pods are scaled to your desired state.
Kubernetes also automatically manages service discovery, incorporates load balancing, tracks resource allocation and scales based on compute utilization. It checks the health of individual resources and enables apps to self-heal by automatically restarting or replicating containers.
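To make these ideas concrete, here is a minimal, hypothetical Deployment manifest. The name, image and port are placeholders; the manifest simply declares a desired number of pod replicas, the resources each container needs (which the scheduler uses for placement) and a liveness probe that lets Kubernetes restart unhealthy containers.

```yaml
# Hypothetical example: a Deployment that asks Kubernetes to keep three
# replicas of a containerized web service running and self-healing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web                     # placeholder name
spec:
  replicas: 3                        # desired state: three pods
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: nginx:1.25          # placeholder image
          ports:
            - containerPort: 80
          resources:
            requests:                # used by the scheduler to place the pod
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:             # lets Kubernetes restart unhealthy containers
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
```

Applying a manifest like this with kubectl apply asks the cluster to converge on that desired state and keep it there.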
Why use Kubernetes?
Keeping containerized apps up and running can be complex because they often involve many containers deployed across different machines. Kubernetes provides a way to schedule and deploy those containers — plus scale them to your desired state and manage their lifecycles. Use Kubernetes to implement your container-based applications in a portable, scalable and extensible way.
📍 Make workloads portable
📍 Scale containers easily (see the sketch after this list)
📍 Build more extensible apps
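As a sketch of scaling to a desired state, a HorizontalPodAutoscaler can grow or shrink the hypothetical Deployment from the earlier example between a minimum and maximum replica count based on CPU utilization. The thresholds below are illustrative only.

```yaml
# Hypothetical example: autoscale the demo-web Deployment on CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```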
Azure Kubernetes Service (AKS)
Highly available, secure and fully managed Kubernetes service
Accelerate containerized application development
Easily define, deploy, debug and upgrade even the most complex Kubernetes applications, and automatically containerize your apps. Use modern application development practices to accelerate time to market.
Add a full CI/CD pipeline to your AKS clusters with automated routine tasks and set up a canary deployment strategy in just a few clicks. Detect failures early and optimise your pipelines with deep traceability into your deployments.
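One way such a pipeline can look is sketched below as a trimmed-down, hypothetical GitHub Actions workflow that performs a canary rollout to an AKS cluster with the azure/k8s-deploy action. The resource group, cluster name, manifest paths, image and canary percentage are all placeholders.

```yaml
# Hypothetical sketch of a canary rollout to AKS from GitHub Actions.
name: canary-deploy
on:
  push:
    branches: [main]

jobs:
  deploy-canary:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}   # placeholder secret

      - uses: azure/aks-set-context@v3
        with:
          resource-group: my-rg                     # placeholder
          cluster-name: my-aks-cluster              # placeholder

      - name: Deploy 20% canary
        uses: azure/k8s-deploy@v4
        with:
          namespace: demo
          manifests: |
            k8s/deployment.yaml
            k8s/service.yaml
          images: myregistry.azurecr.io/demo-web:${{ github.sha }}
          strategy: canary
          percentage: 20        # route roughly 20% of pods to the new version
```

A later run of the same action with action: promote (or reject) would finish, or roll back, the canary once it has been validated.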
Gain visibility into your environment with the Kubernetes resources view, control-plane telemetry, log aggregation and container health, accessible in the Azure portal and automatically configured for AKS clusters.

Increased operational efficiency
Rely on built-in automated provisioning, repair, monitoring, and scaling. Get up and running quickly and minimize infrastructure maintenance.
- Easily provision fully managed clusters with Prometheus based monitoring capabilities.
- Use Azure Advisor to optimize your Kubernetes deployments with real-time, personalized recommendations.
- Save on costs by using deeply discounted capacity with Azure Spot (see the example after this list).
- Elastically add compute capacity with serverless Kubernetes, in seconds.
- Achieve higher availability and protect applications from datacenter failures using availability zones.
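For the Azure Spot bullet above, a workload opts in to a spot node pool by tolerating the taint AKS applies to spot nodes. The sketch below is a hypothetical pod spec; the name, image and command are placeholders, while the label and taint keys follow the AKS convention.

```yaml
# Hypothetical pod targeting an AKS spot node pool for interruptible work.
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker                              # placeholder name
spec:
  nodeSelector:
    kubernetes.azure.com/scalesetpriority: spot   # schedule onto spot nodes
  tolerations:
    - key: kubernetes.azure.com/scalesetpriority
      operator: Equal
      value: spot
      effect: NoSchedule                          # AKS taints spot nodes with this
  containers:
    - name: worker
      image: busybox:1.36                         # placeholder image
      command: ["sh", "-c", "echo processing && sleep 3600"]
```

Because spot capacity can be evicted at short notice, this pattern suits batch or otherwise interruptible workloads rather than latency-critical services.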

Build on an enterprise-grade, more secure foundation
- Dynamically enforce guardrails defined in Azure Policy at deployment or in CI/CD workflows. Deploy only validated images to your private container registry.
- Get fine-grained identity and access control to Kubernetes resources using Azure Active Directory.
- Enforce pod security context and configure across multiple clusters with Azure Policy. Track, validate, reconfigure, and get compliance reports easily.
- Achieve superior security with a hardened operating system image, automated patching, and more. Automate threat detection and remediation using Azure Security Center.
- Use Azure Private Link to limit Kubernetes API server access to your virtual network. Use network policy to secure your communication paths (see the example after this list).
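For the network policy bullet above, a minimal, hypothetical NetworkPolicy might look like this: it allows ingress to the protected pods only from pods carrying an assumed ingress-gateway label and blocks everything else. The namespace, labels and port are placeholders.

```yaml
# Hypothetical policy: only pods labelled app=ingress-gateway may reach
# the backend pods in the demo namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-gateway-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend                   # placeholder label for the protected pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ingress-gateway   # placeholder label for the allowed caller
      ports:
        - protocol: TCP
          port: 8080
```

Note that network policies only take effect when the cluster has a policy engine enabled; on AKS that means creating the cluster with Azure Network Policy or Calico.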

Run any workload in the cloud, at the edge or as a hybrid
Orchestrate any type of workload running in the environment of your choice. Whether you want to move .NET applications to Windows Server containers, modernise Java applications in Linux containers or run microservices applications in the public cloud, at the edge or in hybrid environments, Azure has the solution for you.


Bosch increases vehicle safety using map-matching algorithms and Azure Kubernetes Service
“When we started our journey on Azure, we were a really small team — just one or two developers. Our partnership with Microsoft, the support from their advisory teams, the great AKS documentation and enterprise expertise — it all helped us very much to succeed.” — Bernhard Rode, software engineer, Bosch
The team decided to offload the work of scaling and cluster maintenance to a managed service in a public cloud with a global reach. Thanks to the trusted partnership Bosch had with Microsoft, Azure Kubernetes Service was the obvious choice. A team of Microsoft cloud solution architects worked closely with Bosch engineers, who provided valuable feedback to Azure product teams. Microsoft continues to work with Bosch teams around the world. Working together, they devised a solution that produced the speed Bosch needed.
The key was orchestration. By orchestrating the deployment of containers using AKS, Bosch would get repeatable, manageable clusters of containers. Bosch already had a continuous integration (CI) and continuous deployment (CD) process to use in producing the container images and orchestration. The result: increased speed and reliability of deployments.
“We were looking for a cloud option where we could run our core business logic with zero changes on top of a new infrastructure,” explains Hai Dang Le, the team’s technical lead.
AKS also offered the simplicity of a managed Kubernetes service in the cloud. It provided the elastic provisioning that Bosch wanted, without the need to manage its own infrastructure. In addition, the developers did not have to rethink all their design decisions. Instead, they could take the core business logic developed on-premises using the open-source tools they knew and run the solution virtually as is, within a faster infrastructure with a worldwide reach. The developers can deploy self-managed AKS clusters as needed, and they get the benefit of running their services within a secured network environment.
In addition, by running the solution on Azure and AKS, the team reduced the average time to calculate whether a driver is going the wrong way to approximately 60 milliseconds.
The team was also interested in exploring other Azure services, such as solutions for managing APIs and security.
“We didn’t want to have to handle security from the outside, like a web application firewall or something like that. With Azure, we get that,” Rode says.
“Using AKS was a strategic decision. We looked for a managed orchestrator so we could offload the work of patching, upgrading, and production-level services. That’s why we chose AKS — and it’s a very open, flexible platform.” — Hai Dang Le, technical lead, Bosch
How the solution works
The wrong-way driver warning (WDW) solution runs as a service on Azure and provides an SDK. Service providers, such as smartphone app developers and OEM partners, can install the WDW SDK to make use of the service within their products. The SDK maintains a list of hotspots within which GPS data is collected anonymously. These hotspots include specific locations, such as segments of divided highways and on-ramps. Every time a driver enters a hotspot, the client generates a new ID, so the service remains anonymous.
Today the solution ingests approximately 6 million requests per day from devices emitting GPS data or from a partner’s back-end system. Anyone can download the SDK and try it out. The APIs grant a free request quota for test accounts. For production use, service providers request permission and then use the WDW SDK to register themselves for their own API authentication keys via the Azure API Management developer portal. Within their application, they configure the service’s endpoints by authenticating with their key for ingress and push notifications. The WDW service on Azure does the rest.
When a driver using a WDW-configured app or in-car system enters a hotspot, the WDW SDK begins to collect GPS signals and sensor events, such as acceleration and rotational data and heading information. These data points are packaged as observations and sent at a frequency of 1 hertz (Hz), one event per second, via HTTP to the WDW service on Azure, either directly or to the service provider’s back end and then on to Azure. The SDK supports both routes so that service providers stay in charge of the data that is sent to the WDW system.
If the WDW service determines that the driver is going the wrong way within a hotspot, it sends a notification to the originating device and to other drivers in the vicinity who are also running an app with the WDW SDK.
The entire service is deployed using a CI/CD pipeline essentially lifted from on-premises and moved to Azure. Currently self-hosted in GitLab, the CI/CD pipeline is triggered when the code changes, at which point it automatically builds Docker images for every microservice. Each service is tested before being deployed into the AKS clusters.
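Bosch’s actual pipeline isn’t shown here, but a minimal GitLab CI configuration along those lines could look like the sketch below. The service name, registry variables, test image and deployment target are assumptions for illustration; the cluster’s kubeconfig is assumed to be supplied as a CI/CD variable.

```yaml
# Hypothetical .gitlab-ci.yml: build a Docker image for one microservice,
# test it, then roll it out to an AKS cluster.
stages:
  - build
  - test
  - deploy

variables:
  IMAGE: $CI_REGISTRY_IMAGE/wdw-ingest:$CI_COMMIT_SHORT_SHA   # placeholder service name

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t "$IMAGE" .
    - docker push "$IMAGE"

unit-tests:
  stage: test
  image: maven:3.9-eclipse-temurin-17   # placeholder; the real stack is not specified
  script:
    - mvn -B test

deploy-to-aks:
  stage: deploy
  image:
    name: bitnami/kubectl:1.29
    entrypoint: [""]                    # override the image entrypoint for CI scripts
  script:
    # Assumes KUBECONFIG for the AKS cluster is provided as a file-type CI variable.
    - kubectl set image deployment/wdw-ingest wdw-ingest="$IMAGE" -n wdw
  only:
    - main
```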
AKS is deployed within a custom virtual network that keeps the applications isolated. “This allowed us to implement our security guidelines in a more elegant way,” explains Rode. “On the back end, our cluster is fully closed for external communication except through API Management. From a development perspective, it is very favorable for us to be able to deploy our apps in a very private virtual network environment.”
Conclusion
“What we like about AKS is the simplified Kubernetes experience. It’s click and deploy, it’s click and scale. It’s infrastructure as code too, which is quite cool for us.” — Christian Jeschke, product owner, Bosch
Thanks, everyone, for reading my article.
Keep Learning, Keep Sharing…
Thank you.