Monthly Archives: July 2019

Kubernetes baby steps: deploying our first .NET app to a cluster

This blog post is the first in a new series about Kubernetes.

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. 


This post introduces some basic concepts of Kubernetes with a working example based on .NET and Visual Studio.
We’ll develop a simple Web API service, package it as a Docker container, upload it to Docker Hub, and run it on Kubernetes.

Our API is built from a project called IC6.RedSvc (Red Service) that, when invoked, returns the name of a red object. The business logic is deliberately simple because the main purpose of this post is learning Kubernetes. The source code for this working example is hosted on my GitHub.

Let’s get started!


Create a new ASP.NET Core Web Application project.

Solution name: IC6.ColorSvcs, project name: IC6.RedSvc.

Rename the DefaultController.cs file to RandomObjectController.cs and replace the code of the class with the following. This is the controller that implements our simple business logic.

using System;
using Microsoft.AspNetCore.Mvc;

namespace IC6.RedSvc.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class RandomObjectController : ControllerBase
    {
        private readonly string[] redObjects = { "Ferrari", "Coca Cola", "Pendolino", "Cherry", "Strawberry" };

        // GET api/RandomObject
        [HttpGet]
        public ActionResult<string> Get()
        {
            // Ticks is a long, so cast the remainder back to int before indexing
            return redObjects[(int)(DateTime.Now.Ticks % redObjects.Length)];
        }
    }
}

Add Dockerfile

In Solution Explorer, right-click the project and choose Add, Docker Support…

Then we choose Linux as the target OS.
This step creates the Dockerfile used to build our Docker image.
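The generated Dockerfile looks roughly like the following. This is a sketch of the multi-stage build Visual Studio typically produces for an ASP.NET Core 2.x project; the exact base-image tags depend on the SDK version installed on your machine:

```dockerfile
# Build stage: restore dependencies and publish the project
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY ["IC6.RedSvc/IC6.RedSvc.csproj", "IC6.RedSvc/"]
RUN dotnet restore "IC6.RedSvc/IC6.RedSvc.csproj"
COPY . .
RUN dotnet publish "IC6.RedSvc/IC6.RedSvc.csproj" -c Release -o /app

# Runtime stage: copy only the published output into a smaller image
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS final
WORKDIR /app
COPY --from=build /app .
EXPOSE 80
ENTRYPOINT ["dotnet", "IC6.RedSvc.dll"]
```

The two-stage layout keeps the SDK out of the final image, so the image we push contains only the runtime and the published binaries.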

Before starting the application, edit launchSettings.json to change the default page. This makes it easier to test the API in the browser when we launch the project with F5.
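For example, pointing launchUrl at the controller route makes F5 open the endpoint directly. The property names below follow the standard launchSettings.json schema; the profile name and ports are illustrative and depend on your project:

```json
{
  "profiles": {
    "IC6.RedSvc": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "api/RandomObject",
      "applicationUrl": "https://localhost:5001;http://localhost:5000"
    }
  }
}
```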

Docker build

Now we build our Docker image.

docker build -f "c:\dev\ic6.colorsvcs\ic6.redsvc\Dockerfile" -t phenixita/ic6redsvc:latest "c:\dev\ic6.colorsvcs"

We have built the Docker image of our app!

Before we can move on with Kubernetes we need to upload this image to Docker Hub, because Kubernetes will pull the container images from there. To do that we need a Docker Hub account (free for public repositories).

Docker login

The first thing we need to do is run docker login from the command line to authenticate against our repository.


Now we can push our image with:

docker push phenixita/ic6redsvc:latest

And it’s done!

Kubernetes time!

We can deploy our app onto Kubernetes!
I recommend using a managed Kubernetes instance from your provider of choice (AWS, Azure, you name it) or running something like Minikube on your machine. Docker for Windows also provides a single-node Kubernetes cluster with low hardware requirements.

This is a Kubernetes cluster. It is composed of a master node and worker nodes. The master node coordinates all the activities that happen inside the cluster. The worker nodes are the compute power (most of the time they are virtual machines) that does the actual job of running our code.

Check that kubectl is configured to talk to your cluster by running the kubectl version command.

To view the nodes in the cluster, run the kubectl get nodes command:

Here we see the available nodes (1 in our case). Kubernetes will choose where to deploy our application based on each node’s available resources. Let’s run our first app on Kubernetes with the kubectl run command.


The run command creates a new deployment. We need to provide the deployment name and the app image location (include the full repository URL for images hosted outside Docker Hub). We want to run the app on a specific port, so we add the --port parameter:

kubectl run ic6redsvc --image phenixita/ic6redsvc:latest --port=80
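As an aside, the same deployment can also be described declaratively in a manifest and created with kubectl apply -f deployment.yaml. The sketch below is the rough YAML equivalent of the kubectl run command above; the metadata name and labels are my own choices, not something prescribed by the tooling:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ic6redsvc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ic6redsvc
  template:
    metadata:
      labels:
        app: ic6redsvc
    spec:
      containers:
      - name: ic6redsvc
        image: phenixita/ic6redsvc:latest
        ports:
        - containerPort: 80   # same port passed to --port above
```

Keeping the manifest in source control makes the deployment reproducible, which is why the declarative form is usually preferred over imperative kubectl run in real projects.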


We need to get the pod name with:

kubectl get pods
Our app is running inside the cluster’s private network, so we need to start a proxy in another terminal to be able to test it:

kubectl proxy

This is not the standard approach for production environments; we’ll see in a future blog post how to expose a service properly. For now we keep things as simple as possible.


This is the final step, where we call our API. We compose the command like the following example (replace <podId> with the pod name from the previous step).

curl http://localhost:8001/api/v1/namespaces/default/pods/<podId>/proxy/api/RandomObject

If we did everything correctly we’ll see a response message with one of the red objects returned by the C# code.


With this blog post we learned how to do a very simple deployment of a Docker image to Kubernetes. Now we have the foundations to build more meaningful scenarios in future blog posts.

Reduce your build time with parallelism in Azure DevOps

Your team works on a project in Azure DevOps. Your build time starts to increase as the project’s complexity grows, but you want your CI build to deliver results as quickly as possible. How can you do that? With parallelism, of course!

Let’s do this together.


Before we start to design a build pipeline with parallelism we must be aware of how Azure DevOps orchestrates parallelism and how many parallel jobs we can run. I recommend reading the official Microsoft Docs page about this.

Designing the build

The following example shows how to design a build with:

  1. A first “initialization” job.
  2. The actual build jobs, Build 1 and Build 2, that we want to run in parallel after step 1.
  3. A final job that we want to execute after both Build 1 and Build 2 have completed.

We start with configuring the build to look like the following picture:

To orchestrate the jobs as specified above we use the “Dependencies” feature. The first job has no dependencies, so we leave the field blank.

For the Build 1 job we set the value to Init. This way we’re instructing Azure DevOps to start the Build 1 job only after Init has completed.

We do the same thing with the Build 2 job.

For the final job we set Build 1 and Build 2 as dependencies, so this phase will wait for the two previous jobs to complete before starting.
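If you prefer YAML pipelines over the classic designer, the same dependency graph can be sketched with the dependsOn keyword. The job names and echo steps below are illustrative placeholders, not taken from a real pipeline:

```yaml
jobs:
- job: Init
  steps:
  - script: echo "Initialization"

- job: Build1
  dependsOn: Init        # starts only after Init completes
  steps:
  - script: echo "Build 1"

- job: Build2
  dependsOn: Init        # runs in parallel with Build1
  steps:
  - script: echo "Build 2"

- job: Final
  dependsOn:             # waits for both builds to complete
  - Build1
  - Build2
  steps:
  - script: echo "Final step"
```

Jobs with the same dependency (Build1 and Build2 both depending on Init) are scheduled in parallel as soon as enough parallel jobs are available on the account.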

Here we can see the build pipeline while it’s executing.


With this brief tutorial we learned how to design a build pipeline with dependencies and parallelism that can reduce the duration of our CI process. A fast and reliable CI process is always a good practice because we must strive to gather feedback from our processes and tools as quickly as possible. This way we can resolve issues in the early stages of our ALM, keeping costs down and avoiding problems with customers.