When we refer to an image, an artifact, a registry, or a tag, what exactly is the reference? Do we mean:
For the sake of clear communication: several elements make up an artifact or image name, and they are fairly important when we think about artifacts moving from one registry to another. See Choosing a Docker Container Registry for more context.
Should we really refer to an image tied to a specific location? As humans, would we really say the fully qualified name, or would we use shorthand references? And what terminology would, or should, we use? There are several terms we use interchangeably; I’ll call out their meanings:
Image / Artifact
Images & Artifacts
The first thing you may notice is that I reference Images and Artifacts interchangeably. It turns out the infrastructure we use to store…
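To make those name elements concrete, here is a minimal sketch that splits a fully qualified reference into registry, repository, and tag using plain shell string operations (the reference itself is a made-up example, not a real registry):

```shell
# Hypothetical fully qualified reference: registry / repository : tag
REF="myregistry.azurecr.io/team/ic6redsvc:1.0"

REGISTRY="${REF%%/*}"     # everything before the first "/" -> myregistry.azurecr.io
REST="${REF#*/}"          # everything after the first "/"  -> team/ic6redsvc:1.0
REPOSITORY="${REST%%:*}"  # strip the tag                   -> team/ic6redsvc
TAG="${REF##*:}"          # everything after the last ":"   -> 1.0

echo "$REGISTRY $REPOSITORY $TAG"
```

When the registry part is omitted (e.g. plain `ic6redsvc:latest`), tooling such as Docker assumes a default registry, which is exactly why shorthand references can become ambiguous when artifacts move between registries.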
If your Windows 10 search bar is broken, you’re hitting an issue that is affecting many Windows 10 users at the time of this writing.
This is NOT an official fix (and I’m not responsible for any side effects it may cause), but running this script in a PowerShell session with administrator privileges and then restarting the machine can solve the issue (it worked on my machine).
The development platform is the production environment for the job of creating software. / Michael T. Nygard
When you work with many customers you start to build a personal database of the best and worst practices you see, and of how widespread they are.
A very common pattern is that development and IT-related activities are treated like second-class citizens in the company: low-end laptops (some with traditional HDDs instead of SSDs!), low budgets for testing and QA environments. I can’t explain why this happens… I suppose it’s a cultural issue. Let’s look at an analogy: suppose your company CRM went down, so people in the sales department couldn’t do their job. That would be at least a severity 2 outage!
So I couldn’t agree more with Nygard: treat IT tools as production, because they really are production!
The Visual Studio Setup Project is an old Microsoft technology for creating installers. It has been out of support for nearly a decade and is no longer present in Visual Studio, but when I visit customer sites I find legacy technologies and I need to deal with them in the short term.
A couple of days ago I was working on an automated CI build on Azure DevOps and we hit an issue while trying to compile an old VDPROJ (migration to WiX is in progress, btw ☺): an HRESULT 8000000A error.
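For reference, the workaround most commonly cited for this error is to run the DisableOutOfProcBuild utility that ships with Visual Studio once on the build agent, before invoking devenv on the solution. A hypothetical pipeline fragment (the install path, solution, and project names are assumptions and depend on the agent’s Visual Studio edition and version):

```yaml
# Hypothetical Azure DevOps step; adapt paths and names to your agent.
steps:
- script: |
    cd "%VSINSTALLDIR%\Common7\IDE\CommonExtensions\Microsoft\VSI\DisableOutOfProcBuild"
    DisableOutOfProcBuild.exe
    devenv MySolution.sln /Build Release /Project MySetup.vdproj
  displayName: 'Build legacy .vdproj setup project'
```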
I feel a bit guilty because I’m three days late in celebrating this blog’s third anniversary. A bit like when you remember an important friend’s birthday too late. This blog is, to all intents and purposes, an important friend to me. It doesn’t complain when I don’t visit it often, and it’s always ready to listen when I feel like writing something on it.
For a few years now, August has been a time of reflection for me, when I take stock of my family, personal, and professional situation. I usually also start a personal study or self-improvement project. This year’s project is learning a third language: German. Who knows, maybe in a few months I’ll try writing a few articles in German as an experiment, as I once did with English. I started in earnest just today, after finding a good YouTube channel for absolute beginners. Everyone tells me it’s a difficult language, but I’m not one to be scared off by learning something new. I’ll share my verdict in a while!
Today is an important day: it has been six months since I started working at Microsoft, and it marks the end of my probationary period.
I’ll forever remember my first day, when I walked through the main entrance of the Microsoft House in Milan and asked for a temporary badge because I was a new hire. A year and a half earlier I had walked through that same door to attend a free workshop about the Desktop Bridge and asked for a guest badge. I looked at everything with different eyes and many emotions. Life can be very rewarding if you have a goal and work toward it.
That day, as I put the badge on a sensor near a door and the door opened, I thought: “It’s real! I’m not dreaming!”.
This blog post is the first of a new series about Kubernetes.
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation.
This post introduces some basic concepts of Kubernetes with a working example based on .NET and Visual Studio. We’ll develop a simple Web API service, package it as a Docker container, upload it to Docker Hub, and run it on Kubernetes.
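For context, the container image for such a service could be described by a Dockerfile along these lines (a minimal sketch: the base image tags and the ic6redsvc project name are assumptions, so adapt them to your own project):

```dockerfile
# Build stage: restore, compile and publish the Web API project
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: copy the published output into a smaller runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "ic6redsvc.dll"]
```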
Before we can move on with Kubernetes, we need to upload this image to Docker Hub, because Kubernetes will download from there the images for the containers it creates. To do that we need a Docker Hub account (free for public repositories).
The first thing we need to do is run docker login from the command line to authenticate against our repository.
Now we can push our image with:
docker push phenixita/ic6redsvc:latest
And it’s done!
We can deploy our app onto Kubernetes! I recommend using a managed Kubernetes instance from your provider of choice (AWS, Azure, you name it), or running something like Minikube on your own machine. Docker for Windows provides support for a single-node Kubernetes cluster with low hardware requirements.
This is a Kubernetes cluster. It is composed of a master node and worker nodes. The master node coordinates all the activities that happen inside the cluster. The worker nodes are the compute power (most of the time they are virtual machines) that does the job of running our code.
Check that kubectl is configured to talk to your cluster by running the kubectl version command.
To view the nodes in the cluster, run the kubectl get nodes command:
Here we see the available nodes (1 in our case). Kubernetes will choose where to deploy our application based on the resources available on each node. Let’s run our first app on Kubernetes with the kubectl run command.
The run command creates a new deployment. We need to provide the deployment name and the app image location (include the full repository URL for images hosted outside Docker Hub). We want to run the app on a specific port, so we add the --port parameter:
kubectl run ic6redsvc --image phenixita/ic6redsvc:latest --port=80
We need to get the pod name with:
kubectl get pods
Our app is running in a private network, and we need to start a proxy in another terminal to be able to test it. This is not the standard approach for production environments; we’ll see in a future blog post how to do it properly. For now we keep things as simple as possible.
This is the final step, where we call our API. We compose the command as in the following example.
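A sketch of what that call can look like, assuming a pod created by the ic6redsvc deployment above and the default /api/values route of the Web API template (the pod name below is a placeholder for the one returned by kubectl get pods):

```shell
# In a second terminal, start the proxy first:
#   kubectl proxy
# Then reach the pod through the proxy with a URL like this:
POD_NAME="ic6redsvc-5b8c6-abcde"   # placeholder; use your real pod name
URL="http://localhost:8001/api/v1/namespaces/default/pods/${POD_NAME}/proxy/api/values"
echo "$URL"
# curl "$URL"   # run this while `kubectl proxy` is active
```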
Your team works on a project in Azure DevOps. Your build time starts to increase as the project’s complexity grows, but you want your CI build to deliver results as quickly as possible. How can you do that? With parallelism, of course!
The following example shows how to design a build with:
A first “initialization” job.
The actual build jobs: Build 1 and Build 2, which we want to run in parallel after the first job.
A final job that we want to execute after Build 1 and Build 2 are completed.
We start with configuring the build to look like the following picture:
To orchestrate the jobs as specified, we use the “Dependencies” feature. The first job has no dependencies, so we leave the field blank.
For the Build 1 job we set the value to Init. This way we’re instructing Azure DevOps to start the Build 1 job only after Init has completed.
We do the same thing with the Build 2 job.
For the final job we set Build 1 and Build 2 as dependencies, so this phase waits for the two previous jobs to complete before starting.
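The same orchestration can also be expressed in a YAML pipeline with the dependsOn property (a minimal sketch; the job names mirror the ones above, and the script steps are placeholders for your real build tasks):

```yaml
jobs:
- job: Init
  steps:
  - script: echo "initialization"

- job: Build1
  dependsOn: Init          # starts only after Init completes
  steps:
  - script: echo "build 1"

- job: Build2
  dependsOn: Init          # runs in parallel with Build1
  steps:
  - script: echo "build 2"

- job: Final
  dependsOn:               # waits for both parallel jobs
  - Build1
  - Build2
  steps:
  - script: echo "final step"
```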
Here we can see the build pipeline while it’s executing.
With this brief tutorial we learned how to design a build pipeline with dependencies and parallelism, which can reduce the delay of our CI processes. A fast and reliable CI process is always a good practice, because we must strive to gather feedback from our processes and tools as quickly as possible. This way we can resolve issues in the early stages of our ALM, keeping costs down and avoiding problems with customers.