About The Azure Container Registry

The Azure Container Registry is based on the open-source Docker Registry 2.0 and is used to store container images and related artifacts. In addition to Docker-compatible container images, Azure Container Registry also supports Helm charts and Open Container Initiative (OCI) image formats. The Azure Container Registry can be used in different scenarios. For example, developers can use a container registry in their CI/CD pipelines to push container images. It’s also possible to configure ACR Tasks, a suite of features within Azure Container Registry. ACR Tasks provides container image building in the cloud and can automate OS and framework patching. You can use ACR Tasks to automate image builds when your team commits code to a Git repository, or to automatically rebuild application images when their base images are updated. With base image update triggers, for example, you can automate your OS and application framework patching workflow.

Important Features Of The Azure Container Registry

Azure Container Registry serves as a catalog for your container images. There are several ways to authenticate with an Azure Container Registry. For example, users can authenticate to a registry directly via individual login, while applications and services authenticate by using a service principal. If you want an Azure service to access your Azure Container Registry, there is a good chance that this service can use a managed identity for authentication. In that case you don’t have to manage the credentials yourself, which is even better. For all these scenarios, you can use role-based access control (RBAC) to control what the user, application or service is allowed to access.
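
A rough sketch of these options with the Azure CLI and Docker (the registry name myregistry and the <appId> and <password> values are placeholders):

```shell
# Individual login with your own Azure identity.
az acr login --name myregistry

# Service principal login for applications and services,
# for example via the Docker CLI.
docker login myregistry.azurecr.io --username <appId> --password <password>

# Use RBAC to grant a service principal pull-only access.
az role assignment create \
  --assignee <appId> \
  --role AcrPull \
  --scope $(az acr show --name myregistry --query id --output tsv)
```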

You can limit access to the registry even further by assigning virtual network private IP addresses to the registry endpoints and using Azure Private Link. This way, all network traffic between clients on the virtual network and the registry’s private endpoints traverses the virtual network and a private link on the Microsoft backbone network. There is no exposure to the public internet.
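
A sketch of this setup with the Azure CLI, assuming an existing virtual network and subnet (all resource names are placeholders, and private link requires the premium tier):

```shell
# Turn off public network access so the registry is only reachable
# through private endpoints.
az acr update --name myregistry --public-network-enabled false

# Create a private endpoint for the registry in an existing subnet.
az network private-endpoint create \
  --name myregistry-pe \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id $(az acr show --name myregistry --query id --output tsv) \
  --group-id registry \
  --connection-name myregistry-connection
```

In a real setup you also need a privatelink.azurecr.io private DNS zone so that clients on the virtual network resolve the registry name to its private IP address.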

The Azure Container Registry also offers geo-replication. By using geo-replication, you can host your container images in multiple regions around the world. The replication happens automatically while you manage it as a single registry. A geo-replicated registry improves the performance and reliability of regional deployments with network-close registry access. It provides a highly available registry that is resilient to regional outages. It also reduces data transfer costs, because image layers are pulled from a local, replicated registry in the same or a nearby region as your container host.

Which Service Tier Should You Choose?

There are three service tiers available to choose from: basic, standard and premium. The basic service tier is a cost-optimized entry point for developers to get started. It has the same programmatic capabilities as the standard and premium service tiers. The standard service tier should be used for most production scenarios. It offers the same capabilities as the basic service tier, but includes increased storage and throughput. The premium service tier provides the highest amount of storage and throughput, enabling hyper-scale performance. In addition to higher throughput, premium adds features such as geo-replication, availability zones, private link with private endpoints and content trust.

Use the service tier’s limits for read and write operations and bandwidth as a guide if you expect a high rate of registry operations. These limits affect operations such as listing, deleting, pushing and pulling images and other artifacts. For example, you may experience throttling of pull or push operations when the registry determines that the rate of requests exceeds the limits allowed for the service tier. You may then see an HTTP 429 (Too Many Requests) error.
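
If you suspect you are running into these limits, you can check the registry’s tier and current storage consumption with the Azure CLI (the registry name is a placeholder):

```shell
# Show the service tier of the registry.
az acr show --name myregistry --query sku.name --output tsv

# Show storage consumption against the tier's included storage.
az acr show-usage --name myregistry --output table
```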

Storage, Throughput & Features By Service Tier

The following table details the storage, throughput and features for each service tier. Note the storage and throughput difference between the basic and standard tiers. The standard tier provides ten times more storage, three times more read operations per minute and five times more write operations per minute. It also provides twice the download and upload bandwidth.

Feature                                Basic      Standard   Premium
-------------------------------------  ---------  ---------  ---------
Included storage (GiB)                 10         100        500
Storage limit (TiB)                    20         20         20
Maximum image layer size (GiB)         200        200        200
Maximum manifest size (MiB)            4          4          4
ReadOps per minute                     1,000      3,000      10,000
WriteOps per minute                    100        500        2,000
Download bandwidth (Mbps)              30         60         100
Upload bandwidth (Mbps)                10         20         50
Availability zones                     N/A        N/A        Supported
Content trust                          N/A        N/A        Supported
Private link with private endpoints    N/A        N/A        Supported
• Private endpoints                    N/A        N/A        200
Public IP network rules                N/A        N/A        100
Service endpoint VNet access           N/A        N/A        Preview
• Virtual network rules                N/A        N/A        100
Customer-managed keys                  N/A        N/A        Supported
Repository-scoped permissions          Supported  Supported  Supported
• Tokens                               100        500        50,000
• Scope maps                           100        500        50,000
• Actions                              500        500        500
• Repositories per scope map           500        500        500
Anonymous pull access                  N/A        Preview    Preview

Geo-replication And Availability Zones For High Availability

Geo-replication and availability zones are both premium features. By configuring geo-replication, you improve the performance of regional deployments with network-close registry access. The container images are replicated across multiple regions while you still manage a single registry. You also reduce data transfer costs by pulling images from a nearby replicated registry, and you achieve higher registry resilience if a regional outage occurs.
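
With the Azure CLI, adding a replica is a single command per region (the registry name is a placeholder, and the registry must be on the premium tier):

```shell
# Add a replica of the registry in another region.
az acr replication create --registry myregistry --location eastus

# List all replicas of the registry.
az acr replication list --registry myregistry --output table
```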

Let’s suppose you run a containerized web application on Azure Kubernetes Service (AKS) clusters in West US, East US, Canada Central and West Europe. You have also configured the Azure Container Registry (ACR) for geo-replication in the same regions. The web application runs in a Docker container and uses the same code and image across all regions. Each regionally deployed web application has its own database and configuration. At some point in time, you add a new feature to the web application, start an ACR build action and push the updated image layers. Because only the updated image layers are replicated across regions, and each AKS cluster pulls the new container image from the same or a nearby region, data transfer costs are kept to a minimum. You also achieve higher performance because the image is pulled from the nearest region.

In order to achieve an even higher availability and resiliency, you can choose to enable zone-redundancy. Availability zones are unique physical locations within an Azure region. To ensure resiliency, there’s a minimum of three separate zones in all enabled regions. Each zone has one or more datacenters equipped with independent power, cooling, and networking. When configured for zone redundancy, a registry (or a registry replica in a different region) is replicated across all availability zones in the region, keeping it available if there are datacenter failures.
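
A sketch of enabling zone redundancy with the Azure CLI, both for a new registry and for a replica in another region (names and regions are placeholders; zone redundancy requires the premium tier and a supported region):

```shell
# Create a premium registry with zone redundancy enabled.
az acr create \
  --resource-group my-rg \
  --name myregistry \
  --sku Premium \
  --location westeurope \
  --zone-redundancy enabled

# Replicas in other regions can be zone redundant as well.
az acr replication create \
  --registry myregistry \
  --location northeurope \
  --zone-redundancy enabled
```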

Use Content Trust For Pushing And Pulling Signed Images

Content trust allows publishers to sign the container images they push to the registry. When consumers configure their Docker client to pull signed images, the client verifies both the publisher (source) and the integrity of the image data. As a result, consumers are assured that the signed images were indeed published by the publisher and have not been tampered with after publication. Repositories can contain images with both signed and unsigned tags. For example, you might sign only the myimage:stable and myimage:latest images, but not myimage:dev.

In order to push a trusted image to the Azure Container Registry, you need to enable content trust at the registry level. The user or system also needs both the AcrPush and AcrImageSigner roles. After that, you need to enable content trust in the Docker client. The first time you push an image with a signed tag, you’re asked to create a passphrase for both a root signing key and a repository signing key. These keys are stored locally on your machine. On each subsequent push to the same repository, you’re asked only for the repository key. Each time you push a trusted image to a new repository, you’re asked to supply a passphrase for a new repository key.
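
A minimal sketch of this setup (the registry name, image name and <appId> are placeholders):

```shell
# Enable content trust at the registry level (premium tier).
az acr config content-trust update --registry myregistry --status enabled

# Grant the pushing identity the AcrImageSigner role in addition to AcrPush.
az role assignment create \
  --assignee <appId> \
  --role AcrImageSigner \
  --scope $(az acr show --name myregistry --query id --output tsv)

# Enable content trust in the Docker client, then push a signed tag.
export DOCKER_CONTENT_TRUST=1
docker push myregistry.azurecr.io/myimage:stable
```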

In order to pull a trusted image, you need to enable content trust in the Docker client as well. In this case, the user or system only needs the AcrPull role. When consumers enable content trust, they can only pull images with signed tags. If a client with content trust enabled tries to pull an unsigned tag, the operation fails.

Use Cases Where Azure Container Registry Can Be Used

The Azure Container Registry can integrate with all sorts of Azure services and container orchestration systems. Take for example the Azure Kubernetes Service, which is an orchestration service. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. You can configure Kubernetes to pull container images from your Azure Container Registry and run them as Docker containers.

You can take this a step further and automatically deploy your container images on code commit. This is done using a CI/CD pipeline; depending on where your code lives, you can use GitHub Actions or Azure DevOps Pipelines. For example, the steps of a CI/CD pipeline could look something like this:

  1. Check out the source code after a commit on the main branch.
  2. Start a build action and push the container image to Azure Container Registry.
  3. Deploy the container image to Azure Kubernetes Service.
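
Steps 2 and 3 could be sketched with the Azure CLI like this; step 1 is typically handled by the pipeline’s own checkout action (the registry, resource group, cluster and deployment names are placeholders):

```shell
# 2. Build the image in the cloud from the checked-out source
#    and push it to the registry.
az acr build --registry myregistry --image examplewebapp:v2 .

# 3. Deploy the new image to Azure Kubernetes Service.
az aks get-credentials --resource-group my-rg --name my-aks-cluster
kubectl set image deployment/examplewebapp \
  examplewebapp=myregistry.azurecr.io/examplewebapp:v2
```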

The Azure Container Registry also integrates well with other container orchestration systems like DC/OS or Docker Swarm.

Explaining The Contents Of A Dockerfile Used To Build A Container Image

A Dockerfile is a text file that contains all the commands to build an image. It adheres to a specific format and set of instructions as shown in the example below. A Docker image consists of read-only layers, each representing a Dockerfile instruction. These layers are stacked and each one is a delta of the changes from the previous layer. When you run the image and generate a container, you add a new layer on top of the underlying layers. This is also called the writable layer. Changes such as writing new files, modifying existing files, and deleting files are written to this writable container layer.

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["ExampleWebApp.csproj", "."]
RUN dotnet restore "./ExampleWebApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "ExampleWebApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "ExampleWebApp.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ExampleWebApp.dll"]

In the example above, each instruction creates one layer:

  • FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base creates a layer from the aspnet:6.0 Docker image and sets the stage name to ‘base’ so we can access it later using that name.
  • WORKDIR /app sets the working directory to ‘/app’ for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
  • EXPOSE 80 informs Docker that the container listens on port 80.
  • EXPOSE 443 informs Docker that the container listens on port 443.

  • FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build creates a layer from the sdk:6.0 Docker image and sets the stage name to ‘build’.
  • WORKDIR /src sets the working directory to ‘/src’.
  • COPY ["ExampleWebApp.csproj", "."] copies the ExampleWebApp.csproj file from the build context to the ‘/src’ working directory.
  • RUN dotnet restore "./ExampleWebApp.csproj" executes the dotnet restore command to restore dependencies.
  • COPY . . adds all files from current directory to the ‘src’ working directory.
  • WORKDIR "/src/." sets the working directory to ‘/src’ (which is already the working directory at this point).
  • RUN dotnet build "ExampleWebApp.csproj" -c Release -o /app/build executes the dotnet build command which builds the project and all its dependencies.
  • FROM build AS publish means the publish stage picks up where the build stage left off.
  • RUN dotnet publish "ExampleWebApp.csproj" -c Release -o /app/publish /p:UseAppHost=false publishes the application and its dependencies to the ‘/app/publish’ directory, to be used for deployment.
  • FROM base AS final means the final stage picks up where the base stage left off.
  • WORKDIR /app sets the working directory to ‘/app’.
  • COPY --from=publish /app/publish . copies all contents of the ‘/app/publish’ directory from the publish stage to ‘/app’ in the final stage. By doing this, we keep the final image small because only the published files are copied; we do not need the source code or the SDK.
  • ENTRYPOINT ["dotnet", "ExampleWebApp.dll"] runs the web application using the dotnet command.

Container Images Are Made Up Of One Or More Layers

Each instruction in a Dockerfile translates to an image layer.

By sharing common layers between container images, you increase storage efficiency. For example, several images in different repositories might have a common ASP.NET Core base layer, but only one copy of that layer is stored in the registry. By sharing layers, you also optimize layer distribution to nodes and increase pull performance. For example, when a node pulls an image from the registry and the ASP.NET Core base layer is already present, that same layer isn’t transferred to the node. Instead, the image references the layer already existing on the node.
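
You can observe this layer sharing locally as well. The two .NET images used in this article share their base layers, so pulling the second one downloads only the layers that differ:

```shell
# The SDK image shares base layers with the ASP.NET runtime image,
# so the second pull reports those layers as "Already exists".
docker pull mcr.microsoft.com/dotnet/aspnet:6.0
docker pull mcr.microsoft.com/dotnet/sdk:6.0

# Inspect the layers that make up an image.
docker history mcr.microsoft.com/dotnet/sdk:6.0
```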

Create Container Registry And Perform A Build Using ACR Quick Task

The Dockerfile explained earlier is used to build a container image for an ASP.NET Web App. Let’s use this example to execute an ACR Build Task in the Azure Container Registry. You can do this by using the commands below. Make sure to update the ACRName variable to the name of your choosing. The commands below will do the following:

  1. Create a Resource Group in West Europe.
  2. Create an Azure Container Registry in the Resource Group.
  3. Start an ACR Build Task which will download the source code from GitHub, build it and store the image as examplewebapp:v1.

An important part to note in the example below is the last part of the GitHub URL: #main:ExampleWebApp. This sets the build context to the ExampleWebApp folder in the main branch. If you don’t set the context to this folder, the Docker build will throw an error that it cannot find the ExampleWebApp.csproj file.


# Update these placeholder values to names of your choosing.
$ResourceGroup = "rg-acr-example"
$ACRName = "youracrname"

az group create --resource-group $ResourceGroup --location westeurope

az acr create --resource-group $ResourceGroup --name $ACRName --sku Standard --location westeurope

az acr build --registry $ACRName --image examplewebapp:v1 https://github.com/wbosland/acr-build-example-aspnet.git#main:ExampleWebApp

As shown in the ACR Build output below, ACR Tasks displays the dependencies discovered. This enables ACR Tasks to automate image builds on base image updates. For example, when a base image is updated with OS or framework patches.

Sending context to registry: wboslandcontainerregistry...
Queued a build with ID: cbc
Waiting for an agent...
2024/03/03 18:29:53 Downloading source code...
2024/03/03 18:29:55 Finished downloading source code
2024/03/03 18:29:56 Using acb_vol_1663ba0e-5359-46ef-a0b5-61d977fb71d0 as the home volume
2024/03/03 18:29:56 Setting up Docker configuration...
2024/03/03 18:29:57 Successfully set up Docker configuration
2024/03/03 18:29:57 Logging in to registry: wboslandcontainerregistry.azurecr.io
2024/03/03 18:29:57 Successfully logged into wboslandcontainerregistry.azurecr.io
2024/03/03 18:29:57 Executing step ID: build. Timeout(sec): 28800, Working directory: 'ExampleWebApp', Network: ''
2024/03/03 18:29:57 Scanning for dependencies...
2024/03/03 18:29:58 Successfully scanned dependencies
2024/03/03 18:29:58 Launching container with name: build
Sending build context to Docker daemon  8.215MB
Step 1/17 : FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
6.0: Pulling from dotnet/aspnet
5d0aeceef7ee: Pulling fs layer
7c2bfda75264: Pulling fs layer
950196e58fe3: Pulling fs layer
ecf3c05ee2f6: Pulling fs layer
819f3b5e3ba4: Pulling fs layer
ecf3c05ee2f6: Waiting
819f3b5e3ba4: Waiting
7c2bfda75264: Verifying Checksum
7c2bfda75264: Download complete
5d0aeceef7ee: Verifying Checksum
5d0aeceef7ee: Download complete
950196e58fe3: Verifying Checksum
950196e58fe3: Download complete
5d0aeceef7ee: Pull complete
7c2bfda75264: Pull complete
950196e58fe3: Pull complete
ecf3c05ee2f6: Verifying Checksum
ecf3c05ee2f6: Download complete
ecf3c05ee2f6: Pull complete
819f3b5e3ba4: Verifying Checksum
819f3b5e3ba4: Download complete
819f3b5e3ba4: Pull complete
Digest: sha256:894c9f49ae9a72b64e61ef6071a33b6b616d0cf48ef25c83c4cf26d185f37565
Status: Downloaded newer image for mcr.microsoft.com/dotnet/aspnet:6.0
 ---> 9dace3b3a992
Step 2/17 : WORKDIR /app
 ---> Running in b91968d8a784
Removing intermediate container b91968d8a784
 ---> edfcc9b13bef
Step 3/17 : EXPOSE 80
 ---> Running in 8c8a6e40f6fb
Removing intermediate container 8c8a6e40f6fb
 ---> 172b0820a320
Step 4/17 : EXPOSE 443
 ---> Running in 8bc7acc3f184
Removing intermediate container 8bc7acc3f184
 ---> 328a2cff4919
Step 5/17 : FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
6.0: Pulling from dotnet/sdk
5d0aeceef7ee: Already exists
7c2bfda75264: Already exists
950196e58fe3: Already exists
ecf3c05ee2f6: Already exists
819f3b5e3ba4: Already exists
19984358397d: Pulling fs layer
d99f9f96f040: Pulling fs layer
d6d23fc1b8fc: Pulling fs layer
d6d23fc1b8fc: Verifying Checksum
d6d23fc1b8fc: Download complete
19984358397d: Verifying Checksum
19984358397d: Download complete
19984358397d: Pull complete
d99f9f96f040: Verifying Checksum
d99f9f96f040: Download complete
d99f9f96f040: Pull complete
d6d23fc1b8fc: Pull complete
Digest: sha256:fdac9ba57a38ffaa6494b93de33983644c44d9e491e4e312f35ddf926c55a073
Status: Downloaded newer image for mcr.microsoft.com/dotnet/sdk:6.0
 ---> 694fe26693f8
Step 6/17 : WORKDIR /src
 ---> Running in 91a6c2be5a25
Removing intermediate container 91a6c2be5a25
 ---> 246e83d8f6bc
Step 7/17 : COPY ["ExampleWebApp.csproj", "."]
 ---> 1103b4fcbd66
Step 8/17 : RUN dotnet restore "./ExampleWebApp.csproj"
 ---> Running in 8d5e652589f1
  Determining projects to restore...
  Restored /src/ExampleWebApp.csproj (in 783 ms).
Removing intermediate container 8d5e652589f1
 ---> c541a9129506
Step 9/17 : COPY . .
 ---> 9836918eea96
Step 10/17 : WORKDIR "/src/."
 ---> Running in a1cecf54a574
Removing intermediate container a1cecf54a574
 ---> 8c6ced95149c
Step 11/17 : RUN dotnet build "ExampleWebApp.csproj" -c Release -o /app/build
 ---> Running in 08a6ce358f79
MSBuild version 17.3.2+561848881 for .NET
  Determining projects to restore...
  All projects are up-to-date for restore.
  ExampleWebApp -> /app/build/ExampleWebApp.dll

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:05.98
Removing intermediate container 08a6ce358f79
 ---> 67818c8f57b0
Step 12/17 : FROM build AS publish
 ---> 67818c8f57b0
Step 13/17 : RUN dotnet publish "ExampleWebApp.csproj" -c Release -o /app/publish /p:UseAppHost=false
 ---> Running in d360bde5bce2
MSBuild version 17.3.2+561848881 for .NET
  Determining projects to restore...
  All projects are up-to-date for restore.
  ExampleWebApp -> /src/bin/Release/net6.0/ExampleWebApp.dll
  ExampleWebApp -> /app/publish/
Removing intermediate container d360bde5bce2
 ---> a25f2c422b85
Step 14/17 : FROM base AS final
 ---> 328a2cff4919
Step 15/17 : WORKDIR /app
 ---> Running in b833e85c05b1
Removing intermediate container b833e85c05b1
 ---> e408ee36c66d
Step 16/17 : COPY --from=publish /app/publish .
 ---> ed2ec5db8ee7
Step 17/17 : ENTRYPOINT ["dotnet", "ExampleWebApp.dll"]
 ---> Running in c6c45caaaea8
Removing intermediate container c6c45caaaea8
 ---> 8656ed9598dd
Successfully built 8656ed9598dd
Successfully tagged wboslandcontainerregistry.azurecr.io/examplewebapp:v1
2024/03/03 18:30:40 Successfully executed container: build
2024/03/03 18:30:40 Executing step ID: push. Timeout(sec): 3600, Working directory: 'ExampleWebApp', Network: ''
2024/03/03 18:30:40 Pushing image: wboslandcontainerregistry.azurecr.io/examplewebapp:v1, attempt 1
The push refers to repository [wboslandcontainerregistry.azurecr.io/examplewebapp]
0021d47298f0: Preparing
4e42b658b6d7: Preparing
407a0f7a925f: Preparing
5bb6a06c6676: Preparing
76049aadf39b: Preparing
a54d0098e057: Preparing
0baf2321956a: Preparing
a54d0098e057: Waiting
0baf2321956a: Waiting
4e42b658b6d7: Pushed
0021d47298f0: Pushed
407a0f7a925f: Pushed
a54d0098e057: Pushed
76049aadf39b: Pushed
5bb6a06c6676: Pushed
0baf2321956a: Pushed
v1: digest: sha256:947575bca39c6e683768bc281cd5d38fd2b1d73e1837d3a28a43b50f442d3ae6 size: 1788
2024/03/03 18:31:01 Successfully pushed image: wboslandcontainerregistry.azurecr.io/examplewebapp:v1
2024/03/03 18:31:01 Step ID: build marked as successful (elapsed time in seconds: 42.337604)
2024/03/03 18:31:01 Populating digests for step ID: build...
2024/03/03 18:31:03 Successfully populated digests for step ID: build
2024/03/03 18:31:03 Step ID: push marked as successful (elapsed time in seconds: 21.285054)
2024/03/03 18:31:03 The following dependencies were found:
2024/03/03 18:31:03 
- image:
    registry: wboslandcontainerregistry.azurecr.io
    repository: examplewebapp
    tag: v1
    digest: sha256:947575bca39c6e683768bc281cd5d38fd2b1d73e1837d3a28a43b50f442d3ae6
  runtime-dependency:
    registry: mcr.microsoft.com
    repository: dotnet/aspnet
    tag: "6.0"
    digest: sha256:894c9f49ae9a72b64e61ef6071a33b6b616d0cf48ef25c83c4cf26d185f37565
  buildtime-dependency:
  - registry: mcr.microsoft.com
    repository: dotnet/sdk
    tag: "6.0"
    digest: sha256:fdac9ba57a38ffaa6494b93de33983644c44d9e491e4e312f35ddf926c55a073
  git:
    git-head-revision: 30f09de00045a6de53e13def46ca5697ee2489f0

Run ID: cbc was successful after 1m10s

Automate Container Image Build On Code Commit Using ACR Tasks

In order to automate a container image build on code commit in GitHub, you need a Personal Access Token (PAT). Since I used my own GitHub repository, I was able to create one myself. If you want, you could clone the example code to your own GitHub repository and create the PAT. Or, you could ask the owner of a repository to create a PAT for you. Make sure to select the correct scopes so that ACR Tasks can access the repository. For public GitHub repositories, the scopes repo:status and public_repo are sufficient. For private GitHub repositories, ACR Tasks needs full repo control.

Create the ACR Task in the container registry you created in the previous chapter using the following PowerShell script:


az acr task create `
    --registry $ACRName `
    --name task-example-web-app `
    --image "examplewebapp:{{.Run.ID}}" `
    --context https://github.com/$GitUser/acr-build-example-aspnet.git#main:ExampleWebApp `
    --file Dockerfile `
    --git-access-token $GitPAT

After creation, you can test the task manually using the following command:

az acr task run --registry $ACRName --name task-example-web-app

And after you commit a change to the repository, run the following command to check whether the ACR Task has run:

az acr task list-runs --registry $ACRName --output table

As shown below, the task has been triggered both manually and by a commit to the GitHub repository.

RUN ID    TASK                  PLATFORM    STATUS     TRIGGER    STARTED               DURATION
--------  --------------------  ----------  ---------  ---------  --------------------  ----------
cbe       task-example-web-app  linux       Succeeded  Commit     2024-03-20T12:09:31Z  00:01:04
cbd       task-example-web-app  linux       Succeeded  Manual     2024-03-20T11:25:15Z  00:01:10
