In this post, you'll learn how to run a Java Spring Boot application on Azure Kubernetes Service (AKS) and connect it to Azure PostgreSQL using Azure AD Pod Identity. Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage applications based on microservices. Azure Active Directory pod-managed identities use Kubernetes primitives to associate managed identities for Azure resources and identities in Azure Active Directory (AAD) with pods.

What we'll cover in this post:

- Create an AKS cluster and Pod Identity.
- Create an Azure Database for PostgreSQL server.
- Prepare the Java Spring Boot application for AKS.
- Deploy Azure Container Registry (ACR).
- Deploy the Java application to Kubernetes with Kustomize.

The following diagram shows the architecture of the above steps:

AKS cluster and Pod Identity

I will assume you already have an Azure subscription set up. Before going any further, we need to register the EnablePodIdentityPreview feature and make sure the aks-preview Azure CLI extension is installed and up to date.

az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
az extension update --name aks-preview

Let's create a resource group and an AKS cluster with Azure CNI and pod-managed identity enabled.

export RESOURCE_GROUP=demo-k8s-rg
export CLUSTER_NAME=my-k8s-cluster

az group create --name=${RESOURCE_GROUP} --location eastus
az aks create -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME} --enable-managed-identity --enable-pod-identity --network-plugin azure --enable-addons monitoring --node-count 1 --generate-ssh-keys

Then create an identity:

export IDENTITY_RESOURCE_GROUP="my-identity-rg"
export IDENTITY_NAME="sp-application-identity"

az group create --name ${IDENTITY_RESOURCE_GROUP} --location eastus
az identity create --resource-group ${IDENTITY_RESOURCE_GROUP} --name ${IDENTITY_NAME}

Then, assign the required permissions to the created identity. The identity must have the Reader role on the resource group that contains the virtual machine scale set of our AKS cluster, and the acrpull role on the resource group that will hold our container registry so it can pull images from ACR.

export IDENTITY_CLIENT_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n ${IDENTITY_NAME} --query clientId -otsv)"
export IDENTITY_RESOURCE_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n ${IDENTITY_NAME} --query id -otsv)"
export RG_RESOURCE_ID="$(az group show -g ${RESOURCE_GROUP} --query id -otsv)"
export NODE_GROUP=$(az aks show -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME} --query nodeResourceGroup -o tsv)
export NODES_RESOURCE_ID=$(az group show -n $NODE_GROUP -o tsv --query "id")

az role assignment create --role "Reader" --assignee "$IDENTITY_CLIENT_ID" --scope $NODES_RESOURCE_ID
az role assignment create --role "acrpull" --assignee "$IDENTITY_CLIENT_ID" --scope $RG_RESOURCE_ID

Next, let's create a pod identity for the cluster using the following command.

az aks pod-identity add --resource-group ${RESOURCE_GROUP} --cluster-name ${CLUSTER_NAME} --namespace dev-ns --name my-sp-pod-identity --identity-resource-id ${IDENTITY_RESOURCE_ID}

Now the first step is done, and we can move on to the next one.

Azure Database for PostgreSQL server

In this section, we will create an Azure Database for PostgreSQL server and a database, and grant database access to the identity we created in the first step. First, create the server and the database.
export DB_SERVER=my-sp-db-server
export DB_RG=my-db-rg
export DB_NAME=my-sp-db
export PGSSLMODE=require

az group create --name=${DB_RG} --location eastus
az postgres server create --resource-group ${DB_RG} --name ${DB_SERVER} --location eastus --admin-user myadmin --admin-password P@ssword123 --sku-name B_Gen5_1
az postgres db create -g ${DB_RG} -s ${DB_SERVER} -n ${DB_NAME}

After the PostgreSQL server is ready, secure it by setting an IP firewall rule.

az postgres server firewall-rule create --resource-group ${DB_RG} --server-name ${DB_SERVER} --name "AllowAllLinuxAzureIps" --start-ip-address YOUR_LOCAL_CLIENT_IP --end-ip-address YOUR_LOCAL_CLIENT_IP

Next, add the Azure AD admin user to the PostgreSQL server; for more details about AD authentication, please refer to this link.

After the AD admin user has been set up, connect to the PostgreSQL database as the Azure AD administrator user using Azure AD authentication and run the following SQL statements:

SET aad_validate_oids_in_tenant = off;
CREATE ROLE myuser WITH LOGIN PASSWORD '<YOUR_IDENTITY_CLIENT_ID>' IN ROLE azure_ad_user;
CREATE DATABASE "my-sp-db";
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO myuser;

Replace <YOUR_IDENTITY_CLIENT_ID> with the client id of the identity we created in section 1.
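To see how this role will be used: Azure Database for PostgreSQL with AAD authentication expects the username in the form role@server-name and accepts an AAD access token as the password. The following is a minimal, hypothetical connectivity sketch (not part of the demo repo, class name AadPostgresCheck is made up) that assumes the PostgreSQL JDBC driver is on the classpath and that you already hold a valid access token for the https://ossrdbms-aad.database.windows.net resource, for example one fetched from the MSI endpoint inside the cluster as shown in the next section:

// Hypothetical connectivity check; illustrates the username format and
// the token-as-password model used later by the Spring Boot DataSource.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class AadPostgresCheck {
    public static void main(String[] args) throws Exception {
        // An AAD access token for https://ossrdbms-aad.database.windows.net,
        // passed in here as the first argument for simplicity.
        String accessToken = args[0];

        String url = "jdbc:postgresql://my-sp-db-server.postgres.database.azure.com:5432/my-sp-db?sslmode=require";
        Properties props = new Properties();
        props.setProperty("user", "myuser@my-sp-db-server");
        props.setProperty("password", accessToken);

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT current_user")) {
            if (rs.next()) {
                System.out.println("Connected as: " + rs.getString(1));
            }
        }
    }
}

Note that the username myuser@my-sp-db-server and the JDBC URL above are exactly the values we will later put into the dev ConfigMap as DB_USER and DB_HOST.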
Spring Boot application for AKS

The demo/sample application is a simple Spring Boot REST API; we will build the API into a Docker image and push it to Azure Container Registry (ACR). The complete Java project is in my GitHub repo; clone the repo and run the following command in the project root directory:

mvn install dependency:copy-dependencies -DskipTests && cd target/dependency; jar -xf ../*.jar && cd ../..

Make sure you have the Java JDK and Maven installed on your computer.

Next, create an Azure Container Registry.

az acr create --resource-group ${RESOURCE_GROUP} --location eastus --name myspdemo --sku Basic

Then log in to the ACR, and build and push the Java container image to the registry.

az acr login --name myspdemo && mvn compile jib:build

Note that we use the jib plugin in the Spring Boot project; for more details, visit this link.

To connect to the PostgreSQL database using the managed identity, we have to acquire an OAuth access token from the MSI endpoint:

http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=<YOUR_IDENTITY_CLIENT_ID>

Then, configure a DataSource programmatically in Spring Boot. The configuration class looks similar to this:

package com.example.awesomeprject;

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Configuration;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.json.JSONTokener;
import org.json.JSONObject;
import java.net.*;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.log4j.Logger;

@Configuration
public class DataSourceConfig {

    public static Logger logger = Logger.getLogger("global");

    @Value("${db.host}")
    private String Host;

    @Value("${db.user}")
    private String User;

    @Value("${db.name}")
    private String Database;

    @Value("${client_id}")
    private String ClientId;

    @Bean
    @RefreshScope
    public DataSource getDataSource() {
        DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
        try {
            // Request an access token for the managed identity from the MSI endpoint.
            URL url = new URL(
                "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id="
                + ClientId);
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestMethod("GET");
            con.setRequestProperty("Metadata", "true");

            // Parse the JSON response and extract the access token.
            BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
            JSONTokener tokener = new JSONTokener(in);
            JSONObject json = new JSONObject(tokener);
            String accessToken = json.getString("access_token");
            logger.info("accessToken: " + accessToken);

            // Use the access token as the database password.
            dataSourceBuilder.url(this.Host);
            dataSourceBuilder.username(this.User);
            dataSourceBuilder.password(accessToken);

            in.close();
            con.disconnect();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return dataSourceBuilder.build();
    }
}

Spring Boot resolves the ${db.host}, ${db.user}, ${db.name}, and ${client_id} placeholders from the DB_HOST, DB_USER, DB_NAME, and CLIENT_ID environment variables that we will inject into the pod in the next step.
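For completeness, the REST layer of such a demo API can be very small. The following is a hypothetical sketch (the controller in the actual repo may differ; UserController and the /users path are made-up names) assuming spring-boot-starter-web and spring-jdbc are on the classpath, so Spring Boot auto-configures a JdbcTemplate on top of the DataSource above; it queries the "user" table that we will create and seed via the ConfigMap-mounted schema.sql and data.sql below:

package com.example.awesomeprject;

import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller for illustration only.
@RestController
public class UserController {

    private final JdbcTemplate jdbcTemplate;

    public UserController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Returns the rows seeded by data.sql from the "user" table.
    @GetMapping("/users")
    public List<Map<String, Object>> getUsers() {
        return jdbcTemplate.queryForList("SELECT id, firstName, lastName FROM \"user\"");
    }
}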
Now that we have all of the resources, it's time to deploy our application pod.

Deploy to Kubernetes with Kustomize

With Kustomize, we can create multiple overlays and deploy the application to multiple environments on Kubernetes. Kustomize is a tool included with kubectl 1.14 that "lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is."

Make a directory .k8s/base for all the default configuration templates:

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - service.yaml
  - deployment.yaml

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: demo
spec:
  selector:
    matchLabels:
      app: demo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: demo
        aadpodidbinding: my-sp-pod-identity
    spec:
      containers:
        - image: myspdemo2021.azurecr.io/awesomeprject:latest
          name: demo
          ports:
            - containerPort: 8080
          env:
            - name: DB_SCHEMA
              valueFrom:
                configMapKeyRef:
                  name: sp-config
                  key: DB_SCHEMA
            - name: DB_DATA
              valueFrom:
                configMapKeyRef:
                  name: sp-config
                  key: DB_DATA
            - name: DS_INIT_MODE
              valueFrom:
                configMapKeyRef:
                  name: sp-config
                  key: DS_INIT_MODE
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: sp-config
                  key: DB_HOST
            - name: DB_USER
              valueFrom:
                configMapKeyRef:
                  name: sp-config
                  key: DB_USER
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: sp-config
                  key: DB_NAME
            - name: CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: sp-secret
                  key: CLIENT_ID
          volumeMounts:
            - name: config
              mountPath: 'app/resources/config'
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: sp-config
            items:
              - key: 'schema.sql'
                path: 'schema.sql'
              - key: 'data.sql'
                path: 'data.sql'

Note the aadpodidbinding: my-sp-pod-identity label on the pod template; this is what binds the pod to the pod identity we created in the first step.

namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ns

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: demo-service
  labels:
    app: demo
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: demo
  type: LoadBalancer

Then, make a .k8s/dev directory for the development environment configuration; Kustomize calls this an overlay. Add new configmap.yaml, secret.yaml, and kustomization.yaml files into the overlay directory .k8s/dev.

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: dev-
namespace: dev-ns
commonLabels:
  variant: dev
# patchesStrategicMerge:
resources:
  - configmap.yaml
  - secret.yaml
bases:
  - ../base

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: sp-config
data:
  DS_INIT_MODE: always
  DB_USER: myuser@my-sp-db-server
  DB_HOST: jdbc:postgresql://my-sp-db-server.postgres.database.azure.com:5432/my-sp-db?sslmode=require
  DB_NAME: my-sp-db
  DB_SCHEMA: config/schema.sql
  DB_DATA: config/data.sql
  data.sql: |
    INSERT INTO "user" (firstName, lastName)
    SELECT 'William','Ferguson'
    WHERE NOT EXISTS (
      SELECT id FROM "user" WHERE firstName = 'William' AND lastName = 'Ferguson'
    );
  schema.sql: |
    DROP TABLE IF EXISTS "user";
    CREATE TABLE "user" (
      id SERIAL PRIMARY KEY,
      firstName VARCHAR(100) NOT NULL,
      lastName VARCHAR(100) NOT NULL
    );

secret.yaml

apiVersion: v1
kind: Secret
data:
  CLIENT_ID: CLIENT_ID_ENCODED_WITH_BASE64
metadata:
  name: sp-secret
type: Opaque

In this demo, I store the base64-encoded client id of the identity in a Secret; for production, I suggest storing the client id in Azure Key Vault and integrating Azure Key Vault with AKS.
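Before we deploy, a note on how the application can consume the DS_INIT_MODE, DB_SCHEMA, and DB_DATA variables together with the schema.sql and data.sql files mounted from the ConfigMap. The sketch below is one possible wiring, not necessarily what the demo repo does (it may instead rely on Spring Boot's built-in SQL initialization properties); the class name DatabaseInitConfig and the db.schema, db.data, and ds.init.mode property names are assumptions, resolved from the corresponding environment variables by Spring's relaxed binding:

package com.example.awesomeprject;

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;
import org.springframework.jdbc.datasource.init.DataSourceInitializer;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

// Hypothetical wiring of the mounted SQL files; the actual demo project may differ.
@Configuration
public class DatabaseInitConfig {

    @Value("${db.schema}")            // e.g. DB_SCHEMA=config/schema.sql (mounted from the ConfigMap)
    private String schemaPath;

    @Value("${db.data}")              // e.g. DB_DATA=config/data.sql (mounted from the ConfigMap)
    private String dataPath;

    @Value("${ds.init.mode:always}")  // e.g. DS_INIT_MODE=always
    private String initMode;

    @Bean
    public DataSourceInitializer dataSourceInitializer(DataSource dataSource) {
        // Run the mounted schema and seed scripts against the DataSource at startup.
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
        populator.addScript(new FileSystemResource(schemaPath));
        populator.addScript(new FileSystemResource(dataPath));

        DataSourceInitializer initializer = new DataSourceInitializer();
        initializer.setDataSource(dataSource);
        initializer.setDatabasePopulator(populator);
        initializer.setEnabled("always".equalsIgnoreCase(initMode));
        return initializer;
    }
}

With DS_INIT_MODE set to always in the dev overlay, the schema and seed data would be reapplied on every startup, which is convenient for a demo but not something you would want in production.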
We are almost there! Now deploy these configuration files to the Kubernetes cluster.

kustomize build .k8s/dev/. | kubectl apply -f -

Once the application has been deployed, use kubectl to check the status of our application pod:

kubectl get pods -n dev-ns

We will eventually see our application pod in Running status with 1/1 containers in the READY column:

NAME                                  READY   STATUS    RESTARTS   AGE
dev-demo-deployment-6499974b5-2srzz   1/1     Running   0          23h

We can view Kubernetes logs, events, and pod metrics in real time in the Azure Portal. Container insights includes the Live Data feature, an advanced diagnostic feature that gives you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to kubectl logs -c, kubectl get events, and kubectl top pods. For more details about Container insights, refer to this link.

Conclusion

With the steps above, we now have a Java Spring Boot REST API running in Kubernetes and connecting to an Azure PostgreSQL database using AAD Pod Identity. I also walked you through how to deploy applications to Kubernetes with Kustomize. For the complete code of this sample/demo, please refer to my GitHub repo.