Deploy a Micronaut Microservices Application to Oracle Cloud Infrastructure Container Engine for Kubernetes

This guide describes how to deploy a Micronaut® application, consisting of three microservices, to the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) using the Micronaut Kubernetes project.

The Micronaut Kubernetes project provides integration between Micronaut and Kubernetes. It adds support for the following features:

  • Service Discovery
  • Configuration client for config maps and secrets
  • Kubernetes blocking and non-blocking clients built on top of the official Kubernetes Java SDK

OKE is a managed Kubernetes service for deploying containerized applications to the cloud.

This guide demonstrates how to use Kubernetes Service Discovery and Distributed Configuration to connect three microservices, and shows how the Micronaut integration with Kubernetes simplifies deployment to OKE.

The application consists of three microservices:

  • users - manages customer data. Customers can place orders on items, and new customers can be created. It requires HTTP basic authentication for access.
  • orders - contains all the orders that customers have created, as well as the available items that customers can order. This microservice also enables the creation of new orders. It requires HTTP basic authentication for access.
  • api - acts as a gateway to the orders and users microservices. It combines results from both microservices and checks data when a customer creates a new order.

Prerequisites #

A note regarding your development environment

Consider using Visual Studio Code, which provides native support for developing applications with the Graal Development Kit extension.

Note: If you use IntelliJ IDEA, enable annotation processing.

Windows platform: The GDK guides are compatible with Gradle only. Maven support is coming soon.

1. Create or Download a Microservices Application #

You can create a microservices application from scratch by following this guide, or you can download the completed example. The application ZIP file is saved in your default downloads directory; unzip it and proceed to the next steps.

Note: By default, a Micronaut application detects its runtime environment. A detected environment (in this case, k8s) overrides the default specified environment (in this case, oraclecloud). This means that you should locate your configuration in the application-k8s.properties and bootstrap-k8s.properties files. Alternatively, you can specify the oraclecloud environment passing it as a command-line option (-Dmicronaut.environments=oraclecloud) or via an environment variable (MICRONAUT_ENVIRONMENTS=oraclecloud).
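
For example, to run one of the microservices locally with the oraclecloud environment enabled explicitly, you could use either of the following (a sketch; the JAR path is an assumption based on a default Gradle build):

# pass the environment as a command-line option:
java -Dmicronaut.environments=oraclecloud -jar users/build/libs/users-0.1-all.jar

# or set it via an environment variable:
MICRONAUT_ENVIRONMENTS=oraclecloud java -jar users/build/libs/users-0.1-all.jar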

2. Set Up Oracle Cloud Infrastructure #

This guide requires a collection of Oracle Cloud Infrastructure resources (known as a “stack”).

Instead of creating the stack manually, use the following steps to provision the resources using Oracle Cloud Infrastructure Resource Manager:

  1. Download the GDK Kubernetes Terraform configuration ZIP file to your default downloads directory. (For a general introduction to Terraform and the “infrastructure-as-code” model, see https://www.terraform.io.)

  2. Follow the instructions in Creating a Stack from a Zip File.
    • In the “ssh_public_key” field of the Configure variables panel, paste the contents of your public SSH key.
    • In the Review panel, click Run apply.
  3. Click Create. (It can take up to 15 minutes to provision the stack. The Log provides details of progress.)

    Note: These resources must not already exist in your Oracle Cloud Infrastructure tenancy, otherwise the Resource Manager will fail to provision the stack—in the Log you will see a message similar to Error: 409-NAMESPACE_CONFLICT, Repository already exists. Delete the conflicting resources before re-attempting to provision the stack.

  4. When the job completes, click Outputs in the list of the job’s resources and make note of the following values:
    • compute_instance_public_ip (the public IP address of the Compute instance)
    • tenancy-namespace (the namespace of the tenancy)
    • region (your region identifier)
    • cluster-id (the OCID of the cluster you created)
  5. Define an environment variable for the public IP address of the Compute instance, replacing <compute_instance_public_ip> with its value:

    Linux/macOS:

     export COMPUTE_INSTANCE_PUBLIC_IP=<compute_instance_public_ip>

    Windows CMD:

     set COMPUTE_INSTANCE_PUBLIC_IP=<compute_instance_public_ip>

    Windows PowerShell:

     $COMPUTE_INSTANCE_PUBLIC_IP = "<compute_instance_public_ip>"
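
    Optionally, verify that the instance is reachable over SSH (a sketch; replace /path/to/ssh-key with the private key that matches the ssh_public_key you supplied to the stack):

     ssh -i /path/to/ssh-key opc@$COMPUTE_INSTANCE_PUBLIC_IP 'echo connected'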

3. Prepare to Deploy the Microservices #

  1. From your local terminal, copy the ZIP file to the Compute instance using scp:

    Gradle:

     scp -i /path/to/ssh-key \
            /path/to/downloads/oci-kubernetes-demo-java-gradle.zip \
            opc@$COMPUTE_INSTANCE_PUBLIC_IP:

    Maven:

     scp -i /path/to/ssh-key \
            /path/to/downloads/oci-kubernetes-demo-java-maven.zip \
            opc@$COMPUTE_INSTANCE_PUBLIC_IP:
  2. From the same terminal, connect to your Compute instance using ssh (ssh -i /path/to/ssh-key opc@$COMPUTE_INSTANCE_PUBLIC_IP), unzip the ZIP file using unzip, and then cd into the newly created directory. (These commands are sketched at the end of this list.)

  3. Define some environment variables to make the deployment process easier:

    Linux/macOS:

     export OCI_TENANCY_NAMESPACE=<tenancy-namespace>
     export OCI_REGION=<region>
     export OCI_CLUSTER_ID=<cluster-id>

    Windows CMD:

     set OCI_TENANCY_NAMESPACE=<tenancy-namespace>
     set OCI_REGION=<region>
     set OCI_CLUSTER_ID=<cluster-id>

    Windows PowerShell:

     $OCI_TENANCY_NAMESPACE = "<tenancy-namespace>"
     $OCI_REGION = "<region>"
     $OCI_CLUSTER_ID = "<cluster-id>"
  4. Before you can publish a container image to Container Registry, you must first authenticate with it. For this, follow the steps in Getting an Auth Token to create an authentication token.

  5. Authenticate with Container Registry using docker and your region identifier:

     docker login $OCI_REGION.ocir.io
    
    • When asked for a username, provide <tenancy-namespace>/<username>, for example ocideveloper/example@example.com.

    • When asked for a password, provide the Auth token.

    The command should complete by printing “Login Succeeded”. (It may take some time before your authentication token activates.)
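
    If you want to double-check the earlier steps, the following sketch connects to the instance, unpacks the application, and then confirms that docker stored credentials for your region's registry:

     # connect and unpack (the key path and unzipped directory name are assumptions):
     ssh -i /path/to/ssh-key opc@$COMPUTE_INSTANCE_PUBLIC_IP
     unzip oci-kubernetes-demo-java-gradle.zip
     cd oci-kubernetes-demo-java-gradle

     # after docker login, the registry host should appear in the credential store:
     grep "$OCI_REGION.ocir.io" $HOME/.docker/config.json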

3.1. Create a Container Image of the Native Users Microservice #

To create a container image of the native users microservice named “users”, run the following command from the users/ directory:

Gradle:

./gradlew dockerBuildNative

Note: If you encounter problems creating a container image, run the following command from the users/build/docker/native-main/ directory:

docker build . -t users-oci -f DockerfileNative

Maven:

./mvnw clean package -Dpackaging=docker-native -Pgraalvm

Note: If you encounter problems creating a container image, run the following command from the users/target/ directory:

docker build . -t users-oci -f Dockerfile
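
Before tagging and pushing the image, you can confirm that it was created locally (a quick check; the output columns depend on your Docker version):

docker images users-oci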

3.2. Push the Users Microservice to a Container Repository #

The Terraform job created a container repository named gdk-k8s/users-oci.

  1. Tag the existing users microservice container image with details of the container repository:

     docker tag users-oci:latest $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/users-oci:latest
    
  2. Push the tagged users microservice container image to the remote repository:

     docker push $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/users-oci:latest
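
    Optionally, confirm that the push succeeded by querying the remote repository for the image manifest (a sketch; this requires a Docker client with manifest support and the registry login from section 3):

     docker manifest inspect $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/users-oci:latest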
    

3.3. Update the Users Microservice #

Edit the file named users/k8s-oci.yml as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: gdk-k8s
  name: users
spec:
  selector:
    matchLabels:
      app: users
  template:
    metadata:
      labels:
        app: users
    spec:
      serviceAccountName: gdk-service
      containers:
        - name: users
          image: '<region>.ocir.io/<tenancy-namespace>/gdk-k8s/users-oci:latest' # <1>
          imagePullPolicy: Always # <2>
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            failureThreshold: 10
          env:
            - name: MICRONAUT_ENVIRONMENTS
              value: oraclecloud
      imagePullSecrets:
        - name: ocirsecret # <3>
---
apiVersion: v1
kind: Service
metadata:
  namespace: gdk-k8s
  name: users
spec:
  selector:
    app: users
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080

1 The tag of the container image in Container Registry. Change the <region> to your region and replace <tenancy-namespace> with your tenancy namespace. (This MUST match the tag you created above in step 1 of section 3.2.)

2 The imagePullPolicy is Always, which means that Kubernetes will always pull the latest version of the image from the container registry.

3 The name of a secret to pull container images from Container Registry. (You will create the secret in section 4.)
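
To catch editing mistakes early, you can ask kubectl for a dry run of the edited manifest before deploying it in section 4 (a sketch; a client-side dry run mainly checks syntax, and depending on your kubectl version it may still contact the cluster for validation, so run it after you generate the kubeconfig):

kubectl apply --dry-run=client -f users/k8s-oci.yml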

3.4. Create a Container Image of the Native Orders Microservice #

To create a container image of the native orders microservice named “orders”, run the following command from the orders/ directory:

Gradle:

./gradlew dockerBuildNative

Note: If you encounter problems creating a container image, run the following command from the orders/build/docker/native-main/ directory:

docker build . -t orders-oci -f DockerfileNative

Maven:

./mvnw clean package -Dpackaging=docker-native -Pgraalvm

Note: If you encounter problems creating a container image, run the following command from the orders/target/ directory:

docker build . -t orders-oci -f Dockerfile
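
The build and push steps for the remaining microservices mirror those for users. If you prefer, you can build all three native images in one pass (a hypothetical shortcut, assuming the Gradle build and the three module directories side by side in the project root):

for m in users orders api; do
  (cd "$m" && ./gradlew dockerBuildNative)
done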

3.5. Push the Orders Microservice to a Container Repository #

The Terraform job created a container repository named gdk-k8s/orders-oci.

  1. Tag the existing orders microservice container image with details of the container repository:

     docker tag orders-oci:latest $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/orders-oci:latest
    
  2. Push the tagged orders microservice container image to the remote repository:

     docker push $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/orders-oci:latest
    

3.6. Update the Orders Microservice #

Edit the file named orders/k8s-oci.yml as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: gdk-k8s
  name: orders
spec:
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      serviceAccountName: gdk-service
      containers:
        - name: orders
          image: '<region>.ocir.io/<tenancy-namespace>/gdk-k8s/orders-oci:latest' # <1>
          imagePullPolicy: Always # <2>
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            failureThreshold: 10
          env:
            - name: MICRONAUT_ENVIRONMENTS
              value: oraclecloud
      imagePullSecrets:
        - name: ocirsecret # <3>
---
apiVersion: v1
kind: Service
metadata:
  namespace: gdk-k8s
  name: orders
spec:
  selector:
    app: orders
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080

1 The container image tag that exists in Container Registry. Change the <region> to your region and replace <tenancy-namespace> with your tenancy namespace. (This MUST match the tag you created above in step 1 of section 3.5.)

2 The imagePullPolicy is Always, which means that Kubernetes will always pull the latest version of the image from the container registry.

3 The name of a secret to pull container images from Container Registry. (You will create the secret in section 4.)

3.7. Create a Container Image of the Native API (Gateway) Microservice #

To create a container image of the native api microservice named “api”, run the following command from the api/ directory:

Gradle:

./gradlew dockerBuildNative

Note: If you encounter problems creating a container image, run the following command from the api/build/docker/native-main/ directory:

docker build . -t api-oci -f DockerfileNative

Maven:

./mvnw clean package -Dpackaging=docker-native -Pgraalvm

Note: If you encounter problems creating a container image, run the following command from the api/target/ directory:

docker build . -t api-oci -f Dockerfile

3.8. Push the API Microservice to a Container Repository #

The Terraform job created a container repository named gdk-k8s/api-oci.

  1. Tag the existing api microservice container image with details of the container repository:

     docker tag api-oci:latest $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/api-oci:latest
    
  2. Push the tagged api microservice container image to the remote repository:

     docker push $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/api-oci:latest
    

3.9. Update the API Microservice #

Edit the file named api/k8s-oci.yml as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: gdk-k8s
  name: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      serviceAccountName: gdk-service
      containers:
        - name: api
          image: '<region>.ocir.io/<tenancy-namespace>/gdk-k8s/api-oci:latest' # <1>
          imagePullPolicy: Always # <2>
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            failureThreshold: 10
          env:
            - name: MICRONAUT_ENVIRONMENTS
              value: oraclecloud
      imagePullSecrets:
        - name: ocirsecret # <3>
---
apiVersion: v1
kind: Service
metadata:
  namespace: gdk-k8s
  name: api
  annotations: # <4>
    oci.oraclecloud.com/load-balancer-type: lb
    service.beta.kubernetes.io/oci-load-balancer-shape: flexible
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: '10'
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: '10'
spec:
  selector:
    app: api
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080

1 The container image name that exists in Container Registry. Change the <region> to your region and replace <tenancy-namespace> with your tenancy namespace. (This MUST match the tag you created above in step 1 of section 3.8.)

2 The imagePullPolicy is Always, which means that Kubernetes will always pull the latest version of the image from the container registry.

3 The name of a secret to pull container images from Container Registry. (You will create the secret in section 4.)

4 Metadata annotations for an Oracle Cloud Infrastructure Load Balancer. The flexible shape minimum and maximum values specify the load balancer's bandwidth range in Mbps.

4. Deploy Microservices to OKE #

  1. Create a directory for a kubectl configuration:

     mkdir -p $HOME/.kube
    
  2. Generate a kubectl configuration for authentication to OKE:

     oci ce cluster create-kubeconfig \
           --cluster-id $OCI_CLUSTER_ID \
           --file $HOME/.kube/config \
           --region $OCI_REGION \
           --token-version 2.0.0 \
           --kube-endpoint PUBLIC_ENDPOINT
    
  3. Deploy the auth.yml file that you created in the Deploy a Micronaut Microservices Application to a Local Kubernetes Cluster guide:

     kubectl apply -f auth.yml
    
  4. Create an ocirsecret secret for authentication to Container Registry using the command below. A secret is a Kubernetes object that stores encrypted credential data, such as a username and password. In this case, OKE uses it to authenticate with Container Registry so that it can pull the microservices' container images. Provide <username> in the same <tenancy-namespace>/<username> format you used for docker login, and <auth-token> as the auth token you created earlier.

     kubectl create secret docker-registry ocirsecret \
           --docker-server=$OCI_REGION.ocir.io \
           --docker-username=<username> \
           --docker-password=<auth-token> \
           --namespace=gdk-k8s
    
  5. Deploy the users microservice:

     kubectl apply -f users/k8s-oci.yml
    
  6. Deploy the orders microservice:

     kubectl apply -f orders/k8s-oci.yml
    
  7. Deploy the api microservice (after this step you can check all three rollouts, as sketched after this list):

     kubectl apply -f api/k8s-oci.yml
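
    After all three manifests are applied, you can watch each Deployment roll out (a sketch; each command blocks until its deployment is ready or it times out):

     kubectl rollout status deployment/users -n gdk-k8s
     kubectl rollout status deployment/orders -n gdk-k8s
     kubectl rollout status deployment/api -n gdk-k8s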
    

5. Test Integration Between the Microservices Deployed to OKE #

  1. Run the following command to check the status of the pods and make sure that all of them have the status “Running”:
     kubectl get pods -n=gdk-k8s
    
     NAME                      READY   STATUS    RESTARTS   AGE
     api-6fb4cd949f-kxxx8      1/1     Running   0          13s
     orders-595887ddd6-6lzp4   1/1     Running   0          25s
     users-df6f78cd7-lgnzx     1/1     Running   0          37s
  2. Run this command to check the status of the microservices:

     kubectl get services -n=gdk-k8s
    
     NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)             AGE
     api          LoadBalancer   10.96.70.48    129.146.149.81   8080:31666/TCP      2m9s
     orders       NodePort       10.96.94.130   <none>           8080:31702/TCP      2m22s
     users        NodePort       10.96.34.174   <none>           8080:31528/TCP      2m33s

    If the EXTERNAL-IP property of the api service has a <pending> status, wait a few seconds and then run the command again. If the <pending> status persists for more than one minute, try the following:

    • Verify that a Load Balancer was created and has an external (public) IP address.
      1. In the Oracle Cloud Console, open the navigation menu.
      2. Click Networking, and then click Load balancers.
      3. Click Load balancer.
      4. Ensure a Load Balancer has been created for your service and has an external IP address. It usually takes a few minutes to allocate an IP address.
    • Check Load Balancer quota (if a Load Balancer was not created, you may have reached the quota limit).
      1. In the Oracle Cloud Console, open the navigation menu.
      2. Click Governance & Administration. Under Tenancy Management, click Limits, Quotas and Usage.
      3. Select the Load Balancer quota. If the quota limit has been reached, request a quota increase or delete unused Load Balancers.
  3. Retrieve the URL of the api microservice and set it as the value of the $API_URL environment variable:

     export API_URL=http://$(kubectl get svc api -n=gdk-k8s -o json | jq -r '.status.loadBalancer.ingress[0].ip'):8080
    
  4. Run a curl command to create a new user via the api microservice:

     curl -X "POST" "$API_URL/api/users" \
          -H 'Content-Type: application/json; charset=utf-8' \
          -d '{ "first_name": "Nemanja", "last_name": "Mikic", "username": "nmikic" }' \
          | jq
    

    Your output should look like:

     {
       "id":1,
       "username":"nmikic",
       "first_name":"Nemanja",
       "last_name":"Mikic"
     }
    
  5. Run a curl command to create a new order via the api microservice:

     curl -X "POST" "$API_URL/api/orders" \
          -H 'Content-Type: application/json; charset=utf-8' \
          -d '{ "user_id": 1, "item_ids": [1,2] }' \
          | jq
    

    Your output should include details of the order, as follows:

     {
       "id": 1,
       "user": {
         "first_name": "Nemanja",
         "last_name": "Mikic",
         "id": 1,
         "username": "nmikic"
       },
       "items": [
         {
           "id": 1,
           "name": "Banana",
           "price": 1.5
         },
         {
           "id": 2,
           "name": "Kiwi",
           "price": 2.5
         }
       ],
       "total": 4.0
     }
    
  6. Run a curl command to list the orders:

     curl "$API_URL/api/orders" \
          -H 'Content-Type: application/json; charset=utf-8' \
          | jq
    

    You should see output that is similar to the following:

     [
       {
         "id": 1,
         "user": {
           "first_name": "Nemanja",
           "last_name": "Mikic",
           "id": 1,
           "username": "nmikic"
         },
         "items": [
           {
             "id": 1,
             "name": "Banana",
             "price": 1.5
           },
           {
             "id": 2,
             "name": "Kiwi",
             "price": 2.5
           }
         ],
         "total": 4.0
       }
     ]
    
  7. Try to place an order for a user who does not exist (with id 100). Run a curl command:

     curl -X "POST" "$API_URL/api/orders" \
          -H 'Content-Type: application/json; charset=utf-8' \
          -d '{ "user_id": 100, "item_ids": [1,2] }' \
          | jq
    

    You should see the following error message:

     {
       "message": "Bad Request",
       "_links": {
         "self": [
           {
             "href": "/api/orders",
             "templated": false
           }
         ]
       },
       "_embedded": {
         "errors": [
           {
             "message": "User with id 100 doesn't exist"
           }
         ]
       }
     }
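
    If any of these requests misbehave, the pod logs are the first place to look (a sketch; kubectl selects one pod of the deployment for you):

     kubectl logs deployment/api -n gdk-k8s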
    

6. Clean Up Cloud Resources #

After you have finished this guide, clean up the resources you created:

  1. Clean up the Oracle Cloud Infrastructure Load Balancer by deleting the namespace you created (you can confirm the deletion with the check sketched after this list):
     kubectl delete namespace gdk-k8s
    
  2. Destroy the remaining Oracle Cloud Infrastructure resources associated with the stack by following the steps in Creating a Destroy Job.

  3. Follow the instructions in Deleting a Stack to delete the stack.
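
After step 1 (and before you destroy the stack), you can confirm that the namespace, and with it the api load balancer service, is gone (a sketch; deletion can take a minute or two):

     kubectl get namespace gdk-k8s

Once the deletion completes, this command reports that the namespace is not found.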

Summary #

This guide demonstrated how to use Kubernetes Service Discovery and Distributed Configuration, provided with the Micronaut Kubernetes integration, to connect three microservices, and to deploy these microservices to a Kubernetes cluster in Oracle Cloud Infrastructure Container Engine for Kubernetes.