Expert’s Guide to Deploying a Micronaut Microservices Application to Oracle Cloud Infrastructure Container Engine for Kubernetes
This guide describes how to deploy a Micronaut® application, consisting of three microservices, to the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) using the Micronaut Kubernetes project.
The Micronaut Kubernetes project provides integration between Micronaut and Kubernetes. It adds support for the following features:
Service Discovery
Configuration client for config maps and secrets
Kubernetes blocking and non-blocking clients built on top of the official Kubernetes Java SDK
Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
OKE is a managed Kubernetes service for deploying containerized applications to the cloud.
Note: The guide assumes that the reader is familiar with using the Oracle Cloud Infrastructure Command Line Interface (CLI).
Note on the Sample Application
The application consists of three microservices:
- users - contains the data of customers, who can place orders on items; new customers can also be created. It requires HTTP basic authentication to access it.
- orders - contains all the orders that customers have created, as well as the available items that customers can order. This microservice also enables the creation of new orders. It requires HTTP basic authentication to access it.
- api - acts as a gateway to the orders and users services. It combines results from both services and validates the data when a customer creates a new order.
Prerequisites
- JDK 17 or higher. See Setting up Your Desktop.
- An Oracle Cloud Infrastructure account. See Setting up Your Cloud Accounts.
- An Oracle Cloud Infrastructure compartment.
- Appropriate permissions granted to your user account to manage OKE. (For more information, see Policy Configuration for Cluster Creation and Deployment.)
- The Oracle Cloud Infrastructure CLI installed, with local access configured.
- A Docker-API compatible container runtime, such as Rancher Desktop or Docker, installed and running.
- jq: a lightweight and flexible command-line JSON processor.
- kubectl: to deploy the application to OKE.
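A quick way to confirm the command-line prerequisites before starting is the sketch below. It only checks that each tool is on your PATH, not that its version is recent enough:

```shell
# Report any prerequisite CLI tool that is not on the PATH.
for tool in oci docker jq kubectl; do
  command -v "$tool" >/dev/null 2>&1 || echo "Missing: $tool"
done
```

If the loop prints nothing, all four tools were found.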
A note regarding your development environment
Consider using Visual Studio Code, which provides native support for developing applications with the Graal Development Kit for Micronaut Extension Pack.
Note: If you use IntelliJ IDEA, enable annotation processing.
Windows platform: The GDK guides are compatible with Gradle only. Maven support is coming soon.
1. Create or Download a Microservices Application
Follow the steps below to create the application from scratch. Alternatively, you can download the completed example: the application ZIP file is downloaded to your default downloads directory; unzip it and proceed to the next steps.
Note: By default, a Micronaut application detects its runtime environment. A detected environment (in this case, k8s) overrides the default specified environment (in this case, oraclecloud). This means that you should put your configuration in the application-k8s.properties and bootstrap-k8s.properties files. Alternatively, you can specify the oraclecloud environment by passing it as a command-line option (-Dmicronaut.environments=oraclecloud) or via an environment variable (MICRONAUT_ENVIRONMENTS=oraclecloud).
2. Create an Oracle Cloud Infrastructure Kubernetes Cluster
You will use the Quick Create option in OKE to create a Kubernetes cluster.
- Log in to the Oracle Cloud console and open the navigation menu. Click Developer Services. Under Containers & Artifacts, click Kubernetes Clusters (OKE).
- Select your compartment from the Compartment drop-down list.
- Click Create cluster and select the Quick create option. Click Submit.
- Enter a name for your cluster, for example, gdk-k8s. Select Public Endpoint for the Kubernetes API endpoint, Private Workers for the worker nodes, and the default shape. Click Next.
- Click Create cluster.
- It may take a few minutes to create all the resources. When they are all complete, click Close.
- From the Cluster details tab, click Copy to copy the Cluster Id (you will need it later).
3. Prepare to Deploy Microservices
3.1. Export Environment Variables
Define some environment variables to make the deploying process easier:
- OCI_USER_ID to store your user OCID, which you can find in your Oracle Cloud Infrastructure configuration file (usually ~/.oci/config).
- OCI_TENANCY_NAMESPACE to store the tenancy namespace (retrieve it using the Oracle Cloud Infrastructure CLI).
- OCIR_USERNAME to store your username in the format <tenancy_namespace>/<username>. You can reuse OCI_TENANCY_NAMESPACE and only edit the <username> part.
- OCI_REGION to store your cloud region identifier, for example, "us-phoenix-1".
- OCI_CLUSTER_ID to store the Cluster Id that you copied earlier.
- OCI_COMPARTMENT_ID to store your compartment OCID.
For example:
export OCI_USER_ID="ocid1.user.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
export OCI_TENANCY_NAMESPACE=$(oci os ns get | jq .data -r)
export OCIR_USERNAME="$OCI_TENANCY_NAMESPACE/<username>"
export OCI_REGION="<region>"
export OCI_CLUSTER_ID="ocid1.cluster.oc1.iad.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
export OCI_COMPARTMENT_ID="ocid1.compartment.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
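Before continuing, you can verify that all six variables are set. This is a small sketch, not part of the original guide; it uses eval for a POSIX-portable indirect variable lookup:

```shell
# Print the name of any variable from section 3.1 that is empty or unset.
for var in OCI_USER_ID OCI_TENANCY_NAMESPACE OCIR_USERNAME OCI_REGION OCI_CLUSTER_ID OCI_COMPARTMENT_ID; do
  eval "val=\${$var:-}"
  [ -n "$val" ] || echo "Not set: $var"
done
```

No output means every variable is populated.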
3.2. Authenticate to Oracle Cloud Infrastructure Registry
- Create an AUTH_TOKEN to authenticate to Oracle Cloud Infrastructure Registry (also known as Container Registry). (Oracle Cloud Infrastructure restricts you to two authentication tokens at the same time. If you already have two tokens, use an existing one or delete one that you are not using.)
- Export the AUTH_TOKEN variable:

  export AUTH_TOKEN=$(oci iam auth-token create --user-id $OCI_USER_ID --description gdk-k8s-token | jq -r '.data.token')

- Run the next command to log in to ocir.io (Container Registry):

  docker login $OCI_REGION.ocir.io -u $OCIR_USERNAME -p $AUTH_TOKEN

  The command should complete by printing "Login Succeeded". (It may take some time before your authentication token activates.)
3.3. Create a Container Image of the Native users Microservice
To create a container image of the native users microservice named "users", run the following command from the users/ directory:
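The build command itself is missing from this copy of the guide. For a Micronaut/GDK project, the native container image is typically built with one of the following; both are assumptions here, so verify the task and wrapper path against your project's build files:

```shell
# Gradle: builds a native executable and packages it in a container image
# (dockerBuildNative is provided by the Micronaut Gradle plugin).
./gradlew dockerBuildNative

# Maven equivalent (verify against your pom.xml):
./mvnw package -Dpackaging=docker-native
```

In a multi-module project the Gradle wrapper usually lives in the project root, so from the users/ directory you may need ../gradlew instead.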
Note: If you are having issues building a container image, try the following command from the users/target/ directory:
Note: Ensure that you build container images for the correct CPU architecture. For instance, if you are using AArch64, set the DOCKER_DEFAULT_PLATFORM environment variable to the value linux/amd64. Alternatively, you can use AArch64 instances within your Kubernetes cluster.
3.4. Push the users Microservice to a Container Repository
- Create a container repository named "gdk-k8s/users-oci" in your compartment:

  export USERS_REPOSITORY=$(oci artifacts container repository create --display-name gdk-k8s/users-oci --compartment-id $OCI_COMPARTMENT_ID | jq .data.id -r)

- Tag the existing users microservice container image with the details of the container repository:

  docker tag users-oci:latest $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/users-oci:latest

- Push the tagged users microservice container image to the remote repository:

  docker push $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/users-oci:latest
3.5. Update the users Microservice
Edit the file named users/k8s-oci.yml as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: gdk-k8s
  name: "users"
spec:
  selector:
    matchLabels:
      app: "users"
  template:
    metadata:
      labels:
        app: "users"
    spec:
      serviceAccountName: gdk-service
      containers:
        - name: "users"
          image: <region-key>.ocir.io/<tenancy-namespace>/gdk-k8s/users-oci:latest (1)
          imagePullPolicy: Always (2)
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            failureThreshold: 10
          env:
            - name: MICRONAUT_ENVIRONMENTS
              value: "oraclecloud"
      imagePullSecrets:
        - name: ocirsecret (3)
---
apiVersion: v1
kind: Service
metadata:
  namespace: gdk-k8s
  name: "users"
spec:
  selector:
    app: "users"
  type: NodePort
  ports:
    - protocol: "TCP"
      port: 8080
1 The container image tag that exists in Container Registry. Replace <region-key> with your region (the same value as $OCI_REGION, so that it matches the tag you pushed) and <tenancy-namespace> with your tenancy namespace. (This MUST match the tag you created in step 2 of section 3.4.)
2 The imagePullPolicy is set to Always, which means that Kubernetes will always pull the latest version of the image from the container registry.
3 The name of the secret used to pull container images from Container Registry. (You will create the secret in section 4.)
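Rather than editing the placeholders by hand, you can substitute them from the variables exported in section 3.1. This is a hypothetical helper, not part of the guide; the same pattern works for orders/k8s-oci.yml and api/k8s-oci.yml:

```shell
# Replace the <region-key> and <tenancy-namespace> placeholders in place;
# sed keeps a .bak backup of the original file.
sed -i.bak \
  -e "s|<region-key>|${OCI_REGION}|g" \
  -e "s|<tenancy-namespace>|${OCI_TENANCY_NAMESPACE}|g" \
  users/k8s-oci.yml
```

Using $OCI_REGION here keeps the image reference identical to the tag you pushed in section 3.4.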
3.6. Create a Container Image of the Native orders Microservice
To create a container image of the native orders microservice named "orders", run the following command from the orders/ directory:
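As in section 3.3, the build command is not shown in this copy of the guide. A typical Micronaut/GDK invocation is one of the following; both are assumptions, so verify against your build files:

```shell
# Gradle (Micronaut Gradle plugin task for native container images):
./gradlew dockerBuildNative

# Maven equivalent (verify against your pom.xml):
./mvnw package -Dpackaging=docker-native
```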
Note: If you are having issues building a container image, try the following command from the orders/target/ directory:
Note: Ensure that you build container images for the correct CPU architecture. For instance, if you are using AArch64, set the DOCKER_DEFAULT_PLATFORM environment variable to the value linux/amd64. Alternatively, you can use AArch64 instances within your Kubernetes cluster.
3.7. Push the orders Microservice to a Container Repository
- Create a container repository named "gdk-k8s/orders-oci" in your compartment:

  export ORDERS_REPOSITORY=$(oci artifacts container repository create --display-name gdk-k8s/orders-oci --compartment-id $OCI_COMPARTMENT_ID | jq .data.id -r)

- Tag the existing orders microservice container image with the details of the container repository:

  docker tag orders-oci:latest $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/orders-oci:latest

- Push the tagged orders microservice container image to the remote repository:

  docker push $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/orders-oci:latest
3.8. Update the orders Microservice
Edit the file named orders/k8s-oci.yml as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: gdk-k8s
  name: "orders"
spec:
  selector:
    matchLabels:
      app: "orders"
  template:
    metadata:
      labels:
        app: "orders"
    spec:
      serviceAccountName: gdk-service
      containers:
        - name: "orders"
          image: <region-key>.ocir.io/<tenancy-namespace>/gdk-k8s/orders-oci:latest (1)
          imagePullPolicy: Always (2)
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            failureThreshold: 10
          env:
            - name: MICRONAUT_ENVIRONMENTS
              value: "oraclecloud"
      imagePullSecrets:
        - name: ocirsecret (3)
---
apiVersion: v1
kind: Service
metadata:
  namespace: gdk-k8s
  name: "orders"
spec:
  selector:
    app: "orders"
  type: NodePort
  ports:
    - protocol: "TCP"
      port: 8080
1 The container image tag that exists in Container Registry. Replace <region-key> with your region (the same value as $OCI_REGION, so that it matches the tag you pushed) and <tenancy-namespace> with your tenancy namespace. (This MUST match the tag you created in step 2 of section 3.7.)
2 The imagePullPolicy is set to Always, which means that Kubernetes will always pull the latest version of the image from the container registry.
3 The name of the secret used to pull container images from Container Registry. (You will create the secret in section 4.)
3.9. Create a Container Image of the Native api Microservice
To create a container image of the native api microservice named "api", run the following command from the api/ directory:
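As in the previous sections, the build command is not shown in this copy of the guide. A typical Micronaut/GDK invocation is one of the following; both are assumptions, so verify against your build files:

```shell
# Gradle (Micronaut Gradle plugin task for native container images):
./gradlew dockerBuildNative

# Maven equivalent (verify against your pom.xml):
./mvnw package -Dpackaging=docker-native
```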
Note: If you are having issues building a container image, try the following command from the api/target/ directory:
Note: Ensure that you build container images for the correct CPU architecture. For instance, if you are using AArch64, set the DOCKER_DEFAULT_PLATFORM environment variable to the value linux/amd64. Alternatively, you can use AArch64 instances within your Kubernetes cluster.
3.10. Push the api Microservice to a Container Repository
- Create a container repository named "gdk-k8s/api-oci" in your compartment:

  export API_REPOSITORY=$(oci artifacts container repository create --display-name gdk-k8s/api-oci --compartment-id $OCI_COMPARTMENT_ID | jq .data.id -r)

- Tag the existing api microservice container image with the details of the container repository:

  docker tag api-oci:latest $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/api-oci:latest

- Push the tagged api microservice container image to the remote repository:

  docker push $OCI_REGION.ocir.io/$OCI_TENANCY_NAMESPACE/gdk-k8s/api-oci:latest
3.11. Update the api Microservice
Edit the file named api/k8s-oci.yml as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: gdk-k8s
  name: "api"
spec:
  selector:
    matchLabels:
      app: "api"
  template:
    metadata:
      labels:
        app: "api"
    spec:
      serviceAccountName: gdk-service
      containers:
        - name: "api"
          image: <region-key>.ocir.io/<tenancy-namespace>/gdk-k8s/api-oci:latest (1)
          imagePullPolicy: Always (2)
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            failureThreshold: 10
          env:
            - name: MICRONAUT_ENVIRONMENTS
              value: "oraclecloud"
      imagePullSecrets:
        - name: ocirsecret (3)
---
apiVersion: v1
kind: Service
metadata:
  namespace: gdk-k8s
  name: "api"
  annotations: (4)
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "10"
spec:
  selector:
    app: "api"
  type: LoadBalancer
  ports:
    - protocol: "TCP"
      port: 8080
1 The container image tag that exists in Container Registry. Replace <region-key> with your region (the same value as $OCI_REGION, so that it matches the tag you pushed) and <tenancy-namespace> with your tenancy namespace. (This MUST match the tag you created in step 2 of section 3.10.)
2 The imagePullPolicy is set to Always, which means that Kubernetes will always pull the latest version of the image from the container registry.
3 The name of the secret used to pull container images from Container Registry. (You will create the secret in section 4.)
4 Annotations that instruct OKE to provision an OCI flexible load balancer for this service, with a minimum and maximum bandwidth of 10 Mbps.
4. Deploy Microservices to OKE
- Create a directory for the kubectl configuration:

  mkdir -p $HOME/.kube

- Generate a kubectl configuration for authentication to OKE:

  oci ce cluster create-kubeconfig \
      --cluster-id $OCI_CLUSTER_ID \
      --file $HOME/.kube/config \
      --region $OCI_REGION \
      --token-version 2.0.0 \
      --kube-endpoint PUBLIC_ENDPOINT

- Set KUBECONFIG to the created config file, as shown below. (This variable is consumed by kubectl.)

  export KUBECONFIG=$HOME/.kube/config

  Note: On Windows, kubectl is typically installed within the Kubernetes extension and is not added to the system path. Windows users should configure a proxy in .kube/config to successfully deploy a project to Oracle Cloud Infrastructure: add a proxy-url property under cluster.

  clusters:
    - name: "dev"
      cluster:
        proxy-url: http://user:password@proxy:port
        ...
- Deploy the auth.yml file that you created in the Deploy a Micronaut Microservices Application to a Local Kubernetes Cluster guide:

  kubectl apply -f auth.yml

- Create an ocirsecret secret for authentication to Container Registry using the command below. A secret is an object that stores encrypted credential data, for example, a database username and password. In this case, OKE uses it to authenticate to Container Registry so that it can pull the microservices container images.

  kubectl create secret docker-registry ocirsecret \
      --docker-server=$OCI_REGION.ocir.io \
      --docker-username=$OCIR_USERNAME \
      --docker-password=$AUTH_TOKEN \
      --namespace=gdk-k8s
- Deploy the users microservice:

  kubectl apply -f users/k8s-oci.yml

- Deploy the orders microservice:

  kubectl apply -f orders/k8s-oci.yml

- Deploy the api microservice:

  kubectl apply -f api/k8s-oci.yml
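Before testing, it can help to block until each Deployment reports its replicas ready. A sketch, assuming the kubeconfig generated above is active:

```shell
# Wait (up to 5 minutes each) for the three deployments to finish rolling out.
for d in users orders api; do
  kubectl rollout status deployment/"$d" -n gdk-k8s --timeout=300s
done
```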
5. Test Integration Between the Microservices Deployed to OKE
- Run the following command to check the status of the pods and make sure that all of them have the status "Running":

  kubectl get pods -n=gdk-k8s

  NAME                      READY   STATUS    RESTARTS   AGE
  api-6fb4cd949f-kxxx8      1/1     Running   0          13s
  orders-595887ddd6-6lzp4   1/1     Running   0          25s
  users-df6f78cd7-lgnzx     1/1     Running   0          37s

- Run this command to check the status of the microservices:

  kubectl get services -n=gdk-k8s

  NAME     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
  api      LoadBalancer   10.96.70.48    129.146.149.81   8080:31666/TCP   2m9s
  orders   NodePort       10.96.94.130   <none>           8080:31702/TCP   2m22s
  users    NodePort       10.96.34.174   <none>           8080:31528/TCP   2m33s
  If the EXTERNAL-IP property of the api service has a <pending> status, wait a few seconds and then run the command again. If the <pending> status persists for more than one minute, try the following:

  - Verify that a load balancer was created:
    - In the Oracle Cloud Console, open the navigation menu.
    - Click Networking, and then click Load balancers.
    - Click Load balancer.
    - Ensure a load balancer has been created for your service.
  - Check the load balancer quota (if a load balancer was not created, you may have reached the quota limit):
    - In the Oracle Cloud Console, open the navigation menu.
    - Click Governance & Administration. Under Tenancy Management, click Limits, Quotas and Usage.
    - Select the Load Balancer quota. If the quota limit has been reached, request a quota increase or delete unused load balancers.
- Retrieve the URL of the api microservice and set it as the value of the $API_URL environment variable:

  export API_URL=http://$(kubectl get svc api -n=gdk-k8s -o json | jq -r '.status.loadBalancer.ingress[0].ip'):8080

- Run a curl command to create a new user via the api microservice:

  curl -X "POST" "$API_URL/api/users" \
       -H 'Content-Type: application/json; charset=utf-8' \
       -d '{"first_name": "Nemanja", "last_name": "Mikic", "username": "nmikic"}'

  Your output should look like:

  {"id": 1, "username": "nmikic", "first_name": "Nemanja", "last_name": "Mikic"}

- Run a curl command to create a new order via the api microservice:

  curl -X "POST" "$API_URL/api/orders" \
       -H 'Content-Type: application/json; charset=utf-8' \
       -d '{"user_id": 1, "item_ids": [1,2]}'

  Your output should include details of the order, as follows:

  {"id": 1, "user": {"first_name": "Nemanja", "last_name": "Mikic", "id": 1, "username": "nmikic"}, "items": [{"id": 1, "name": "Banana", "price": 1.5}, {"id": 2, "name": "Kiwi", "price": 2.5}], "total": 4.0}

- Run a curl command to list the orders:

  curl "$API_URL/api/orders" \
       -H 'Content-Type: application/json; charset=utf-8'

  You should see output that is similar to the following:

  [{"id": 1, "user": {"first_name": "Nemanja", "last_name": "Mikic", "id": 1, "username": "nmikic"}, "items": [{"id": 1, "name": "Banana", "price": 1.5}, {"id": 2, "name": "Kiwi", "price": 2.5}], "total": 4.0}]

- Try to place an order for a user who does not exist (with id 100). Run a curl command:

  curl -X "POST" "$API_URL/api/orders" \
       -H 'Content-Type: application/json; charset=utf-8' \
       -d '{"user_id": 100, "item_ids": [1,2]}'

  You should see the following error message:

  {"message": "Bad Request", "_links": {"self": [{"href": "/api/orders", "templated": false}]}, "_embedded": {"errors": [{"message": "User with id 100 doesn't exist"}]}}
6. Clean Up Cloud Resources
After you have finished this guide, clean up the resources you created.
- Delete all Kubernetes resources that were created in this guide:

  kubectl delete namespaces gdk-k8s

- Delete the OKE cluster:

  oci ce cluster delete --cluster-id $OCI_CLUSTER_ID --force

- Delete the gdk-k8s/users-oci container repository:

  oci artifacts container repository delete --repository-id $USERS_REPOSITORY --force

- Delete the gdk-k8s/orders-oci container repository:

  oci artifacts container repository delete --repository-id $ORDERS_REPOSITORY --force

- Delete the gdk-k8s/api-oci container repository:

  oci artifacts container repository delete --repository-id $API_REPOSITORY --force
Summary
This guide demonstrated how to use Kubernetes Service Discovery and Distributed Configuration, provided with the Micronaut Kubernetes integration, to connect three microservices, and to deploy these microservices to a Kubernetes cluster in Oracle Cloud Infrastructure Container Engine for Kubernetes.