RKE2 + Rancher Setup on Azure VMs (SUSE Linux Enterprise Micro 6.0 – AMD64)
This document provides step-by-step instructions for deploying RKE2 (Rancher Kubernetes Engine 2) and Rancher on Azure virtual machines running SUSE Linux Enterprise Micro 6.0 (AMD64 architecture).
References
- RKE2 Requirements
- RKE2 Quickstart Installation
- RKE2 Configuration Options
- Rancher HA Setup with RKE2
- SUSE AI Deployment Guide
Prerequisites
- Azure Resource Group: Suse-RG
- VM Size: Standard D4ps v5 (4 vCPUs, 16 GiB RAM, AMD64)
- Image: SUSE Linux Enterprise Micro 6.0 – BYOS – x64 Gen2
- SSH Access: Configure and download your .pem key
Virtual Machines
- Suse-VM-Server (Master)
- Suse-VM-Worker (Agent)
Networking
Ensure inbound rules allow:
- Server: TCP 9345, 6443
- Agent & Ingress: TCP 30000–32767 (NodePort range)
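The inbound rules above can be created with the Azure CLI. The sketch below only prints the `az network nsg rule create` commands for review rather than executing them; the NSG name and priorities are assumptions you should adjust to your setup (the resource group matches this guide).

```shell
# Print (not execute) the az CLI commands that would open the required ports.
# The NSG name and rule priorities below are placeholders.
RG="Suse-RG"
NSG="Suse-VM-Server-nsg"

print_nsg_rule() {
  # $1 = rule name, $2 = priority, $3 = destination port range
  echo "az network nsg rule create" \
    "--resource-group $RG --nsg-name $NSG" \
    "--name $1 --priority $2" \
    "--direction Inbound --access Allow --protocol Tcp" \
    "--destination-port-ranges $3"
}

print_nsg_rule allow-rke2-supervisor 1001 9345
print_nsg_rule allow-kube-apiserver  1002 6443
print_nsg_rule allow-nodeports       1003 30000-32767
```

Review the printed commands, then paste them into your shell once the names match your environment.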
RKE2 Master Node Setup
SSH into the master node:
ssh -i <path_to_ssh_key.pem> suse@<SERVER_PUBLIC_IP>
Install and start the RKE2 server:
curl -sfL https://get.rke2.io | sh -
sudo systemctl enable rke2-server.service
sudo systemctl start rke2-server.service
Check logs:
journalctl -u rke2-server -f
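If agents will join over the server's public IP, that IP must be included as a TLS SAN in the server certificate. A minimal, optional server config sketch, written to `/etc/rancher/rke2/config.yaml` before starting the service (the IP is a placeholder):

```yaml
# /etc/rancher/rke2/config.yaml (server) -- optional; adds the public IP as a TLS SAN
tls-san:
  - <SERVER_PUBLIC_IP>
```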
RKE2 Agent Node Setup
- SSH into the worker:
ssh -i <path_to_ssh_key.pem> suse@<WORKER_PUBLIC_IP>
- Get token from master:
sudo cat /var/lib/rancher/rke2/server/node-token
- Install RKE2 Agent:
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
sudo systemctl enable rke2-agent.service
- Create Config:
sudo mkdir -p /etc/rancher/rke2/
sudo tee /etc/rancher/rke2/config.yaml > /dev/null <<EOF
server: https://<SERVER_PUBLIC_IP>:9345
token: <NODE_TOKEN>
EOF
- Start the agent and watch its logs:
sudo systemctl start rke2-agent.service
journalctl -u rke2-agent -f
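The config-file step above can be wrapped in a small helper. `write_rke2_agent_config` is a hypothetical convenience function of ours, shown here writing to a temporary path; on a real agent the target is `/etc/rancher/rke2/config.yaml` (written as root):

```shell
# Hypothetical helper: render an RKE2 agent config.yaml from a server address and token.
write_rke2_agent_config() {
  # $1 = server host/IP, $2 = node token, $3 = output path
  mkdir -p "$(dirname "$3")"
  cat > "$3" <<EOF
server: https://$1:9345
token: $2
EOF
}

# Demo against a temporary path; real agents use /etc/rancher/rke2/config.yaml.
write_rke2_agent_config 203.0.113.10 "K10example::server:token" /tmp/rke2-agent-demo/config.yaml
cat /tmp/rke2-agent-demo/config.yaml
```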
Verify Cluster
On the master node (the kubeconfig is only written on the server, and is root-readable):
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes
# add the bundled kubectl to PATH
export PATH=$PATH:/var/lib/rancher/rke2/bin
NGINX Ingress Setup (NodePort)
RKE2 installs the NGINX ingress controller by default; expose it via NodePort.
Check ingress is running:
kubectl get pods -n kube-system | grep ingress
kubectl get svc -n kube-system | grep ingress
Update Config
Create/overwrite the ingress config:
sudo mkdir -p /var/lib/rancher/rke2/server/manifests
sudo tee /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml > /dev/null <<EOF
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      service:
        enabled: true
        type: NodePort
      ingressClassResource:
        default: true
EOF
Restart RKE2 server
sudo systemctl restart rke2-server
- Check:
kubectl get svc -n kube-system | grep ingress
kubectl get events -n kube-system | grep ingress
- Access from local machine:
http://<SERVER_PUBLIC_IP>:<NODEPORT_HTTP>
https://<SERVER_PUBLIC_IP>:<NODEPORT_HTTPS>
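To find `<NODEPORT_HTTP>` and `<NODEPORT_HTTPS>`, read them from the `PORT(S)` column of `kubectl get svc`. A small parsing sketch; the function name is ours and the sample string mirrors typical output for this setup:

```shell
# Extract NodePorts from a kubectl "PORT(S)" value such as "80:32546/TCP,443:31629/TCP".
nodeports_from_ports_column() {
  echo "$1" | tr ',' '\n' | sed -n 's#^[0-9]*:\([0-9]*\)/.*#\1#p'
}

# Example using the NodePorts mentioned in the Notes section (prints one port per line):
nodeports_from_ports_column "80:32546/TCP,443:31629/TCP"
```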
Rancher Installation
- Install Helm:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
- Upload TLS Certificates:
Download from GoDaddy or equivalent:
– yourdomain.crt
– yourdomain.key
– gd_bundle-g2-g1.crt
Create the combined cert:
cat yourdomain.crt gd_bundle-g2-g1.crt > fullchain.crt
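The concatenation order matters: the leaf certificate must come first, followed by the intermediates. A sketch that builds the chain and counts the PEM blocks as a sanity check; `make_fullchain` is a name we chose:

```shell
# Build fullchain.crt (leaf cert first, then CA bundle) and report the
# number of PEM certificates in the result -- expect at least 2.
make_fullchain() {
  # $1 = leaf cert, $2 = intermediate bundle, $3 = output
  cat "$1" "$2" > "$3"
  grep -c -- '-----BEGIN CERTIFICATE-----' "$3"
}
```

Usage: `make_fullchain yourdomain.crt gd_bundle-g2-g1.crt fullchain.crt` prints the certificate count; if it prints 1, the intermediate bundle was empty or malformed.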
- Create Kubernetes Secret
kubectl create namespace cattle-system
kubectl create secret tls rancher-tls \
--cert=fullchain.crt \
--key=yourdomain.key \
-n cattle-system
- Install Rancher:
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
Replace the placeholder <RANCHER_DOMAIN> with your actual DNS name, e.g. etc-rancher.styrk.ai:
helm install rancher rancher-latest/rancher \
--namespace cattle-system \
--set hostname=<RANCHER_DOMAIN> \
--set bootstrapPassword=admin \
--set ingress.tls.source=secret \
--set ingress.extraTls[0].hosts[0]=<RANCHER_DOMAIN> \
--set ingress.extraTls[0].secretName=rancher-tls
- Verify:
kubectl get all -n cattle-system
kubectl get ingress -n cattle-system
- Patch Ingress Secret (If Needed):
kubectl patch ingress rancher -n cattle-system \
--type=json \
-p='[{"op": "replace", "path": "/spec/tls", "value":[{"hosts":["<RANCHER_DOMAIN>"],"secretName":"rancher-tls"}]}]'
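The patch payload is easy to break when editing the domain by hand; it can be linted before applying. A sketch that pipes it through Python's json module (assumes `python3` is on PATH):

```shell
# Validate the ingress TLS patch JSON before handing it to kubectl.
PATCH='[{"op": "replace", "path": "/spec/tls", "value":[{"hosts":["<RANCHER_DOMAIN>"],"secretName":"rancher-tls"}]}]'
echo "$PATCH" | python3 -c 'import json,sys; json.load(sys.stdin); print("patch JSON ok")'
```

If the JSON is malformed, python exits non-zero with a parse error instead of printing "patch JSON ok".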
DNS Setup
- Create a DNS A record:
- Hostname: <RANCHER_DOMAIN>, e.g. etc-rancher.styrk.ai
- IP Address: <Worker/Agent_VM_PUBLIC_IP>
Example: rancher-dns → 1.123.45.123
Access the Rancher UI at:
https://etc-rancher.styrk.ai:31629
SUSE AI Deployment
Refer to the full guide here: SUSE AI Deployment Guide
RKE2 Observability
Installation Steps
helm repo add suse-observability https://charts.rancher.com/server-charts/prime/suse-observability
helm repo update
TLS Secret
kubectl create secret tls tls-secret \
--cert=dns-fullchain.crt \
--key=dns-key.key \
-n suse-observability
Template Helm Values
export VALUES_DIR=.
helm template \
--set license='<your license>' \
--set baseUrl='<suse-observability-base-url>' \
--set sizing.profile='<sizing.profile>' \
suse-observability-values \
suse-observability/suse-observability-values --output-dir $VALUES_DIR
Replace DNS in ingress_values.yaml
ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
  hosts:
    - host: stackstate.MY_DOMAIN
  tls:
    - hosts:
        - stackstate.MY_DOMAIN
      secretName: tls-secret
Image Pull Secret for Observability
kubectl create secret docker-registry application-collection \
--docker-server=dp.apps.rancher.io \
--docker-username=APPCO_USERNAME \
--docker-password=APPCO_USER_TOKEN \
-n suse-observability
Install SUSE Observability
helm upgrade --install \
--namespace "suse-observability" \
--create-namespace \
--values "ingress_values.yaml" \
--values $VALUES_DIR/suse-observability-values/templates/baseConfig_values.yaml \
--values $VALUES_DIR/suse-observability-values/templates/sizing_values.yaml \
suse-observability \
suse-observability/suse-observability
OpenTelemetry Ingress (ingress_otel_values.yaml)
See: OpenTelemetry Ingress Config
opentelemetry-collector:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "50m"
      nginx.ingress.kubernetes.io/backend-protocol: GRPC
    hosts:
      - host: otlp-stackstate.MY_DOMAIN
        paths:
          - path: /
            pathType: Prefix
            port: 4317
    tls:
      - hosts:
          - otlp-stackstate.MY_DOMAIN
        secretName: otlp-tls-secret
    additionalIngresses:
      - name: otlp-http
        annotations:
          nginx.ingress.kubernetes.io/proxy-body-size: "50m"
        hosts:
          - host: otlp-http-stackstate.MY_DOMAIN
            paths:
              - path: /
                pathType: Prefix
                port: 4318
        tls:
          - hosts:
              - otlp-http-stackstate.MY_DOMAIN
            secretName: otlp-http-tls-secret
helm upgrade \
--install \
--namespace "suse-observability" \
--values "ingress_otel_values.yaml" \
--values $VALUES_DIR/suse-observability-values/templates/baseConfig_values.yaml \
--values $VALUES_DIR/suse-observability-values/templates/sizing_values.yaml \
suse-observability \
suse-observability/suse-observability
Expose via NodePort
kubectl patch svc suse-observability-ui -n suse-observability \
-p '{"spec": {"type": "NodePort"}}'
kubectl get svc suse-observability-ui -n suse-observability
# Access:
http://<Any_RKE2_Node_IP>:<NodePort>
SUSE AI Stack Deploy
Create Namespace
kubectl create namespace suse-ai
Create APPCO_USER_TOKEN by following this guide: Rancher App Catalog Authentication
Create Image Pull Secret
kubectl create secret docker-registry application-collection \
--docker-server=dp.apps.rancher.io \
--docker-username=APPCO_USERNAME \
--docker-password=APPCO_USER_TOKEN \
-n suse-ai
Login to Helm Registry
helm registry login dp.apps.rancher.io/charts \
-u APPCO_USERNAME \
-p APPCO_USER_TOKEN
Install Milvus
See: Milvus Installation
helm pull oci://dp.apps.rancher.io/charts/milvus --version 4.2.2
Create milvus_custom_overrides.yaml:
global:
  imagePullSecrets:
    - application-collection
cluster:
  enabled: true
standalone:
  persistence:
    persistentVolumeClaim:
      storageClass: local-path
etcd:
  replicaCount: 1
  persistence:
    storageClassName: local-path
minio:
  mode: distributed
  replicas: 4
  rootUser: "admin"
  rootPassword: "adminminio"
  persistence:
    storageClass: local-path
  resources:
    requests:
      memory: 1024Mi
kafka:
  enabled: true
  name: kafka
  replicaCount: 3
  broker:
    enabled: true
  cluster:
    listeners:
      client:
        protocol: 'PLAINTEXT'
      controller:
        protocol: 'PLAINTEXT'
  persistence:
    enabled: true
    annotations: {}
    labels: {}
    existingClaim: ""
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 8Gi
    storageClassName: "local-path"
Install:
helm upgrade --install \
milvus oci://dp.apps.rancher.io/charts/milvus \
-n suse-ai \
--version 4.2.2 -f milvus_custom_overrides.yaml
Install Ollama (Optional)
See: Ollama Helm Chart
Create ollama_custom_overrides.yaml:
global:
  imagePullSecrets:
    - application-collection
ingress:
  enabled: false
defaultModel: "gemma:2b"
ollama:
  models:
    pull:
      - "gemma:2b"
      - "llama3.1"
    run:
      - "gemma:2b"
      - "llama3.1"
  persistentVolume:
    enabled: true
    storageClass: local-path
Install:
helm upgrade --install ollama oci://dp.apps.rancher.io/charts/ollama \
-n suse-ai -f ollama_custom_overrides.yaml
Install cert-manager
helm upgrade --install cert-manager \
oci://dp.apps.rancher.io/charts/cert-manager \
-n suse-ai \
--set crds.enabled=true \
--set 'global.imagePullSecrets[0].name'=application-collection
Install Open WebUI
See: Open WebUI Overrides
helm upgrade --install open-webui \
oci://dp.apps.rancher.io/charts/open-webui \
-n suse-ai \
--version 3.3.2 -f owui_custom_overrides.yaml
Expose open-webui service as NodePort
kubectl patch svc open-webui -n suse-ai -p '{"spec": {"type": "NodePort"}}'
Uninstall Commands
helm uninstall rancher -n cattle-system
helm uninstall milvus -n suse-ai
helm uninstall ollama -n suse-ai
helm uninstall cert-manager -n suse-ai
helm uninstall open-webui -n suse-ai
helm uninstall suse-observability -n suse-observability
kubectl delete secret application-collection -n suse-ai
kubectl delete namespace suse-ai
kubectl delete namespace cattle-system
Notes
- RKE2 NGINX Ingress listens on NodePorts like 32546 (HTTP), 31629 (HTTPS)
- Ensure NSG allows: TCP 30000–32767