# Hybrid Multi-Tenancy Model: Capsule + Kamaji

## Overview

Your Kubernetes cluster now supports **two** types of tenants:
1. **Capsule Tenants** (lightweight, namespace-based)
   - Best for: internal teams; dev/QA/staging environments
   - Isolation: namespace-level
   - Overhead: very low
   - User experience: limited Kubernetes (namespaces only)
2. **Kamaji Tenants** (virtual clusters)
   - Best for: external customers; production workloads that need the full cluster experience
   - Isolation: control-plane-level
   - Overhead: medium (a dedicated API server per tenant)
   - User experience: full Kubernetes cluster
## Current Tenants

### Capsule Tenants
1. **dev-team**
   - Owner: `dev` user
   - Quota: 5 namespaces max
   - Resources:
     - Max 50 pods
     - Max 8 CPU cores (limits), 4 cores (requests)
     - Max 16 GiB memory (limits), 8 GiB (requests)
     - Max 10 PVCs, 10 services
   - Network: isolated; can only talk to dev-team namespaces
   - Storage: `standard`, `hostpath`
   - Access: log in to Rancher as `dev/devuser123456`
2. **prod-team**
   - Quota: 10 namespaces max
   - Similar resource quotas (check the tenant spec for details)
3. **qa-team**
   - Quota: 7 namespaces max
   - Similar resource quotas (check the tenant spec for details)
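For reference, the dev-team quotas above correspond to a Capsule `Tenant` manifest roughly like the following (a sketch against Capsule's `v1beta2` API; verify the exact field names against the CRD installed in your cluster):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: dev-team
spec:
  owners:
    - kind: User
      name: dev
  namespaceOptions:
    quota: 5            # max 5 namespaces
  resourceQuotas:
    scope: Tenant
    items:
      - hard:
          pods: "50"
          limits.cpu: "8"
          requests.cpu: "4"
          limits.memory: 16Gi
          requests.memory: 8Gi
          persistentvolumeclaims: "10"
          services: "10"
```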
### Kamaji Tenants

1. **customer1** (virtual cluster)
   - Version: Kubernetes v1.28.0
   - Control plane: dedicated API server, controller-manager, and scheduler
   - Endpoint: https://160.30.114.10:31443
   - Kubeconfig: `~/Documents/kuber/customer1-kubeconfig-external.yaml`
   - Resources:
     - API server: 250m-500m CPU, 512Mi-1Gi memory
     - Controller manager: 125m-250m CPU, 256Mi-512Mi memory
     - Scheduler: 125m-250m CPU, 256Mi-512Mi memory
   - Pod CIDR: 10.244.0.0/16
   - Service CIDR: 10.96.0.0/16
   - Access: use the kubeconfig file
## When to Use Which?

Use **Capsule** when:

- ✅ Internal teams (dev, QA, staging)
- ✅ Simple app deployments
- ✅ Resource-constrained environments
- ✅ You need Rancher UI access
- ✅ You don't need cluster-admin features
- ✅ You want low overhead

Use **Kamaji** when:

- ✅ External customers pay for dedicated clusters
- ✅ You need the complete Kubernetes API experience
- ✅ You want to install CRDs or cluster-level resources
- ✅ You need different Kubernetes versions per tenant
- ✅ You have strong isolation requirements
- ✅ You are selling "Kubernetes-as-a-Service"
## Managing Capsule Tenants

### Add User to Tenant

```bash
kubectl patch tenant dev-team --type='json' \
  -p='[{"op": "add", "path": "/spec/owners/-", "value": {"kind": "User", "name": "newuser"}}]'
```
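After the patch, `spec.owners` on the tenant should contain the new entry alongside the existing owner, roughly like this (illustrative fragment; `newuser` is the placeholder from the command):

```yaml
spec:
  owners:
    - kind: User
      name: dev
    - kind: User
      name: newuser
```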
### Update Resource Quotas

```bash
kubectl edit tenant dev-team
# Modify spec.resourceQuotas.items[0].hard
```
### Create Namespace as Tenant Owner

```bash
# Log in as the dev user in Rancher and create the namespace in the UI,
# or use kubectl with the dev user's credentials.
```
## Managing Kamaji Tenants

### Create New Tenant

```bash
kubectl apply -f - << 'YAML'
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: customer2
  namespace: kamaji-system
spec:
  controlPlane:
    deployment:
      replicas: 1
    service:
      serviceType: ClusterIP
  kubernetes:
    version: "v1.28.0"
  networkProfile:
    port: 6443
    podCidr: "10.245.0.0/16"      # Different from customer1
    serviceCidr: "10.97.0.0/16"   # Different from customer1
  addons:
    coreDNS: {}
    kubeProxy: {}
YAML
```
### Get Tenant Kubeconfig

```bash
kubectl get secret customer2-admin-kubeconfig -n kamaji-system \
  -o jsonpath='{.data.admin\.conf}' | base64 -d > customer2-kubeconfig.yaml
```
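The pipeline above extracts the base64-encoded kubeconfig stored in the secret's `admin.conf` key and decodes it. A cluster-free sketch of the decode step (the short string below is a stand-in for the secret's payload, not real cluster data):

```bash
# Stand-in for the base64 payload that the jsonpath query returns
# ("apiVersion: v1" encoded):
payload='YXBpVmVyc2lvbjogdjE='
echo "$payload" | base64 -d
# → apiVersion: v1
# The real secret data decodes to a complete kubeconfig document.
```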
### Create NodePort for External Access

```bash
kubectl apply -f - << 'YAML'
apiVersion: v1
kind: Service
metadata:
  name: customer2-external
  namespace: kamaji-system
spec:
  type: NodePort
  selector:
    kamaji.clastix.io/name: customer2
  ports:
    - protocol: TCP
      port: 6443
      targetPort: 6443
      nodePort: 31444   # Different port for each tenant
YAML
```
### Update Kubeconfig for External Access

```bash
sed 's|server: https://.*:6443|server: https://160.30.114.10:31444|g' \
  customer2-kubeconfig.yaml > customer2-kubeconfig-external.yaml
```
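You can sanity-check the rewrite without a cluster by running the same sed expression against a dummy kubeconfig (a sketch; the in-cluster address below is made up):

```bash
# Create a minimal stand-in kubeconfig with an in-cluster server address
cat > /tmp/demo-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters:
  - name: customer2
    cluster:
      server: https://10.96.0.1:6443
EOF

# Rewrite the server line to the external NodePort endpoint
sed 's|server: https://.*:6443|server: https://160.30.114.10:31444|g' \
  /tmp/demo-kubeconfig.yaml
```

The output should show `server: https://160.30.114.10:31444` while every other line passes through unchanged.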
## Resource Usage

### Capsule

- dev-team: ~0 overhead (just RBAC policies)
- prod-team: ~0 overhead
- qa-team: ~0 overhead

### Kamaji

- Etcd cluster: ~3 GB RAM (3 replicas)
- Kamaji controller: ~256 MB RAM
- customer1 control plane: ~1.5 GB RAM
- Each additional tenant: ~1.5 GB RAM
## Architecture Diagram

```
┌─────────────────────────────────────────────────────────────┐
│                 Physical Kubernetes Cluster                 │
│  ┌───────────────────────────────────────────────────────┐  │
│  │             Rancher (Cluster Management)              │  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
│  ┌──────────────────────┐  ┌─────────────────────────────┐  │
│  │   Capsule Tenants    │  │       Kamaji Tenants        │  │
│  │   ────────────────   │  │  ─────────────────          │  │
│  │ • dev-team           │  │  ┌───────────────────────┐  │  │
│  │   - 5 namespaces     │  │  │ customer1             │  │  │
│  │   - 50 pods max      │  │  │ ├─ API Server         │  │  │
│  │   - 8 CPU max        │  │  │ ├─ Controller Manager │  │  │
│  │                      │  │  │ ├─ Scheduler          │  │  │
│  │ • prod-team          │  │  │ └─ Etcd (shared)      │  │  │
│  │ • qa-team            │  │  └───────────────────────┘  │  │
│  └──────────────────────┘  └─────────────────────────────┘  │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │        Shared Worker Nodes (4 nodes, 16 cores)        │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
```
## Cost Analysis

### Capsule (3 tenants)

- Infrastructure: $0 (pure RBAC)
- Management: minimal

### Kamaji (1 tenant)

- Etcd cluster: 3 GB RAM
- Control plane: 1.5 GB RAM per tenant
- Total: ~4.5 GB RAM for the first tenant, +1.5 GB per additional tenant
**Recommendation:** use Capsule for internal teams and Kamaji for paying customers.
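The totals above follow from a simple formula: shared etcd (~3 GB) plus ~1.5 GB per tenant control plane (the ~256 MB Kamaji controller is small enough to ignore here). A quick back-of-the-envelope check:

```bash
# Approximate Kamaji memory footprint in MB for N tenants:
# shared etcd (~3000 MB) + ~1500 MB per tenant control plane
tenants=1
echo $(( 3000 + 1500 * tenants ))   # first tenant: 4500 MB (~4.5 GB)
tenants=4
echo $(( 3000 + 1500 * tenants ))   # four tenants: 9000 MB (~9 GB)
```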
## Next Steps

- ✅ Capsule multi-tenancy configured
- ✅ Kamaji virtual clusters operational
- ⏭️ Create billing/metering for Kamaji tenants
- ⏭️ Add per-tenant monitoring
- ⏭️ Configure per-tenant backup/restore
- ⏭️ Enforce resource quotas
## Access Summary
| Tenant | Type | Access Method | Endpoint |
|---|---|---|---|
| dev-team | Capsule | Rancher UI | https://rancher.connectvm.cloud |
| prod-team | Capsule | Rancher UI | https://rancher.connectvm.cloud |
| qa-team | Capsule | Rancher UI | https://rancher.connectvm.cloud |
| customer1 | Kamaji | Kubeconfig | https://160.30.114.10:31443 |