fleet-demo/DNS-SERVER-GUIDE.md

Multi-Tenant DNS Server Setup

🎯 Overview

This DNS server provides:

  • External DNS for connectvm.cloud (public access)
  • Multi-tenant DNS for dev, prod, and QA teams
  • High Availability with 3 replicas
  • Metrics for monitoring

🌐 DNS Zones Configured

1. Main Domain: connectvm.cloud

Public DNS for main services

Available records:

  • rancher.connectvm.cloud → 160.30.114.10
  • paste.connectvm.cloud → 160.30.114.10
  • fleet.connectvm.cloud → 160.30.114.10
  • hello.connectvm.cloud → 160.30.114.10
  • dns.connectvm.cloud → 160.30.114.10
  • *.connectvm.cloud → 160.30.114.10 (wildcard)
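The records above map onto a BIND-style zone file. A sketch of what the main zone might look like (SOA values and the serial are illustrative; the real file lives in the `dns-server.yaml` ConfigMap):

```
; db.connectvm.cloud — illustrative zone file
$TTL 3600
@   IN  SOA ns1.connectvm.cloud. admin.connectvm.cloud. (
        2024010101 ; serial (increment on every change)
        7200       ; refresh
        3600       ; retry
        1209600    ; expire
        3600       ; minimum
        )
@        IN  NS  ns1.connectvm.cloud.
ns1      IN  A   160.30.114.10
rancher  IN  A   160.30.114.10
paste    IN  A   160.30.114.10
fleet    IN  A   160.30.114.10
hello    IN  A   160.30.114.10
dns      IN  A   160.30.114.10
*        IN  A   160.30.114.10
```

The tenant zones (dev, prod, qa) follow the same pattern with their own record sets.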

2. Dev Team: dev.connectvm.cloud

Development team's DNS zone

Available records:

  • app1.dev.connectvm.cloud
  • app2.dev.connectvm.cloud
  • api.dev.connectvm.cloud
  • web.dev.connectvm.cloud
  • dashboard.dev.connectvm.cloud
  • jenkins.dev.connectvm.cloud
  • gitlab.dev.connectvm.cloud
  • db.dev.connectvm.cloud
  • redis.dev.connectvm.cloud
  • *.dev.connectvm.cloud (wildcard)

3. Prod Team: prod.connectvm.cloud

Production team's DNS zone

Available records:

  • api.prod.connectvm.cloud
  • web.prod.connectvm.cloud
  • app.prod.connectvm.cloud
  • admin.prod.connectvm.cloud
  • portal.prod.connectvm.cloud
  • db.prod.connectvm.cloud
  • monitoring.prod.connectvm.cloud
  • *.prod.connectvm.cloud (wildcard)

4. QA Team: qa.connectvm.cloud

QA/Testing team's DNS zone

Available records:

  • test.qa.connectvm.cloud
  • staging.qa.connectvm.cloud
  • selenium.qa.connectvm.cloud
  • automation.qa.connectvm.cloud
  • reports.qa.connectvm.cloud
  • *.qa.connectvm.cloud (wildcard)

🔧 External Access

DNS Server IPs:

  • Primary: 160.30.114.10:30053 (UDP/TCP)
  • Internal: 10.96.100.100:53 (ClusterIP)

Configure Clients to Use DNS:

Linux/Mac:

# Add to /etc/resolv.conf (note: the stub resolver always uses
# port 53, so this only works where the server is reachable on 53,
# e.g. via the in-cluster ClusterIP)
nameserver 160.30.114.10

# From outside the cluster, target the NodePort explicitly with dig:
dig @160.30.114.10 -p 30053 rancher.connectvm.cloud

Windows:

# Set DNS server (the Windows resolver also queries port 53 only)
netsh interface ip set dns "Ethernet" static 160.30.114.10

Kubernetes Pods:

spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - 10.96.100.100
    searches:
    - connectvm.cloud
    - dev.connectvm.cloud
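The fragment above belongs inside a Pod spec. A complete minimal Pod using the tenant resolver might look like this (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-client-demo       # placeholder name
spec:
  dnsPolicy: None             # ignore the cluster default resolver
  dnsConfig:
    nameservers:
    - 10.96.100.100           # the DNS ClusterIP from above
    searches:
    - connectvm.cloud
    - dev.connectvm.cloud
  containers:
  - name: shell
    image: busybox:1.36       # placeholder image
    command: ["sleep", "3600"]
```

With the search domains set, processes in this pod can resolve short names like `api.dev` as well as fully qualified ones.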

📊 Monitoring

Access DNS metrics via the CoreDNS Prometheus endpoint.

Metrics include:

  • Query count
  • Query types
  • Response codes
  • Cache hit/miss rates
  • Zone transfer stats
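CoreDNS serves these in Prometheus format through its `prometheus` plugin (port 9153 by default). Assuming that port is reachable on the service, a minimal scrape job might look like:

```yaml
scrape_configs:
- job_name: tenant-dns               # hypothetical job name
  static_configs:
  - targets: ['10.96.100.100:9153']  # assumes the metrics port is exposed here
```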

🔒 Security Features

  • Zone isolation: Each tenant has a separate DNS zone
  • No zone transfers: AXFR/IXFR are not served; zones are effectively read-only
  • Query logging: All queries are logged
  • Caching: Built-in response cache (300s TTL) absorbs repeated queries

🛠️ Management

Add New Record:

  1. Edit dns-server.yaml ConfigMap
  2. Add record to appropriate zone file
  3. Increment Serial number
  4. Git push → Fleet auto-deploys
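Step 3 can be scripted. A sketch of a helper that bumps a `YYYYMMDDNN`-style serial, assuming the serial line in the zone file is tagged with a `; serial` comment:

```shell
# bump_serial FILE — increment a YYYYMMDDNN-style SOA serial in a
# zone file (sketch; assumes a single line ending in "; serial")
bump_serial() {
    file=$1
    today=$(date +%Y%m%d)
    old=$(awk '/; serial/ {print $1}' "$file")
    if [ "${old%??}" = "$today" ]; then
        new=$((old + 1))      # same day: bump the two-digit counter
    else
        new="${today}01"      # new day: today's date + counter 01
    fi
    sed -i "s/$old ; serial/$new ; serial/" "$file"
}
```

Usage: `bump_serial db.dev.connectvm.cloud` before committing the zone change.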

Example - Add new dev app:

newapp      IN  A   160.30.114.10

Add New Tenant Zone:

  1. Create new zone file in ConfigMap
  2. Add zone to Corefile
  3. Git push
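Step 2 means adding a server block for the new zone. A sketch of the Corefile with a hypothetical `newteam` tenant added (plugin set and file paths are illustrative, not copied from the deployment):

```
# Corefile sketch — one server block per zone
connectvm.cloud:53 {
    file /etc/coredns/db.connectvm.cloud
    log
    cache 300
    prometheus 0.0.0.0:9153   # metrics endpoint (assumed)
}
newteam.connectvm.cloud:53 {
    file /etc/coredns/db.newteam.connectvm.cloud
    log
    cache 300
}
.:53 {
    forward . 8.8.8.8 8.8.4.4 # everything else goes upstream
    cache 300
}
```

CoreDNS picks the most specific matching zone, so tenant blocks take precedence over the parent `connectvm.cloud` block.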

🧪 Testing

Test DNS resolution:

# Test main domain
dig @160.30.114.10 -p 30053 rancher.connectvm.cloud

# Test dev tenant
dig @160.30.114.10 -p 30053 app1.dev.connectvm.cloud

# Test prod tenant
dig @160.30.114.10 -p 30053 api.prod.connectvm.cloud

# Test QA tenant
dig @160.30.114.10 -p 30053 test.qa.connectvm.cloud

# Test wildcard
dig @160.30.114.10 -p 30053 anything.dev.connectvm.cloud

📱 Use Cases

For Dev Team:

# Deploy app with custom DNS
kubectl create deployment myapp --image=nginx
kubectl expose deployment myapp --port=80

# Access via: myapp.dev.connectvm.cloud (resolves through the
# *.dev wildcard; an Ingress is still needed to route traffic to the Service)
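Since the `*.dev.connectvm.cloud` wildcard already resolves to the node IP, routing HTTP traffic to the service is a matter of adding an Ingress. A sketch, assuming an ingress controller is installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                  # matches the deployment created above
spec:
  rules:
  - host: myapp.dev.connectvm.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
```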

For Prod Team:

# Production API endpoint
api.prod.connectvm.cloud → Production API server

For QA Team:

# Automated testing
selenium.qa.connectvm.cloud → Selenium Grid
automation.qa.connectvm.cloud → Test Runner

🚀 High Availability

  • 3 replicas across different nodes
  • Anti-affinity rules for pod distribution
  • Auto-restart on failure
  • Health checks every 10 seconds
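The anti-affinity rule above typically takes this shape in the Deployment spec (a sketch; the `app: dns-server` label is an assumption):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: dns-server        # assumed pod label
      topologyKey: kubernetes.io/hostname
```

`requiredDuringScheduling` guarantees no two replicas land on the same node; a `preferred` rule would allow co-location when fewer than 3 nodes are available.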

📝 DNS Server Info

  • Software: CoreDNS 1.11.1
  • Protocol: DNS over UDP/TCP (port 53 in-cluster, NodePort 30053 externally)
  • Upstream: Google DNS (8.8.8.8, 8.8.4.4)
  • Cache TTL: 300 seconds
  • Zone TTL: 3600 seconds (1 hour)

🎯 Architecture

External Queries (port 30053)
        ↓
    NodePort Service
        ↓
    DNS Pods (3 replicas)
        ↓
    ┌─────────────┬──────────────┬─────────────┐
    │ Main Zone   │ Dev Zone     │ Prod Zone   │
    │ QA Zone     │ K8s DNS      │ Upstream    │
    └─────────────┴──────────────┴─────────────┘

🔄 Updates

All DNS updates happen via GitOps:

  1. Edit zone files in Git
  2. Push to Gitea
  3. Fleet auto-deploys in ~15 seconds
  4. DNS records updated automatically

No manual DNS server management needed!