Deployment

Containerized Deployment

A containerized deployment guide for Tagtag Starter, covering the Docker and Kubernetes deployment options.

Containerization is the prevailing approach to modern application deployment. The Tagtag Starter project supports both Docker and Kubernetes; this document walks through the containerized deployment options and best practices in detail.

1. Containerization Overview

1.1 Why Containerize

  • Consistency: containers ensure the application runs the same way across environments
  • Lightweight: containers are lighter than virtual machines and start faster
  • Portability: containers run in any environment that supports Docker
  • Scalability: containers scale out easily and lend themselves to automated operations
  • Resource isolation: containers provide solid resource isolation
  • Version control: container images are versioned, which makes rollback and management straightforward

1.2 Technology Stack

Technology      | Purpose
Docker          | Container runtime and image builds
Docker Compose  | Local development and test environment deployment
Kubernetes      | Container orchestration for production
Helm            | Kubernetes application package management
GitHub Actions  | CI/CD automated deployment
Harbor          | Private Docker image registry

2. Docker Deployment

2.1 Writing the Dockerfiles

2.1.1 Backend Dockerfile

# Use Java 17 as the base image
FROM openjdk:17-jdk-slim as builder

# Set the working directory
WORKDIR /app

# Copy the Maven wrapper and build configuration
COPY pom.xml .
COPY .mvn .mvn
COPY mvnw .
RUN chmod +x mvnw

# Download dependencies (cached as long as pom.xml does not change)
RUN ./mvnw dependency:go-offline

# Copy the source code
COPY src src

# Build the project
RUN ./mvnw package -DskipTests

# Use a lightweight base image for the runtime stage
FROM openjdk:17-jdk-slim

# Set the working directory
WORKDIR /app

# Copy the JAR file from the build stage
COPY --from=builder /app/target/tagtag-backend.jar .

# Set environment variables
ENV JAVA_OPTS="-Xmx8g -Xms4g"
ENV SPRING_PROFILES_ACTIVE=prod

# Expose the port
EXPOSE 8080

# Start the application
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar tagtag-backend.jar"]

2.1.2 Frontend Dockerfile

# Use Node.js 18 as the base image
FROM node:18-alpine as builder

# Set the working directory
WORKDIR /app

# Copy package.json and pnpm-lock.yaml
COPY package.json pnpm-lock.yaml ./

# Install pnpm
RUN npm install -g pnpm

# Install dependencies
RUN pnpm install

# Copy the source code
COPY . .

# Build the production bundle
RUN pnpm build

# Serve the build with Nginx
FROM nginx:alpine

# Copy the built static files into the Nginx web root
COPY --from=builder /app/dist /usr/share/nginx/html

# Copy the Nginx configuration
COPY nginx.conf /etc/nginx/conf.d/default.conf

# Expose the port
EXPOSE 80

# Start Nginx
ENTRYPOINT ["nginx", "-g", "daemon off;"]

2.1.3 Nginx Configuration

server {
    listen 80;
    server_name localhost;
    
    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }
    
    location /api {
        proxy_pass http://backend:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

2.2 Docker Compose Configuration

version: '3.8'

networks:
  tagtag-network:
    driver: bridge

volumes:
  mysql-data:
  redis-data:
  logs:

services:
  # MySQL database
  mysql:
    image: mysql:8.0
    container_name: tagtag-mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=tagtag
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql
      - ./mysql/conf:/etc/mysql/conf.d
      - ./mysql/init:/docker-entrypoint-initdb.d
    networks:
      - tagtag-network
    # The official mysql image takes the character set as server flags rather than env vars
    command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci

  # Redis cache
  redis:
    image: redis:7.0-alpine
    container_name: tagtag-redis
    restart: always
    environment:
      - REDIS_PASSWORD=password
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - tagtag-network
    command: redis-server --requirepass password --appendonly yes

  # Backend application
  backend:
    build:
      context: ../backend
      dockerfile: Dockerfile
    container_name: tagtag-backend
    restart: always
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/tagtag?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_REDIS_HOST=redis
      - SPRING_REDIS_PORT=6379
      - SPRING_REDIS_PASSWORD=password
      - JWT_SECRET=your-secret-key
    ports:
      - "8080:8080"
    volumes:
      - logs:/app/logs
    networks:
      - tagtag-network
    depends_on:
      - mysql
      - redis

  # Frontend application
  frontend:
    build:
      context: ../frontend
      dockerfile: Dockerfile
    container_name: tagtag-frontend
    restart: always
    ports:
      - "80:80"
    networks:
      - tagtag-network
    depends_on:
      - backend

  # Optional: phpMyAdmin (database administration)
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: tagtag-phpmyadmin
    restart: always
    environment:
      - PMA_HOST=mysql
      - PMA_PORT=3306
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - "8081:80"
    networks:
      - tagtag-network
    depends_on:
      - mysql

  # Optional: Redis Commander (Redis administration)
  redis-commander:
    image: rediscommander/redis-commander:latest
    container_name: tagtag-redis-commander
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379:0:password
    ports:
      - "8082:8081"
    networks:
      - tagtag-network
    depends_on:
      - redis

2.3 Build and Run

Example commands

# Enter the Docker Compose directory
cd docker

# Build all services
docker-compose build

# Start all services in the background
docker-compose up -d

# Check service status
docker-compose ps

# Tail the logs
docker-compose logs -f

# Stop the services
docker-compose down

# Stop the services and remove data volumes
docker-compose down -v

# Build a specific service only
docker-compose build frontend

# Start specific services only
docker-compose up -d backend mysql redis

3. Kubernetes Deployment

3.1 Setting Up a Kubernetes Cluster

3.1.1 Local Development Environment

Using Minikube

# Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start Minikube
minikube start --cpus 4 --memory 8g --disk-size 40g

# Install kubectl (it is not in the default apt repositories)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify the cluster
kubectl cluster-info
kubectl get nodes

Using Kind

# Install Kind
go install sigs.k8s.io/kind@v0.20.0

# Create the cluster configuration
cat > kind-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    protocol: TCP
  - containerPort: 30443
    hostPort: 443
    protocol: TCP
EOF

# Create the cluster
kind create cluster --config kind-config.yaml

# Verify the cluster
kubectl cluster-info
kubectl get nodes

3.1.2 Production Environment

  • Managed clusters from a cloud provider: AWS EKS, Azure AKS, Google GKE, or Alibaba Cloud ACK are recommended
  • Self-hosted clusters: built with kubeadm or Rancher

3.2 Kubernetes Resource Manifests

3.2.1 Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: tagtag
  labels:
    name: tagtag

3.2.2 ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: tagtag-config
  namespace: tagtag
data:
  application-prod.yaml: |
    server:
      port: 8080
      servlet:
        context-path: /api
    spring:
      datasource:
        url: jdbc:mysql://mysql.tagtag:3306/tagtag?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai
        username: root
        password: ${MYSQL_PASSWORD}
      redis:
        host: redis.tagtag
        port: 6379
        password: ${REDIS_PASSWORD}
    jwt:
      secret: ${JWT_SECRET}
      expiration: 3600

3.2.3 Secret

apiVersion: v1
kind: Secret
metadata:
  name: tagtag-secrets
  namespace: tagtag
type: Opaque
stringData:
  mysql-password: password
  redis-password: password
  jwt-secret: your-secret-key
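
A Secret like this usually should not be committed with real credentials; it can instead be created directly with kubectl. A minimal sketch (the values below are placeholders):

# Create or update the Secret from literal values
kubectl create secret generic tagtag-secrets \
  --namespace tagtag \
  --from-literal=mysql-password='password' \
  --from-literal=redis-password='password' \
  --from-literal=jwt-secret='your-secret-key' \
  --dry-run=client -o yaml | kubectl apply -f -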

3.2.4 MySQL Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: tagtag
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: tagtag-secrets
              key: mysql-password
        - name: MYSQL_DATABASE
          value: tagtag
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        - name: mysql-conf
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-pvc
      - name: mysql-conf
        configMap:
          name: mysql-config
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: tagtag
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
  clusterIP: None  # Headless Service
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: tagtag
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

3.2.5 Redis Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: tagtag
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7.0-alpine
        env:
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: tagtag-secrets
              key: redis-password
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /data
        command:
        - redis-server
        - --requirepass
        - $(REDIS_PASSWORD)
        - --appendonly
        - "yes"
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: tagtag
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: tagtag
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

3.2.6 Backend Application Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: tagtag
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: tagtag/backend:latest
        env:
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: tagtag-secrets
              key: mysql-password
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: tagtag-secrets
              key: redis-password
        - name: JWT_SECRET
          valueFrom:
            secretKeyRef:
              name: tagtag-secrets
              key: jwt-secret
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /api/actuator/health/liveness
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /api/actuator/health/readiness
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        resources:
          requests:
            memory: "4Gi"
            cpu: "2"
          limits:
            memory: "8Gi"
            cpu: "4"
        volumeMounts:
        - name: logs
          mountPath: /app/logs
        - name: config
          mountPath: /app/config
      volumes:
      - name: logs
        emptyDir: {}
      - name: config
        configMap:
          name: tagtag-config
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: tagtag
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
  type: ClusterIP

3.2.7 Frontend Application Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: tagtag
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: tagtag/frontend:latest
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: tagtag
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP

3.2.8 Ingress Configuration

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tagtag-ingress
  namespace: tagtag
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ingressClassName: nginx
  rules:
  - host: tagtag.your-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 8080
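
With the manifests above saved as separate files, the whole stack can be applied with kubectl. A minimal sketch, assuming the file names shown below and an NGINX Ingress Controller already installed in the cluster:

# Namespace first, then shared configuration
kubectl apply -f namespace.yaml
kubectl apply -f configmap.yaml -f secret.yaml

# Data stores, then the applications and the Ingress
kubectl apply -f mysql.yaml -f redis.yaml
kubectl apply -f backend.yaml -f frontend.yaml
kubectl apply -f ingress.yaml

# Watch the rollout
kubectl get pods -n tagtag -w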

3.3 Helm Chart Configuration

3.3.1 Chart Directory Layout

tagtag-chart/
├── Chart.yaml          # Chart metadata
├── values.yaml         # Default configuration values
├── templates/          # Template files
│   ├── _helpers.tpl    # Helper templates
│   ├── configmap.yaml  # ConfigMap template
│   ├── secret.yaml     # Secret template
│   ├── mysql/          # MySQL templates
│   ├── redis/          # Redis templates
│   ├── backend/        # Backend application templates
│   ├── frontend/       # Frontend application templates
│   └── ingress.yaml    # Ingress template
└── charts/             # Dependency charts
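
Each manifest under templates/ is a Go template that pulls its values from values.yaml. A minimal sketch of what templates/backend/deployment.yaml could look like for the image and replica settings (the "tagtag.fullname" helper is an assumed definition in _helpers.tpl):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "tagtag.fullname" . }}-backend
  namespace: {{ .Values.global.namespace }}
spec:
  replicas: {{ .Values.backend.replicaCount }}
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        # Image coordinates come from values.yaml, so CI can override the tag
        image: "{{ .Values.backend.image.repository }}:{{ .Values.backend.image.tag }}"
        imagePullPolicy: {{ .Values.backend.image.pullPolicy }}
        resources:
          {{- toYaml .Values.backend.resources | nindent 10 }}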

3.3.2 Chart.yaml

apiVersion: v2
name: tagtag
description: A Helm chart for Tagtag Starter application
type: application
version: 1.0.0
appVersion: "1.0.0"
dependencies:
  - name: mysql
    version: "8.8.2"
    repository: "https://charts.bitnami.com/bitnami"
    condition: mysql.enabled
  - name: redis
    version: "17.10.2"
    repository: "https://charts.bitnami.com/bitnami"
    condition: redis.enabled

3.3.3 values.yaml

# Global settings
global:
  namespace: tagtag
  imageRegistry: ""

# Frontend settings
frontend:
  replicaCount: 3
  image:
    repository: tagtag/frontend
    tag: latest
    pullPolicy: IfNotPresent
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"

# Backend settings
backend:
  replicaCount: 3
  image:
    repository: tagtag/backend
    tag: latest
    pullPolicy: IfNotPresent
  resources:
    requests:
      memory: "4Gi"
      cpu: "2"
    limits:
      memory: "8Gi"
      cpu: "4"

# MySQL settings
mysql:
  enabled: true
  auth:
    rootPassword: password
    database: tagtag
  primary:
    persistence:
      size: 20Gi

# Redis settings
redis:
  enabled: true
  auth:
    password: password
  master:
    persistence:
      size: 10Gi

# Ingress settings
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: tagtag.your-domain.com
      paths:
        - path: /
          pathType: Prefix
        - path: /api
          pathType: Prefix

3.3.4 Deploying with Helm

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add the Helm repositories
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install the chart
helm install tagtag ./tagtag-chart -n tagtag --create-namespace

# Upgrade the chart
helm upgrade tagtag ./tagtag-chart -n tagtag

# Uninstall the chart
helm uninstall tagtag -n tagtag

# List installed charts
helm list -n tagtag

# Check the chart status
helm status tagtag -n tagtag

# Check Pod status
kubectl get pods -n tagtag

# Tail the logs
kubectl logs -f <pod-name> -n tagtag

3.4 CI/CD Integration

3.4.1 GitHub Actions Configuration

name: CI/CD

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  # Build the backend
  build-backend:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    - name: Set up JDK 17
      uses: actions/setup-java@v3
      with:
        java-version: '17'
        distribution: 'temurin'
        cache: maven
    
    - name: Build with Maven
      run: mvn -B package -DskipTests --file backend/pom.xml
    
    - name: Login to Docker Hub
      uses: docker/login-action@v2
      with:
        username: ${{ secrets.DOCKER_HUB_USERNAME }}
        password: ${{ secrets.DOCKER_HUB_TOKEN }}
    
    - name: Build and push backend image
      uses: docker/build-push-action@v4
      with:
        context: ./backend
        file: ./backend/Dockerfile
        push: true
        tags: ${{ secrets.DOCKER_HUB_USERNAME }}/tagtag-backend:${{ github.sha }},${{ secrets.DOCKER_HUB_USERNAME }}/tagtag-backend:latest

  # Build the frontend
  build-frontend:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    # pnpm must be available before setup-node so the pnpm cache can be resolved
    - name: Install pnpm
      run: npm install -g pnpm

    - name: Set up Node.js 18
      uses: actions/setup-node@v3
      with:
        node-version: '18'
        cache: 'pnpm'
        cache-dependency-path: frontend/pnpm-lock.yaml
    
    - name: Install dependencies
      run: pnpm install
      working-directory: ./frontend
    
    - name: Build frontend
      run: pnpm build
      working-directory: ./frontend
    
    - name: Login to Docker Hub
      uses: docker/login-action@v2
      with:
        username: ${{ secrets.DOCKER_HUB_USERNAME }}
        password: ${{ secrets.DOCKER_HUB_TOKEN }}
    
    - name: Build and push frontend image
      uses: docker/build-push-action@v4
      with:
        context: ./frontend
        file: ./frontend/Dockerfile
        push: true
        tags: ${{ secrets.DOCKER_HUB_USERNAME }}/tagtag-frontend:${{ github.sha }},${{ secrets.DOCKER_HUB_USERNAME }}/tagtag-frontend:latest

  # Deploy to Kubernetes
  deploy:
    needs: [ build-backend, build-frontend ]
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
    - uses: actions/checkout@v3
    
    - name: Set up kubectl
      uses: azure/setup-kubectl@v3
      with:
        version: 'v1.25.0'
    
    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v2
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ secrets.AWS_REGION }}
    
    - name: Update kubeconfig
      run: aws eks update-kubeconfig --name ${{ secrets.EKS_CLUSTER_NAME }} --region ${{ secrets.AWS_REGION }}
    
    - name: Deploy to Kubernetes
      run: |
        # Update the image tag in the Helm chart values
        sed -i 's|tag: latest|tag: ${{ github.sha }}|g' ./helm/values.yaml
        
        # Deploy the application
        helm upgrade --install tagtag ./helm -n tagtag --create-namespace
    
    - name: Verify deployment
      run: |
        kubectl wait --for=condition=available deployment/backend -n tagtag --timeout=300s
        kubectl wait --for=condition=available deployment/frontend -n tagtag --timeout=300s
        kubectl get pods -n tagtag

4. Best Practices

4.1 Docker Best Practices

  • Use lightweight base images: prefer alpine or slim variants
  • Minimize the number of image layers: merge RUN instructions and use multi-stage builds
  • Use a .dockerignore file: exclude unnecessary files and directories
  • Run containers as a non-root user: improves security (see the sketch after this list)
  • Define sensible health checks: use the HEALTHCHECK instruction
  • Set reasonable resource limits: keep containers from consuming excessive resources
  • Manage images with tags: include version numbers, commit hashes, and so on
  • Avoid storing data inside containers: use volumes or external storage
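
As a concrete example, the non-root user and HEALTHCHECK recommendations could be applied to the backend runtime stage shown earlier. A minimal sketch, assuming the Spring Boot Actuator health endpoint at /api/actuator/health from the configuration above (the appuser name is arbitrary):

FROM openjdk:17-jdk-slim

WORKDIR /app
COPY --from=builder /app/target/tagtag-backend.jar .

# curl is needed for the health check and is not included in the slim image
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*

# Run as an unprivileged user instead of root
RUN groupadd --system appuser && useradd --system --gid appuser appuser
USER appuser

# Let Docker itself probe the application's health endpoint
HEALTHCHECK --interval=30s --timeout=5s --start-period=60s --retries=3 \
  CMD curl -fs http://localhost:8080/api/actuator/health || exit 1

EXPOSE 8080
ENV JAVA_OPTS="-Xmx8g -Xms4g"
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar tagtag-backend.jar"]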

4.2 Kubernetes Best Practices

  • Use namespaces: partition resources by environment or team
  • Set resource requests and limits appropriately: keeps the cluster stable
  • Use liveness and readiness probes: improves application availability
  • Use rolling updates: enables zero-downtime deployments
  • Deploy stateful workloads with StatefulSets: databases, for example
  • Manage configuration with ConfigMaps and Secrets: avoid hard-coded values
  • Use the Horizontal Pod Autoscaler: scales workloads automatically (see the sketch after this list)
  • Restrict network traffic with NetworkPolicy: improves security
  • Back up data regularly: use CronJobs or external tools
  • Collect metrics and logs: use Prometheus and the ELK Stack
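
For the autoscaling item, a minimal HorizontalPodAutoscaler sketch for the backend Deployment defined earlier, assuming the metrics-server is installed in the cluster:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
  namespace: tagtag
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        # Add Pods once average CPU utilization across the Deployment exceeds 70%
        averageUtilization: 70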

4.3 CI/CD Best Practices

  • Automated testing: run unit and integration tests before building
  • Code quality checks: use tools such as ESLint and SonarQube
  • Image security scanning: scan images for vulnerabilities with Trivy or Clair (see the commands after this list)
  • Multi-environment deployment: development, testing, staging, and production
  • Manual approval: require sign-off before deploying to production
  • Rollback mechanism: support fast rollback to a previous version
  • Monitor deployment status: track deployment progress and health in real time
  • Log collection: capture logs from the CI/CD pipeline itself
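
Two of the items above can be illustrated with short commands: scanning an image with Trivy before pushing it, and rolling a Helm release back. A sketch (the image tag and the revision number are placeholders):

# Fail the pipeline when the image contains HIGH or CRITICAL vulnerabilities
trivy image --exit-code 1 --severity HIGH,CRITICAL tagtag/backend:latest

# Inspect the release history and roll back to an earlier revision
helm history tagtag -n tagtag
helm rollback tagtag 2 -n tagtag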

5. Common Issues and Solutions

5.1 Docker Issues

5.1.1 Image Build Failures

Problem: dependency installation fails during the image build

Solutions

  • Check the network connection
  • Check the dependency sources referenced in the Dockerfile
  • Use a package mirror closer to your region to speed up dependency downloads
  • Clear the build cache and rebuild

5.1.2 Container Startup Failures

Problem: the container exits immediately after starting

Solutions

  • Check the container logs: docker logs <container-name>
  • Check the environment variable configuration
  • Check for port conflicts
  • Check the mounted volumes and configuration files (see the commands after this list)
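
A few commands that help diagnose a container that exits right away, using the backend service from the Compose file above as an example:

# Last log lines and the exit code of the stopped container
docker logs --tail 100 tagtag-backend
docker inspect --format '{{.State.ExitCode}}' tagtag-backend

# Effective environment variables and mounts
docker inspect --format '{{json .Config.Env}}' tagtag-backend
docker inspect --format '{{json .Mounts}}' tagtag-backend

# Start an interactive shell in the service image to look around manually
docker-compose run --rm --entrypoint sh backend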

5.2 Kubernetes Issues

5.2.1 Pods Stuck in Pending

Problem: a Pod stays in the Pending state

Solutions

  • Check whether the cluster has enough free resources
  • Check whether the PersistentVolumeClaim has been bound
  • Check the NodeSelector and Toleration settings
  • Inspect the events: kubectl describe pod <pod-name> -n <namespace>

5.2.2 Pods in CrashLoopBackOff

Problem: a Pod restarts repeatedly

Solutions

  • Check the container logs: kubectl logs <pod-name> -n <namespace>
  • Check the application configuration
  • Check the liveness and readiness probe configuration
  • Check whether the resource limits are reasonable (see the commands after this list)
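
The following commands usually reveal why a Pod keeps restarting, using the backend Deployment in the tagtag namespace as an example (the Pod name is a placeholder):

# Events and the reason for the last restart
kubectl describe pod <pod-name> -n tagtag

# Logs of the current container and of the previous, crashed one
kubectl logs <pod-name> -n tagtag
kubectl logs <pod-name> -n tagtag --previous

# Restart counts across the Deployment
kubectl get pods -n tagtag -l app=backend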

5.2.3 Ingress Not Reachable

Problem: the application cannot be reached through the Ingress

Solutions

  • Check that the Ingress Controller is running
  • Check the Ingress rule configuration
  • Check that the Services are healthy
  • Check that the Pods are running
  • Check the Ingress controller logs (see the commands after this list)
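
Useful commands for checking each hop of the Ingress path, assuming the NGINX Ingress Controller runs in the ingress-nginx namespace:

# Ingress rules and the address assigned by the controller
kubectl get ingress -n tagtag
kubectl describe ingress tagtag-ingress -n tagtag

# Backing Services and their endpoints
kubectl get svc,endpoints -n tagtag

# Controller health and logs
kubectl get pods -n ingress-nginx
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=100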

6. Summary

Containerized deployment is the prevailing approach to modern application delivery. The Tagtag Starter project supports both Docker and Kubernetes and offers flexible deployment options: Docker Compose brings up local development and test environments quickly, while Kubernetes and Helm enable automated deployment and management in production.

This document covered the containerized deployment of the Tagtag Starter project in detail, including writing the Dockerfiles, Docker Compose configuration, Kubernetes resource manifests, the Helm chart, and CI/CD integration. Following the best practices described here improves the availability, scalability, and security of the application.

In practice, adjust the configuration parameters to match your business requirements and cluster size so the system stays performant and reliable.