Prerequisites

- A Nest application that already uses MySQL and MongoDB
- If you are using GraphQL, this tutorial also covers packaging and deploying GraphQL files
- Some configuration details of the Nest application
Building an image for the Nest application

How to write the Dockerfile: the meaning of each Dockerfile instruction was covered in the "Docker and Kubernetes best practices" chapter, so it is not repeated here.
```dockerfile
FROM node:22-bullseye-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN mkdir -p dist/graphql && \
    find src -name "*.graphql" -exec cp {} dist/graphql/ \;

FROM node:22-bullseye-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=3100
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
RUN addgroup --system app && adduser --system --ingroup app app
USER app
EXPOSE 3100
CMD ["node", "dist/main.js"]
```
Separation of container responsibilities

Docker's design philosophy is that one container does one thing: it runs a single process or service. For example, one container runs only the Nest application, one runs only MySQL, and one runs only MongoDB.

Advantages of this design:

- Independence: each service can be deployed, updated, and scaled on its own
- Isolation: a database crash will not take the application process down with it
- Maintainability: images are small, startup is fast, and problems are easy to pinpoint
- Elastic scaling: Kubernetes can scale the Nest application alone, without dragging the databases along
Drawbacks of lumping everything together:

- The Nest application iterates quickly, yet we would have to keep reinstalling MySQL and MongoDB inside the container, which is highly unreasonable
- Data cannot be persisted: when the container is deleted its filesystem disappears with it, and even with mounted volumes it is hard to manage multiple database instances
- Kubernetes cannot monitor or restart a single service independently
- The image is oversized, so builds and deployments are slow

So we should split this into three containers: one for the Nest application, one for MongoDB, and one for MySQL, with the three communicating through Services.
How to package the GraphQL files

While packaging I found that `npm run build` does not copy the .graphql files into dist, so loading them fails with an error saying no GraphQL files were found. My fix: after the build completes, copy every .graphql file under the project's src directory into the dist/graphql directory under the app directory.
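The copy step can also be sketched as a small standalone Node script (this is an illustration of the same idea as the `find ... -exec cp` line in the Dockerfile, not code from the project; like `cp`, it flattens all files into one target directory):

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Recursively walk srcDir and copy every *.graphql file into outDir
// (flat, mirroring `find src -name "*.graphql" -exec cp {} dist/graphql/ \;`).
function copyGraphqlFiles(srcDir: string, outDir: string): string[] {
  fs.mkdirSync(outDir, { recursive: true });
  const copied: string[] = [];
  for (const entry of fs.readdirSync(srcDir, { withFileTypes: true })) {
    const full = path.join(srcDir, entry.name);
    if (entry.isDirectory()) {
      copied.push(...copyGraphqlFiles(full, outDir));
    } else if (entry.name.endsWith('.graphql')) {
      const dest = path.join(outDir, entry.name);
      fs.copyFileSync(full, dest);
      copied.push(dest);
    }
  }
  return copied;
}
```

Note that this flattening loses directory structure; the nest-cli.json approach described below preserves it.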
This is how I register it in the app.module.ts entry point:
```typescript
import { GraphQLModule } from '@nestjs/graphql';
import { ApolloDriver, ApolloDriverConfig } from '@nestjs/apollo';
// landing-page plugin shipped with @apollo/server
import { ApolloServerPluginLandingPageLocalDefault } from '@apollo/server/plugin/landingPage/default';

GraphQLModule.forRoot<ApolloDriverConfig>({
  driver: ApolloDriver,
  typePaths: ['./**/*.graphql'],
  playground: false,
  introspection: true,
  plugins: [ApolloServerPluginLandingPageLocalDefault()],
})
```
In other words, I am using Nest's schema-first approach; with code first this problem should not occur.

That is one solution. The second is to build normally, drop the manual step of copying all .graphql files from src into dist/graphql, and hand the job to Nest's build tooling instead.

That means modifying the nest-cli.json file:
```json
{
  "$schema": "https://json.schemastore.org/nest-cli",
  "collection": "@nestjs/schematics",
  "sourceRoot": "src",
  "compilerOptions": {
    "assets": ["**/*.graphql"],
    "watchAssets": true,
    "deleteOutDir": true
  }
}
```
As noted above, Nest does not automatically copy .graphql files to dist. After initializing a Nest project you can see that `assets` in this file is empty: by default only .ts files are compiled into .js files, and if nothing else is involved no extra work is needed. But a .graphql file is a non-code static asset as far as TypeScript is concerned, so the compiler ignores it. Once you add the configuration above to `assets`, the Nest CLI automatically copies the GraphQL files into the matching dist directory, preserving the original path structure.
Add a .dockerignore file

```
node_modules
# exclude the build output
dist
.git
# exclude the env file
.env
```
How to build the image

```shell
docker build -f ./dockerfile -t start-learn-nest:v13.0 .
docker run -it --rm start-learn-nest:v13.0 sh
```
Deploying the Nest application with k8s

Registering the Secret

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: start-learn-nest-secret
type: Opaque
stringData:
  MYSQL_USER: "root"
  MYSQL_PASSWORD: "123456"
  MONGO_URI: "mongodb://superadmin:shiyuq@mongo-0.mongo:27017/db_nest?authSource=admin"
```
The environment variables are injected automatically when the Pod starts.

In Nest we can then read them with expressions like `process.env.MYSQL_USER`.
```shell
kubectl apply -f start-learn-nest-secret.yaml
```
Registering the ConfigMap

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: start-learn-nest-config
data:
  NODE_ENV: "production"
  APP_PORT: "3100"
  MYSQL_HOST: "mysql"
  MYSQL_PORT: "3306"
  MYSQL_DB: "db_nest"
```
```shell
kubectl apply -f start-learn-nest-config.yaml
```
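To make the wiring concrete, here is a sketch (not the project's actual config module) of assembling a database config from the variables the Secret and ConfigMap above inject; in the real app you would pass `process.env` as the argument:

```typescript
type Env = Record<string, string | undefined>;

// Build a typed config object from the injected environment variables.
// The key names match those defined in the Secret and ConfigMap manifests.
function loadDbConfig(env: Env) {
  return {
    mysql: {
      host: env.MYSQL_HOST ?? 'localhost',
      port: parseInt(env.MYSQL_PORT ?? '3306', 10),
      user: env.MYSQL_USER,
      password: env.MYSQL_PASSWORD,
      database: env.MYSQL_DB,
    },
    mongoUri: env.MONGO_URI,
  };
}

// Values as they would appear inside a Pod that mounts both objects:
const cfg = loadDbConfig({
  MYSQL_HOST: 'mysql',
  MYSQL_PORT: '3306',
  MYSQL_USER: 'root',
  MYSQL_PASSWORD: '123456',
  MYSQL_DB: 'db_nest',
  MONGO_URI: 'mongodb://superadmin:shiyuq@mongo-0.mongo:27017/db_nest?authSource=admin',
});
```

Note that `MYSQL_HOST` is simply the Service name `mysql`: cluster DNS resolves it to the database Pod.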
Writing the k8s-deploy.yaml file

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: start-learn-nest
spec:
  replicas: 2
  selector:
    matchLabels:
      app: start-learn-nest
  template:
    metadata:
      labels:
        app: start-learn-nest
    spec:
      containers:
        - name: start-learn-nest
          image: start-learn-nest:v13.0
          imagePullPolicy: Never
          ports:
            - containerPort: 3100
          envFrom:
            - configMapRef:
                name: start-learn-nest-config
            - secretRef:
                name: start-learn-nest-secret
          readinessProbe:
            httpGet:
              path: /
              port: 3100
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 3100
            initialDelaySeconds: 10
            periodSeconds: 20
          resources:
            requests:
              memory: '128Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
              cpu: '500m'
---
apiVersion: v1
kind: Service
metadata:
  name: start-learn-nest-service
spec:
  type: NodePort
  selector:
    app: start-learn-nest
  ports:
    - port: 3100
      targetPort: 3100
      nodePort: 30081
```
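The two probes hit `GET /` on port 3100 and treat any 2xx/3xx status as healthy. A minimal sketch of what the handler behind that route could look like (in the real app it is a Nest controller route; here it is reduced to a plain function, and the payload shape is illustrative):

```typescript
// Plain-function stand-in for the probe endpoint: return 200 plus a small
// JSON body so both readinessProbe and livenessProbe succeed.
function healthHandler(): { statusCode: number; body: string } {
  return {
    statusCode: 200, // kubelet treats 200 <= code < 400 as a passing httpGet probe
    body: JSON.stringify({ status: 200, timestamp: Date.now(), data: 'hello world' }),
  };
}
```

If the app ever takes longer than `initialDelaySeconds` to boot, the probe delays above are the first knobs to adjust.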
Our k8s deployment manifests are now written, but we cannot run `kubectl apply -f k8s-deploy.yaml` just yet, because the Nest application also depends on the MySQL and MongoDB databases.
Writing the MySQL deployment files

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
            - name: MYSQL_DATABASE
              value: testdb
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  clusterIP: None
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-external
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306
```
The overall structure is as follows:

```
┌────────────────────────────────────────────┐
│ StatefulSet: mysql                         │
│ └── Pod: mysql-0                           │
│     ├── MySQL container                    │
│     └── PVC: mysql-storage-mysql-0 (10Gi)  │
└────────────────────────────────────────────┘
        │
        ▼
Headless Service (mysql, clusterIP=None)
        │
        ▼
NodePort Service (mysql-external, port=30306)
```
The manifest above also references the mysql-secret Secret, so we should first run `kubectl apply -f mysql-secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
stringData:
  username: root
  password: "123456"
```
Writing the MongoDB deployment files

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:6
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-root-password
          volumeMounts:
            - name: mongo-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None
  selector:
    app: mongo
  ports:
    - port: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-nodeport
spec:
  type: NodePort
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 30017
```
It likewise references the mongo-secret file, so we should first run `kubectl apply -f mongo-secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
stringData:
  mongo-root-username: "superadmin"
  mongo-root-password: "shiyuq"
```
What the deployment looks like

```
PS C:\workspace> kubectl get svc mysql-external mysql
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
mysql-external   NodePort    10.103.86.171   <none>        3306:30306/TCP   47h
mysql            ClusterIP   None            <none>        3306/TCP         47h
```
Above, mysql serves in-cluster access on port 3306, while mysql-external is reached through the host machine's port 30306 and forwarded to port 3306 inside the cluster.
```
PS C:\workspace> kubectl get svc mongo mongo-nodeport
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
mongo            ClusterIP   None           <none>        27017/TCP         27h
mongo-nodeport   NodePort    10.97.38.200   <none>        27017:30017/TCP   46h
```
Likewise, mongo is accessed inside the cluster on port 27017, while mongo-nodeport is reached through the host's port 30017 and forwarded to port 27017 inside the cluster.
```
PS C:\workspace> kubectl get statefulset
PS C:\workspace> kubectl get sts
NAME    READY   AGE
mongo   1/1     46h
mysql   1/1     47h
```
Here READY shows the number of replicas currently up versus the number desired.

You can also run `kubectl describe sts <sts-name>` to inspect a specific StatefulSet, which is typically used to troubleshoot problems while a StatefulSet is being created or updated.
```
PS C:\workspace> kubectl get pods -l app=mysql
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          47h
PS C:\workspace> kubectl get pods -l app=mongo
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   1/1     Running   0          46h
```
This is how to view the Pods associated with a StatefulSet.
```
PS C:\workspace> kubectl get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
mongo-storage-mongo-0   Bound    pvc-60c61852-7bf8-4d4b-a479-182ecc7fdb2e   5Gi        RWO            hostpath       <unset>                 46h
mysql-storage-mysql-0   Bound    pvc-f1b642d9-75cb-4b06-90bf-de192613b172   10Gi       RWO            hostpath       <unset>                 47h
```
And this is how to view the storage volumes (PVCs) bound by the StatefulSets.
```mermaid
flowchart TB
  %% === External layer ===
  subgraph Host["Host machine (external access layer)"]
    extClient1["Client app / external system"]
    nodeportMySQL["NodePort: 30306 (MySQL External)"]
    nodeportMongo["NodePort: 30017 (Mongo Nodeport)"]
    extClient1 -->|TCP 30306| nodeportMySQL
    extClient1 -->|TCP 30017| nodeportMongo
  end
  %% === Cluster Service layer ===
  subgraph Cluster["Kubernetes cluster (Service layer)"]
    svcMySQL["Service: mysql (ClusterIP: None)"]
    svcMongo["Service: mongo (ClusterIP: None)"]
    nodeportMySQL --> svcMySQL
    nodeportMongo --> svcMongo
  end
  %% === Workload layer ===
  subgraph Workload["StatefulSet (workload) layer"]
    subgraph stsMySQL["StatefulSet: mysql"]
      podMySQL0["Pod: mysql-0"]
    end
    subgraph stsMongo["StatefulSet: mongo"]
      podMongo0["Pod: mongo-0"]
    end
    svcMySQL --> podMySQL0
    svcMongo --> podMongo0
  end
  %% === Storage layer ===
  subgraph Storage["Persistent storage layer (PVC + PV)"]
    pvcMySQL["PVC: mysql-storage-mysql-0 (10Gi)"]
    pvcMongo["PVC: mongo-storage-mongo-0 (5Gi)"]
    pvMySQL["PV: pvc-f1b642d9-75cb-4b06-90bf-de192613b172"]
    pvMongo["PV: pvc-60c61852-7bf8-4d4b-a479-182ecc7fdb2e"]
    pvcMySQL --> pvMySQL
    pvcMongo --> pvMongo
    podMySQL0 --> pvcMySQL
    podMongo0 --> pvcMongo
  end
  %% === Styling ===
  classDef svc fill:#88ccff,stroke:#006699,stroke-width:2px,color:#003355;
  classDef sts fill:#a6e3a1,stroke:#3a7d44,stroke-width:2px,color:#1c4424;
  classDef pvc fill:#fce38a,stroke:#c08400,stroke-width:2px,color:#4d3b00;
  classDef pv fill:#ffd6a5,stroke:#b87333,stroke-width:2px,color:#4a2c00;
  classDef nodeport fill:#f9a8d4,stroke:#b91c1c,stroke-width:2px,color:#4a1a1a;
  class nodeportMySQL,nodeportMongo nodeport;
  class svcMySQL,svcMongo svc;
  class stsMySQL,stsMongo,podMySQL0,podMongo0 sts;
  class pvcMySQL,pvcMongo pvc;
  class pvMySQL,pvMongo pv;
```
Now we can deploy the Nest application: `kubectl apply -f k8s-deploy.yaml`
```
PS C:\workspace> kubectl get svc start-learn-nest-service
NAME                       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
start-learn-nest-service   NodePort   10.100.220.119   <none>        3100:30081/TCP   45h
```
At this point, we can reach the deployed Nest service on the local machine's port 30081:
```json
{ "status": 200, "timestamp": 1760621147789, "data": "hello world" }
```
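If you want to smoke-test the endpoint from a script, the response body can be validated like so (a sketch; the field names are taken from the payload above, while the validator itself is ours):

```typescript
// Return true when the body is valid JSON with the expected health fields.
function isHealthy(body: string): boolean {
  try {
    const res = JSON.parse(body);
    return res.status === 200 && typeof res.timestamp === 'number';
  } catch {
    return false; // non-JSON (e.g. an HTML error page) counts as unhealthy
  }
}

const ok = isHealthy('{"status":200,"timestamp":1760621147789,"data":"hello world"}');
```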
If the service turns out to be unreachable after deployment, use the command below to inspect the error output:

```shell
kubectl logs start-learn-nest-74f4f9456b-7g2sp
kubectl logs <pod-name>
```