Prometheus Operator, developed by CoreOS, is a Prometheus-based monitoring solution for Kubernetes, and is arguably the most feature-complete open-source option available today.
Prometheus Operator displays monitoring data through Grafana and ships with a set of predefined dashboards.

Prometheus itself is an excellent monitoring tool. More precisely, it is a monitoring solution: Prometheus provides a complete stack for data collection, storage, processing, visualization, and alerting. Its architecture is shown in the figure below:
Prometheus Server
Prometheus Server pulls monitoring data from Exporters, stores it, and provides a flexible query language (PromQL) for users.

Exporter
An Exporter collects performance data from target objects (hosts, containers, and so on) and exposes it over an HTTP endpoint for Prometheus Server to scrape.

Visualization
Visualizing monitoring data is a crucial part of any monitoring solution. Prometheus originally developed its own visualization tool, but later abandoned it because the open-source community produced something better: Grafana. Grafana integrates seamlessly with Prometheus and provides excellent data presentation.

Alertmanager
Users can define alerting rules based on the monitoring data, and these rules trigger alerts. Once Alertmanager receives an alert, it sends notifications through preconfigured channels, including Email, PagerDuty, and Webhook.

Prometheus Operator is currently the most feature-complete open-source monitoring solution. It can monitor Nodes and Pods, and supports the cluster's management components such as the API Server, Scheduler, and Controller Manager.
The goal of Prometheus Operator is to make deploying and maintaining Prometheus on Kubernetes as simple as possible. Its architecture is shown in the figure below:

Every object in the figure is a resource running in Kubernetes.
Operator
The Operator is Prometheus Operator itself, which runs in Kubernetes as a Deployment. Its job is to deploy and manage Prometheus Server, dynamically updating Prometheus Server's monitoring targets according to ServiceMonitor resources.

Prometheus Server
Prometheus Server is deployed into the cluster as a Kubernetes application. To manage Prometheus more naturally in Kubernetes, the CoreOS developers defined a custom Kubernetes resource type named Prometheus. You can think of a Prometheus resource as a special kind of Deployment whose sole purpose is to deploy Prometheus Server.

Service
The Service here is an ordinary Service resource in the cluster, and it is also what Prometheus monitors, known in Prometheus as a Target. Every monitored object has a corresponding Service. For example, to monitor the Kubernetes Scheduler there must be a Service for the Scheduler. A Kubernetes cluster does not have such a Service by default; Prometheus Operator is responsible for creating it.

ServiceMonitor
The Operator can dynamically update Prometheus's Target list; a ServiceMonitor is the abstraction of a Target. For example, to monitor the Kubernetes Scheduler, a user creates a ServiceMonitor object mapped to the Scheduler's Service. The Operator then discovers the new ServiceMonitor and adds the Scheduler Target to Prometheus's monitoring list. ServiceMonitor is also a custom Kubernetes resource type defined by Prometheus Operator.
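As a sketch of how a Service and its ServiceMonitor fit together for the Scheduler example: the manifests below are hypothetical, the names, labels, and selector are illustrative assumptions (the real labels depend on how kube-scheduler is deployed in your cluster), and 10251 was the Scheduler's default metrics port at the time.

```yaml
# Hypothetical headless Service exposing kube-scheduler's metrics port.
apiVersion: v1
kind: Service
metadata:
  name: kube-scheduler-metrics
  namespace: kube-system
  labels:
    k8s-app: kube-scheduler
spec:
  clusterIP: None                  # headless: Prometheus scrapes Pod endpoints directly
  selector:
    component: kube-scheduler      # must match the Scheduler Pod's labels
  ports:
    - name: http-metrics
      port: 10251
---
# Hypothetical ServiceMonitor telling the Operator to scrape that Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-scheduler
  namespace: monitoring
spec:
  endpoints:
    - port: http-metrics           # the Service port *name* to scrape
      interval: 30s
  namespaceSelector:
    matchNames:
      - kube-system
  selector:
    matchLabels:
      k8s-app: kube-scheduler
```

Once such a ServiceMonitor exists, the Operator regenerates the Prometheus scrape configuration without any manual edits to prometheus.yml.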
Alertmanager
Besides Prometheus and ServiceMonitor, Alertmanager is the third custom Kubernetes resource type defined by the Operator. You can think of an Alertmanager resource as a special kind of Deployment whose sole purpose is to deploy the Alertmanager component.

Helm has two important concepts: chart and release.
A chart is the collection of information needed to create an application: configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. A chart is a self-contained logical unit of application deployment; think of a chart as a software package in apt or yum. A release is a running instance of a chart and represents a running application. When a chart is installed into a Kubernetes cluster, a release is created. The same chart can be installed into a cluster multiple times, and each installation produces a new release.

Download the latest Helm release from https://github.com/helm/helm/releases.
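To make the chart concept concrete: a chart is just a directory with a fixed layout. The sketch below is a hypothetical minimal Helm 2 chart; the name, version, and description are made up for illustration.

```yaml
# Hypothetical layout of a minimal chart directory:
#   my-app/
#   ├── Chart.yaml          # chart metadata (shown below)
#   ├── values.yaml         # default parameter values
#   └── templates/          # Kubernetes object templates
#
# Chart.yaml:
name: my-app
version: 0.1.0
description: A hypothetical minimal chart packaging one application
```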
[root@master ~]# tar xf helm-v2.12.1-linux-amd64.tar.gz
[root@master ~]# cp linux-amd64/helm /usr/local/bin/
[root@master ~]# helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Error: could not find tiller
At this point only the client version is shown; the server side (Tiller) is not installed yet.
For clusters with RBAC enabled, we first create the authorization, following https://github.com/helm/helm/blob/master/docs/rbac.md. Create rbac-config.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Then apply it.
[root@master ~]# kubectl apply -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Installing the Tiller server is very simple: just run helm init, keeping the server version the same as the client. Because Google's image registry cannot be reached from mainland China, we use the Alibaba Cloud mirror instead:
[root@master ~]# helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.12.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
Check the installation result.
[root@master ~]# helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
[root@master ~]# helm repo list
NAME    URL
stable  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local   http://127.0.0.1:8879/charts
This walkthrough uses Prometheus Operator v0.26.0. The project iterates quickly and the deployment procedure may change, so refer to the official documentation when necessary.
[root@master ~]# git clone https://github.com/coreos/prometheus-operator.git
[root@master ~]# cd prometheus-operator/
For easier management, create a dedicated Namespace named monitoring; all Prometheus Operator components will be deployed into it.
[root@master prometheus-operator]# kubectl create namespace monitoring
namespace/monitoring created
First, update the repo index.
helm repo update
Then install it with Helm. Several hundred megabytes of images need to be downloaded, so this can be slow. Alternatively, pull the images in advance, preferably from the Alibaba Cloud mirror, and then re-tag them; this is much faster.
[root@master prometheus-operator]# helm install --name prometheus-operator --set rbacEnable=true --namespace=monitoring helm/prometheus-operator
NAME: prometheus-operator
LAST DEPLOYED: Tue Dec 25 22:09:31 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/PodSecurityPolicy
NAME                 PRIV   CAPS  SELINUX   RUNASUSER  FSGROUP    SUPGROUP   READONLYROOTFS  VOLUMES
prometheus-operator  false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1/ConfigMap
NAME                 DATA  AGE
prometheus-operator  1     3s

==> v1/ServiceAccount
NAME                 SECRETS  AGE
prometheus-operator  1        3s

==> v1beta1/ClusterRole
NAME                     AGE
prometheus-operator      3s
psp-prometheus-operator  3s

==> v1beta1/ClusterRoleBinding
NAME                     AGE
prometheus-operator      3s
psp-prometheus-operator  3s

==> v1beta1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
prometheus-operator  1        1        1           1          3s

==> v1/Pod(related)
NAME                                  READY  STATUS   RESTARTS  AGE
prometheus-operator-867bbfddbd-vpsjd  1/1    Running  0         3s

NOTES:
The Prometheus Operator has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "app=prometheus-operator,release=prometheus-operator"

Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.
Check the created resources.
[root@master prometheus-operator]# kubectl get pod -n monitoring
NAME                                  READY  STATUS   RESTARTS  AGE
prometheus-operator-867bbfddbd-vpsjd  1/1    Running  0         95s
[root@master prometheus-operator]# kubectl get deploy -n monitoring
NAME                 READY  UP-TO-DATE  AVAILABLE  AGE
prometheus-operator  1/1    1           1          103s
[root@master prometheus-operator]# helm list
NAME                 REVISION  UPDATED                   STATUS    CHART                       APP VERSION  NAMESPACE
prometheus-operator  1         Tue Dec 25 22:09:31 2018  DEPLOYED  prometheus-operator-0.0.29  0.20.0       monitoring
Next, install a Prometheus instance.

[root@master prometheus-operator]# helm install --name prometheus --set serviceMonitorsSelector.app=prometheus --set ruleSelector.app=prometheus --namespace=monitoring helm/prometheus
NAME: prometheus
LAST DEPLOYED: Tue Dec 25 22:17:06 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/Prometheus
NAME        AGE
prometheus  0s

==> v1/PrometheusRule
NAME              AGE
prometheus-rules  0s

==> v1/ServiceMonitor
NAME        AGE
prometheus  0s

==> v1beta1/PodSecurityPolicy
NAME        PRIV   CAPS  SELINUX   RUNASUSER  FSGROUP    SUPGROUP   READONLYROOTFS  VOLUMES
prometheus  false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1/ServiceAccount
NAME        SECRETS  AGE
prometheus  1        0s

==> v1beta1/ClusterRole
NAME            AGE
prometheus      0s
psp-prometheus  0s

==> v1beta1/ClusterRoleBinding
NAME            AGE
prometheus      0s
psp-prometheus  0s

==> v1/Service
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
prometheus  ClusterIP  10.109.224.217  <none>       9090/TCP  0s

NOTES:
A new Prometheus instance has been created.

DEPRECATION NOTICE:

- additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
Check the result.
[root@master prometheus-operator]# kubectl get pod -n monitoring
NAME                                  READY  STATUS   RESTARTS  AGE
prometheus-operator-867bbfddbd-vpsjd  1/1    Running  0         16m
prometheus-prometheus-0               3/3    Running  1         9m10s
[root@master prometheus-operator]# kubectl get svc -n monitoring
NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
prometheus           ClusterIP  10.111.72.32  <none>       9090/TCP  22m
prometheus-operated  ClusterIP  None          <none>       9090/TCP  19m
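Under the hood, the helm/prometheus chart mainly creates a Prometheus custom resource (the v1/Prometheus entry in the install output above). A hand-written equivalent might be sketched as follows; the field values here are illustrative assumptions, not taken from the chart.

```yaml
# Hypothetical Prometheus custom resource sketch. The Operator watches
# resources of kind Prometheus and deploys a matching Prometheus Server.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 1
  serviceAccountName: prometheus      # assumed ServiceAccount name
  serviceMonitorSelector:             # which ServiceMonitors to pick up
    matchLabels:
      app: prometheus
  ruleSelector:                       # which PrometheusRules to load
    matchLabels:
      app: prometheus
  resources:
    requests:
      memory: 400Mi                   # illustrative sizing
```

The selectors correspond to the --set serviceMonitorsSelector.app=prometheus and --set ruleSelector.app=prometheus flags used in the helm install command.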
Next, install Alertmanager.

[root@master prometheus-operator]# helm install --name alertmanager --namespace=monitoring helm/alertmanager
NAME: alertmanager
LAST DEPLOYED: Tue Dec 25 22:30:11 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/PrometheusRule
NAME          AGE
alertmanager  0s

==> v1/ServiceMonitor
NAME          AGE
alertmanager  0s

==> v1beta1/PodSecurityPolicy
NAME          PRIV   CAPS  SELINUX   RUNASUSER  FSGROUP    SUPGROUP   READONLYROOTFS  VOLUMES
alertmanager  false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1/Secret
NAME                       TYPE    DATA  AGE
alertmanager-alertmanager  Opaque  1     0s

==> v1beta1/ClusterRole
NAME              AGE
psp-alertmanager  0s

==> v1beta1/ClusterRoleBinding
NAME              AGE
psp-alertmanager  0s

==> v1/Service
NAME          TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
alertmanager  ClusterIP  10.102.68.166  <none>       9093/TCP  0s

==> v1/Alertmanager
NAME          AGE
alertmanager  0s

NOTES:
A new Alertmanager instance has been created.

DEPRECATION NOTICE:

- additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
Check the installation result.
[root@master prometheus-operator]# kubectl get pod -n monitoring
NAME                                  READY  STATUS   RESTARTS  AGE
alertmanager-alertmanager-0           2/2    Running  0         13m
prometheus-operator-867bbfddbd-vpsjd  1/1    Running  0         33m
prometheus-prometheus-0               3/3    Running  1         26m
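Like the Prometheus chart, helm/alertmanager boils down to an Alertmanager custom resource (the v1/Alertmanager entry in the install output above). A hypothetical minimal version, with illustrative values:

```yaml
# Hypothetical Alertmanager custom resource sketch: the Operator deploys
# the requested number of Alertmanager replicas and wires Prometheus to them.
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  replicas: 1        # illustrative; run 3+ for a highly available cluster
```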
kube-prometheus is a Helm chart that bundles all the Exporters and ServiceMonitors needed to monitor Kubernetes, and it creates several Services.
The installation method for kube-prometheus has been adjusted; the process is as follows:

[root@master prometheus-operator]# mkdir -p helm/kube-prometheus/charts
[root@master prometheus-operator]# helm package -d helm/kube-prometheus/charts helm/alertmanager helm/grafana helm/prometheus helm/exporter-kube-dns \
> helm/exporter-kube-scheduler helm/exporter-kubelets helm/exporter-node helm/exporter-kube-controller-manager \
> helm/exporter-kube-etcd helm/exporter-kube-state helm/exporter-coredns helm/exporter-kubernetes
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/alertmanager-0.1.7.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/grafana-0.0.37.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/prometheus-0.0.51.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-dns-0.1.7.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-scheduler-0.1.9.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kubelets-0.2.11.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-node-0.4.6.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-controller-manager-0.1.10.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-etcd-0.1.15.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-state-0.2.6.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-coredns-0.0.3.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kubernetes-0.1.10.tgz
[root@master prometheus-operator]# helm install helm/kube-prometheus --name kube-prometheus --namespace monitoring
NAME: kube-prometheus
LAST DEPLOYED: Tue Dec 25 23:02:25 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                          TYPE    DATA  AGE
alertmanager-kube-prometheus  Opaque  1     1s
kube-prometheus-grafana       Opaque  2     1s

==> v1beta1/RoleBinding
NAME                                 AGE
kube-prometheus-exporter-kube-state  1s

==> v1/Service
NAME                                              TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)              AGE
kube-prometheus-alertmanager                      ClusterIP  10.109.99.33    <none>       9093/TCP             1s
kube-prometheus-exporter-kube-controller-manager  ClusterIP  None            <none>       10252/TCP            1s
kube-prometheus-exporter-kube-dns                 ClusterIP  None            <none>       10054/TCP,10055/TCP  1s
kube-prometheus-exporter-kube-etcd                ClusterIP  None            <none>       4001/TCP             1s
kube-prometheus-exporter-kube-scheduler           ClusterIP  None            <none>       10251/TCP            1s
kube-prometheus-exporter-kube-state               ClusterIP  10.106.111.57   <none>       80/TCP               1s
kube-prometheus-exporter-node                     ClusterIP  10.107.178.109  <none>       9100/TCP             1s
kube-prometheus-grafana                           ClusterIP  10.110.171.226  <none>       80/TCP               1s
kube-prometheus                                   ClusterIP  10.102.19.97    <none>       9090/TCP             1s

==> v1beta1/Deployment
NAME                                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
kube-prometheus-exporter-kube-state  1        1        1           0          1s
kube-prometheus-grafana              1        1        1           0          1s

==> v1/Pod(related)
NAME                                                  READY  STATUS             RESTARTS  AGE
kube-prometheus-exporter-node-8cclq                   0/1    ContainerCreating  0         1s
kube-prometheus-exporter-node-xsqvj                   0/1    ContainerCreating  0         1s
kube-prometheus-exporter-node-zcjfj                   0/1    ContainerCreating  0         1s
kube-prometheus-exporter-kube-state-7bb8cf75d9-czp24  0/2    ContainerCreating  0         1s
kube-prometheus-grafana-6f4bb75c95-jvfzn              0/2    ContainerCreating  0         1s

==> v1beta1/ClusterRole
NAME                                     AGE
psp-kube-prometheus-alertmanager         1s
kube-prometheus-exporter-kube-state      1s
psp-kube-prometheus-exporter-kube-state  1s
psp-kube-prometheus-exporter-node        1s
psp-kube-prometheus-grafana              1s
kube-prometheus                          1s
psp-kube-prometheus                      1s

==> v1beta1/DaemonSet
NAME                           DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
kube-prometheus-exporter-node  3        3        0      3           0          <none>         1s

==> v1beta1/PodSecurityPolicy
NAME                                 PRIV   CAPS  SELINUX   RUNASUSER  FSGROUP    SUPGROUP   READONLYROOTFS  VOLUMES
kube-prometheus-alertmanager         false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
kube-prometheus-exporter-kube-state  false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
kube-prometheus-exporter-node        false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim,hostPath
kube-prometheus-grafana              false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim,hostPath
kube-prometheus                      false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1/ServiceAccount
NAME                                 SECRETS  AGE
kube-prometheus-exporter-kube-state  1        1s
kube-prometheus-exporter-node        1        1s
kube-prometheus-grafana              1        1s
kube-prometheus                      1        1s

==> v1beta1/ClusterRoleBinding
NAME                                     AGE
psp-kube-prometheus-alertmanager         1s
kube-prometheus-exporter-kube-state      1s
psp-kube-prometheus-exporter-kube-state  1s
psp-kube-prometheus-exporter-node        1s
psp-kube-prometheus-grafana              1s
kube-prometheus                          1s
psp-kube-prometheus                      1s

==> v1beta1/Role
NAME                                 AGE
kube-prometheus-exporter-kube-state  1s

==> v1/Prometheus
NAME             AGE
kube-prometheus  1s

==> v1/ServiceMonitor
NAME                                              AGE
kube-prometheus-alertmanager                      0s
kube-prometheus-exporter-kube-controller-manager  0s
kube-prometheus-exporter-kube-dns                 0s
kube-prometheus-exporter-kube-etcd                0s
kube-prometheus-exporter-kube-scheduler           0s
kube-prometheus-exporter-kube-state               0s
kube-prometheus-exporter-kubelets                 0s
kube-prometheus-exporter-kubernetes               0s
kube-prometheus-exporter-node                     0s
kube-prometheus-grafana                           0s
kube-prometheus                                   0s

==> v1/ConfigMap
NAME                     DATA  AGE
kube-prometheus-grafana  10    1s

==> v1/Alertmanager
NAME             AGE
kube-prometheus  1s

==> v1/PrometheusRule
NAME                                              AGE
kube-prometheus-alertmanager                      1s
kube-prometheus-exporter-kube-controller-manager  1s
kube-prometheus-exporter-kube-etcd                1s
kube-prometheus-exporter-kube-scheduler           1s
kube-prometheus-exporter-kube-state               1s
kube-prometheus-exporter-kubelets                 1s
kube-prometheus-exporter-kubernetes               1s
kube-prometheus-exporter-node                     1s
kube-prometheus-rules                             1s
kube-prometheus                                   0s

NOTES:
DEPRECATION NOTICE:

- alertmanager.ingress.fqdn is not used anymore, use alertmanager.ingress.hosts []
- prometheus.ingress.fqdn is not used anymore, use prometheus.ingress.hosts []
- grafana.ingress.fqdn is not used anymore, use prometheus.grafana.hosts []
- additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- prometheus.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- alertmanager.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kube-controller-manager.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kube-etcd.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kube-scheduler.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kubelets.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kubernetes.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
Wait for the required images to finish downloading (some cannot be pulled directly from mainland China, so pulling them indirectly through Alibaba Cloud is recommended), then check the installation result.
Each Exporter has a corresponding Service that provides Prometheus with the various kinds of monitoring data about the Kubernetes cluster.

[root@master prometheus-operator]# kubectl get svc -n monitoring
NAME                                 TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)            AGE
alertmanager                         ClusterIP  10.102.68.166   <none>       9093/TCP           52m
alertmanager-operated                ClusterIP  None            <none>       9093/TCP,6783/TCP  52m
kube-prometheus                      ClusterIP  10.109.193.133  <none>       9090/TCP           3m42s
kube-prometheus-alertmanager         ClusterIP  10.110.67.174   <none>       9093/TCP           3m42s
kube-prometheus-exporter-kube-state  ClusterIP  10.101.225.77   <none>       80/TCP             3m42s
kube-prometheus-exporter-node        ClusterIP  10.103.162.196  <none>       9100/TCP           3m42s
kube-prometheus-grafana              ClusterIP  10.101.81.167   <none>       80/TCP             3m42s
prometheus                           ClusterIP  10.109.224.217  <none>       9090/TCP           65m
prometheus-operated                  ClusterIP  None            <none>       9090/TCP           65m
Each Service has a corresponding ServiceMonitor; together they make up Prometheus's Target list.
[root@master prometheus-operator]# kubectl get servicemonitor -n monitoring
NAME                                              AGE
alertmanager                                      14m
kube-prometheus                                   22m
kube-prometheus-alertmanager                      22m
kube-prometheus-exporter-kube-controller-manager  22m
kube-prometheus-exporter-kube-dns                 22m
kube-prometheus-exporter-kube-etcd                22m
kube-prometheus-exporter-kube-scheduler           22m
kube-prometheus-exporter-kube-state               22m
kube-prometheus-exporter-kubelets                 22m
kube-prometheus-exporter-kubernetes               22m
kube-prometheus-exporter-node                     22m
kube-prometheus-grafana                           22m
prometheus                                        16m
prometheus-operator                               2h
Below are all the Pods related to Prometheus Operator. Note that some Exporters do not run any Pods: Kubernetes components such as the API Server, Scheduler, and Kubelet natively support Prometheus, so only a Service needs to be defined to scrape monitoring data directly from their predefined ports.
[root@master prometheus-operator]# kubectl get pod -n monitoring
NAME                                                 READY  STATUS   RESTARTS  AGE
alertmanager-alertmanager-0                          2/2    Running  0         15m
alertmanager-kube-prometheus-0                       2/2    Running  0         15m
kube-prometheus-exporter-kube-state-dc6966bb5-r5kng  2/2    Running  0         25m
kube-prometheus-exporter-node-grtvz                  1/1    Running  0         25m
kube-prometheus-exporter-node-jfq79                  1/1    Running  0         25m
kube-prometheus-exporter-node-n79vq                  1/1    Running  0         25m
kube-prometheus-grafana-6f4bb75c95-bw72r             2/2    Running  0         25m
prometheus-kube-prometheus-0                         3/3    Running  1         15m
prometheus-operator-867bbfddbd-rxj6s                 1/1    Running  0         15m
prometheus-prometheus-0                              3/3    Running  1         15m
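The PrometheusRule resources listed in the chart output earlier are what carry the alerting rules. The sketch below is hypothetical: the rule name, labels, and five-minute threshold are illustrative, and the app: prometheus label is assumed to match the ruleSelector configured when installing the Prometheus chart.

```yaml
# Hypothetical PrometheusRule sketch: fires when a scrape target has been
# unreachable for five minutes. The Prometheus resource's ruleSelector
# decides which PrometheusRule objects are loaded.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules
  namespace: monitoring
  labels:
    app: prometheus              # assumed to match ruleSelector.app=prometheus
spec:
  groups:
    - name: example-alerts
      rules:
        - alert: InstanceDown
          expr: up == 0          # the target failed its last scrape
          for: 5m                # must stay down 5 minutes before firing
          labels:
            severity: critical
          annotations:
            summary: "Instance {{ $labels.instance }} is down"
```

When such a rule fires, Prometheus forwards the alert to Alertmanager, which handles grouping, silencing, and notification delivery.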
To make kube-prometheus-grafana easier to access, change the Service type to NodePort.
[root@master prometheus-operator]# kubectl patch svc kube-prometheus-grafana -p '{"spec":{"type":"NodePort"}}' -n monitoring
service/kube-prometheus-grafana patched
[root@master prometheus-operator]# kubectl patch svc kube-prometheus -p '{"spec":{"type":"NodePort"}}' -n monitoring
service/kube-prometheus patched
[root@master prometheus-operator]# kubectl get svc -n monitoring
NAME                                 TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)            AGE
alertmanager                         ClusterIP  10.110.11.161   <none>       9093/TCP           21m
alertmanager-operated                ClusterIP  None            <none>       9093/TCP,6783/TCP  19m
kube-prometheus                      NodePort   10.102.23.109   <none>       9090:31679/TCP     29m
kube-prometheus-alertmanager         NodePort   10.97.58.177    <none>       9093:31627/TCP     29m
kube-prometheus-exporter-kube-state  ClusterIP  10.110.185.195  <none>       80/TCP             29m
kube-prometheus-exporter-node        ClusterIP  10.111.98.237   <none>       9100/TCP           29m
kube-prometheus-grafana              NodePort   10.105.188.204  <none>       80:30357/TCP       29m
prometheus                           ClusterIP  10.111.72.32    <none>       9090/TCP           22m
prometheus-operated                  ClusterIP  None            <none>       9090/TCP           19m
Visit MASTER_IP:31679 to open the Prometheus UI, as shown below:

Visit MASTER_IP:31627 to open the Alertmanager UI, as shown below:

Visit MASTER_IP:30357/login to reach Grafana, then log in; the username and password are both admin.
The predefined dashboards can monitor the overall health of the Kubernetes cluster:

Resource usage of the whole cluster:

The status of each Kubernetes management component:

Resource usage of individual nodes:

The running status of Deployments:

The running status of Pods:

The running status of StatefulSets:
These dashboards cover everything from the cluster as a whole down to individual Pods, helping users operate Kubernetes more effectively. Prometheus Operator also iterates very quickly and will surely keep gaining features, so it is well worth the time to learn and practice.
Official documentation:
Reprinted from: https://blog.51cto.com/wzlinux/2335343