# Changelog
All notable changes from the upstream Prometheus Operator chart will be added to this file.
## [Package Version 00] - 2020-07-19
### Added
- Added Prometheus Adapter as a dependency to the upstream Prometheus Operator chart to allow users to expose custom metrics from the default Prometheus instance deployed by this chart
- Removed `prometheus-operator/cleanup-crds.yaml` and `prometheus-operator/crds.yaml` from the Prometheus Operator upstream chart in favor of just using the CRD directory to install the CRDs.
- Added support for `rkeControllerManager`, `rkeScheduler`, `rkeProxy`, and `rkeEtcd` PushProx exporters for monitoring k8s components within RKE clusters
- Added support for a `k3sServer` PushProx exporter that monitors k3s server components (`kubeControllerManager`, `kubeScheduler`, and `kubeProxy`) within k3s clusters
- Added support for `kubeAdmControllerManager`, `kubeAdmScheduler`, `kubeAdmProxy`, and `kubeAdmEtcd` PushProx exporters for monitoring k8s components within kubeAdm clusters
- Added support for `rke2ControllerManager`, `rke2Scheduler`, `rke2Proxy`, and `rke2Etcd` PushProx exporters for monitoring k8s components within rke2 clusters
- Exposed `prometheus.prometheusSpec.ignoreNamespaceSelectors` on values.yaml and set it to `false` by default. This value instructs the default Prometheus server deployed with this chart to ignore the `namespaceSelector` field within any created ServiceMonitor or PodMonitor CRs that it selects. This prevents ServiceMonitors and PodMonitors from configuring the Prometheus scrape configuration to monitor resources outside the namespace that they are deployed in; if a user needs to have one ServiceMonitor / PodMonitor monitor resources within several namespaces (such as the resources that are used to monitor Istio in a default installation), they should not enable this option since it would require them to create one ServiceMonitor / PodMonitor CR per namespace that they would like to monitor. Relevant fields were also updated in the default README.md.
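  As a sketch (the value path comes from the entry above; the surrounding keys follow the standard kube-prometheus values layout), enabling this behavior would look like:

  ```yaml
  # values.yaml sketch: force ServiceMonitors/PodMonitors to only scrape
  # their own namespace by ignoring their namespaceSelector fields
  prometheus:
    prometheusSpec:
      ignoreNamespaceSelectors: true  # false by default
  ```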
- Added `grafana.sidecar.dashboards.searchNamespace` to `values.yaml` with a default value of `cattle-dashboards`. The namespace provided should contain all ConfigMaps with the label `grafana_dashboard` and will be searched by the Grafana Dashboards sidecar for updates. The namespace specified is also created along with this deployment. All default dashboard ConfigMaps have been relocated from the deployment namespace to the namespace specified
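  For illustration, a dashboard ConfigMap the sidecar would discover might look like the following (the ConfigMap name and dashboard JSON are hypothetical; the `grafana_dashboard` label and `cattle-dashboards` namespace come from the entry above):

  ```yaml
  # Hypothetical dashboard ConfigMap picked up by the Grafana sidecar
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: my-custom-dashboard     # hypothetical name
    namespace: cattle-dashboards  # the default searchNamespace
    labels:
      grafana_dashboard: "1"      # label the sidecar watches for
  data:
    my-dashboard.json: |
      { "title": "My Dashboard", "panels": [] }
  ```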
- Added `monitoring-admin`, `monitoring-edit`, and `monitoring-view` default `ClusterRoles` to allow admins to assign roles to users to interact with Prometheus Operator CRs. These can be enabled by setting `.Values.global.rbac.userRoles.create` (default: `true`). In a typical RBAC setup, you might want to use a `ClusterRoleBinding` to bind these roles to a Subject to allow them to set up or view `ServiceMonitors` / `PodMonitors` / `PrometheusRules` and view `Prometheus` or `Alertmanager` CRs across the cluster. If `.Values.global.rbac.userRoles.aggregateRolesForRBAC` is enabled, these ClusterRoles will aggregate into the respective default ClusterRoles provided by Kubernetes
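  A typical binding as described above could be sketched as follows (the binding and user names are hypothetical; `monitoring-view` is one of the roles this chart provides):

  ```yaml
  # Hypothetical ClusterRoleBinding granting a user read-only access to monitoring CRs
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: jane-monitoring-view  # hypothetical
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: monitoring-view       # provided by this chart
  subjects:
  - kind: User
    name: jane                  # hypothetical user
    apiGroup: rbac.authorization.k8s.io
  ```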
- Added `monitoring-config-admin`, `monitoring-config-edit` and `monitoring-config-view` default `Roles` to allow admins to assign roles to users to be able to edit / view `Secrets` and `ConfigMaps` within the `cattle-monitoring-system` namespace. These can be enabled by setting `.Values.global.rbac.userRoles.create` (default: `true`). In a typical RBAC setup, you might want to use a `RoleBinding` to bind these roles to a Subject within the `cattle-monitoring-system` namespace to allow them to modify Secrets / ConfigMaps tied to the deployment, such as your Alertmanager Config Secret.
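  A namespaced binding of the kind described above could be sketched as (binding and user names are hypothetical):

  ```yaml
  # Hypothetical RoleBinding letting a user edit Secrets/ConfigMaps
  # (e.g. the Alertmanager Config Secret) in cattle-monitoring-system
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: jane-monitoring-config-edit  # hypothetical
    namespace: cattle-monitoring-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: monitoring-config-edit       # provided by this chart
  subjects:
  - kind: User
    name: jane                         # hypothetical user
    apiGroup: rbac.authorization.k8s.io
  ```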
- Added `monitoring-dashboard-admin`, `monitoring-dashboard-edit` and `monitoring-dashboard-view` default `Roles` to allow admins to assign roles to users to be able to edit / view `ConfigMaps` within the `cattle-dashboards` namespace. These can be enabled by setting `.Values.global.rbac.userRoles.create` (default: `true`) and deploying Grafana as part of this chart. In a typical RBAC setup, you might want to use a `RoleBinding` to bind these roles to a Subject within the `cattle-dashboards` namespace to allow them to create / modify ConfigMaps that contain the JSON used to persist Grafana Dashboards on the cluster.
- Added default resource limits for `Prometheus Operator`, `Prometheus`, `AlertManager`, `Grafana`, `kube-state-metrics`, `node-exporter`
- Added a default template `rancher_defaults.tmpl` to AlertManager that Rancher will offer to users in order to help configure the way alerts are rendered on a notifier. Also updated the default template deployed with this chart to reference that template and added an example of a Slack config using this template as a comment in the `values.yaml`.
- Added support for private registries by introducing a new field, `global.cattle.systemDefaultRegistry`, that, if supplied, will automatically be prepended onto every image used by the chart.
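  A sketch of what this looks like in values form (the registry hostname and image are examples, not defaults):

  ```yaml
  # Sketch: with this set, an image such as rancher/some-image:v1 would be
  # pulled as my-registry.example.com/rancher/some-image:v1
  global:
    cattle:
      systemDefaultRegistry: my-registry.example.com
  ```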
- Added a default `nginx` proxy container deployed with Grafana whose config is set in the `ConfigMap` located in `charts/grafana/templates/nginx-config.yaml`. The purpose of this container is to make it possible to view Grafana's UI through a proxy that has a subpath (e.g. Rancher's proxy). This proxy container is set to listen on port `8080` (with a `portName` of `nginx-http` instead of the default `service`), which is also where the Grafana service will now point to, and will forward all requests to the Grafana container listening on the default port `3000`.
- Added a default `nginx` proxy container deployed with Prometheus whose config is set in the `ConfigMap` located in `templates/prometheus/nginx-config.yaml`. The purpose of this container is to make it possible to view Prometheus's UI through a proxy that has a subpath (e.g. Rancher's proxy). This proxy container is set to listen on port `8081` (with a `portName` of `nginx-http` instead of the default `web`), which is also where the Prometheus service will now point to, and will forward all requests to the Prometheus container listening on the default port `9090`.
- Added support for passing CIS Scans in a hardened cluster by introducing a Job that patches the default service account within the `cattle-monitoring-system` and `cattle-dashboards` namespaces on install or upgrade and adding a default allow-all `NetworkPolicy` to the `cattle-monitoring-system` and `cattle-dashboards` namespaces.
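  An allow-all `NetworkPolicy` of the kind described above can be sketched as follows (the manifest name is hypothetical and the chart's shipped manifest may differ in detail):

  ```yaml
  # Sketch of an allow-all NetworkPolicy for the cattle-monitoring-system
  # namespace; an analogous policy would cover cattle-dashboards
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-allow-all  # hypothetical name
    namespace: cattle-monitoring-system
  spec:
    podSelector: {}          # select all pods in the namespace
    ingress:
    - {}                     # allow all ingress
    egress:
    - {}                     # allow all egress
    policyTypes:
    - Ingress
    - Egress
  ```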
### Modified
- Updated the chart name from `prometheus-operator` to `rancher-monitoring` and added the `io.rancher.certified: rancher` annotation to `Chart.yaml`
- Modified the default `node-exporter` port from `9100` to `9796`
- Modified the default `nameOverride` to `rancher-monitoring`. This change is necessary as the Prometheus Adapter's default URL (`http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc`) is based off of the value used here; if modified, the default Adapter URL must also be modified
- Modified the default `namespaceOverride` to `cattle-monitoring-system`. This change is necessary as the Prometheus Adapter's default URL (`http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc`) is based off of the value used here; if modified, the default Adapter URL must also be modified
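  A sketch of overriding both values together (the override values are hypothetical, and the `prometheus-adapter` subchart key and `prometheus.url` field are assumptions based on the upstream Prometheus Adapter chart's values layout):

  ```yaml
  # Sketch: both overrides plus the matching Adapter URL update
  nameOverride: my-monitoring          # hypothetical override
  namespaceOverride: my-monitoring-ns  # hypothetical override
  prometheus-adapter:
    prometheus:
      # must be kept in sync with the two overrides above
      url: http://my-monitoring-prometheus.my-monitoring-ns.svc
  ```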
- Configured some default values for `grafana.service` values and exposed them in the default README.md
- The default namespaces the following ServiceMonitors are deployed in were changed from the deployment namespace to allow them to continue to monitor metrics when `prometheus.prometheusSpec.ignoreNamespaceSelectors` is enabled:
  - `core-dns`: `kube-system`
  - `api-server`: `default`
  - `kube-controller-manager`: `kube-system`
  - `kubelet`: `{{ .Values.kubelet.namespace }}`
- Disabled the following deployments by default (can be enabled if required):
  - `AlertManager`
  - `kube-controller-manager` metrics exporter
  - `kube-etcd` metrics exporter
  - `kube-scheduler` metrics exporter
  - `kube-proxy` metrics exporter
- Updated default Grafana `deploymentStrategy` to `Recreate` to prevent deployments from being stuck on upgrade if a PV is attached to Grafana
- Modified the default `<serviceMonitor|podMonitor|rule>SelectorNilUsesHelmValues` to default to `false`. As a result, we look for all CRs with any labels in all namespaces by default, rather than just the ones tagged with the label `release: rancher-monitoring`.
- Modified the default images used by the `rancher-monitoring` chart to point to Rancher mirrors of the original images from upstream.
- Modified the behavior of the chart to create the Alertmanager Config Secret via a pre-install hook instead of using the normal Helm lifecycle to manage the secret. The benefit of this approach is that all changes to the Config Secret done on a live cluster will never get overridden on a `helm upgrade`, since the secret only gets created on a `helm install`. If you would like the secret to be cleaned up on a `helm uninstall`, enable `alertmanager.cleanupOnUninstall`; however, this is disabled by default to prevent the loss of alerting configuration on an uninstall. This secret will never be modified on a `helm upgrade`.
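  The selector and cleanup toggles described above live in `values.yaml`; a sketch of restoring upstream-style label filtering and enabling cleanup (value paths taken from the entries above):

  ```yaml
  # Sketch: only select CRs labeled release: rancher-monitoring, and clean up
  # the Alertmanager Config Secret on uninstall
  prometheus:
    prometheusSpec:
      serviceMonitorSelectorNilUsesHelmValues: true  # chart default: false
      podMonitorSelectorNilUsesHelmValues: true      # chart default: false
      ruleSelectorNilUsesHelmValues: true            # chart default: false
  alertmanager:
    cleanupOnUninstall: true  # disabled by default
  ```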
- Modified the default `securityContext` for `Pod` templates across the chart to `{"runAsNonRoot": "true", "runAsUser": "1000"}` and replaced `grafana.rbac.pspUseAppArmor` in favor of `grafana.rbac.pspAnnotations={}` in order to make it possible to deploy this chart on a hardened cluster which does not support Seccomp or AppArmor annotations in PSPs. Users can always choose to specify the annotations they want to use for the PSP directly as part of the values provided.
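  Expressed as the resulting pod spec fragment (note that in an actual `PodSpec` these fields are a boolean and an integer rather than strings; the exact values path varies per component):

  ```yaml
  # The default pod-level securityContext described above
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  ```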
- Modified `.Values.prometheus.prometheusSpec.containers` to take in a string representing a template that should be rendered by Helm (via `tpl`) instead of allowing a user to provide YAML directly.
- Modified the default Grafana configuration to auto assign users who access Grafana to the Viewer role and enable anonymous access to Grafana dashboards by default. This default works well for a Rancher user who is accessing Grafana via the `kubectl proxy` on the Rancher Dashboard UI, since anonymous users who enter via the proxy are authenticated by the k8s API Server, but you can / should modify this behavior if you plan on exposing Grafana in a way that does not require authentication (e.g. as a `NodePort` service).
- Modified the default Grafana configuration to add a default dashboard for Rancher on the Grafana home page.
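One of the changes above — rendering `.Values.prometheus.prometheusSpec.containers` through `tpl` — means the value is now supplied as a string; a sketch (the injected container, image, and port are hypothetical):

```yaml
# Sketch: containers provided as a template string rendered via tpl
prometheus:
  prometheusSpec:
    containers: |-
      - name: extra-sidecar            # hypothetical container
        image: example/sidecar:v1      # hypothetical image
        ports:
        - containerPort: 8080
          name: extra-http
```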