<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:base="https://prometheus.io/">
  <id>https://prometheus.io/</id>
  <title>Prometheus Blog</title>
  <updated>2024-11-19T00:00:00Z</updated>
  <link rel="alternate" href="https://prometheus.io/" type="text/html"/>
  <link rel="self" href="https://prometheus.io/blog/feed.xml" type="application/atom+xml"/>
  <author>
    <name>Prometheus Authors</name>
    <uri>https://prometheus.io/blog/</uri>
  </author>
  <icon>https://prometheus.io/assets/favicons/favicon.ico</icon>
  <logo>https://prometheus.io/assets/prometheus_logo.png</logo>
  <entry>
    <id>tag:prometheus.io,2024-11-19:/blog/2024/11/19/yace-joining-prometheus-community/</id>
    <title type="html">YACE is joining Prometheus Community</title>
    <published>2024-11-19T00:00:00Z</published>
    <updated>2024-11-19T00:00:00Z</updated>
    <author>
      <name>Thomas Peitz (@thomaspeitz)</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2024/11/19/yace-joining-prometheus-community/" type="text/html"/>
    <content type="html">&lt;p&gt;&lt;a href="https://github.com/prometheus-community/yet-another-cloudwatch-exporter"&gt;Yet Another Cloudwatch Exporter&lt;/a&gt; (YACE) has officially joined the Prometheus community! This move will make it more accessible to users and open new opportunities for contributors to enhance and maintain the project. There's also a blog post from &lt;a href="https://grafana.com/blog/2024/11/19/yace-moves-to-prometheus-community/"&gt;Cristian Greco's point of view&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="the-early-days"&gt;The early days&lt;a class="header-anchor" href="#the-early-days" name="the-early-days"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;When I first started YACE, I had no idea it would grow to this scale. At the time, I was working at &lt;a href="https://www.ivx.com"&gt;InVision AG&lt;/a&gt; (not to be confused with the design app), a company focused on workforce management software. They fully supported me in open-sourcing the tool, and with the help of my teammate &lt;a href="https://github.com/kforsthoevel"&gt;Kai Forsthövel&lt;/a&gt;, YACE was brought to life.&lt;/p&gt;

&lt;p&gt;Our first commit was back in 2018, with one of our primary goals being to make CloudWatch metrics easy to scale and automatically detect what to measure, all while keeping the user experience simple and intuitive. InVision AG was scaling their infrastructure up and down due to machine learning workloads and we needed something that detects new infrastructure easily. This focus on simplicity has remained a core priority. From that point on, YACE began to find its audience.&lt;/p&gt;

&lt;h2 id="yace-gains-momentum"&gt;Yace Gains Momentum&lt;a class="header-anchor" href="#yace-gains-momentum" name="yace-gains-momentum"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;As YACE expanded, so did the support around it. One pivotal moment was when &lt;a href="https://github.com/cristiangreco"&gt;Cristian Greco&lt;/a&gt; from Grafana Labs reached out. I was feeling overwhelmed and hardly keeping up when Cristian stepped in, simply asking where he could help. He quickly became the main releaser and led Grafana Labs' contributions to YACE, a turning point that made a huge impact on the project. Along with an incredible community of contributors from all over the world, they elevated YACE beyond what I could have achieved alone, shaping it into a truly global tool. YACE is no longer just my project or InVision's—it belongs to the community.&lt;/p&gt;

&lt;h2 id="gratitude-and-future-vision"&gt;Gratitude and Future Vision&lt;a class="header-anchor" href="#gratitude-and-future-vision" name="gratitude-and-future-vision"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;I am immensely grateful to every developer, tester, and user who has contributed to YACE's success. This journey has shown me the power of community and open source collaboration. But we're not done yet.&lt;/p&gt;

&lt;p&gt;It's time to take YACE even further—into the heart of the Prometheus ecosystem. Making YACE the official Amazon CloudWatch exporter for Prometheus will make it easier and more accessible for everyone. With ongoing support from Grafana Labs and my commitment to refining the user experience, we'll ensure YACE becomes an intuitive tool that anyone can use effortlessly.&lt;/p&gt;

&lt;h2 id="try-out-yace-on-your-own"&gt;Try out YACE on your own&lt;a class="header-anchor" href="#try-out-yace-on-your-own" name="try-out-yace-on-your-own"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Try out &lt;strong&gt;&lt;a href="https://github.com/prometheus-community/yet-another-cloudwatch-exporter"&gt;YACE (Yet Another CloudWatch Exporter)&lt;/a&gt;&lt;/strong&gt; by following our step-by-step &lt;a href="https://github.com/prometheus-community/yet-another-cloudwatch-exporter/blob/master/docs/installation.md"&gt;Installation Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can explore various configuration examples &lt;a href="https://github.com/prometheus-community/yet-another-cloudwatch-exporter/tree/master/examples"&gt;here&lt;/a&gt; to get started with monitoring specific AWS services.&lt;/p&gt;
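
&lt;p&gt;For a quick taste, a minimal auto-discovery configuration that scrapes EC2 CPU metrics from a single region might look roughly like this (the exact field names are best checked against the linked examples, as the schema evolves):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1alpha1
discovery:
  jobs:
    - type: AWS/EC2
      regions:
        - eu-west-1
      metrics:
        - name: CPUUtilization
          statistics:
            - Average
          period: 300
          length: 300
&lt;/code&gt;&lt;/pre&gt;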

&lt;p&gt;Our goal is to enable easy auto-discovery across all AWS services, making it simple to monitor any dynamic infrastructure.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2024-11-14:/blog/2024/11/14/prometheus-3-0/</id>
    <title type="html">Announcing Prometheus 3.0</title>
    <published>2024-11-14T00:00:00Z</published>
    <updated>2024-11-14T00:00:00Z</updated>
    <author>
      <name>The Prometheus Team</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2024/11/14/prometheus-3-0/" type="text/html"/>
    <content type="html">&lt;!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"&gt;
&lt;html&gt;&lt;body&gt;
&lt;p&gt;Following the recent release of &lt;a href="https://prometheus.io/blog/2024/09/11/prometheus-3-beta/"&gt;Prometheus 3.0 beta&lt;/a&gt; at PromCon in Berlin, the Prometheus Team
is excited to announce the immediate availability of Prometheus Version 3.0!&lt;/p&gt;

&lt;p&gt;This latest version marks a significant milestone as it is the first major release in 7 years. Prometheus has come a long way in that time, 
evolving from a project for early adopters to becoming a standard part of the cloud native monitoring stack. Prometheus 3.0 aims to 
continue that journey by adding some exciting new features while largely maintaining stability and compatibility with previous versions.&lt;/p&gt;

&lt;p&gt;The full 3.0 release adds some new features on top of the beta and also introduces a few additional breaking changes that we will describe in this article.&lt;/p&gt;

&lt;h1 id="whats-new" class="page-header"&gt;What's New&lt;a class="header-anchor" href="#whats-new" name="whats-new"&gt;&lt;/a&gt;
&lt;/h1&gt;
&lt;div class="toc toc-right"&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#new-ui"&gt;New UI
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#remote-write-2-0"&gt;Remote Write 2.0
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#utf-8-support"&gt;UTF-8 Support
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#otlp-support"&gt;OTLP Support
&lt;/a&gt;&lt;/li&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#otlp-ingestion"&gt;OTLP Ingestion
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#utf-8-normalization"&gt;UTF-8 Normalization
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;li&gt;&lt;a href="#native-histograms"&gt;Native Histograms
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#breaking-changes"&gt;Breaking Changes
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;

&lt;p&gt;Here is a summary of the exciting changes that have been released as part of the beta version, as well as what has been added since:&lt;/p&gt;

&lt;h2 id="new-ui"&gt;New UI&lt;a class="header-anchor" href="#new-ui" name="new-ui"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;One of the highlights in Prometheus 3.0 is its brand-new UI that is enabled by default:&lt;/p&gt;

&lt;p&gt;&lt;img src="/assets/blog/2024-11-14/blog_post_screenshot_tree_view-s.png" alt="New UI query page"&gt;&lt;/p&gt;

&lt;p&gt;The UI has been completely rewritten with less clutter, a more modern look and feel, and new features like a &lt;a href="https://promlens.com/"&gt;&lt;strong&gt;PromLens&lt;/strong&gt;&lt;/a&gt;-style tree view; it will also make future maintenance easier by using a more modern technical stack.&lt;/p&gt;

&lt;p&gt;Learn more about the new UI in general in &lt;a href="https://promlabs.com/blog/2024/09/11/a-look-at-the-new-prometheus-3-0-ui/"&gt;Julius' detailed article on the PromLabs blog&lt;/a&gt;.
Users can temporarily enable the old UI by using the &lt;code&gt;old-ui&lt;/code&gt; feature flag.&lt;/p&gt;
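
&lt;p&gt;For example, assuming a default installation, starting Prometheus with the old UI looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;./prometheus --enable-feature=old-ui
&lt;/code&gt;&lt;/pre&gt;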

&lt;p&gt;Since the new UI is not battle-tested yet, it is also very possible that there are still bugs. If you find any, please 
&lt;a href="https://github.com/prometheus/prometheus/issues/new?assignees=&amp;amp;labels=&amp;amp;projects=&amp;amp;template=bug_report.yml"&gt;report them on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since the beta, the user interface has been updated to support UTF-8 metric and label names.&lt;/p&gt;

&lt;p&gt;&lt;img src="/assets/blog/2024-11-14/utf8_ui.png" alt="New UTF-8 UI"&gt;&lt;/p&gt;

&lt;h2 id="remote-write-2-0"&gt;Remote Write 2.0&lt;a class="header-anchor" href="#remote-write-2-0" name="remote-write-2-0"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Remote-Write 2.0 iterates on the previous protocol version by adding native support for a host of new elements including metadata, exemplars,
created timestamp and native histograms. It also uses string interning to reduce payload size and CPU usage when compressing and decompressing. 
There is better handling for partial writes to provide more details to clients when this occurs. More details can be found
&lt;a href="https://prometheus.io/docs/specs/remote_write_spec_2_0/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="utf-8-support"&gt;UTF-8 Support&lt;a class="header-anchor" href="#utf-8-support" name="utf-8-support"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Prometheus now allows all valid UTF-8 characters in metric and label names by default, as well as in label values, where UTF-8 was already permitted in version 2.x.&lt;/p&gt;

&lt;p&gt;Users will need to make sure their metrics producers are configured to pass UTF-8 names; if either side does not support UTF-8, metric names will be escaped using the traditional underscore-replacement method. PromQL queries can be written with the new quoting syntax in order to retrieve UTF-8 metrics, or users can specify the &lt;code&gt;__name__&lt;/code&gt; label manually.&lt;/p&gt;
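
&lt;p&gt;For example, a metric named &lt;code&gt;http.server.request.duration&lt;/code&gt; (an illustrative name) can be selected either with the quoting syntax or via &lt;code&gt;__name__&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{"http.server.request.duration", "http.status_code"="200"}

{__name__="http.server.request.duration"}
&lt;/code&gt;&lt;/pre&gt;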

&lt;p&gt;Currently only the Go client library has been updated to support UTF-8, but support for other languages will be added soon.&lt;/p&gt;

&lt;h2 id="otlp-support"&gt;OTLP Support&lt;a class="header-anchor" href="#otlp-support" name="otlp-support"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In alignment with &lt;a href="https://prometheus.io/blog/2024/03/14/commitment-to-opentelemetry/"&gt;our commitment to OpenTelemetry&lt;/a&gt;, Prometheus 3.0 introduces several new features to improve interoperability with OpenTelemetry.&lt;/p&gt;

&lt;h3 id="otlp-ingestion"&gt;OTLP Ingestion&lt;a class="header-anchor" href="#otlp-ingestion" name="otlp-ingestion"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus can be configured as a native receiver for the OTLP Metrics protocol, receiving OTLP metrics on the &lt;code&gt;/api/v1/otlp/v1/metrics&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;See our &lt;a href="https://prometheus.io/docs/guides/opentelemetry"&gt;guide&lt;/a&gt; on best practices for consuming OTLP metric traffic into Prometheus.&lt;/p&gt;
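
&lt;p&gt;As a minimal sketch, the receiver is switched on with a dedicated flag, and an OpenTelemetry SDK or Collector is then pointed at the endpoint above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;./prometheus --web.enable-otlp-receiver
&lt;/code&gt;&lt;/pre&gt;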

&lt;h3 id="utf-8-normalization"&gt;UTF-8 Normalization&lt;a class="header-anchor" href="#utf-8-normalization" name="utf-8-normalization"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;With Prometheus 3.0, thanks to &lt;a href="#utf-8-support"&gt;UTF-8 support&lt;/a&gt;, users can store and query OpenTelemetry metrics without annoying changes to metric and label names like &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/translator/prometheus"&gt;changing dots to underscores&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Notably, this means &lt;strong&gt;less confusion&lt;/strong&gt; for users and tooling about the discrepancy between what’s defined in the OpenTelemetry semantic conventions or SDK and what’s actually queryable.&lt;/p&gt;

&lt;p&gt;To achieve this for OTLP ingestion, Prometheus 3.0 has experimental support for different translation strategies. See &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#:%7E:text=Settings%20related%20to%20the%20OTLP%20receiver%20feature"&gt;otlp section in the Prometheus configuration&lt;/a&gt; for details.&lt;/p&gt;

&lt;blockquote&gt;
&lt;div class="admonition-wrapper note"&gt;&lt;div class="admonition alert alert-info"&gt;
&lt;strong&gt;NOTE:&lt;/strong&gt; While “NoUTF8EscapingWithSuffixes” strategy allows special characters, it still adds required suffixes for the best experience. See &lt;a href="https://github.com/prometheus/proposals/pull/39"&gt;the proposal on the future work to enable no suffixes&lt;/a&gt; in Prometheus.&lt;/div&gt;&lt;/div&gt;
&lt;/blockquote&gt;
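
&lt;p&gt;For instance, keeping dots in OpenTelemetry names while still appending the recommended suffixes can be sketched in the configuration file as follows (strategy names per the configuration documentation linked above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;otlp:
  # Keep UTF-8 characters such as dots; type/unit suffixes are still added.
  translation_strategy: NoUTF8EscapingWithSuffixes
&lt;/code&gt;&lt;/pre&gt;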

&lt;h2 id="native-histograms"&gt;Native Histograms&lt;a class="header-anchor" href="#native-histograms" name="native-histograms"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Native histograms are a Prometheus metric type that offers a higher-efficiency, lower-cost alternative to classic histograms. Rather than requiring users to choose (and potentially update) bucket boundaries based on the data set, native histograms use pre-set bucket boundaries based on exponential growth.&lt;/p&gt;

&lt;p&gt;Native histograms are still experimental and not yet enabled by default; they can be turned on by passing &lt;code&gt;--enable-feature=native-histograms&lt;/code&gt;. Some aspects of native histograms, like the text format and accessor functions / operators, are still under active design.&lt;/p&gt;
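
&lt;p&gt;As an illustration of how querying changes (the metric name is hypothetical): a classic histogram quantile is computed from its &lt;code&gt;_bucket&lt;/code&gt; series, while a native histogram is queried directly, with no &lt;code&gt;_bucket&lt;/code&gt; suffix and no &lt;code&gt;le&lt;/code&gt; label:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Classic histogram:
histogram_quantile(0.9, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))

# Native histogram:
histogram_quantile(0.9, sum(rate(http_request_duration_seconds[5m])))
&lt;/code&gt;&lt;/pre&gt;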

&lt;h2 id="breaking-changes"&gt;Breaking Changes&lt;a class="header-anchor" href="#breaking-changes" name="breaking-changes"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Prometheus community strives to &lt;a href="https://prometheus.io/docs/prometheus/latest/stability/"&gt;not break existing features within a major release&lt;/a&gt;. With a new major release, we took the opportunity to clean up a few small, long-standing issues. In other words, Prometheus 3.0 contains a few breaking changes. This includes changes to feature flags, configuration files, PromQL, and scrape protocols.&lt;/p&gt;

&lt;p&gt;Please read the &lt;a href="https://prometheus.io/docs/prometheus/3.0/migration/"&gt;migration guide&lt;/a&gt; to find out if your setup is affected and what actions to take.&lt;/p&gt;

&lt;h1 id="performance" class="page-header"&gt;Performance&lt;a class="header-anchor" href="#performance" name="performance"&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;It’s impressive to see what we have accomplished in the community since Prometheus 2.0. We all love numbers, so let’s celebrate the efficiency improvements we made for both CPU and memory use in TSDB mode. Below you can see performance numbers across three Prometheus versions, measured on a node with 8 CPUs and 49 GB of allocatable memory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2.0.0 (7 years ago)&lt;/li&gt;
&lt;li&gt;2.18.0 (4 years ago)&lt;/li&gt;
&lt;li&gt;3.0.0 (now)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="/assets/blog/2024-11-14/memory_bytes_ui.png" alt="Memory bytes"&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="/assets/blog/2024-11-14/cpu_seconds_ui.png" alt="CPU seconds"&gt;&lt;/p&gt;

&lt;p&gt;It’s furthermore impressive that those numbers were taken using our &lt;a href="https://github.com/prometheus/prometheus/pull/15366"&gt;prombench macrobenchmark&lt;/a&gt; 
that uses the same PromQL queries, configuration, and environment, highlighting backward compatibility and stability for the core features, even with 3.0.&lt;/p&gt;

&lt;h1 id="whats-next" class="page-header"&gt;What's Next&lt;a class="header-anchor" href="#whats-next" name="whats-next"&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;There are still tons of exciting features and improvements we can make in Prometheus and the ecosystem. Here is a non-exhaustive list to get you excited and… 
hopefully motivate you to contribute and join us!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New, more inclusive &lt;strong&gt;governance&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;More &lt;strong&gt;OpenTelemetry&lt;/strong&gt; compatibility and features&lt;/li&gt;
&lt;li&gt;OpenMetrics 2.0, now under Prometheus governance!&lt;/li&gt;
&lt;li&gt;Native Histograms stability (and with custom buckets!)&lt;/li&gt;
&lt;li&gt;More optimizations!&lt;/li&gt;
&lt;li&gt;UTF-8 support coverage in more SDKs and tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id="try-it-out" class="page-header"&gt;Try It Out!&lt;a class="header-anchor" href="#try-it-out" name="try-it-out"&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;You can try out Prometheus 3.0 by downloading it from our &lt;a href="https://prometheus.io/download/#prometheus"&gt;official binaries&lt;/a&gt; and &lt;a href="https://quay.io/repository/prometheus/prometheus?tab=tags"&gt;container images&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are upgrading from Prometheus 2.x, check out the migration guide for more information on any adjustments you will have to make.
Please note that we strongly recommend upgrading to v2.55 before upgrading to v3.0. Rollback is possible from v3.0 to v2.55, but not to earlier versions.&lt;/p&gt;

&lt;p&gt;As always, we welcome feedback and contributions from the community!&lt;/p&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2024-09-11:/blog/2024/09/11/prometheus-3-beta/</id>
    <title type="html">Prometheus 3.0 Beta Released</title>
    <published>2024-09-11T00:00:00Z</published>
    <updated>2024-09-11T00:00:00Z</updated>
    <author>
      <name>The Prometheus Team</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2024/09/11/prometheus-3-beta/" type="text/html"/>
    <content type="html">&lt;!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"&gt;
&lt;html&gt;&lt;body&gt;
&lt;p&gt;The Prometheus Team is proud to announce the availability of Prometheus Version 3.0-beta!
You can download it &lt;a href="https://github.com/prometheus/prometheus/releases/tag/v3.0.0-beta.0"&gt;here&lt;/a&gt;.
As is traditional with a beta release, we do &lt;strong&gt;not&lt;/strong&gt; recommend users install Prometheus 3.0-beta on critical production systems, but we do want everyone to test it out and find bugs.&lt;/p&gt;

&lt;p&gt;In general, the only breaking changes are the removal of deprecated feature flags. The Prometheus team worked hard to ensure backwards-compatibility and not to break existing installations, so all of the new features described below build on top of existing functionality. Most users should be able to try Prometheus 3.0 out of the box without any configuration changes.&lt;/p&gt;

&lt;h1 id="whats-new" class="page-header"&gt;What's New&lt;a class="header-anchor" href="#whats-new" name="whats-new"&gt;&lt;/a&gt;
&lt;/h1&gt;
&lt;div class="toc toc-right"&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#new-ui"&gt;New UI
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#remote-write-2-0"&gt;Remote Write 2.0
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#opentelemetry-support"&gt;OpenTelemetry Support
&lt;/a&gt;&lt;/li&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#utf-8"&gt;UTF-8
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#otlp-ingestion"&gt;OTLP Ingestion
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;li&gt;&lt;a href="#native-histograms"&gt;Native Histograms
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#other-breaking-changes"&gt;Other Breaking Changes
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;

&lt;p&gt;With over 7500 commits in the 7 years since Prometheus 2.0 came out, there are too many new individual features and fixes to list, but there are some big, shiny, and breaking changes we wanted to call out. We need everyone in the community to try them out and report any issues you might find.
The more feedback we get, the more stable the final 3.0 release can be.&lt;/p&gt;

&lt;h2 id="new-ui"&gt;New UI&lt;a class="header-anchor" href="#new-ui" name="new-ui"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;One of the highlights in Prometheus 3.0 is its brand new UI that is enabled by default:&lt;/p&gt;

&lt;p&gt;&lt;img src="/assets/blog/2024-09-11/blog_post_screenshot_tree_view-s.png" alt="New UI query page"&gt;&lt;/p&gt;

&lt;p&gt;The UI has been completely rewritten with less clutter, a more modern look and feel, and new features like a &lt;a href="https://promlens.com/"&gt;&lt;strong&gt;PromLens&lt;/strong&gt;&lt;/a&gt;-style tree view; it will also make future maintenance easier by using a more modern technical stack.&lt;/p&gt;

&lt;p&gt;Learn more about the new UI in general in &lt;a href="https://promlabs.com/blog/2024/09/11/a-look-at-the-new-prometheus-3-0-ui/"&gt;Julius' detailed article on the PromLabs blog&lt;/a&gt;.
Users can temporarily enable the old UI by using the &lt;code&gt;old-ui&lt;/code&gt; feature flag.
Since the new UI is not battle-tested yet, it is also very possible that there are still bugs. If you find any, please &lt;a href="https://github.com/prometheus/prometheus/issues/new?assignees=&amp;amp;labels=&amp;amp;projects=&amp;amp;template=bug_report.yml"&gt;report them on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="remote-write-2-0"&gt;Remote Write 2.0&lt;a class="header-anchor" href="#remote-write-2-0" name="remote-write-2-0"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Remote-Write 2.0 iterates on the previous protocol version by adding native support for a host of new elements including metadata, exemplars, created timestamp and native histograms. It also uses string interning to reduce payload size and CPU usage when compressing and decompressing. More details can be found &lt;a href="https://prometheus.io/docs/specs/remote_write_spec_2_0/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="opentelemetry-support"&gt;OpenTelemetry Support&lt;a class="header-anchor" href="#opentelemetry-support" name="opentelemetry-support"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Prometheus intends to be the default choice for storing OpenTelemetry metrics, and 3.0 includes some big new features that make it even better as a storage backend for OpenTelemetry metrics data.&lt;/p&gt;

&lt;h3 id="utf-8"&gt;UTF-8&lt;a class="header-anchor" href="#utf-8" name="utf-8"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;By default, Prometheus will now allow all valid UTF-8 characters to be used in metric and label names, as well as in label values, where UTF-8 was already permitted in version 2.x.&lt;/p&gt;

&lt;p&gt;Users will need to make sure their metrics producers are configured to pass UTF-8 names; if either side does not support UTF-8, metric names will be escaped using the traditional underscore-replacement method. PromQL queries can be written with the new quoting syntax in order to retrieve UTF-8 metrics, or users can specify the &lt;code&gt;__name__&lt;/code&gt; label manually.&lt;/p&gt;

&lt;p&gt;Not all language bindings have been updated with support for UTF-8, but the primary Go libraries have been.&lt;/p&gt;

&lt;h3 id="otlp-ingestion"&gt;OTLP Ingestion&lt;a class="header-anchor" href="#otlp-ingestion" name="otlp-ingestion"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus can be configured as a native receiver for the OTLP Metrics protocol, receiving OTLP metrics on the &lt;code&gt;/api/v1/otlp/v1/metrics&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;h2 id="native-histograms"&gt;Native Histograms&lt;a class="header-anchor" href="#native-histograms" name="native-histograms"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Native histograms are a Prometheus metric type that offers a higher-efficiency, lower-cost alternative to classic histograms. Rather than requiring users to choose (and potentially update) bucket boundaries based on the data set, native histograms use pre-set bucket boundaries based on exponential growth.&lt;/p&gt;

&lt;p&gt;Native histograms are still experimental and not yet enabled by default; they can be turned on by passing &lt;code&gt;--enable-feature=native-histograms&lt;/code&gt;. Some aspects of native histograms, like the text format and accessor functions / operators, are still under active design.&lt;/p&gt;

&lt;h2 id="other-breaking-changes"&gt;Other Breaking Changes&lt;a class="header-anchor" href="#other-breaking-changes" name="other-breaking-changes"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The following feature flags have been removed; their behavior is now enabled by default. References to these flags should be removed from configs, as they will be ignored by Prometheus starting with version 3.0.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;promql-at-modifier&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;promql-negative-offset&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;remote-write-receiver&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;no-scrape-default-port&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;new-service-discovery-manager&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
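
&lt;p&gt;Concretely, a 2.x invocation that carries any of these flags can simply drop them when moving to 3.0, since the behavior is now built in (binary path illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Prometheus 2.x:
./prometheus --enable-feature=promql-at-modifier,promql-negative-offset

# Prometheus 3.0 (both behaviors are on by default):
./prometheus
&lt;/code&gt;&lt;/pre&gt;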

&lt;p&gt;Range selections are now &lt;a href="https://github.com/prometheus/prometheus/issues/13213"&gt;left-open and right-closed&lt;/a&gt;, which avoids rare cases where more points than intended were included in operations.&lt;/p&gt;
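
&lt;p&gt;As an illustrative example, for an evaluation at time &lt;code&gt;t&lt;/code&gt; the selector &lt;code&gt;foo[1m]&lt;/code&gt; now covers the interval &lt;code&gt;(t-1m, t]&lt;/code&gt;, so a sample whose timestamp is exactly &lt;code&gt;t-1m&lt;/code&gt; is no longer included:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Counts samples with timestamps in (t-1m, t] at evaluation time t.
count_over_time(foo[1m])
&lt;/code&gt;&lt;/pre&gt;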

&lt;p&gt;Agent mode is now stable and has its own config flag instead of a feature flag.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2024-03-13:/blog/2024/03/14/commitment-to-opentelemetry/</id>
    <title type="html">Our commitment to OpenTelemetry</title>
    <published>2024-03-13T00:00:00Z</published>
    <updated>2024-03-13T00:00:00Z</updated>
    <author>
      <name>Goutham Veeramachaneni (@Gouthamve) and Carrie Edwards (@carrieedwards)</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2024/03/14/commitment-to-opentelemetry/" type="text/html"/>
    <content type="html">&lt;p&gt;&lt;em&gt;The &lt;a href="https://opentelemetry.io/"&gt;OpenTelemetry project&lt;/a&gt; is an Observability framework and toolkit designed to create and manage telemetry data such as traces, metrics, and logs. It is gaining widespread adoption due to its consistent specification between signals and promise to reduce vendor lock-in which is something that we’re excited about.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id="looking-back-at-2023"&gt;Looking back at 2023&lt;a class="header-anchor" href="#looking-back-at-2023" name="looking-back-at-2023"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Over the past few years, we have collaborated with the OpenTelemetry community to make sure that OpenTelemetry and Prometheus support each other bidirectionally. This led to the drafting of the official specification to convert between the two systems, as well as the implementations that allow you to ingest Prometheus metrics into OpenTelemetry Collector and vice-versa.&lt;/p&gt;

&lt;p&gt;Since then, we have spent a significant amount of time understanding the &lt;a href="https://docs.google.com/document/d/1epvoO_R7JhmHYsII-GJ6Yw99Ky91dKOqOtZGqX7Bk0g/edit?usp=sharing"&gt;challenges faced by OpenTelemetry users&lt;/a&gt; when storing their metrics in Prometheus and, based on those, explored &lt;a href="https://docs.google.com/document/d/1NGdKqcmDExynRXgC_u1CDtotz9IUdMrq2yyIq95hl70/edit?usp=sharing"&gt;how we can address them&lt;/a&gt;. Some of the proposed changes need careful consideration to avoid breaking either side's operating promises, e.g. supporting both push and pull. At PromCon Berlin 2023, we attempted to summarize our ideas in &lt;a href="https://www.youtube.com/watch?v=mcabOH70FqU"&gt;one of the talks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At our &lt;a href="https://docs.google.com/document/d/11LC3wJcVk00l8w5P3oLQ-m3Y37iom6INAMEu2ZAGIIE/edit#bookmark=id.9kp854ea3sv4"&gt;dev summit in Berlin&lt;/a&gt;, we spent the majority of our time discussing these changes and our general stance on OpenTelemetry in depth, and the broad consensus is that we want &lt;a href="https://docs.google.com/document/d/11LC3wJcVk00l8w5P3oLQ-m3Y37iom6INAMEu2ZAGIIE/edit#bookmark=id.196i9ij1u7fs"&gt;“to be the default store for OpenTelemetry metrics”&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;We’ve formed a core group of developers to lead this initiative, and we are going to release Prometheus 3.0 in 2024 with OTel support as one of its most important features. Here’s a sneak peek at what's coming in 2024.&lt;/p&gt;

&lt;h2 id="the-year-ahead"&gt;The year ahead&lt;a class="header-anchor" href="#the-year-ahead" name="the-year-ahead"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;h3 id="otlp-ingestion-ga"&gt;OTLP Ingestion GA&lt;a class="header-anchor" href="#otlp-ingestion-ga" name="otlp-ingestion-ga"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In &lt;a href="https://github.com/prometheus/prometheus/releases/tag/v2.47.0"&gt;Prometheus v2.47.0&lt;/a&gt;, released on 6th September 2023, we added experimental support for OTLP ingestion in Prometheus. We’re constantly improving this, and we plan to add support for staleness and make it a stable feature. We will also mark our support for out-of-order ingestion as stable. This also involves GA-ing our support for native / exponential histograms.&lt;/p&gt;

&lt;h3 id="support-utf-8-metric-and-label-names"&gt;Support UTF-8 metric and label names&lt;a class="header-anchor" href="#support-utf-8-metric-and-label-names" name="support-utf-8-metric-and-label-names"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/open-telemetry/semantic-conventions/blob/main/docs/http/http-metrics.md"&gt;OpenTelemetry semantic conventions&lt;/a&gt; push for &lt;code&gt;“.”&lt;/code&gt; to be the namespacing character. For example, &lt;code&gt;http.server.request.duration&lt;/code&gt;. However, Prometheus currently requires a &lt;a href="https://prometheus.io/docs/instrumenting/writing_exporters/#naming"&gt;more limited character set&lt;/a&gt;, which means we convert the metric to &lt;code&gt;http_server_request_duration&lt;/code&gt; when ingesting it into Prometheus.&lt;/p&gt;

&lt;p&gt;This causes unnecessary dissonance and we’re working on removing this limitation by adding UTF-8 support for all labels and metric names. The progress is tracked &lt;a href="https://github.com/prometheus/prometheus/issues/13095"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="native-support-for-resource-attributes"&gt;Native support for resource attributes&lt;a class="header-anchor" href="#native-support-for-resource-attributes" name="native-support-for-resource-attributes"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;OpenTelemetry differentiates between metric attributes (labels that identify the metric itself, like &lt;code&gt;http.status_code&lt;/code&gt;) and resource attributes (labels that identify the source of the metrics, like &lt;code&gt;k8s.pod.name&lt;/code&gt;), while Prometheus has a flatter label schema. This leads to many usability issues that are detailed &lt;a href="https://docs.google.com/document/d/1gG-eTQ4SxmfbGwkrblnUk97fWQA93umvXHEzQn2Nv7E/edit?usp=sharing"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’re &lt;a href="https://docs.google.com/document/d/1FgHxOzCQ1Rom-PjHXsgujK8x5Xx3GTiwyG__U3Gd9Tw/edit"&gt;exploring several solutions&lt;/a&gt; to this problem from many fronts (Query, UX, storage, etc.), but our goal is to make it quite easy to filter and group on resource attributes. This is a work in progress, and feedback and help are wanted!&lt;/p&gt;
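
&lt;p&gt;One pattern already available today (metric and attribute names below are illustrative) is joining a metric against the &lt;code&gt;target_info&lt;/code&gt; series that carries the resource attributes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;rate(http_server_request_duration_seconds_count[5m])
  * on (job, instance) group_left (k8s_cluster_name)
target_info
&lt;/code&gt;&lt;/pre&gt;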

&lt;h3 id="otlp-export-in-the-ecosystem"&gt;OTLP export in the ecosystem&lt;a class="header-anchor" href="#otlp-export-in-the-ecosystem" name="otlp-export-in-the-ecosystem"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus remote write is supported by &lt;a href="https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage"&gt;most of the leading Observability projects and vendors&lt;/a&gt; already. However, OpenTelemetry Protocol (OTLP) is gaining prominence and we would like to support it across the Prometheus ecosystem.&lt;/p&gt;

&lt;p&gt;We would like to add support for it to the Prometheus server, SDKs, and exporters. This would mean that any service instrumented with the Prometheus SDKs will also be able to &lt;em&gt;push&lt;/em&gt; OTLP, and it will unlock the rich Prometheus exporter ecosystem for OpenTelemetry users.&lt;/p&gt;

&lt;p&gt;However, we intend to keep and develop the OpenMetrics exposition format as an optimized / simplified format for Prometheus and pull-based use-cases.&lt;/p&gt;

&lt;h3 id="delta-temporality"&gt;Delta temporality&lt;a class="header-anchor" href="#delta-temporality" name="delta-temporality"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The OpenTelemetry project also supports &lt;a href="https://grafana.com/blog/2023/09/26/opentelemetry-metrics-a-guide-to-delta-vs.-cumulative-temporality-trade-offs/"&gt;Delta temporality&lt;/a&gt;, which has some use cases in the Observability ecosystem. We still have a lot of Prometheus users running statsd and using the statsd_exporter for various reasons.&lt;/p&gt;

&lt;p&gt;We would like to support the Delta temporality of OpenTelemetry in the Prometheus server and are &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/30479"&gt;working towards it&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="call-for-contributions"&gt;Call for contributions!&lt;a class="header-anchor" href="#call-for-contributions" name="call-for-contributions"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;As you can see, a lot of new and exciting things are coming to Prometheus! If working at the intersection of two of the most relevant open-source observability projects sounds challenging and interesting to you, we'd like to have you on board!&lt;/p&gt;

&lt;p&gt;This year there is also a change of governance in the works that will make the process of becoming a maintainer easier than ever! If you ever wanted to have an impact on Prometheus, now is a great time to get started.&lt;/p&gt;

&lt;p&gt;From the start, our focus has been to be as open and transparent as possible about how we are organizing all the work above so that you can contribute too. We are looking for contributors to support this initiative and help implement these features. Check out the &lt;a href="https://github.com/orgs/prometheus/projects/9"&gt;Prometheus 3.0 public board&lt;/a&gt; and the &lt;a href="https://github.com/prometheus/prometheus/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22OTEL+Support%22"&gt;Prometheus OTel support milestone&lt;/a&gt; to track the progress of the feature development and see ways that you can &lt;a href="https://github.com/prometheus/prometheus/blob/main/CONTRIBUTING.md"&gt;contribute&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="conclusion"&gt;Conclusion&lt;a class="header-anchor" href="#conclusion" name="conclusion"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Some of the changes proposed are large and invasive or involve a fundamental departure from the original data model of Prometheus. However, we plan to introduce these gracefully so that Prometheus 3.0 will have no major breaking changes and most users can upgrade without impact.&lt;/p&gt;

&lt;p&gt;We are excited to embark on this new chapter for Prometheus and would love your feedback on the changes suggested.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2023-09-01:/blog/2023/09/01/promcon2023-schedule/</id>
    <title type="html">The Schedule for the PromCon Europe 2023 is Live</title>
    <published>2023-09-01T00:00:00Z</published>
    <updated>2023-09-01T00:00:00Z</updated>
    <author>
      <name>Matthias Loibl (@metalmatze)</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2023/09/01/promcon2023-schedule/" type="text/html"/>
    <content type="html">&lt;blockquote&gt;
&lt;p&gt;PromCon Europe is the eighth conference fully dedicated to the Prometheus monitoring system&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Berlin, Germany – September 1, 2023 – The CNCF and the Prometheus team released the two-day schedule for the single-track PromCon Europe 2023 conference happening in Berlin, Germany, from September 28 to September 29, 2023. Attendees will be able to choose from 21 full-length (25min) sessions and up to 20 five-minute lightning talk sessions spanning diverse topics related to &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now in its 8th installment, PromCon brings together Prometheus users and developers from around the world to exchange knowledge, best practices, and experience gained through using Prometheus. The program committee reviewed 66 submissions, and the selected talks will provide a fresh and informative look into the most pressing topics around Prometheus today.&lt;/p&gt;

&lt;p&gt;"We are super excited for PromCon to be coming home to Berlin. Prometheus was started in Berlin at Soundcloud in 2012. The first PromCon was hosted in Berlin and in between moved to Munich. This year we're hosting around 300 attendees at Radialsystem in Friedrichshain, Berlin. Berlin has a vibrant Prometheus community and many of the Prometheus team members live in the neighborhood. It is a great opportunity to network and connect with the Prometheus family who are all passionate about systems and service monitoring," said Matthias Loibl, Senior Software Engineer at Polar Signals and Prometheus team member who leads this year's PromCon program committee. "It will be a great event to learn about the latest developments from the Prometheus team itself and connect to some big-scale users of Prometheus up close."&lt;/p&gt;

&lt;p&gt;The community-curated schedule will feature sessions from open source community members, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://promcon.io/2023-berlin/talks/towards-making-prometheus-opentelemetry-native"&gt;Towards making Prometheus OpenTelemetry native&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://promcon.io/2023-berlin/talks/how-to-monitor-global-tens-of-thousands-of-kubernetes-clusters-with-thanos-federation"&gt;How to Monitor Global Tens of Thousands of Kubernetes Clusters with Thanos Federation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://promcon.io/2023-berlin/talks/prometheus-java-client"&gt;Prometheus Java Client 1.0.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://promcon.io/2023-berlin/talks/perses"&gt;Perses&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://promcon.io/2023-berlin/talks/where-your-money-going-the-beginners-guide-to-measuring-kubernetes-costs"&gt;Where's your money going? The Beginners Guide To Measuring Kubernetes Costs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the full PromCon Europe 2023 program, please visit the &lt;a href="https://promcon.io/2023-berlin/schedule/"&gt;schedule&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="registration"&gt;Registration&lt;a class="header-anchor" href="#registration" name="registration"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://promcon.io/2023-berlin/register/"&gt;Register&lt;/a&gt; for the in-person standard pricing of $350 USD through September 25. The venue has space for 300 attendees so don’t wait!&lt;/p&gt;

&lt;h2 id="thank-you-to-our-sponsors"&gt;Thank You to Our Sponsors&lt;a class="header-anchor" href="#thank-you-to-our-sponsors" name="thank-you-to-our-sponsors"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;PromCon Europe 2023 has been made possible thanks to the amazing community around Prometheus and support from our Diamond Sponsor &lt;a href="https://grafana.com/"&gt;Grafana Labs&lt;/a&gt;, Platinum Sponsor &lt;a href="https://www.redhat.com/"&gt;Red Hat&lt;/a&gt; as well as many more Gold, and Startup sponsors. This year’s edition is organized by &lt;a href="https://www.polarsignals.com/"&gt;Polar Signals&lt;/a&gt; and CNCF.&lt;/p&gt;

&lt;h2 id="watch-the-prometheus-documentary"&gt;Watch the Prometheus Documentary&lt;a class="header-anchor" href="#watch-the-prometheus-documentary" name="watch-the-prometheus-documentary"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/rT4fJNbfe14" frameborder="0" allowfullscreen&gt;&lt;/iframe&gt;

&lt;h2 id="contact"&gt;Contact&lt;a class="header-anchor" href="#contact" name="contact"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Jessie Adams-Shore - The Linux Foundation - &lt;a href="mailto:pr@cncf.io"&gt;pr@cncf.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PromCon Organizers - &lt;a href="mailto:promcon-organizers@googlegroups.com"&gt;promcon-organizers@googlegroups.com&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2023-03-21:/blog/2023/03/21/stringlabel/</id>
    <title type="html">FAQ about Prometheus 2.43 String Labels Optimization</title>
    <published>2023-03-21T00:00:00Z</published>
    <updated>2023-03-21T00:00:00Z</updated>
    <author>
      <name>Julien Pivotto (@roidelapluie)</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2023/03/21/stringlabel/" type="text/html"/>
    <content type="html">&lt;p&gt;Prometheus 2.43 has just been released, and it brings some exciting features and
enhancements. One of the significant improvements is the &lt;code&gt;stringlabels&lt;/code&gt; release,
which uses a new data structure for labels. This blog post will answer some
frequently asked questions about the 2.43 release and the &lt;code&gt;stringlabels&lt;/code&gt;
optimizations.&lt;/p&gt;

&lt;h3 id="what-is-the-stringlabels-release"&gt;What is the &lt;code&gt;stringlabels&lt;/code&gt; release?&lt;a class="header-anchor" href="#what-is-the-stringlabels-release" name="what-is-the-stringlabels-release"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;stringlabels&lt;/code&gt; release is a Prometheus 2.43 version that uses a new data
structure for labels. It stores all label names and values in a single string,
resulting in a smaller heap size and some speedups in most cases. These
optimizations are not shipped in the default binaries and require compiling
Prometheus using the Go tag &lt;code&gt;stringlabels&lt;/code&gt;.&lt;/p&gt;
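&lt;p&gt;For the curious, building such a binary from a checkout of the Prometheus repository looks roughly like this (a sketch assuming the upstream repo layout and a local Go toolchain):&lt;/p&gt;

```shell
# Build Prometheus with the stringlabels data structure enabled.
git clone https://github.com/prometheus/prometheus.git
cd prometheus
go build -tags stringlabels ./cmd/prometheus
```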

&lt;h3 id="why-didnt-you-go-for-a-feature-flag-that-we-can-toggle"&gt;Why didn't you go for a feature flag that we can toggle?&lt;a class="header-anchor" href="#why-didnt-you-go-for-a-feature-flag-that-we-can-toggle" name="why-didnt-you-go-for-a-feature-flag-that-we-can-toggle"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We considered using a feature flag, but it would have incurred a memory overhead
that was not worth it. Therefore, we decided to provide a separate release with
these optimizations for those who are interested in testing them and measuring the
gains in their production environments.&lt;/p&gt;

&lt;h3 id="when-will-these-optimizations-be-generally-available"&gt;When will these optimizations be generally available?&lt;a class="header-anchor" href="#when-will-these-optimizations-be-generally-available" name="when-will-these-optimizations-be-generally-available"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;These optimizations will be available in the upcoming Prometheus 2.44 release
by default.&lt;/p&gt;

&lt;h3 id="how-do-i-get-the-2-43-release"&gt;How do I get the 2.43 release?&lt;a class="header-anchor" href="#how-do-i-get-the-2-43-release" name="how-do-i-get-the-2-43-release"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/prometheus/prometheus/releases/tag/v2.43.0"&gt;Prometheus 2.43 release&lt;/a&gt; is available on the official Prometheus GitHub
releases page, and users can download the binary files directly from there.
Docker images are also available for those who prefer to use containers.&lt;/p&gt;

&lt;p&gt;The stringlabels optimization is not included in these default binaries. To use
this optimization, users will need to download the &lt;a href="https://github.com/prometheus/prometheus/releases/tag/v2.43.0%2Bstringlabels"&gt;2.43.0+stringlabels
release&lt;/a&gt;
binary or the &lt;a href="https://quay.io/repository/prometheus/prometheus?tab=tags"&gt;Docker images tagged
v2.43.0-stringlabels&lt;/a&gt; specifically.&lt;/p&gt;

&lt;h3 id="why-is-the-release-v2-43-0-stringlabels-and-the-docker-tag-v2-43-0-stringlabels"&gt;Why is the release &lt;code&gt;v2.43.0+stringlabels&lt;/code&gt; and the Docker tag &lt;code&gt;v2.43.0-stringlabels&lt;/code&gt;?&lt;a class="header-anchor" href="#why-is-the-release-v2-43-0-stringlabels-and-the-docker-tag-v2-43-0-stringlabels" name="why-is-the-release-v2-43-0-stringlabels-and-the-docker-tag-v2-43-0-stringlabels"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In semantic versioning, the plus sign (+) is used to denote build
metadata. Therefore, the Prometheus 2.43 release with the &lt;code&gt;stringlabels&lt;/code&gt;
optimization is named &lt;code&gt;2.43.0+stringlabels&lt;/code&gt; to signify that it includes the
experimental &lt;code&gt;stringlabels&lt;/code&gt; feature. However, Docker tags do not allow the use of
the plus sign in their names. Hence, the plus sign has been replaced with a dash
(-) to make the Docker tag &lt;code&gt;v2.43.0-stringlabels&lt;/code&gt;. This allows the Docker tag to
pass the semantic versioning checks of downstream projects such as the
Prometheus Operator.&lt;/p&gt;

&lt;h3 id="what-are-the-other-noticeable-features-in-the-prometheus-2-43-release"&gt;What are the other noticeable features in the Prometheus 2.43 release?&lt;a class="header-anchor" href="#what-are-the-other-noticeable-features-in-the-prometheus-2-43-release" name="what-are-the-other-noticeable-features-in-the-prometheus-2-43-release"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Apart from the &lt;code&gt;stringlabels&lt;/code&gt; optimizations, the Prometheus 2.43 release
brings several new features and enhancements. Some of the significant additions
include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We added support for &lt;code&gt;scrape_config_files&lt;/code&gt; to include scrape configs from
different files. This makes it easier to manage and organize the configuration.&lt;/li&gt;
&lt;li&gt;The HTTP client now includes two new config options: &lt;code&gt;no_proxy&lt;/code&gt; to exclude
URLs from proxied requests and &lt;code&gt;proxy_from_environment&lt;/code&gt; to read proxies from
environment variables. These options make it easier to manage the HTTP client's
behavior in different environments.&lt;/li&gt;
&lt;/ul&gt;
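&lt;p&gt;A minimal sketch of what these options look like in a configuration file (the file paths, job names, proxy addresses, and targets below are hypothetical):&lt;/p&gt;

```yaml
# prometheus.yml
scrape_config_files:
  - scrape_configs/*.yml            # pull in scrape configs from separate files

scrape_configs:
  - job_name: behind-proxy
    proxy_url: http://proxy.internal:3128
    no_proxy: localhost,127.0.0.1   # bypass the proxy for these hosts
    static_configs:
      - targets: ['app.example.com:8080']
  - job_name: env-proxy
    proxy_from_environment: true    # honor HTTP_PROXY/HTTPS_PROXY/NO_PROXY
    static_configs:
      - targets: ['other.example.com:8080']
```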

&lt;p&gt;You can learn more about features and bugfixes in the
&lt;a href="https://github.com/prometheus/prometheus/releases/tag/v2.43.0"&gt;full changelog&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2021-11-16:/blog/2021/11/16/agent/</id>
    <title type="html">Introducing Prometheus Agent Mode, an Efficient and Cloud-Native Way for Metric Forwarding</title>
    <published>2021-11-16T00:00:00Z</published>
    <updated>2021-11-16T00:00:00Z</updated>
    <author>
      <name>Bartlomiej Plotka (@bwplotka)</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2021/11/16/agent/" type="text/html"/>
    <content type="html">&lt;blockquote&gt;
&lt;p&gt;Bartek Płotka has been a Prometheus Maintainer since 2019 and Principal Software Engineer at Red Hat. Co-author of the CNCF Thanos project. CNCF Ambassador and tech lead for the CNCF TAG Observability. In his free time, he writes a book titled "Efficient Go" with O'Reilly. Opinions are my own!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What I personally love in the Prometheus project, and one of the many reasons why I joined the team, was the laser focus on the project's goals. Prometheus was always about pushing boundaries when it comes to providing pragmatic, reliable, cheap, yet invaluable metric-based monitoring. Prometheus' ultra-stable and robust APIs, query language, and integration protocols (e.g. Remote Write and &lt;a href="https://openmetrics.io/"&gt;OpenMetrics&lt;/a&gt;) allowed the Cloud Native Computing Foundation (CNCF) metrics ecosystem to grow on those strong foundations. Amazing things happened as a result:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can see community exporters for getting metrics about virtually everything e.g. &lt;a href="https://github.com/google/cadvisor"&gt;containers&lt;/a&gt;, &lt;a href="https://github.com/cloudflare/ebpf_exporter"&gt;eBPF&lt;/a&gt;, &lt;a href="https://github.com/sladkoff/minecraft-prometheus-exporter"&gt;Minecraft server statistics&lt;/a&gt; and even &lt;a href="https://megamorf.gitlab.io/2019/07/14/monitoring-plant-health-with-prometheus/"&gt;plants' health when gardening&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Most people nowadays expect cloud-native software to have an HTTP/HTTPS &lt;code&gt;/metrics&lt;/code&gt; endpoint that Prometheus can scrape, a concept developed in secret within Google and pioneered globally by the Prometheus project.&lt;/li&gt;
&lt;li&gt;The observability paradigm shifted. We see SREs and developers rely heavily on metrics from day one, which improves software resiliency, debuggability, and data-driven decisions!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, we hardly ever see a Kubernetes cluster without Prometheus running in it.&lt;/p&gt;

&lt;p&gt;The strong focus of the Prometheus community also allowed other open-source projects to grow and to extend the Prometheus deployment model beyond single nodes (e.g. &lt;a href="https://cortexmetrics.io/"&gt;Cortex&lt;/a&gt;, &lt;a href="https://thanos.io/"&gt;Thanos&lt;/a&gt; and more), not to mention cloud vendors adopting Prometheus' API and data model (e.g. &lt;a href="https://aws.amazon.com/prometheus/"&gt;Amazon Managed Prometheus&lt;/a&gt;, &lt;a href="https://cloud.google.com/stackdriver/docs/managed-prometheus"&gt;Google Cloud Managed Prometheus&lt;/a&gt;, &lt;a href="https://grafana.com/products/cloud/"&gt;Grafana Cloud&lt;/a&gt; and more). If you are looking for a single reason why the Prometheus project is so successful, it is this: &lt;strong&gt;Focusing the monitoring community on what matters&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this (lengthy) blog post, I would love to introduce a new operational mode of running Prometheus called "Agent". It is built directly into the Prometheus binary. The agent mode disables some of Prometheus' usual features and optimizes the binary for scraping and remote writing to remote locations. Introducing a mode that reduces the number of features enables new usage patterns. In this blog post I will explain why it is a game-changer for certain deployments in the CNCF ecosystem. I am super excited about this!&lt;/p&gt;

&lt;h2 id="history-of-the-forwarding-use-case"&gt;History of the Forwarding Use Case&lt;a class="header-anchor" href="#history-of-the-forwarding-use-case" name="history-of-the-forwarding-use-case"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The core design of Prometheus has been unchanged for the project's entire lifetime. Inspired by &lt;a href="https://sre.google/sre-book/practical-alerting/#the-rise-of-borgmon"&gt;Google's Borgmon monitoring system&lt;/a&gt;, you can deploy a Prometheus server alongside the applications you want to monitor, tell Prometheus how to reach them, and let it scrape the current values of their metrics at regular intervals. Such a collection method, which is often referred to as the "pull model", is the core principle that &lt;a href="https://prometheus.io/blog/2016/07/23/pull-does-not-scale-or-does-it/"&gt;allows Prometheus to be lightweight and reliable&lt;/a&gt;. Furthermore, it enables application instrumentation and exporters to be dead simple, as they only need to provide a simple human-readable HTTP endpoint with the current value of all tracked metrics (in OpenMetrics format). All without complex push infrastructure and non-trivial client libraries. Overall, a simplified typical Prometheus monitoring deployment looks as below:&lt;/p&gt;
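&lt;p&gt;Such a &lt;code&gt;/metrics&lt;/code&gt; endpoint returns plain text along these lines (the metric name, labels, and values here are made up for illustration):&lt;/p&gt;

```
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{code="200",path="/api"} 1027
http_requests_total{code="500",path="/api"} 3
```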

&lt;p&gt;&lt;img src="/assets/blog/2021-11-16/prom.png" alt="Prometheus high-level view"&gt;&lt;/p&gt;

&lt;p&gt;This works great, and we have seen millions of successful deployments like this over the years that process dozens of millions of active series. Some of them retain data for longer periods, like two years or so. All of them allow you to query, alert on, and record metrics useful for both cluster admins and developers.&lt;/p&gt;

&lt;p&gt;However, the cloud-native world is constantly growing and evolving. With the growth of managed Kubernetes solutions and clusters created on-demand within seconds, we are now finally able to treat clusters as "cattle", not as "pets" (in other words, we care less about individual instances of those). In some cases, solutions do not even have the cluster notion anymore, e.g. &lt;a href="https://github.com/kcp-dev/kcp"&gt;kcp&lt;/a&gt;, &lt;a href="https://aws.amazon.com/fargate/"&gt;Fargate&lt;/a&gt; and other platforms.&lt;/p&gt;

&lt;p&gt;&lt;img src="/assets/blog/2021-11-16/yoda.gif" alt="Yoda"&gt;&lt;/p&gt;

&lt;p&gt;The other interesting use case that emerges is the notion of &lt;strong&gt;Edge&lt;/strong&gt; clusters or networks. With industries like telecommunications, automotive and IoT devices adopting cloud-native technologies, we see more and more small clusters with a restricted amount of resources. This forces all data (including observability data) to be transferred to remote, bigger counterparts, as almost nothing can be stored on those remote nodes.&lt;/p&gt;

&lt;p&gt;What does that mean? That means monitoring data has to be somehow aggregated, presented to users and sometimes even stored on the &lt;em&gt;global&lt;/em&gt; level. This is often called a &lt;strong&gt;Global-View&lt;/strong&gt; feature.&lt;/p&gt;

&lt;p&gt;Naively, we could think about implementing this by either putting Prometheus on that global level and scraping metrics across remote networks or pushing metrics directly from the application to the central location for monitoring purposes. Let me explain why both are generally &lt;em&gt;very&lt;/em&gt; bad ideas:&lt;/p&gt;

&lt;p&gt;🔥 Scraping across network boundaries can be a challenge because it adds new unknowns to the monitoring pipeline. The local pull model allows Prometheus to know exactly why the metric target has problems and when. Maybe it's down, misconfigured, restarted, too slow to give us metrics (e.g. CPU saturated), not discoverable by service discovery; maybe we don't have credentials to access it; or maybe DNS, the network, or the whole cluster is down. By putting our scraper outside of the network, we risk losing some of this information by introducing unreliability into scrapes that is unrelated to an individual target. On top of that, we risk losing important visibility completely if the network is temporarily down. Please don't do it. It's not worth it. (: &lt;/p&gt;

&lt;p&gt;🔥 Pushing metrics directly from the application to some central location is equally bad. Especially when you monitor a larger fleet, you know literally nothing when you don't see metrics from remote applications. Is the application down? Is my receiver pipeline down? Maybe the application failed to authorize? Maybe it failed to get the IP address of my remote cluster? Maybe it's too slow? Maybe the network is down? Worse, you may not even know that the data from some application targets is missing. And you don't gain much either, as you need to track the state and status of everything that should be sending data. Such a design needs careful analysis, as it can too easily become a recipe for failure.&lt;/p&gt;

&lt;blockquote&gt;
&lt;div class="admonition-wrapper note"&gt;&lt;div class="admonition alert alert-info"&gt;
&lt;strong&gt;NOTE:&lt;/strong&gt; Serverless functions and short-lived containers are often cases where we think of pushing from the application as the rescue. In such cases, however, we are talking about events or pieces of metrics that we might want to aggregate into longer-living time series. This topic is discussed &lt;a href="https://groups.google.com/g/prometheus-developers/c/FPe0LsTfo2E/m/yS7up2YzAwAJ"&gt;here&lt;/a&gt;; feel free to contribute and help us support those cases better!&lt;/div&gt;&lt;/div&gt;
&lt;/blockquote&gt;

&lt;p&gt;Prometheus introduced three ways to support the global view case, each with its own pros and cons. Let's briefly go through them. They are shown in orange in the diagram below:&lt;/p&gt;

&lt;p&gt;&lt;img src="/assets/blog/2021-11-16/prom-remote.png" alt="Prometheus global view"&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Federation&lt;/strong&gt; was introduced as the first feature for aggregation purposes. It allows a global-level Prometheus server to scrape a subset of metrics from a leaf Prometheus. Such a "federation" scrape reduces some unknowns across networks because metrics exposed by federation endpoints include the original samples' timestamps. Yet, it usually suffers from the inability to federate all metrics without losing data during longer network partitions (minutes).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus Remote Read&lt;/strong&gt; allows selecting raw metrics from a remote Prometheus server's database without a direct PromQL query. You can deploy Prometheus or other solutions (e.g. Thanos) on the global level to perform PromQL queries on this data while fetching the required metrics from multiple remote locations. This is really powerful, as it allows you to store data "locally" and access it only when needed. Unfortunately, there are cons too. Without features like &lt;a href="https://github.com/thanos-io/thanos/issues/305"&gt;Query Pushdown&lt;/a&gt; we are in extreme cases pulling GBs of compressed metric data to answer a single query. Also, if we have a network partition, we are temporarily blind. Last but not least, certain security guidelines do not allow ingress traffic, only egress.&lt;/li&gt;
&lt;li&gt;Finally, we have &lt;strong&gt;Prometheus Remote Write&lt;/strong&gt;, which seems to be the most popular choice nowadays. Since the agent mode focuses on remote write use cases, let's explain it in more detail.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="remote-write"&gt;Remote Write&lt;a class="header-anchor" href="#remote-write" name="remote-write"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The Prometheus Remote Write protocol allows us to forward (stream) all or a subset of metrics collected by Prometheus to the remote location. You can configure Prometheus to forward some metrics (if you want, with all metadata and exemplars!) to one or more locations that support the Remote Write API. In fact, Prometheus supports both ingesting and sending Remote Write, so you can deploy Prometheus on a global level to receive that stream and aggregate data cross-cluster.&lt;/p&gt;
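&lt;p&gt;A minimal sketch of such a configuration (the endpoint URL and metric prefix below are hypothetical; any Remote-Write-compatible receiver works):&lt;/p&gt;

```yaml
remote_write:
  - url: https://metrics.example.com/api/v1/write
    write_relabel_configs:
      # Optionally forward only a subset of the scraped series.
      - source_labels: [__name__]
        regex: 'http_.*'
        action: keep
```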

&lt;p&gt;While the official &lt;a href="https://docs.google.com/document/d/1LPhVRSFkGNSuU1fBd81ulhsCPR4hkSZyyBj1SZ8fWOM/edit"&gt;Prometheus Remote Write API specification is in the review stage&lt;/a&gt;, the ecosystem adopted the Remote Write protocol as the default metrics export protocol. For example, Cortex, Thanos, OpenTelemetry, and cloud services like Amazon, Google, Grafana, Logz.io, etc., all support ingesting data via Remote Write.&lt;/p&gt;

&lt;p&gt;The Prometheus project also offers the official compliance tests for its APIs, e.g. &lt;a href="https://github.com/prometheus/compliance/tree/main/remote_write_sender"&gt;remote-write sender compliance&lt;/a&gt; for solutions that offer Remote Write client capabilities. It's an amazing way to quickly tell if you are correctly implementing this protocol.&lt;/p&gt;

&lt;p&gt;Streaming data from such a scraper enables Global View use cases by allowing you to store metrics data in a centralized location. This also enables separation of concerns, which is useful when applications are managed by different teams than the observability or monitoring pipelines. Furthermore, it is also why Remote Write is chosen by vendors who want to offload as much work from their customers as possible.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Wait for a second, Bartek. You just mentioned before that pushing metrics directly from the application is not the best idea!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sure, but the amazing part is that, even with Remote Write, Prometheus still uses a pull model to gather metrics from applications, which gives us an understanding of those different failure modes. After that, we batch samples and series and export, replicate (push) data to the Remote Write endpoints, limiting the number of monitoring unknowns that the central point has!&lt;/p&gt;

&lt;p&gt;It's important to note that a reliable and efficient remote-writing implementation is a non-trivial problem to solve. The Prometheus community spent around three years coming up with a stable and scalable implementation. We reimplemented the WAL (write-ahead log) a few times, added internal queuing, sharding, smart back-offs and more. All of this is hidden from the user, who can enjoy well-performing streaming of large amounts of metrics to a centralized location.&lt;/p&gt;

&lt;h3 id="hands-on-remote-write-example-katacoda-tutorial"&gt;Hands-on Remote Write Example: Katacoda Tutorial&lt;a class="header-anchor" href="#hands-on-remote-write-example-katacoda-tutorial" name="hands-on-remote-write-example-katacoda-tutorial"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;All of this is not new in Prometheus. Many of us already use Prometheus to scrape all required metrics and remote-write all or some of them to remote locations.&lt;/p&gt;

&lt;p&gt;Suppose you would like to try the hands-on experience of remote writing capabilities. In that case, we recommend the &lt;a href="https://katacoda.com/thanos/courses/thanos/3-receiver"&gt;Thanos Katacoda tutorial of remote writing metrics from Prometheus&lt;/a&gt;, which explains all steps required for Prometheus to forward all metrics to the remote location. It's &lt;strong&gt;free&lt;/strong&gt;, just sign up for an account and enjoy the tutorial! 🤗&lt;/p&gt;

&lt;p&gt;Note that this example uses Thanos in receive mode as the remote storage. Nowadays, you can use plenty of other projects that are compatible with the remote write API.&lt;/p&gt;

&lt;p&gt;So if remote writing works fine, why did we add a special Agent mode to Prometheus?&lt;/p&gt;

&lt;h2 id="prometheus-agent-mode"&gt;Prometheus Agent Mode&lt;a class="header-anchor" href="#prometheus-agent-mode" name="prometheus-agent-mode"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;From Prometheus &lt;code&gt;v2.32.0&lt;/code&gt; (next release), everyone will be able to run the Prometheus binary with an experimental &lt;code&gt;--enable-feature=agent&lt;/code&gt; flag. If you want to try it before the release, feel free to use &lt;a href="https://github.com/prometheus/prometheus/releases/tag/v2.32.0-beta.0"&gt;Prometheus v2.32.0-beta.0&lt;/a&gt; or use our &lt;code&gt;quay.io/prometheus/prometheus:v2.32.0-beta.0&lt;/code&gt; image.&lt;/p&gt;

&lt;p&gt;The Agent mode optimizes Prometheus for the remote write use case. It disables querying, alerting, and local storage, replacing the latter with a customized TSDB WAL. Everything else stays the same: scraping logic, service discovery and related configuration. It can be used as a drop-in replacement for Prometheus if you want to just forward your data to a remote Prometheus server or any other Remote-Write-compliant project. In essence it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;img src="/assets/blog/2021-11-16/agent.png" alt="Prometheus agent"&gt;&lt;/p&gt;

&lt;p&gt;The best part about Prometheus Agent is that it's built into Prometheus. Same scraping APIs, same semantics, same configuration and discovery mechanism.&lt;/p&gt;
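&lt;p&gt;Trying it out is a one-flag change; a minimal invocation looks roughly like this (the config file path is an example, and the config must contain a &lt;code&gt;remote_write&lt;/code&gt; section for the scraped data to go anywhere):&lt;/p&gt;

```shell
# Run Prometheus in the experimental Agent mode.
prometheus \
  --enable-feature=agent \
  --config.file=prometheus.yml
```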

&lt;p&gt;What are the benefits of using the Agent mode if you plan not to query or alert on data locally and stream metrics outside? There are a few:&lt;/p&gt;

&lt;p&gt;First of all, efficiency. Our customized Agent TSDB WAL removes the data immediately after successful writes. If it cannot reach the remote endpoint, it persists the data temporarily on the disk until the remote endpoint is back online. This is currently limited to a two-hour buffer only, similar to non-agent Prometheus, &lt;a href="https://github.com/prometheus/prometheus/issues/9607"&gt;hopefully unblocked soon&lt;/a&gt;. This means that we don't need to build chunks of data in memory. We don't need to maintain a full index for querying purposes. Essentially the Agent mode uses a fraction of the resources that a normal Prometheus server would use in a similar situation.&lt;/p&gt;

&lt;p&gt;Does this efficiency matter? Yes! As we mentioned, every GB of memory and every CPU core used on edge clusters matters for some deployments. On the other hand, the paradigm of performing monitoring using metrics is quite mature these days. This means that the more relevant, higher-cardinality metrics you can ship for the same cost, the better.&lt;/p&gt;

&lt;blockquote&gt;
&lt;div class="admonition-wrapper note"&gt;&lt;div class="admonition alert alert-info"&gt;
&lt;strong&gt;NOTE:&lt;/strong&gt; With the introduction of the Agent mode, the original Prometheus server mode still stays as the recommended, stable and maintained mode. Agent mode with remote storage brings additional complexity. Use with care.&lt;/div&gt;&lt;/div&gt;
&lt;/blockquote&gt;

&lt;p&gt;Secondly, the benefit of the new Agent mode is that it enables easier horizontal scalability for ingestion. This is something I am excited about the most. Let me explain why.&lt;/p&gt;

&lt;h3 id="the-dream-auto-scalable-metric-ingestion"&gt;The Dream: Auto-Scalable Metric Ingestion&lt;a class="header-anchor" href="#the-dream-auto-scalable-metric-ingestion" name="the-dream-auto-scalable-metric-ingestion"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;A true auto-scalable solution for scraping would need to be based on the number of metric targets and the number of metrics they expose. The more data we have to scrape, the more instances of Prometheus we deploy automatically. If the number of targets or their number of metrics goes down, we could scale down and remove a couple of instances. This would remove the manual burden of adjusting the sizing of Prometheus and eliminate the need to over-allocate Prometheus for situations where the cluster is temporarily small.&lt;/p&gt;

&lt;p&gt;With just Prometheus in server mode, this was hard to achieve. This is because Prometheus in server mode is stateful. Whatever is collected stays as-is in a single place. This means that the scale-down procedure would need to back up the collected data to existing instances before termination. Then we would have the problem of overlapping scrapes, misleading staleness markers etc.&lt;/p&gt;

&lt;p&gt;On top of that, we would need some global view query that is able to aggregate all samples across all instances (e.g. Thanos Query or Promxy). Last but not least, the resource usage of Prometheus in server mode depends on more things than just ingestion. There is alerting, recording, querying, compaction, remote write etc., that might need more or fewer resources independent of the number of metric targets.&lt;/p&gt;

&lt;p&gt;Agent mode essentially moves the discovery, scraping and remote writing to a separate microservice. This allows a focused operational model on ingestion only. As a result, Prometheus in Agent mode is more or less stateless. Yes, to avoid loss of metrics, we need to deploy an HA pair of agents and attach a persistent disk to them. But technically speaking, if we have thousands of metric targets (e.g. containers), we can deploy multiple Prometheus agents and safely change which replica is scraping which targets. This is because, in the end, all samples will be pushed to the same central storage.&lt;/p&gt;
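&lt;p&gt;To make the "which replica scrapes which targets" idea concrete, here is one possible sketch using Prometheus' standard &lt;code&gt;hashmod&lt;/code&gt; relabeling; the job name and shard count are illustrative, and each agent replica would keep a different shard:&lt;/p&gt;

&lt;pre&gt;&lt;code class="yaml"&gt;scrape_configs:
  - job_name: "node"            # illustrative job
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # Hash each target address into one of 2 shards...
      - source_labels: [__address__]
        modulus: 2
        target_label: __tmp_shard
        action: hashmod
      # ...and let this replica keep only shard 0 (the other keeps "1").
      - source_labels: [__tmp_shard]
        regex: "0"
        action: keep
&lt;/code&gt;&lt;/pre&gt;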

&lt;p&gt;Overall, Prometheus in Agent mode enables easy horizontal auto-scaling capabilities of Prometheus-based scraping that can react to dynamic changes in metric targets. This is definitely something we will look at with the &lt;a href="https://github.com/prometheus-operator/prometheus-operator"&gt;Prometheus Kubernetes Operator&lt;/a&gt; community going forward.&lt;/p&gt;

&lt;p&gt;Now let's take a look at the currently implemented state of agent mode in Prometheus. Is it ready to use?&lt;/p&gt;

&lt;h3 id="agent-mode-was-proven-at-scale"&gt;Agent Mode Was Proven at Scale&lt;a class="header-anchor" href="#agent-mode-was-proven-at-scale" name="agent-mode-was-proven-at-scale"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The next release of Prometheus will include Agent mode as an experimental feature. Flags, APIs and WAL format on disk might change. But the performance of the implementation is already battle-tested thanks to &lt;a href="https://grafana.com/"&gt;Grafana Labs'&lt;/a&gt; open-source work.&lt;/p&gt;

&lt;p&gt;The initial implementation of our Agent's custom WAL was inspired by the current Prometheus server's TSDB WAL and created by &lt;a href="https://github.com/rfratto"&gt;Robert Fratto&lt;/a&gt; in 2019, under the mentorship of &lt;a href="https://twitter.com/tom_wilkie"&gt;Tom Wilkie&lt;/a&gt;, Prometheus maintainer. It was then used in the open-source &lt;a href="https://github.com/grafana/agent"&gt;Grafana Agent&lt;/a&gt; project, which has since been used by many Grafana Cloud customers and community members. Given the maturity of the solution, it was time to donate the implementation to Prometheus for native integration and bigger adoption. Robert (Grafana Labs), with the help of Srikrishna (Red Hat) and the community, ported the code to the Prometheus codebase, which was merged to &lt;code&gt;main&lt;/code&gt; 2 weeks ago!&lt;/p&gt;

&lt;p&gt;The donation process was quite smooth. Since some Prometheus maintainers had already contributed to this code within the Grafana Agent, and since the new WAL is inspired by Prometheus' own WAL, it was not hard for the current Prometheus TSDB maintainers to take it under full maintenance! It also really helps that Robert is joining the Prometheus Team as a TSDB maintainer (congratulations!).&lt;/p&gt;

&lt;p&gt;Now, let's explain how you can use it! (:&lt;/p&gt;

&lt;h3 id="how-to-use-agent-mode-in-detail"&gt;How to Use Agent Mode in Detail&lt;a class="header-anchor" href="#how-to-use-agent-mode-in-detail" name="how-to-use-agent-mode-in-detail"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;From now on, if you show the help output of Prometheus (&lt;code&gt;--help&lt;/code&gt; flag), you should see more or less the following:&lt;/p&gt;

&lt;pre&gt;&lt;code class="bash"&gt;usage: prometheus [&amp;lt;flags&amp;gt;]

The Prometheus monitoring server

Flags:
  -h, --help                     Show context-sensitive help (also try --help-long and --help-man).
      (... other flags)
      --storage.tsdb.path="data/"
                                 Base path for metrics storage. Use with server mode only.
      --storage.agent.path="data-agent/"
                                 Base path for metrics storage. Use with agent mode only.
      (... other flags)
      --enable-feature= ...      Comma separated feature names to enable. Valid options: agent, exemplar-storage, expand-external-labels, memory-snapshot-on-shutdown, promql-at-modifier, promql-negative-offset, remote-write-receiver,
                                 extra-scrape-metrics, new-service-discovery-manager. See https://prometheus.io/docs/prometheus/latest/feature_flags/ for more details.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Since the Agent mode is behind a feature flag, as mentioned previously, use the &lt;code&gt;--enable-feature=agent&lt;/code&gt; flag to run Prometheus in the Agent mode. Now, the rest of the flags are either for both server and Agent or only for a specific mode. You can see which flag is for which mode by checking the last sentence of a flag's help string. "Use with server mode only" means it's only for server mode. If you don't see any mention like this, it means the flag is shared.&lt;/p&gt;

&lt;p&gt;The Agent mode accepts the same scrape configuration with the same discovery options and remote write options.&lt;/p&gt;

&lt;p&gt;It also exposes a web UI with query capabilities disabled, while still showing build info, configuration, targets, and service discovery information as in a normal Prometheus server.&lt;/p&gt;

&lt;h3 id="hands-on-prometheus-agent-example-katacoda-tutorial"&gt;Hands-on Prometheus Agent Example: Katacoda Tutorial&lt;a class="header-anchor" href="#hands-on-prometheus-agent-example-katacoda-tutorial" name="hands-on-prometheus-agent-example-katacoda-tutorial"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Similarly to the Prometheus remote-write tutorial, if you would like to try the hands-on experience of Prometheus Agent capabilities, we recommend the &lt;a href="https://katacoda.com/thanos/courses/thanos/4-receiver-agent"&gt;Thanos Katacoda tutorial of Prometheus Agent&lt;/a&gt;, which explains how easy it is to run Prometheus Agent.&lt;/p&gt;

&lt;h2 id="summary"&gt;Summary&lt;a class="header-anchor" href="#summary" name="summary"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;I hope you found this interesting! In this post, we walked through the new use cases that have emerged, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;edge clusters&lt;/li&gt;
&lt;li&gt;limited access networks&lt;/li&gt;
&lt;li&gt;large number of clusters&lt;/li&gt;
&lt;li&gt;ephemeral and dynamic clusters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We then explained the new Prometheus Agent mode that allows efficient forwarding of scraped metrics to remote write endpoints.&lt;/p&gt;

&lt;p&gt;As always, if you have any issues or feedback, feel free to &lt;a href="https://prometheus.io/community/"&gt;submit a ticket on GitHub or ask questions on the mailing list&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This blog post is part of a coordinated release between CNCF, Grafana, and Prometheus. Feel free to also read the &lt;a href="https://www.cncf.io/blog/"&gt;CNCF announcement&lt;/a&gt; and the angle on the &lt;a href="https://grafana.com/blog/2021/11/16/why-we-created-a-prometheus-agent-mode-from-the-grafana-agent"&gt;Grafana Agent which underlies the Prometheus Agent&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2021-10-14:/blog/2021/10/14/prometheus-conformance-results/</id>
    <title type="html">Prometheus Conformance Program: First round of results</title>
    <published>2021-10-14T00:00:00Z</published>
    <updated>2021-10-14T00:00:00Z</updated>
    <author>
      <name>Richard "RichiH" Hartmann</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2021/10/14/prometheus-conformance-results/" type="text/html"/>
    <content type="html">&lt;!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"&gt;
&lt;html&gt;&lt;body&gt;
&lt;p&gt;Today, we're launching the &lt;a href="/blog/2021/05/03/introducing-prometheus-conformance-program/"&gt;Prometheus Conformance Program&lt;/a&gt; with the goal of ensuring interoperability between different projects and vendors in the Prometheus monitoring space. While the legal paperwork still needs to be finalized, we ran tests, and we consider the below our first round of results. As part of this launch &lt;a href="https://promlabs.com/blog/2021/10/14/promql-vendor-compatibility-round-three"&gt;Julius Volz updated his PromQL test results&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As a quick reminder: The program is called Prometheus &lt;strong&gt;Conformance&lt;/strong&gt;, software can be &lt;strong&gt;compliant&lt;/strong&gt; to specific tests, which result in a &lt;strong&gt;compatibility&lt;/strong&gt; rating. The nomenclature might seem complex, but it allows us to speak about this topic without using endless word snakes.&lt;/p&gt;

&lt;h1 id="preamble" class="page-header"&gt;Preamble&lt;a class="header-anchor" href="#preamble" name="preamble"&gt;&lt;/a&gt;
&lt;/h1&gt;
&lt;div class="toc toc-right"&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#new-categories"&gt;New Categories
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#call-for-action"&gt;Call for Action
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#register-for-being-tested"&gt;Register for being tested
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#full-prometheus-compatibility"&gt;Full Prometheus Compatibility
&lt;/a&gt;&lt;/li&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#projects"&gt;Projects
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#aas"&gt;aaS
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;li&gt;&lt;a href="#agent-collector"&gt;Agent/Collector
&lt;/a&gt;&lt;/li&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#passing"&gt;Passing
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#not-passing"&gt;Not passing
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/ul&gt;&lt;/div&gt;

&lt;h2 id="new-categories"&gt;New Categories&lt;a class="header-anchor" href="#new-categories" name="new-categories"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We found that it's quite hard to reason about what needs to be applied to what software. To help sort our thoughts, we created &lt;a href="https://docs.google.com/document/d/1VGMme9RgpclqF4CF2woNmgFqq0J7nqHn-l72uNmAxhA"&gt;an overview&lt;/a&gt;, introducing four new categories we can put software into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Metrics Exposers&lt;/li&gt;
&lt;li&gt;Agents/Collectors&lt;/li&gt;
&lt;li&gt;Prometheus Storage Backends&lt;/li&gt;
&lt;li&gt;Full Prometheus Compatibility&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="call-for-action"&gt;Call for Action&lt;a class="header-anchor" href="#call-for-action" name="call-for-action"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Feedback is very much welcome. Maybe counter-intuitively, we want the community, not just the Prometheus team, to shape this effort. To help with that, we will launch a WG Conformance within Prometheus. As with &lt;a href="https://docs.google.com/document/d/1k7_Ya7j5HrIgxXghTCj-26CuwPyGdAbHS0uQf0Ir2tw"&gt;WG Docs&lt;/a&gt; and &lt;a href="https://docs.google.com/document/d/1HWL-NIfog3_pFxUny0kAHeoxd0grnqhCBcHVPZN4y3Y"&gt;WG Storage&lt;/a&gt;, those will be public and we actively invite participation.&lt;/p&gt;

&lt;p&gt;As we &lt;a href="https://www.youtube.com/watch?v=CBDZKjgRiew"&gt;alluded to recently&lt;/a&gt;, the maintainer/adoption ratio of Prometheus is surprisingly, or shockingly, low. In different words, we hope that the economic incentives around Prometheus Compatibility will entice vendors to assign resources to building out the tests with us. If you always wanted to contribute to Prometheus during work time, this might be the way, and one that will have you touch a lot of highly relevant aspects of Prometheus. There's a variety of ways to &lt;a href="https://prometheus.io/community/"&gt;get in touch&lt;/a&gt; with us.&lt;/p&gt;

&lt;h2 id="register-for-being-tested"&gt;Register for being tested&lt;a class="header-anchor" href="#register-for-being-tested" name="register-for-being-tested"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You can use the &lt;a href="https://prometheus.io/community/"&gt;same communication channels&lt;/a&gt; to get in touch with us if you want to register for being tested. Once the paperwork is in place, we will hand contact information and contract operations over to CNCF.&lt;/p&gt;

&lt;h1 id="test-results" class="page-header"&gt;Test results&lt;a class="header-anchor" href="#test-results" name="test-results"&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;h2 id="full-prometheus-compatibility"&gt;Full Prometheus Compatibility&lt;a class="header-anchor" href="#full-prometheus-compatibility" name="full-prometheus-compatibility"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We know what tests we want to build out, but we are not there yet. As announced previously, it would be unfair to hold this against projects or vendors. As such, test shims are defined as being passed. The currently semi-manual nature of e.g. the &lt;a href="https://promlabs.com/blog/2021/10/14/promql-vendor-compatibility-round-three"&gt;PromQL tests Julius ran this week&lt;/a&gt; means that Julius tested sending data through Prometheus Remote Write in most cases as part of PromQL testing. We're reusing his results in more than one way here. This will be untangled soon, and more tests from different angles will keep ratcheting up the requirements and thus End User confidence.&lt;/p&gt;

&lt;p&gt;It makes sense to look at projects and aaS offerings in two sets.&lt;/p&gt;

&lt;h3 id="projects"&gt;Projects&lt;a class="header-anchor" href="#projects" name="projects"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;h4 id="passing"&gt;Passing&lt;a class="header-anchor" href="#passing" name="passing"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Cortex 1.10.0&lt;/li&gt;
&lt;li&gt;M3 1.3.0&lt;/li&gt;
&lt;li&gt;Promscale 0.6.2&lt;/li&gt;
&lt;li&gt;Thanos 0.23.1&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="not-passing"&gt;Not passing&lt;a class="header-anchor" href="#not-passing" name="not-passing"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;VictoriaMetrics 1.67.0 is not passing and &lt;a href="https://promlabs.com/blog/2021/10/14/promql-vendor-compatibility-round-three#victoriametrics"&gt;does not intend to pass&lt;/a&gt;. In the spirit of End User confidence, we decided to track their results while they position themselves as a drop-in replacement for Prometheus.&lt;/p&gt;

&lt;h3 id="aas"&gt;aaS&lt;a class="header-anchor" href="#aas" name="aas"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;h4 id="passing"&gt;Passing&lt;a class="header-anchor" href="#passing-0" name="passing-0"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Chronosphere&lt;/li&gt;
&lt;li&gt;Grafana Cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="not-passing"&gt;Not passing&lt;a class="header-anchor" href="#not-passing-0" name="not-passing-0"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Managed Service for Prometheus&lt;/li&gt;
&lt;li&gt;Google Cloud Managed Service for Prometheus&lt;/li&gt;
&lt;li&gt;New Relic&lt;/li&gt;
&lt;li&gt;Sysdig Monitor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NB: As Amazon Managed Service for Prometheus is based on Cortex just like Grafana Cloud, we expect them to pass after the next update cycle.&lt;/p&gt;

&lt;h2 id="agent-collector"&gt;Agent/Collector&lt;a class="header-anchor" href="#agent-collector" name="agent-collector"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;h3 id="passing"&gt;Passing&lt;a class="header-anchor" href="#passing-1" name="passing-1"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Grafana Agent 0.19.0&lt;/li&gt;
&lt;li&gt;OpenTelemetry Collector 0.37.0&lt;/li&gt;
&lt;li&gt;Prometheus 2.30.3&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="not-passing"&gt;Not passing&lt;a class="header-anchor" href="#not-passing-1" name="not-passing-1"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Telegraf 1.20.2&lt;/li&gt;
&lt;li&gt;Timber Vector 0.16.1&lt;/li&gt;
&lt;li&gt;VictoriaMetrics Agent 1.67.0&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NB: We tested Vector 0.16.1 instead of 0.17.0 because there are no binary downloads for 0.17.0 and our test toolchain currently expects binaries.&lt;/p&gt;
&lt;/body&gt;&lt;/html&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2021-06-10:/blog/2021/06/10/on-ransomware-naming/</id>
    <title type="html">On Ransomware Naming</title>
    <published>2021-06-10T00:00:00Z</published>
    <updated>2021-06-10T00:00:00Z</updated>
    <author>
      <name>Richard "RichiH" Hartmann</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2021/06/10/on-ransomware-naming/" type="text/html"/>
    <content type="html">&lt;p&gt;As per Oscar Wilde, imitation is the sincerest form of flattery.&lt;/p&gt;

&lt;p&gt;The names "Prometheus" and "Thanos" have &lt;a href="https://cybleinc.com/2021/06/05/prometheus-an-emerging-apt-group-using-thanos-ransomware-to-target-organizations/"&gt;recently been taken up by a ransomware group&lt;/a&gt;. There's not much we can do about that except to inform you that this is happening. There's not much you can do either, except be aware that this is happening.&lt;/p&gt;

&lt;p&gt;While we do &lt;em&gt;NOT&lt;/em&gt; have reason to believe that this group will try to trick anyone into downloading fake binaries of our projects, we still recommend following common supply chain &amp;amp; security practices. When deploying software, do it through one of these mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Binary downloads from the official release pages for &lt;a href="https://github.com/prometheus/prometheus/releases"&gt;Prometheus&lt;/a&gt; and &lt;a href="https://github.com/thanos-io/thanos/releases"&gt;Thanos&lt;/a&gt;, with verification of checksums provided.&lt;/li&gt;
&lt;li&gt;Docker downloads from official project controlled repositories:

&lt;ul&gt;
&lt;li&gt;Prometheus: &lt;a href="https://quay.io/repository/prometheus/prometheus"&gt;https://quay.io/repository/prometheus/prometheus&lt;/a&gt; and &lt;a href="https://hub.docker.com/r/prom/prometheus"&gt;https://hub.docker.com/r/prom/prometheus&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Thanos: &lt;a href="https://quay.io/repository/thanos/thanos"&gt;https://quay.io/repository/thanos/thanos&lt;/a&gt; and &lt;a href="https://hub.docker.com/r/thanosio/thanos"&gt;https://hub.docker.com/r/thanosio/thanos&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Binaries, images, or containers from distributions you trust&lt;/li&gt;
&lt;li&gt;Binaries, images, or containers from your own internal software verification and deployment teams&lt;/li&gt;
&lt;li&gt;Build from source yourself&lt;/li&gt;
&lt;/ul&gt;
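&lt;p&gt;For the binary downloads, each release page also ships a checksums file alongside the artifacts; a typical verification step (filenames are illustrative) looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class="bash"&gt;# Run in the directory containing the downloaded tarball and checksums file.
sha256sum -c --ignore-missing sha256sums.txt
&lt;/code&gt;&lt;/pre&gt;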

&lt;p&gt;Unless you can reasonably trust the specific provenance and supply chain, you should not use the software.&lt;/p&gt;

&lt;p&gt;As there's a non-zero chance that the ransomware group chose the names deliberately and thus might come across this blog post: Please stop. With both the ransomware and the naming choice.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <id>tag:prometheus.io,2021-05-05:/blog/2021/05/04/prometheus-conformance-remote-write-compliance/</id>
    <title type="html">Prometheus Conformance Program: Remote Write Compliance Test Results</title>
    <published>2021-05-05T00:00:00Z</published>
    <updated>2021-05-05T00:00:00Z</updated>
    <author>
      <name>Richard "RichiH" Hartmann</name>
      <uri>https://prometheus.io/blog/</uri>
    </author>
    <link rel="alternate" href="https://prometheus.io/blog/2021/05/04/prometheus-conformance-remote-write-compliance/" type="text/html"/>
    <content type="html">&lt;p&gt;As &lt;a href="https://www.cncf.io/blog/2021/05/03/announcing-the-intent-to-form-the-prometheus-conformance-program/"&gt;announced by CNCF&lt;/a&gt; and by &lt;a href="https://prometheus.io/blog/2021/05/03/introducing-prometheus-conformance-program/"&gt;ourselves&lt;/a&gt;, we're starting a Prometheus conformance program.&lt;/p&gt;

&lt;p&gt;To give everyone an overview of where the ecosystem is before running tests officially, we wanted to show off the newest addition to our happy little bunch of test suites: The Prometheus &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write"&gt;Remote Write&lt;/a&gt; compliance test suite tests the sender part of the Remote Write protocol against our &lt;a href="https://docs.google.com/document/d/1LPhVRSFkGNSuU1fBd81ulhsCPR4hkSZyyBj1SZ8fWOM"&gt;specification&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;During Monday's &lt;a href="https://promcon.io/2021-online/"&gt;PromCon&lt;/a&gt;, &lt;a href="https://twitter.com/tom_wilkie"&gt;Tom Wilkie&lt;/a&gt; presented the test results from the time of recording a few weeks ago. In the live section, he already had an &lt;a href="https://docs.google.com/presentation/d/1RcN58LlS3V5tYCUsftqUvNuCpCsgGR2P7-GoH1MVL0Q/edit#slide=id.gd1789c7f7c_0_0"&gt;update&lt;/a&gt;. Two days later we have two more updates:
The addition of the &lt;a href="https://github.com/prometheus/compliance/pull/24"&gt;observability pipeline tool Vector&lt;/a&gt;, as well as &lt;a href="https://github.com/prometheus/compliance/pull/25"&gt;new versions of existing systems&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, without further ado, the current results in alphabetical order are:&lt;/p&gt;

&lt;table class=" table table-bordered"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sender&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Grafana Agent&lt;/td&gt;
&lt;td&gt;0.13.1&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prometheus&lt;/td&gt;
&lt;td&gt;2.26.0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenTelemetry Collector&lt;/td&gt;
&lt;td&gt;0.26.0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;41%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Telegraf&lt;/td&gt;
&lt;td&gt;1.18.2&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;65%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Timber Vector&lt;/td&gt;
&lt;td&gt;0.13.1&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;35%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VictoriaMetrics Agent&lt;/td&gt;
&lt;td&gt;1.59.0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;76%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The raw results are:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;--- PASS: TestRemoteWrite/grafana (0.01s)
    --- PASS: TestRemoteWrite/grafana/Counter (10.02s)
    --- PASS: TestRemoteWrite/grafana/EmptyLabels (10.02s)
    --- PASS: TestRemoteWrite/grafana/Gauge (10.02s)
    --- PASS: TestRemoteWrite/grafana/Headers (10.02s)
    --- PASS: TestRemoteWrite/grafana/Histogram (10.02s)
    --- PASS: TestRemoteWrite/grafana/HonorLabels (10.02s)
    --- PASS: TestRemoteWrite/grafana/InstanceLabel (10.02s)
    --- PASS: TestRemoteWrite/grafana/Invalid (10.02s)
    --- PASS: TestRemoteWrite/grafana/JobLabel (10.02s)
    --- PASS: TestRemoteWrite/grafana/NameLabel (10.02s)
    --- PASS: TestRemoteWrite/grafana/Ordering (26.12s)
    --- PASS: TestRemoteWrite/grafana/RepeatedLabels (10.02s)
    --- PASS: TestRemoteWrite/grafana/SortedLabels (10.02s)
    --- PASS: TestRemoteWrite/grafana/Staleness (10.01s)
    --- PASS: TestRemoteWrite/grafana/Summary (10.01s)
    --- PASS: TestRemoteWrite/grafana/Timestamp (10.01s)
    --- PASS: TestRemoteWrite/grafana/Up (10.02s)
--- PASS: TestRemoteWrite/prometheus (0.01s)
    --- PASS: TestRemoteWrite/prometheus/Counter (10.02s)
    --- PASS: TestRemoteWrite/prometheus/EmptyLabels (10.02s)
    --- PASS: TestRemoteWrite/prometheus/Gauge (10.02s)
    --- PASS: TestRemoteWrite/prometheus/Headers (10.02s)
    --- PASS: TestRemoteWrite/prometheus/Histogram (10.02s)
    --- PASS: TestRemoteWrite/prometheus/HonorLabels (10.02s)
    --- PASS: TestRemoteWrite/prometheus/InstanceLabel (10.02s)
    --- PASS: TestRemoteWrite/prometheus/Invalid (10.02s)
    --- PASS: TestRemoteWrite/prometheus/JobLabel (10.02s)
    --- PASS: TestRemoteWrite/prometheus/NameLabel (10.03s)
    --- PASS: TestRemoteWrite/prometheus/Ordering (24.99s)
    --- PASS: TestRemoteWrite/prometheus/RepeatedLabels (10.02s)
    --- PASS: TestRemoteWrite/prometheus/SortedLabels (10.02s)
    --- PASS: TestRemoteWrite/prometheus/Staleness (10.02s)
    --- PASS: TestRemoteWrite/prometheus/Summary (10.02s)
    --- PASS: TestRemoteWrite/prometheus/Timestamp (10.02s)
    --- PASS: TestRemoteWrite/prometheus/Up (10.02s)
--- FAIL: TestRemoteWrite/otelcollector (0.00s)
    --- FAIL: TestRemoteWrite/otelcollector/Counter (10.01s)
    --- FAIL: TestRemoteWrite/otelcollector/Histogram (10.01s)
    --- FAIL: TestRemoteWrite/otelcollector/InstanceLabel (10.01s)
    --- FAIL: TestRemoteWrite/otelcollector/Invalid (10.01s)
    --- FAIL: TestRemoteWrite/otelcollector/JobLabel (10.01s)
    --- FAIL: TestRemoteWrite/otelcollector/Ordering (13.54s)
    --- FAIL: TestRemoteWrite/otelcollector/RepeatedLabels (10.01s)
    --- FAIL: TestRemoteWrite/otelcollector/Staleness (10.01s)
    --- FAIL: TestRemoteWrite/otelcollector/Summary (10.01s)
    --- FAIL: TestRemoteWrite/otelcollector/Up (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/EmptyLabels (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/Gauge (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/Headers (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/HonorLabels (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/NameLabel (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/SortedLabels (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/Timestamp (10.01s)
--- FAIL: TestRemoteWrite/telegraf (0.01s)
    --- FAIL: TestRemoteWrite/telegraf/EmptyLabels (14.60s)
    --- FAIL: TestRemoteWrite/telegraf/HonorLabels (14.61s)
    --- FAIL: TestRemoteWrite/telegraf/Invalid (14.61s)
    --- FAIL: TestRemoteWrite/telegraf/RepeatedLabels (14.61s)
    --- FAIL: TestRemoteWrite/telegraf/Staleness (14.59s)
    --- FAIL: TestRemoteWrite/telegraf/Up (14.60s)
    --- PASS: TestRemoteWrite/telegraf/Counter (14.61s)
    --- PASS: TestRemoteWrite/telegraf/Gauge (14.60s)
    --- PASS: TestRemoteWrite/telegraf/Headers (14.61s)
    --- PASS: TestRemoteWrite/telegraf/Histogram (14.61s)
    --- PASS: TestRemoteWrite/telegraf/InstanceLabel (14.61s)
    --- PASS: TestRemoteWrite/telegraf/JobLabel (14.61s)
    --- PASS: TestRemoteWrite/telegraf/NameLabel (14.60s)
    --- PASS: TestRemoteWrite/telegraf/Ordering (14.61s)
    --- PASS: TestRemoteWrite/telegraf/SortedLabels (14.61s)
    --- PASS: TestRemoteWrite/telegraf/Summary (14.60s)
    --- PASS: TestRemoteWrite/telegraf/Timestamp (14.61s)
--- FAIL: TestRemoteWrite/vector (0.01s)
    --- FAIL: TestRemoteWrite/vector/Counter (10.02s)
    --- FAIL: TestRemoteWrite/vector/EmptyLabels (10.01s)
    --- FAIL: TestRemoteWrite/vector/Headers (10.02s)
    --- FAIL: TestRemoteWrite/vector/HonorLabels (10.02s)
    --- FAIL: TestRemoteWrite/vector/InstanceLabel (10.02s)
    --- FAIL: TestRemoteWrite/vector/Invalid (10.02s)
    --- FAIL: TestRemoteWrite/vector/JobLabel (10.01s)
    --- FAIL: TestRemoteWrite/vector/Ordering (13.01s)
    --- FAIL: TestRemoteWrite/vector/RepeatedLabels (10.02s)
    --- FAIL: TestRemoteWrite/vector/Staleness (10.02s)
    --- FAIL: TestRemoteWrite/vector/Up (10.02s)
    --- PASS: TestRemoteWrite/vector/Gauge (10.02s)
    --- PASS: TestRemoteWrite/vector/Histogram (10.02s)
    --- PASS: TestRemoteWrite/vector/NameLabel (10.02s)
    --- PASS: TestRemoteWrite/vector/SortedLabels (10.02s)
    --- PASS: TestRemoteWrite/vector/Summary (10.02s)
    --- PASS: TestRemoteWrite/vector/Timestamp (10.02s)
--- FAIL: TestRemoteWrite/vmagent (0.01s)
    --- FAIL: TestRemoteWrite/vmagent/Invalid (20.66s)
    --- FAIL: TestRemoteWrite/vmagent/Ordering (22.05s)
    --- FAIL: TestRemoteWrite/vmagent/RepeatedLabels (20.67s)
    --- FAIL: TestRemoteWrite/vmagent/Staleness (20.67s)
    --- PASS: TestRemoteWrite/vmagent/Counter (20.67s)
    --- PASS: TestRemoteWrite/vmagent/EmptyLabels (20.64s)
    --- PASS: TestRemoteWrite/vmagent/Gauge (20.66s)
    --- PASS: TestRemoteWrite/vmagent/Headers (20.64s)
    --- PASS: TestRemoteWrite/vmagent/Histogram (20.66s)
    --- PASS: TestRemoteWrite/vmagent/HonorLabels (20.66s)
    --- PASS: TestRemoteWrite/vmagent/InstanceLabel (20.66s)
    --- PASS: TestRemoteWrite/vmagent/JobLabel (20.66s)
    --- PASS: TestRemoteWrite/vmagent/NameLabel (20.66s)
    --- PASS: TestRemoteWrite/vmagent/SortedLabels (20.66s)
    --- PASS: TestRemoteWrite/vmagent/Summary (20.66s)
    --- PASS: TestRemoteWrite/vmagent/Timestamp (20.67s)
    --- PASS: TestRemoteWrite/vmagent/Up (20.66s)
&lt;/code&gt;&lt;/pre&gt;
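&lt;p&gt;The percentage scores in the table above can be derived mechanically from raw &lt;code&gt;go test&lt;/code&gt; output like this. The snippet below is an illustrative sketch, not part of the official compliance tooling; it counts PASS sub-test lines per sender:&lt;/p&gt;

```python
import re
from collections import defaultdict

# Illustrative sketch: a tiny excerpt of "go test" output to score.
RAW = """\
    --- FAIL: TestRemoteWrite/otelcollector/Counter (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/EmptyLabels (10.01s)
    --- PASS: TestRemoteWrite/otelcollector/Gauge (10.01s)
"""

def scores(raw):
    # sender -> [passed, total]; only sub-test lines (three path segments)
    # match the regex, so top-level "TestRemoteWrite/sender" summary lines
    # are skipped automatically.
    counts = defaultdict(lambda: [0, 0])
    for status, sender in re.findall(
        r"--- (PASS|FAIL): TestRemoteWrite/(\w+)/\w+", raw
    ):
        counts[sender][1] += 1
        if status == "PASS":
            counts[sender][0] += 1
    return {s: round(100.0 * p / t) for s, (p, t) in counts.items()}

print(scores(RAW))  # 2 of 3 passing rounds to 67
```

&lt;p&gt;Applied to the full raw results above, the same rounding reproduces the table's figures (e.g. 7 of 17 passing sub-tests gives the OpenTelemetry Collector its 41%).&lt;/p&gt;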

&lt;p&gt;We'll work more on improving our test suites, both by adding more tests &amp;amp; by adding new test targets. If you want to help us, consider adding more targets from &lt;a href="https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage"&gt;our list of Remote Write integrations&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
</feed>

