Streaming Render Service Metrics

Push metrics for CPU, memory, and more to your OTel-compatible provider.

Workspaces with a Professional plan or higher can push a variety of service metrics (memory usage, disk capacity, etc.) to an OpenTelemetry-compatible observability provider, such as New Relic, Honeycomb, or Grafana.

Example OpenTelemetry metrics in Grafana

Render does not emit metrics for the following:

General setup

The following steps must be performed by a workspace admin:

  1. From your workspace’s home in the Render Dashboard, select Integrations > Observability in the left sidebar:

    Selecting Integrations in the Render Dashboard

  2. Under Metrics Stream, click + Add destination.

    The following dialog appears:

    Setting a default metrics export in the Render Dashboard

  3. Select your observability provider from the dropdown. The dialog updates to display fields specific to your provider.

    If your provider isn’t listed, select Custom. See Other providers (custom) below for setup instructions.

  4. Fill in the provider-specific fields.

    • See instructions for your provider below.
  5. Click Add destination.

You’re all set! Your provider will start receiving reported metrics from Render shortly.

Provider-specific config

When creating a metrics stream for your Render workspace, you provide different information depending on your observability provider:

Provider-specific metrics config in the Render Dashboard

See details for each supported provider below, along with instructions for other providers. Please also consult your provider’s documentation for additional information.

If there’s a provider you’d like us to add to this list, please submit a feature request.

New Relic

For Region, select US or EU according to where your New Relic data is hosted.

For License key, create a new key with the following steps:

  1. From your New Relic API keys page, click Create a key.

    The following dialog appears:

    Creating a New Relic API key

  2. For the Key type, select Ingest - License.

  3. Add a descriptive Name (e.g., “Render Metrics Integration”).

  4. Click Create Key.

Honeycomb

For Region, select US or EU according to where your Honeycomb data is hosted.

For API key, create a new key with the following steps:

  1. In your Honeycomb dashboard, hover over Manage Data on the bottom left and click Send Data:

    Clicking Send Data in Honeycomb

  2. Click Manage API keys.

  3. Click Create Ingest API Key.

    The following dialog appears:

    Creating a Honeycomb API key

  4. Add a descriptive Name (e.g., “Render Metrics Integration”).

  5. Make sure Can create services/datasets is enabled.

  6. Click Create.

Grafana

Obtain both your Endpoint and API Token with the following steps:

  1. From your Grafana Cloud Portal (grafana.com/orgs/[your-org-name]), click Details for the Grafana stack you want to use:

    Selecting a Grafana stack in the Grafana Cloud Portal

  2. Find the OpenTelemetry tile and click Configure.

  3. Copy the value of Endpoint for sending OTLP signals (this is your Endpoint).

  4. Under Password / API Token, click Generate now.

  5. Add a token name (e.g., render_metrics_integration).

  6. Click Create Token.

  7. Copy the generated value starting with glc_ (this is your API Token).

For more details, see the Grafana documentation.

Datadog

To simplify metrics ingestion with Datadog, Render pushes metrics in Datadog’s native format instead of using OpenTelemetry.

Specify your Datadog site according to where your Datadog data is hosted.

For API key, generate a new organization-level API key from your organization settings page. You cannot use an application key or a user-scoped API key.

Better Stack

Obtain both your Ingesting host and Source token with the following steps:

  1. From your Telemetry > Sources page in Better Stack, click Connect source.

    The following page appears:

    Creating a Better Stack source

  2. Add a descriptive Name (e.g., “Render Metrics Integration”).

  3. Select OpenTelemetry as the Platform.

  4. Click Connect source.

    Better Stack creates the new source and redirects you to its details page.

  5. Copy your source’s Ingesting host URL and Source token.

Other providers (custom)

Consult this section only if your observability provider isn’t listed above.

Render can push service metrics to your OpenTelemetry-compatible endpoint if that endpoint authenticates requests via an API key provided as a bearer token in an Authorization header.
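As a rough illustration of that contract (not anything Render provides), here's a minimal endpoint sketch that accepts pushed metrics only when the expected bearer token is present. The listening port and key value are assumptions for the example:

```python
# Minimal sketch of an OTLP-compatible HTTP endpoint that authenticates
# requests with an API key sent as a bearer token in the Authorization
# header. EXPECTED_KEY and the listening port are hypothetical values.
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_KEY = "my-provider-api-key"  # hypothetical API key

class MetricsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Reject any request that doesn't carry the expected bearer token.
        if self.headers.get("Authorization") != f"Bearer {EXPECTED_KEY}":
            self.send_response(401)
            self.end_headers()
            return

        payload = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # ... hand the OTLP payload off to your metrics backend here ...
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 4318), MetricsHandler).serve_forever()
```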

If your provider’s endpoint supports authentication via bearer token:

  1. Consult your provider’s documentation to obtain your OpenTelemetry endpoint and API key.

  2. Specify Custom as your provider in the metrics stream creation dialog, then provide your endpoint and API key in the corresponding fields.

If your provider’s endpoint requires a different authentication method:

  1. Please submit a feature request to let us know about your provider’s requirements.

  2. You can spin up your own OpenTelemetry collector (such as the official vendor-agnostic implementation). Your collector’s endpoint can receive metrics from Render, then transform and forward them to your provider using whatever authentication method it expects.
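For example, a minimal configuration for the vendor-agnostic OpenTelemetry Collector might look like the sketch below. The provider endpoint, header name, and environment variable are placeholders rather than values from Render or any specific provider; see the Collector documentation for details:

```yaml
# Sketch of an OpenTelemetry Collector config that accepts OTLP metrics
# over HTTP and forwards them to a provider with custom authentication.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlphttp:
    endpoint: https://metrics.example.com/otlp   # hypothetical provider endpoint
    headers:
      X-Api-Key: ${env:PROVIDER_API_KEY}         # whatever auth your provider expects

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
```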

Reported metrics

Render streams service metrics that pertain to the following categories: CPU, memory, HTTP requests, and data storage.

All metrics use the OpenTelemetry (OTLP) JSON format. The first component of each metric’s name is render (e.g., render.service.memory.usage).
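As a sketch of what that looks like in practice, the snippet below lists the Render metric names present in a received payload. It assumes the standard OTLP/JSON structure (resourceMetrics > scopeMetrics > metrics); adapt it to your own tooling:

```python
# Rough sketch: list the Render metric names in an OTLP/JSON payload.
import json

def list_render_metrics(payload: str) -> set[str]:
    names = set()
    data = json.loads(payload)
    for resource_metrics in data.get("resourceMetrics", []):
        for scope_metrics in resource_metrics.get("scopeMetrics", []):
            for metric in scope_metrics.get("metrics", []):
                # Every Render-emitted metric name starts with "render."
                if metric.get("name", "").startswith("render."):
                    names.add(metric["name"])
    return names
```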

Some observability providers transform metric names to match their conventions.

For example, Grafana converts the metric render.service.memory.usage to render_service_memory_usage_bytes.

After you set up your metrics stream, inspect incoming data in your provider’s dashboard to verify how it identifies Render metrics.

See names, descriptions, and included properties for each reported metric below.

Universal properties

All reported metrics include the following properties:

• service.name: The name of the service (e.g., my-service). Grafana displays this property as job.

• service.id: The ID of the service (e.g., srv-abc123).

• service.instance.id: For most metrics, this is the ID of the metric’s associated service instance (e.g., srv-abc123-def456). This is not the case for HTTP request metrics. Everything before the final hyphen is the service ID (srv-abc123), and the final component (def456) uniquely identifies the instance. This value enables you to segment metrics by individual instances of a scaled service, and to identify when a service’s instances are cycled as part of a redeploy.

The following properties are also universal but optional:

• service.project: The name of the service’s associated project, if it belongs to one (otherwise omitted).

• service.environment: The name of the service’s associated environment, if it belongs to one (otherwise omitted).
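As a quick illustration of the service.instance.id convention described above, splitting on the final hyphen separates the service ID from the instance component (the ID below is hypothetical):

```python
# Split a service.instance.id value into its service ID and instance parts.
instance_id = "srv-abc123-def456"   # hypothetical value

service_id, instance = instance_id.rsplit("-", 1)
print(service_id)  # srv-abc123
print(instance)    # def456
```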

CPU

These metrics apply to all compute instances and datastores.

render.service.cpu.limit

The maximum amount of CPU available to a particular service instance (as determined by its instance type).

Includes universal properties only.

render.service.cpu.time

The cumulative amount of CPU time used by a particular service instance, in seconds.

To visualize changes to CPU load over time, apply a rate() function or similar in your observability provider.

Includes universal properties only.
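To illustrate what a rate() function computes, here's the same arithmetic applied to two hypothetical samples of this cumulative counter:

```python
# Two hypothetical samples of render.service.cpu.time (cumulative CPU seconds),
# taken 60 seconds of wall-clock time apart.
t0, cpu0 = 1_700_000_000, 1234.5
t1, cpu1 = 1_700_000_060, 1252.5

cpu_rate = (cpu1 - cpu0) / (t1 - t0)  # CPU seconds consumed per wall-clock second
print(f"Average CPU usage over the window: {cpu_rate:.2f} cores")  # 0.30 cores
```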

Memory

These metrics apply to all compute instances and datastores.

render.service.memory.limit

The maximum amount of memory available to a particular service instance (as determined by its instance type), in bytes.

Includes universal properties only.

render.service.memory.usage

The amount of memory that a particular service instance is currently using, in bytes.

Includes universal properties only.
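A common way to combine the two memory metrics is to compute utilization, for example to drive an alert threshold. The byte values below are hypothetical:

```python
# Hypothetical sample values, in bytes.
memory_limit = 512 * 1024 * 1024  # render.service.memory.limit
memory_usage = 384 * 1024 * 1024  # render.service.memory.usage

utilization = memory_usage / memory_limit
print(f"Memory utilization: {utilization:.0%}")  # 75%
```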

HTTP requests

These metrics apply only to web services.

HTTP request metrics are not reported per instance.

Render aggregates these metrics across all instances of a given web service. For these metrics, the value of service.instance.id matches that of service.id.

render.service.http.requests.total

The cumulative number of HTTP requests that a given service has received across all instances, segmented by the properties below.

To visualize changes to request load over time, apply a rate() function or similar in your observability provider.

Includes universal properties, along with the following:

• host: The destination domain for incoming requests. This can be your service’s onrender.com domain or any custom domain you’ve added.

• status_code: The HTTP status code returned by the service (200, 404, and so on).
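Because the counter is segmented by status_code, you can derive values such as an error ratio from it. The request counts below are hypothetical:

```python
# Hypothetical request counts by status_code for one service.
requests_by_status = {"200": 9_500, "404": 300, "500": 200}

total = sum(requests_by_status.values())
errors = sum(count for code, count in requests_by_status.items() if code.startswith("5"))
print(f"5xx error ratio: {errors / total:.1%}")  # 2.0%
```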

render.service.http.response.latency

Provides a particular web service’s p50, p95, or p99 response time, segmented by the properties below.

Includes universal properties, along with the following:

• quantile: Indicates the percentile of the provided latency value. One of 0.50 (p50), 0.95 (p95), or 0.99 (p99).

• host: The destination domain for incoming requests. This can be your service’s onrender.com domain or any custom domain you’ve added.

• status_code: The HTTP status code returned by the service instance (200, 404, and so on).

Data storage

Each of these metrics applies to one or more of Render Postgres, Render Key Value, and persistent disks.

render.service.disk.capacity

The total capacity of a service’s persistent storage, in bytes.

Applies to Render Postgres databases and persistent disks.

Includes universal properties only.

render.service.disk.usage

The amount of occupied persistent storage for a service, in bytes.

Applies to Render Postgres databases and persistent disks.

Includes universal properties only.

render.keyvalue.connections

The number of active connections to a particular Render Key Value instance.

Includes universal properties only.

render.postgres.connections

The number of active connections to a particular Render Postgres instance.

Includes universal properties, along with the following:

• database_name: The name of the PostgreSQL database created in the instance (e.g., my_db_abcd). This value is helpful if your Render Postgres instance hosts multiple databases. It usually does not match the value of service.name.

render.postgres.replication.lag

The replication delay between a particular Render Postgres instance and its read replica (if it has one), in milliseconds.

Includes universal properties only.

History of changes to reported metrics

• 2025-03-11: Added initial set of reported metrics.