Introduction
The tyk-demo opentelemetry-demo deployment runs the community OpenTelemetry Demo application alongside Tyk Gateway with a complete, pre-configured observability stack. With a single command you get a working environment with realistic API traffic, structured logs, distributed traces, and metrics, all visualized in Grafana.
This guide covers:
- Running the full stack locally with ./up.sh opentelemetry-demo
- Navigating the four pre-built Tyk Grafana dashboards
- Understanding the Tyk Gateway configuration that enables OTLP export
- Understanding the OTel Collector pipeline that routes telemetry to Grafana backends
- Adapting the configuration to your own Docker Compose environment
Prerequisites
- tyk-demo repository cloned locally
- Docker and Docker Compose installed
- At least 8 GB RAM available — the full stack runs approximately 25 containers
- A Tyk licence (the demo uses Tyk Self-Managed)
Quick Start
Before running the command below, complete the initial repository setup described in the tyk-demo README — this includes configuring your Tyk licence and any other prerequisites. From the tyk-demo repository root, run ./up.sh opentelemetry-demo. Once the stack is up, the following services are available:
| Service | URL |
|---|---|
| OpenTelemetry Demo UI | http://localhost:8085 |
| Grafana | http://localhost:8085/grafana/ |
| Load Generator UI | http://localhost:8085/loadgen/ |
| Feature Flags | http://localhost:8085/feature/ |
| Tyk Dashboard | http://localhost:3000 |
Grafana is pre-provisioned with all data sources and dashboards. No manual setup is required — navigate to http://localhost:8085/grafana/ and open the Tyk Demo folder to find the four dashboards.
Architecture
Traffic Flow
The demo wires together a multi-language e-commerce application through Tyk Gateway. For the full service architecture of the underlying OpenTelemetry Demo application, see the OpenTelemetry Demo architecture. The Frontend Proxy routes all /api/* traffic through Tyk Gateway, which applies rate limiting, authentication, and telemetry enrichment before forwarding to the frontend microservice. Tyk Dashboard manages API definitions centrally.
Telemetry Data Flow
All services — both the microservices and Tyk Gateway — send telemetry to a single OpenTelemetry Collector endpoint. The Collector enriches, transforms, and fans out to multiple backends that Grafana queries. The key architectural point is that the OTel Collector is the single ingestion endpoint for all telemetry: services only need to know one address (otel-collector:4317). In this setup, the Collector fans out traces to both Jaeger and Tempo, and logs to both Loki and OpenSearch — demonstrating how the Collector's pipeline configuration lets you route signals to multiple backends without touching service code. The pre-built Grafana dashboards query Tempo for traces and Loki for logs.
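As an illustrative sketch, the fan-out described above corresponds to a Collector service section along these lines (the pipeline shape is what matters here; the exporter component IDs are assumptions, so check the demo's otelcol-config.yml for the authoritative version):

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger, otlp/tempo]    # one trace stream, two backends
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [otlphttp/loki, opensearch]  # logs duplicated to Loki and OpenSearch
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```

Adding or removing a backend is a one-line change to an exporters list; no service code changes or redeploys are involved.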
Grafana Dashboards
Open Grafana at http://localhost:8085/grafana/ and navigate to Dashboards → Tyk Demo folder. Four dashboards are pre-provisioned.
Fleet Health
Audience: Platform engineers and DevOps managing multiple gateway instances. Shows the operational health of your gateway fleet: instance count and APIs loaded, a config drift gauge (whether all instances are serving the same API set), Go runtime health (heap, GC, goroutines), per-gateway request rates, and error log streams sourced from Loki. Use this dashboard to quickly confirm all gateway instances are healthy and in sync before investigating API-level issues.
API Portfolio Overview
Audience: API platform leads and on-call SRE. Provides a high-level view of the full API estate: request rate, error rate, P95 latency, and active API count as KPIs; per-API leaderboards ranked by traffic, latency, and error rate; multi-tenancy breakdown by organisation and X-Tenant-ID header; consumer segmentation by API key, OAuth client ID, and developer portal application; and SLO tracking with error budget and burn rate. Clicking an API bar links directly to the Troubleshooting dashboard for that API.
API Troubleshooting
Audience: Backend engineers and SRE investigating a specific API. Scoped to a single API (use the api_id variable at the top). Shows latency attribution split between gateway processing time and upstream response time, error breakdown by response flag (for example, URS = upstream 5xx, URT = upstream timeout), recent and error-filtered distributed traces via Grafana Tempo, and structured access log analysis with trace ID correlation. Use this dashboard to answer “where is the slowness?” and correlate logs with specific traces.
OTLP Metrics Explorer
Audience: Engineers evaluating or customising Tyk’s OTLP instrumentation. A reference dashboard showing all metrics that Tyk Gateway exports out of the box: request counts, latency histograms, error tracking, and dimension-rich counters segmented by API key, OAuth client, tenant ID, custom response headers, config data, and context variables. No application code changes or custom plugins are needed to get these — all dimensions are enriched by the gateway at the proxy layer.
How It Works: Tyk Gateway Observability Configuration
Tyk Gateway exports all three telemetry signals — traces, logs, and metrics — to the OTel Collector. Each signal is enabled and configured independently.
Traces
Enable distributed tracing and configure the OTel Collector as the export endpoint. The settings can be supplied as environment variables or in tyk.conf.
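A minimal sketch in environment-variable form, assuming Tyk's documented TYK_GW_OPENTELEMETRY_* mapping (verify the exact names and values against your Gateway version):

```shell
TYK_GW_OPENTELEMETRY_ENABLED=true
TYK_GW_OPENTELEMETRY_EXPORTER=grpc                    # or "http"
TYK_GW_OPENTELEMETRY_ENDPOINT=otel-collector:4317
TYK_GW_OPENTELEMETRY_SAMPLING_TYPE=TraceIDRatioBased
TYK_GW_OPENTELEMETRY_SAMPLING_RATE=0.1
```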
TraceIDRatioBased with a rate of 0.1 samples 10% of traces — a reasonable default for production. To capture all traces while debugging, set the rate to 1.0 or use AlwaysOn. For full sampling configuration options, see the Tyk Gateway configuration reference.
When tracing is enabled, Tyk generates two spans per request: a parent span covering the full request lifecycle and a child span for the upstream call. You can enable detailed tracing per API to also generate a span for each middleware step.
Logs
Tyk Gateway writes structured logs to stdout/stderr. Two settings, supplied as environment variables or in tyk.conf, work together to make gateway logs useful in Grafana.
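In environment-variable form (these two names also appear in the version notes later in this guide), a minimal sketch:

```shell
TYK_GW_LOGFORMAT=json          # structured application logs (v5.6.0+)
TYK_GW_ACCESSLOGS_ENABLED=true # per-request access log entries (v5.8.0+)
```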
log_format: json (requires Tyk Gateway v5.6.0+): outputs all application logs as JSON objects, which the OTel Collector can parse and index without custom extraction logic.
access_logs.enabled: true (requires Tyk Gateway v5.8.0+): enables per-request access log entries including api_id, api_name, path, method, status, latency_total, upstream_latency, and client_ip. When tracing is also enabled, each access log entry includes a trace_id field, enabling direct correlation between an access log line and the corresponding distributed trace in Grafana Tempo.
Gateway logs are not exported via OTLP. Instead, the OTel Collector reads them from the Docker container log path using the filelog receiver. See How It Works: OTel Collector Pipeline below for details.
Metrics
Enable OTLP metrics export and configure the push interval, again via environment variables or tyk.conf.
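A hedged sketch in environment-variable form — the exact variable names here are assumptions extrapolated from Tyk's TYK_GW_OPENTELEMETRY_ prefix convention, so confirm them against the Tyk Gateway configuration reference before use:

```shell
TYK_GW_OPENTELEMETRY_METRICS_ENABLED=true        # name assumed; verify against docs
TYK_GW_OPENTELEMETRY_METRICS_EXPORTINTERVAL=5    # demo uses 5s; default is 30s
```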
export_interval controls how often (in seconds) the gateway pushes metrics to the collector. The default is 30 seconds; the demo uses 5 seconds for more responsive dashboard updates.
Tyk exports default gateway metrics (request rate, latency, error rate, Go runtime) automatically. You can add custom counters and histograms that attach API gateway context — request headers, response headers, session data, JWT claims — as metric dimensions. The demo configures 19 such instruments via TYK_GW_OPENTELEMETRY_METRICS_APIMETRICS; one of them, for example, counts requests segmented by tenant (the X-Tenant-ID request header), API ID, and HTTP status code — with no changes to upstream application code. For the full list of available dimension sources and the complete demo metric set, see Custom Metrics.
How It Works: OTel Collector Pipeline
The OTel Collector acts as the central telemetry hub. Its configuration (otelcol-config.yml) defines what data it receives, how it processes it, and where it sends it.
Receivers
Two receivers handle Tyk’s telemetry:
- OTLP receiver — accepts traces and metrics pushed directly from Tyk Gateway.
- filelog receiver — tails Docker container log files. The Collector mounts the host’s Docker log directory (at /hostfs/var/lib/docker/containers) so it can read log files from the host. A filter step ensures only tyk-gateway container logs are processed, and a JSON parser extracts the inner Tyk log payload (timestamp, severity, fields) from the Docker JSON wrapper.
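A sketch of the two receivers (the container-name filter step described above is omitted here for brevity; the operator chain and paths are illustrative, not copied from the demo config):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  filelog:
    # Docker writes one JSON-wrapped log file per container under this path
    include:
      - /hostfs/var/lib/docker/containers/*/*-json.log
    operators:
      # unwrap the Docker envelope: {"log": "...", "stream": "...", "time": "..."}
      - type: json_parser
        parse_from: body
      # parse the inner Tyk JSON payload (timestamp, severity, fields)
      - type: json_parser
        parse_from: attributes.log
```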
This filelog configuration is specific to Docker. For Kubernetes, the log path and parser differ — see Collecting Gateway Logs with OTel on Kubernetes.
Processors
A Tyk-specific processor promotes gateway resource attributes into metric datapoint labels, making them available as Grafana variables. Tyk Gateway sets these resource attributes (tyk.gw.id, tyk.gw.group.id, tyk.gw.tags, tyk.gw.dataplane) once at startup. This processor copies them into every metric datapoint’s attribute set so you can filter or group by gateway instance in Grafana dashboards.
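As a generic sketch (not the demo's exact configuration), the same resource-to-datapoint promotion can be expressed with the contrib transform processor's OTTL datapoint context:

```yaml
processors:
  transform/tyk_gw_labels:
    metric_statements:
      - context: datapoint
        statements:
          # copy gateway-level resource attributes onto every metric datapoint
          - set(attributes["tyk.gw.id"], resource.attributes["tyk.gw.id"])
          - set(attributes["tyk.gw.group.id"], resource.attributes["tyk.gw.group.id"])
```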
Export Pipelines
Three pipelines route telemetry to the appropriate backends.
Adapting to Your Environment
To replicate this observability stack in your own Docker Compose deployment, you need three changes: Tyk Gateway environment variables, an OTel Collector service and configuration, and Grafana dashboard provisioning.
1. Tyk Gateway Service
Add these environment variables to your tyk-gateway service in docker-compose.yml:
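A minimal sketch (the TYK_GW_LOGFORMAT and TYK_GW_ACCESSLOGS_ENABLED names are noted below; the TYK_GW_OPENTELEMETRY_* names follow Tyk's documented mapping and should be verified against your Gateway version):

```yaml
  tyk-gateway:
    environment:
      # tracing and metrics export to the OTel Collector
      - TYK_GW_OPENTELEMETRY_ENABLED=true
      - TYK_GW_OPENTELEMETRY_ENDPOINT=otel-collector:4317
      # structured application logs and per-request access logs
      - TYK_GW_LOGFORMAT=json
      - TYK_GW_ACCESSLOGS_ENABLED=true
```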
TYK_GW_LOGFORMAT requires Tyk Gateway v5.6.0+. TYK_GW_ACCESSLOGS_ENABLED requires Tyk Gateway v5.8.0+.
2. OTel Collector Service and Configuration
Add an otel-collector service to your docker-compose.yml. Mount the Docker container log directory into the Collector container read-only so the filelog receiver can tail gateway logs.
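A sketch of the service definition (image tag and mount paths are illustrative; pin a specific contrib version in practice, since the filelog receiver ships in the contrib distribution):

```yaml
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest  # pin a version in production
    command: ["--config=/etc/otelcol/otelcol-config.yml"]
    volumes:
      - ./otelcol-config.yml:/etc/otelcol/otelcol-config.yml:ro
      # read-only host mount so the filelog receiver can tail Docker container logs
      - /var/lib/docker/containers:/hostfs/var/lib/docker/containers:ro
    ports:
      - "4317:4317"  # OTLP gRPC
```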
Create otelcol-config.yml alongside your docker-compose.yml:
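A minimal single-backend starting point (backend hostnames and ports — tempo, prometheus, loki — are assumptions for a typical Grafana stack; adjust to your services and add further exporters to fan out as the demo does):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true
  prometheusremotewrite:
    endpoint: http://prometheus:9090/api/v1/write
  otlphttp/loki:
    endpoint: http://loki:3100/otlp

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/loki]
```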
3. Grafana Dashboards
The four Tyk dashboards are available as JSON provisioning files in the tyk-demo repository.
- Copy the JSON files into your Grafana dashboard provisioning directory (typically mapped via grafana/provisioning/dashboards/)
- Ensure Prometheus, Loki, and Tempo datasources are configured in Grafana
- Open each dashboard JSON and update the datasource.uid values to match your datasource UIDs (find these in Grafana under Connections → Data sources)