Cognition: OpenTelemetry Integration

Route your existing OpenTelemetry spans through Cognition. Traces, errors, runtime data, and security events all go to one place through a single SDK.

If your application already uses OpenTelemetry, you can route those spans through Cognition rather than setting up a separate OTLP exporter. Enable the bridge and your traces join the same pipeline as error events, runtime snapshots, and security threats — all going to one destination.


Prerequisites

Install the OTel peer dependencies:

npm install @opentelemetry/api @opentelemetry/sdk-trace-base

These packages are optional peer dependencies; the SDK functions fully without them. If tracing.enabled is true but the packages are not installed, the SDK logs an error (in debug mode) and continues without tracing.


Setup

import { Cognition } from '@skytells/cognition';

const cognition = Cognition.init({
  apiKey: process.env.SKYTELLS_API_KEY!,
  projectId: process.env.SKYTELLS_PROJECT_ID!,
  tracing: {
    enabled: true,
    sampleRate: 1.0, // 100% of traces (default)
  },
});

When tracing.enabled is true, the SDK:

  1. Dynamically imports @opentelemetry/sdk-trace-base (lazy — zero cost when disabled)
  2. Creates a BasicTracerProvider
  3. Adds a BatchSpanProcessor with the CognitionSpanExporter
  4. Registers the provider globally via provider.register()

After registration, any OTel-instrumented code in the process has its spans exported through Cognition automatically.
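The lazy import in step 1 is what makes tracing zero-cost when disabled. A minimal sketch of the pattern (the helper name loadOptionalModule is illustrative, not the SDK's actual internal API):

```typescript
// Hypothetical helper illustrating lazy, optional dependency loading.
// Returns the module if installed, or null if the import fails.
async function loadOptionalModule(name: string): Promise<unknown | null> {
  try {
    return await import(name);
  } catch {
    // Package not installed — log (in debug mode) and continue without tracing.
    return null;
  }
}
```

If the import resolves, the SDK can go on to construct the BasicTracerProvider; if not, it degrades gracefully as described above.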


How It Works

Your Code (with OTel instrumentation)
        ↓
OTel Tracer (global TracerProvider registered by Cognition)
        ↓
BatchSpanProcessor
  ├── Max queue: 2048 spans
  ├── Max batch size: 512 spans
  └── Flush interval: 5s
        ↓
CognitionSpanExporter
        ↓
TransportManager.capture()
        ↓
EventBuffer → HttpTransport → dsn.skytells.ai

Sample Rate

Value   Meaning
1.0     100% — capture every trace (default)
0.5     50% — capture half of traces
0.1     10% — capture 1 in 10 traces
0.0     0% — effectively disabled

For high-traffic production services, start with 0.1 and adjust based on volume:

{
  tracing: {
    enabled: true,
    sampleRate: 0.1,
  },
}
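Head sampling of this kind typically reduces to a per-trace random draw against sampleRate; a minimal sketch of the decision (illustrative — not necessarily how the SDK decides internally):

```typescript
// Decide once per trace whether to record it.
// rng is injectable so the decision is deterministic in tests; defaults to Math.random.
function shouldSampleTrace(
  sampleRate: number,
  rng: () => number = Math.random,
): boolean {
  return rng() < sampleRate;
}
```

Because the draw happens once per trace (not per span), a sampled trace keeps all of its spans, so parent-child relationships stay intact.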

Span Event Structure

Each OTel span is converted into a Cognition event with type: 'trace_span':

{
  type: 'trace_span',
  timestamp: number,           // When the span was exported

  traceId: string,             // 32-char hex trace ID
  spanId: string,              // 16-char hex span ID
  parentSpanId?: string,       // Parent span ID (if any)
  operationName: string,       // Span name

  kind: number,                // SpanKind: 0=INTERNAL, 1=SERVER, 2=CLIENT, 3=PRODUCER, 4=CONSUMER

  startTime: number,           // ms since epoch
  endTime: number,             // ms since epoch
  durationMs: number,          // endTime - startTime

  status: {
    code: number,              // 0=UNSET, 1=OK, 2=ERROR
    message?: string,
  },

  attributes: Record<string, unknown>, // All span attributes
}

Span events pass through the same beforeSend hook and transport pipeline as error and runtime events.
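Because span events flow through beforeSend, you can filter or redact them like any other event. A sketch that drops sub-millisecond INTERNAL spans (the event shape follows the structure above; the 1ms threshold is illustrative):

```typescript
// Minimal slice of the span event shape shown above.
interface TraceSpanEvent {
  type: 'trace_span';
  durationMs: number;
  kind: number; // 0 = INTERNAL
  [key: string]: unknown;
}

// beforeSend-style hook: return null to drop an event, or the event to keep it.
function dropTrivialInternalSpans<E extends { type: string }>(event: E): E | null {
  if (event.type !== 'trace_span') return event; // only filter span events
  const span = event as unknown as TraceSpanEvent;
  if (span.kind === 0 && span.durationMs < 1) return null; // noise: <1ms internal spans
  return event;
}
```

Returning the event unchanged for non-span types keeps error and runtime events unaffected by the filter.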


With Auto-Instrumentation

Combine Cognition tracing with OTel auto-instrumentation for end-to-end HTTP, database, and gRPC tracing — no manual span creation needed:

import { Cognition } from '@skytells/cognition';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { registerInstrumentations } from '@opentelemetry/instrumentation';

// Initialize Cognition first — this registers the global TracerProvider
const cognition = Cognition.init({
  apiKey: process.env.SKYTELLS_API_KEY!,
  projectId: process.env.SKYTELLS_PROJECT_ID!,
  tracing: {
    enabled: true,
    sampleRate: 0.1,
  },
});

// Register auto-instrumentation — uses the TracerProvider Cognition registered
registerInstrumentations({
  instrumentations: [
    getNodeAutoInstrumentations({
      '@opentelemetry/instrumentation-http': {
        ignoreIncomingRequestHook: (req) => req.url === '/health',
      },
      '@opentelemetry/instrumentation-fs': { enabled: false },
    }),
  ],
});

// Now all HTTP, database, and gRPC calls are traced automatically

Initialize Cognition before calling registerInstrumentations. The auto-instrumentation libraries use whatever TracerProvider is registered globally at the time they initialize.


With a Custom TracerProvider

If you already have your own TracerProvider, you can add Cognition as an additional exporter rather than letting Cognition register its own:

import { CognitionSpanExporter } from '@skytells/cognition';
import { BasicTracerProvider, BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';

const cognition = Cognition.init({
  apiKey: process.env.SKYTELLS_API_KEY!,
  projectId: process.env.SKYTELLS_PROJECT_ID!,
  tracing: { enabled: false }, // Don't let Cognition register its own provider
});

// Create your own provider
const provider = new BasicTracerProvider();

// Add Cognition exporter
const cognitionExporter = new CognitionSpanExporter((event) => {
  cognition.captureEvent(event);
});

provider.addSpanProcessor(new BatchSpanProcessor(cognitionExporter));

// Add other exporters alongside (Jaeger, Zipkin, OTLP, etc.)
// provider.addSpanProcessor(new BatchSpanProcessor(jaegerExporter));

provider.register();

CognitionSpanExporter API

Method                     Description
export(spans, callback)    Converts OTel spans to Cognition events and passes them to the callback
shutdown()                 Marks the exporter as shut down; rejects future exports
forceFlush()               No-op — flushing is handled by the Cognition transport layer

import { CognitionSpanExporter } from '@skytells/cognition';

// Standalone usage (advanced)
const exporter = new CognitionSpanExporter((event) => {
  console.log(event); // Handle the span event
});

BatchSpanProcessor Configuration

Cognition configures the BatchSpanProcessor with these defaults:

Setting                 Value
maxQueueSize            2048
maxExportBatchSize      512
scheduledDelayMillis    5000 (ms)

These values are not configurable via the Cognition config. For custom processor settings, use CognitionSpanExporter directly with your own BatchSpanProcessor.
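If you do wire up your own BatchSpanProcessor, its second constructor argument accepts these settings. A sketch of such a config object (the values are illustrative for a high-throughput service, not recommendations):

```typescript
// Hypothetical tuning — compare against Cognition's defaults noted inline.
const processorConfig = {
  maxQueueSize: 4096,         // Cognition default: 2048
  maxExportBatchSize: 1024,   // Cognition default: 512
  scheduledDelayMillis: 2000, // Cognition default: 5000
};
// Usage: new BatchSpanProcessor(cognitionExporter, processorConfig)
```

Larger batches reduce export frequency at the cost of more memory and a longer window of unsent spans if the process crashes.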


Graceful Shutdown

When cognition.close() is called, the OTel lifecycle is properly terminated:

  1. provider.shutdown() is called
  2. The BatchSpanProcessor flushes any queued spans
  3. The exporter processes remaining spans
  4. The Cognition transport flushes final events

No spans are lost during graceful shutdown.
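The ordering above matters: the provider must flush its spans into the transport before the transport performs its final flush. A simplified model of that sequencing (mock objects with hypothetical names, not the SDK's internals):

```typescript
// Awaits each shutdown step in order, so no step starts before the prior completes.
async function closeInOrder(
  provider: { shutdown: () => Promise<void> },
  transport: { flush: () => Promise<void> },
): Promise<void> {
  await provider.shutdown(); // flushes BatchSpanProcessor queue + exporter first
  await transport.flush();   // then the Cognition transport sends final events
}
```

Reversing the two awaits would flush the transport while spans are still queued in the processor, which is exactly the span loss graceful shutdown avoids.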


Troubleshooting

"OpenTelemetry packages are required for tracing"

npm install @opentelemetry/api @opentelemetry/sdk-trace-base

Spans not appearing in the Console

  1. Ensure tracing: { enabled: true } in your config
  2. Confirm OTel packages are installed
  3. Enable debug: true to see initialization logs
  4. Verify your instrumentation libraries are using the global TracerProvider (initialized before registerInstrumentations)

Duplicate TracerProvider registered

  • If you're registering your own provider elsewhere, set tracing: { enabled: false } and use CognitionSpanExporter directly
  • Or remove the other provider registration
