Large language model agents are finally getting decent at day-to-day analytics work—so long as you feed them reliable tools. Matomo LLM Tooling is my bridge between Optimizely Opal agents and the Matomo stack: a TypeScript SDK, Fastify-powered tool API, and pragmatic helpers that make both reporting (read) and tracking (write) workflows safe for automation.

The project lives in MatoKit and reuses its connector between Optimizely Opal and Matomo. Under the hood, it exposes Matomo’s Reporting and Tracking APIs through an Opal-compatible surface that hides the messy parts (auth, segmentation syntax, rate limits) while keeping the agent in control of the analysis.

Why LLM tooling for Matomo?

Matomo’s APIs are flexible but verbose. Analysts and growth teams spend time wrestling with query strings, token auth, and dataset joins before they can even ask a question. Meanwhile, Opal agents can orchestrate multi-step experiments—yet they need trustworthy, typed primitives to avoid garbage-in/garbage-out. The Matomo LLM Tooling stack aims to:

  • Shorten discovery — serve machine-readable tool metadata so Opal agents understand which actions exist and what parameters they accept.
  • Guarantee contracts — compile a typed SDK backed by generated interfaces for Matomo endpoints to prevent prompt-level guesswork.
  • Balance read vs write — keep reporting queries and tracking payloads in a single toolkit so an agent can fetch baselines, propose experiments, and push instrumentation updates without context switching.
  • Respect governance — centralise auth, caching, and segmentation rules instead of sprinkling tokens across prompts.

What’s shipping first

The current focus is on thin layers that stand up quickly in automation pipelines:

  1. Typed TypeScript SDK — wraps Matomo Reporting and Tracking endpoints with explicit input/output shapes, runtime validation via Zod, and ergonomic helpers for segments, date ranges, and goal metadata (sketched after this list).
  2. Fastify “tools” service — hosts HTTP endpoints that map 1:1 to common analytics flows (e.g., report.getVisitsSummary, events.pushCustomEvent). Each endpoint publishes Opal discovery metadata so agents can self-register without manual wiring (also sketched below).
  3. Connector glue — relies on MatoKit’s Optimizely Opal connector to authenticate agents, enforce scopes, and route responses back into Opal. It keeps the human-in-the-loop approval patterns Opal teams already use.
  4. Utilities for reliability — shared modules for exponential backoff, memoised caching of heavy reports, consistent timezone handling, and graceful degradation when Matomo throttles.
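
To make item 1 concrete, here is a minimal sketch of what a typed reporting call could look like. The schema and function names (VisitsSummaryQuery, getVisitsSummary) and the exact fields are illustrative assumptions rather than the published SDK surface; Matomo's real VisitsSummary.get response carries more fields than shown.

```typescript
import { z } from "zod";

// Hypothetical input/output shapes for a VisitsSummary call. Field names mirror
// Matomo's Reporting API parameters (idSite, period, date, segment).
const VisitsSummaryQuery = z.object({
  idSite: z.number().int().positive(),
  period: z.enum(["day", "week", "month", "year", "range"]),
  date: z.string(), // e.g. "2024-01-01,2024-01-31" for period=range
  segment: z.string().optional(),
});

// A single-period response. A multi-period request (e.g. period=day, date=last30)
// comes back as a date-keyed map, which the real SDK would also need to model.
const VisitsSummaryRow = z.object({
  nb_visits: z.number(),
  nb_uniq_visitors: z.number().optional(),
  bounce_rate: z.string().optional(), // Matomo reports percentages as strings, e.g. "42%"
});

type VisitsSummaryQuery = z.infer<typeof VisitsSummaryQuery>;
type VisitsSummaryRow = z.infer<typeof VisitsSummaryRow>;

// Illustrative client: validate the query, call Matomo's Reporting API, and
// validate the response before handing anything to the agent.
async function getVisitsSummary(
  baseUrl: string,
  tokenAuth: string,
  query: VisitsSummaryQuery,
): Promise<VisitsSummaryRow> {
  const q = VisitsSummaryQuery.parse(query);
  const params = new URLSearchParams({
    module: "API",
    method: "VisitsSummary.get",
    format: "JSON",
    idSite: String(q.idSite),
    period: q.period,
    date: q.date,
    ...(q.segment ? { segment: q.segment } : {}),
  });

  const res = await fetch(`${baseUrl}/index.php?${params}`, {
    method: "POST",
    // Matomo accepts token_auth as a POST parameter, which keeps it out of URLs and logs.
    body: new URLSearchParams({ token_auth: tokenAuth }),
  });
  if (!res.ok) throw new Error(`Matomo responded with HTTP ${res.status}`);

  return VisitsSummaryRow.parse(await res.json());
}
```

Parsing both sides means schema drift or an error payload fails loudly instead of quietly feeding the model malformed numbers.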
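
And a companion sketch for item 2: a Fastify route that serves discovery metadata next to the tool endpoint itself. The route paths and manifest fields are placeholders for whatever Opal's discovery format actually requires; treat this as the shape of the idea, not the wire format.

```typescript
import Fastify from "fastify";

const app = Fastify({ logger: true });

// Illustrative discovery manifest: enough metadata for an agent to learn the tool's
// name and parameters. The real field names depend on Opal's discovery spec.
const toolManifest = {
  name: "report.getVisitsSummary",
  description: "Fetch visit counts and bounce rate for a site over a date range.",
  parameters: {
    idSite: { type: "number", required: true },
    period: { type: "string", enum: ["day", "week", "month", "year", "range"] },
    date: { type: "string", description: "e.g. 'last30' or an explicit range" },
    segment: { type: "string", required: false },
  },
};

// Agents hit this once to learn which tools exist and what they accept.
app.get("/discovery", async () => ({ tools: [toolManifest] }));

// The tool itself: a thin HTTP wrapper that, in the full service, would invoke the
// typed SDK call and attach provenance before responding.
app.post("/tools/report.getVisitsSummary", async (request) => {
  const args = request.body as Record<string, unknown>;
  return { ok: true, echo: args };
});

app.listen({ port: 3333 }).then(() => app.log.info("tool server up"));
```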

Example agent workflow

A typical Opal agent might:

  1. Call the insights.getKpis tool to pull a 30-day baseline for conversion rate and cart abandonment.
  2. Use the SDK’s segmentation helpers to compare cohorts by consent status or geography without leaking tokens into the prompt.
  3. Draft an experiment brief, then invoke tracking.queueEvent to emit a Matomo goal hit for a QA run or to register a new funnel step.
  4. Log the changes and structured metrics back into Opal for review.

Because every tool response includes provenance metadata, humans can audit the agent’s steps before anything ships to production.
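
Here is a rough sketch of how that chain might look from the agent's side, assuming a small helper that POSTs to the tool service and returns data alongside provenance. The helper, argument names, and provenance fields are hypothetical.

```typescript
// Hypothetical agent-side helper; assumes the tool service returns { data, provenance }
// envelopes from its HTTP endpoints.
type ToolResponse<T> = {
  data: T;
  provenance: {
    tool: string;          // which tool produced the data
    matomoMethod: string;  // underlying Reporting/Tracking API method
    requestedAt: string;   // ISO timestamp, shown to reviewers in Opal
  };
};

async function callTool<T>(name: string, args: Record<string, unknown>): Promise<ToolResponse<T>> {
  const res = await fetch(`http://localhost:3333/tools/${name}`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(args),
  });
  if (!res.ok) throw new Error(`${name} failed with HTTP ${res.status}`);
  return (await res.json()) as ToolResponse<T>;
}

async function baselineThenQaHit() {
  // 1. Pull a 30-day KPI baseline.
  const kpis = await callTool<{ conversionRate: number; cartAbandonment: number }>(
    "insights.getKpis",
    { idSite: 1, date: "last30" },
  );

  // 2. Compare cohorts with a segment expression; the Matomo token never enters the
  //    prompt, it stays with the tool service.
  const consented = await callTool("report.getVisitsSummary", {
    idSite: 1,
    period: "range",
    date: "last30",
    segment: "dimension1==consented", // illustrative segment expression
  });

  // 3. Emit a QA goal hit so the proposed instrumentation can be verified end to end.
  const qaHit = await callTool("tracking.queueEvent", {
    idSite: 1,
    idGoal: 7, // hypothetical goal id
    url: "https://example.com/checkout/complete",
  });

  // 4. Every response carries provenance, so the Opal review step can list exactly
  //    which Matomo methods were touched before anything ships.
  return { kpis, consented, qaHit };
}
```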

Roadmap and invites

Near-term work focuses on rounding out the tracking story—batch payloads, offline queues, and richer validation for ecommerce events. Longer term, I want to layer on experiment templates that map Opal hypotheses to Matomo content tracking, plus first-class support for privacy-preserving metrics.

If you rely on Matomo and are experimenting with Opal or other LLM agents, I’d love feedback. Drop me a line, or open an issue in the MatoKit repo so we can compare integration notes.

Getting started

The project is still private while I stabilise the SDK surface. If you want to experiment:

  1. Request access to the MatoKit repository.
  2. Pull the matomo-llm-tooling package and run pnpm install && pnpm dev to start the Fastify tool server locally.
  3. Configure Opal with the emitted discovery manifest so the agent can see the new tools.
  4. Point the server at your Matomo instance using service tokens stored outside the prompt context (a config sketch follows).
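
For step 4, here is a minimal sketch of the server-side config that keeps credentials out of the prompt context. The environment variable names are assumptions, not the package's documented settings.

```typescript
import { z } from "zod";

// Illustrative server-side config loader. The env var names are assumptions; the point
// is that tokens live with the tool server and never appear in agent prompts.
const EnvSchema = z.object({
  MATOMO_BASE_URL: z.string().url(),     // e.g. https://analytics.example.com
  MATOMO_TOKEN_AUTH: z.string().min(1),  // Matomo API token; keep it out of prompts and logs
  MATOMO_DEFAULT_SITE_ID: z.coerce.number().int().positive().default(1),
});

export const config = EnvSchema.parse(process.env);
```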

That combination lets LLM agents ask better analytics questions, propose bolder experiments, and push clean tracking updates without breaking your privacy or compliance model.