Adding Connectors

This is the complete guide to adding a new connector to ToolRouter. A connector lets a user authorize ToolRouter against one of their SaaS accounts — Notion workspace, Google Workspace, Slack team, Linear org, Shopify store — so tools can act on their behalf without the user ever typing or pasting a token.

Adding a connector is deliberately boring. Most providers need one new file, a one-line registry edit, two env vars, and a test file that is mostly runConnectorContractTests(). The framework carries PKCE, state, refresh locks, token persistence, discovery annotation, and the frontend card — you only describe the quirks that make the provider different.

What is a connector (and how is it different from a provider)?

ToolRouter has two independent integration layers. They sound similar, but they are not interchangeable.

| | Provider | Connector |
| --- | --- | --- |
| What it is | An upstream API the platform pays to call on behalf of all users | A SaaS account the user authorizes us to act on |
| Auth model | Platform-held API key | Per-user OAuth access token |
| Examples | Exa, fal.ai, Prodia, Serper, OpenRouter, ElevenLabs | Notion, Google Workspace, Slack, Linear, Shopify, HubSpot |
| Where config lives | src/tools/shared/<provider>-client.ts + src/core/provider-catalog.ts | src/connectors/catalog/<name>.ts + src/connectors/catalog/index.ts |
| How tools declare it | requirements: [{ type: 'secret', name: 'fal', ... }] | requirements: [{ type: 'connector', name: 'notion', connector: 'notion', ... }] |
| How handlers use it | resolveKey(context, 'fal', 'FAL_KEY') | context.getConnectorToken('notion') |
| Billed to | Platform (passed through to the user via raw_cost) | Nothing — the user's own SaaS account |
| User-facing copy | "powered by [N] models" — but never name the provider | "Connect your Notion workspace" |

Quick rule of thumb: if a tool could run without a specific user (e.g. "search the public web"), it uses a provider. If a tool only makes sense against a specific person's account (e.g. "list my unread Slack DMs"), it uses a connector.

Adding a new upstream API you pay for? Read Adding Providers instead. This guide is only for OAuth-based user connectors.
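The rule of thumb shows up directly in how tools declare their requirements. A sketch, assuming hypothetical tool names — real tool definitions carry more fields (description, parameters, handler), but the requirement shapes follow the snippets in the comparison table above:

```typescript
// Illustrative sketch only; the requirement shapes follow the comparison
// table above, and the tool names are made up for this example.

// Provider-backed: the platform's own key, works for any user.
const webSearch = {
  name: 'web_search',
  requirements: [{ type: 'secret', name: 'serper' }],
};

// Connector-backed: only meaningful against a specific user's account.
const listUnreadDms = {
  name: 'slack_list_unread_dms',
  requirements: [{ type: 'connector', name: 'slack', connector: 'slack' }],
};
```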

When should I add a new connector?

Add a connector when there is a tool (or a family of tools) that only makes sense against a user's own account, and the provider exposes OAuth 2.0 as a sanctioned integration path.

Good candidates:

  • Knowledge bases — Notion, Confluence, Coda, Craft.
  • Productivity suites — Google Workspace, Microsoft 365, iCloud Contacts+Calendar (not yet available).
  • Chat / messaging — Slack, Discord, Teams, Intercom.
  • Project management — Linear, Jira, Asana, Monday, ClickUp.
  • CRM — HubSpot, Salesforce, Pipedrive, Attio.
  • Commerce — Shopify, Stripe Connect, Square, BigCommerce.
  • Design / marketing — Figma, Webflow, Airtable, Mailchimp.

Poor candidates (don't bother):

  • APIs the platform should call centrally with its own key — these are providers, not connectors. If the integration is about metered API usage rather than acting on resources the user owns, it belongs in Adding Providers.
  • Providers with no sanctioned OAuth flow — if the only way in is scraping an authenticated cookie, it isn't a connector. Write a provider with a credential requirement instead, or skip the integration.
  • One-shot consumer apps — if the tool only needs one API call per user and there is no useful recurring access pattern, an API key requirement is simpler.

Capture the shape of the connector in a dated file under docs/plans/ before writing code, the same way you would for a tool. It keeps the provider quirks, scope list, and scope justification in one reviewable place.

Overview

Adding a connector touches 6 files:

| File | Purpose |
| --- | --- |
| src/connectors/catalog/<name>.ts | Provider config — the createConfig() factory with all the quirks |
| src/connectors/catalog/index.ts | Provider loader registry — one new entry in CONNECTOR_CATALOG |
| tests/connectors/<name>-provider.test.ts | Contract tests using runConnectorContractTests() plus provider-specific tests |
| .env.local.example | Template entry for OAUTH_<NAME>_CLIENT_ID / OAUTH_<NAME>_CLIENT_SECRET |
| .env.local | Local OAuth app credentials for dev |
| src/connectors/hooks/<name>.ts | Optional — post-connection script (Atlassian cloud-id, Shopify shop metadata, etc.) |

Everything else is automatic:

  • PKCE state generation and verification
  • Authorization URL building and redirect
  • Token exchange, refresh, and persistence to Convex
  • Discovery filtering (tools with unmet connector requirements are hidden)
  • The /dashboard/connectors card UI
  • The /v1/connectors/available endpoint
  • Per-user token resolution inside skill handlers

The framework files you will reference but never modify:

| File | Purpose |
| --- | --- |
| src/connectors/catalog/types.ts | The ConnectorConfig interface — the contract every provider implements |
| src/connectors/fetch.ts | oauthFetch() — the shared HTTP helper every provider uses |
| src/connectors/oauth2-client.ts | Thin wrapper over simple-oauth2 that honors the declarative auth-quirk fields and handles PKCE, token exchange, and refresh |
| src/gateway/connector-routes.ts | HTTP routes /v1/connectors/* mounted by the gateway |
| src/connectors/resolve-token.ts | The resolveConnectorToken() helper wired into SkillContext |
| tests/connectors/provider-contract.ts | runConnectorContractTests() — the shared contract test helper |

How do I scaffold a new connector?

There is no CLI command yet. Copy the stub file at src/connectors/catalog/notion.ts (or whichever existing provider is shaped closest to yours), rename it, and edit. Every provider is ~60–120 lines of declarative config plus one fetchAccountInfo function.

Step 1: Register an OAuth app with the provider

Every connector needs a registered OAuth app in the provider's developer console. You do this once per environment (local, staging, production) — each environment gets its own OAuth app so tokens and redirect URIs don't cross-contaminate.

At minimum the provider's console will ask for:

| Field | Value |
| --- | --- |
| App name | ToolRouter (local) / ToolRouter (staging) / ToolRouter |
| Redirect URI | {API_BASE}/v1/connectors/{name}/callback — see below |
| Scopes | Whatever your createConfig().scopes array will list |
| Homepage / support URL | https://toolrouter.com |
| Privacy / terms URL | https://toolrouter.com/privacy / https://toolrouter.com/terms |

The redirect URI is the one piece that is easy to get wrong. It must match exactly, character-for-character, what the gateway will build at runtime. The format is always:

{API_BASE}/v1/connectors/{name}/callback

Where {name} is the kebab-case connector name (the key in CONNECTOR_CATALOG) and {API_BASE} is:

| Environment | API base |
| --- | --- |
| Local dev | http://localhost:3141 |
| Staging | https://toolrouter-staging-7bf8.up.railway.app |
| Production | https://api.toolrouter.com |

For example, for a new notion connector you would register three redirect URIs:

http://localhost:3141/v1/connectors/notion/callback
https://toolrouter-staging-7bf8.up.railway.app/v1/connectors/notion/callback
https://api.toolrouter.com/v1/connectors/notion/callback
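If you want to sanity-check a URI before pasting it into a provider console, the format reduces to a one-liner. This helper is hypothetical — the gateway builds the URL internally — and exists only to make the exact format concrete:

```typescript
// Hypothetical helper; the gateway builds this URL itself at runtime.
// Shown only to pin down the exact character-for-character format.
function redirectUri(apiBase: string, connectorName: string): string {
  // Strip any trailing slash so the path never doubles up.
  return `${apiBase.replace(/\/$/, '')}/v1/connectors/${connectorName}/callback`;
}

console.log(redirectUri('http://localhost:3141', 'notion'));
// → http://localhost:3141/v1/connectors/notion/callback
```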

Developer console links for the providers ToolRouter currently targets:

| Provider | Console |
| --- | --- |
| Google Workspace | https://console.cloud.google.com/apis/credentials |
| Notion | https://www.notion.so/my-integrations |
| Slack | https://api.slack.com/apps |
| Microsoft 365 | https://entra.microsoft.com/ (App registrations) |
| Linear | https://linear.app/settings/api/applications |
| HubSpot | https://developers.hubspot.com/docs/api/oauth-quickstart-guide |
| Airtable | https://airtable.com/create/oauth |
| Stripe Connect | https://dashboard.stripe.com/settings/connect |
| Shopify | https://partners.shopify.com/ |
| Figma | https://www.figma.com/developers/apps |

Save the client ID and client secret — you will paste them into .env.local in Step 4.

Step 2: Create the provider config

Create src/connectors/catalog/<name>.ts. This file exports a single createConfig(): ConnectorConfig factory that returns a fully populated config object.

The contract

The interface every provider implements lives in src/connectors/catalog/types.ts. Read that file — it is the source of truth for the shape. Here is what every field does:

| Field | Type | Required | Purpose |
| --- | --- | --- | --- |
| name | string | Yes | Kebab-case connector name. Must match the key in CONNECTOR_CATALOG. |
| displayName | string | Yes | Human name shown on the dashboard card: "Google Workspace", "Notion". |
| category | ConnectorCategory | No | One of productivity, communication, crm, design, commerce, data, devtools, media, other. Used for dashboard grouping. |
| alias | string | No | Alternate name the catalog can also be looked up by. |
| clientId | string | Yes | OAuth client ID. Loaded via requireEnv('OAUTH_<NAME>_CLIENT_ID', name). |
| clientSecret | string | Yes | OAuth client secret. Loaded via requireEnv('OAUTH_<NAME>_CLIENT_SECRET', name). |
| authorizationUrl | string | Yes | Provider's authorize endpoint. Must be HTTPS. Supports ${connectionConfig.x} interpolation. |
| tokenUrl | string | Yes | Provider's token endpoint. Must be HTTPS. Supports ${connectionConfig.x} interpolation. |
| revocationUrl | string | No | Provider's revoke endpoint. Omit if the provider has none (LinkedIn, Notion). |
| scopes | string[] | Yes | Default scopes to request. Non-empty. |
| usePkce | boolean | Yes | true for every modern provider. Only set false if the provider rejects PKCE (LinkedIn rejects code_verifier when a client secret is also present). |
| extraAuthorizeParams | Record<string, string> | No | Extra query params appended to the authorization URL (Google's access_type: offline, Notion's owner: user). |
| extraTokenParams | Record<string, string> | No | Extra body params appended to the token exchange request. |
| supportsMultipleAccounts | boolean | Yes | Can the same user have more than one connection? (Multiple Google accounts, multiple Slack workspaces.) |
| supportsRefresh | boolean | Yes | Does the provider issue refresh tokens? Notion and Linear are false; Google is true. |
| fetchAccountInfo | function | Yes | Takes the fresh tokens, returns { externalAccountId, accountLabel, metadata }. See below. |
| revokeToken | function | No | Best-effort revocation on disconnect. |
| exchangeCode | function | No | Override the token exchange. Only for non-RFC-6749 flows. |
| refreshTokens | function | No | Override the refresh flow. Only for non-RFC-6749 flows. |
| authorizationMethod | 'body' \| 'header' | No | How credentials are sent to the token endpoint. body (default) for most providers, header (HTTP Basic) for Notion, Figma, Twitter. |
| bodyFormat | 'form' \| 'json' | No | Token endpoint body encoding. form (default) for most, json for Notion. |
| scopeSeparator | string | No | Separator for joining scopes in the authorize URL. Defaults to a space. Shopify uses ','. |
| alternateAccessTokenResponsePath | string | No | Dot path into the token response for nested tokens. Slack uses authed_user.access_token. |
| connectionConfig | Record<string, ConnectionConfigField> | No | Per-connection config fields the user fills in before connecting. Keyed by field name. Used for per-tenant providers (Shopify shop, Atlassian site). |
| postConnectionScript | string | No | Name of a hook module in src/connectors/hooks/ to run after the connection succeeds (Atlassian cloud-id fetch, Shopify shop metadata). |

Every declarative field has an equivalent override. The rule is: use declarative fields first, overrides only as a last resort. 95% of providers should never touch exchangeCode or refreshTokens.
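As a quick mental model, the required core of the contract condenses to the sketch below. This is an abridged stand-in for orientation only — src/connectors/catalog/types.ts remains the source of truth, and it includes all the optional quirk fields:

```typescript
// Abridged stand-in for orientation only. The authoritative interface lives
// in src/connectors/catalog/types.ts and carries the optional quirk fields.
interface ConnectorConfigCore {
  name: string;             // kebab-case, matches the CONNECTOR_CATALOG key
  displayName: string;
  clientId: string;
  clientSecret: string;
  authorizationUrl: string; // HTTPS only
  tokenUrl: string;         // HTTPS only
  scopes: string[];         // non-empty
  usePkce: boolean;
  supportsMultipleAccounts: boolean;
  supportsRefresh: boolean;
  fetchAccountInfo: (tokens: unknown) => Promise<{
    externalAccountId: string;
    accountLabel: string;
    metadata?: Record<string, unknown>;
  }>;
}
```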

Provider documentation header

Every provider file starts with a JSDoc header that pins down the quirks. The header is the first thing reviewers read — make it count.

typescript
/**
 * <Provider> OAuth connector.
 *
 * Docs: https://...
 *
 * KNOWN QUIRKS (read before editing):
 *
 * 1. <describe the quirk and the regression it protects against>
 * 2. <...>
 *
 * SCOPE JUSTIFICATION:
 *   - <scope>: why it's in the default set
 *   - <scope>: why it's in the default set
 */

This is not optional. Every connector the platform has shipped has burned a day of debugging on a provider quirk that would have been obvious from the header. If you catch a quirk during implementation, document it here.

Archetype 1: Standard OAuth 2.0 provider (Google-shaped)

Most providers are boring. They implement RFC 6749 to the letter, accept PKCE, return JSON from the token endpoint, and expose a /userinfo endpoint you can hit with the access token. Google is the reference implementation.

typescript
// src/connectors/catalog/google.ts
/**
 * Google Workspace Connector.
 *
 * Docs: https://developers.google.com/identity/protocols/oauth2/web-server
 *
 * CRITICAL QUIRKS HANDLED HERE:
 *
 *   1. `access_type=offline` + `prompt=consent` MUST both be in
 *      extraAuthorizeParams. Google only returns a refresh_token on the user's
 *      FIRST authorization unless `prompt=consent` is set. If a user re-auths
 *      without `prompt=consent`, we get NO refresh token and can never refresh
 *      again — silent, unrecoverable footgun. Regression test in
 *      tests/connectors/google-provider.test.ts guards both params.
 *
 *   2. `include_granted_scopes=true` enables incremental authorization so a
 *      tool that adds a new scope later doesn't lose previously-granted ones.
 */

import { oauthFetch } from '../fetch.js';
import { requireEnv } from './index.js';
import type {
  ConnectorAccountInfo,
  ConnectorConfig,
  OAuthTokenSet,
} from './types.js';

const CONNECTOR_KIND = 'google';

const USERINFO_URL = 'https://openidconnect.googleapis.com/v1/userinfo';
const REVOKE_URL = 'https://oauth2.googleapis.com/revoke';

const DEFAULT_SCOPES = [
  'openid',
  'email',
  'profile',
  'https://www.googleapis.com/auth/gmail.modify',
  'https://www.googleapis.com/auth/calendar',
  'https://www.googleapis.com/auth/drive',
  'https://www.googleapis.com/auth/documents',
  'https://www.googleapis.com/auth/spreadsheets',
];

export function createConfig(): ConnectorConfig {
  const clientId = requireEnv('OAUTH_GOOGLE_CLIENT_ID', CONNECTOR_KIND);
  const clientSecret = requireEnv('OAUTH_GOOGLE_CLIENT_SECRET', CONNECTOR_KIND);

  return {
    name: CONNECTOR_KIND,
    displayName: 'Google Workspace',
    category: 'productivity',
    clientId,
    clientSecret,
    authorizationUrl: 'https://accounts.google.com/o/oauth2/v2/auth',
    tokenUrl: 'https://oauth2.googleapis.com/token',
    revocationUrl: REVOKE_URL,
    scopes: DEFAULT_SCOPES,
    usePkce: true,
    extraAuthorizeParams: {
      // DO NOT REMOVE — see file header note 1.
      access_type: 'offline',
      prompt: 'consent',
      include_granted_scopes: 'true',
    },
    supportsMultipleAccounts: true,
    supportsRefresh: true,
    fetchAccountInfo: googleFetchAccountInfo,
    revokeToken: googleRevokeToken,
  };
}

export async function googleFetchAccountInfo(tokens: OAuthTokenSet): Promise<ConnectorAccountInfo> {
  const data = await oauthFetch<{
    sub?: string; email?: string; name?: string; picture?: string; email_verified?: boolean;
  }>(USERINFO_URL, {
    headers: { Authorization: `Bearer ${tokens.accessToken}` },
    providerName: CONNECTOR_KIND,
  });

  if (!data.sub || !data.email) {
    throw new Error('Google userinfo response missing required fields (sub, email)');
  }

  return {
    externalAccountId: data.sub,
    accountLabel: data.email,
    metadata: { name: data.name, picture: data.picture, verified_email: data.email_verified },
  };
}

export async function googleRevokeToken(tokens: OAuthTokenSet): Promise<void> {
  await oauthFetch(REVOKE_URL, {
    method: 'POST',
    body: new URLSearchParams({ token: tokens.accessToken }),
    providerName: CONNECTOR_KIND,
  });
}

That is the full file. Under a hundred lines, all of it declarative or calls into the shared oauthFetch() helper. The framework does everything else.
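On the consuming side, a skill handler never touches the config — it asks its context for a live token, per the context.getConnectorToken() call shown in the comparison table. The sketch below uses a stand-in SkillContext type and a hypothetical handler name; the real interface lives in the framework:

```typescript
// Sketch only. The real SkillContext interface lives in the framework; this
// stand-in shows just the call a connector-backed handler makes.
interface SkillContextStub {
  getConnectorToken(name: string): Promise<string>;
}

// Hypothetical handler: lists the user's Google calendars.
async function listCalendars(context: SkillContextStub): Promise<unknown> {
  // The framework refreshes the token if needed before returning it.
  const token = await context.getConnectorToken('google');
  const res = await fetch(
    'https://www.googleapis.com/calendar/v3/users/me/calendarList',
    { headers: { Authorization: `Bearer ${token}` } },
  );
  if (!res.ok) throw new Error(`calendarList failed: ${res.status}`);
  return res.json();
}
```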

Archetype 2: Notion-shaped (HTTP Basic + JSON body + no refresh)

Notion breaks the standard flow in three small ways: the token endpoint requires HTTP Basic auth, the body is JSON (not form-encoded), and access tokens never expire so there are no refresh tokens. All three are handled declaratively.

typescript
// src/connectors/catalog/notion.ts
/**
 * Notion OAuth connector.
 *
 * Docs: https://developers.notion.com/docs/authorization
 *
 * KNOWN QUIRKS:
 *
 * 1. NO refresh tokens — Notion access tokens never expire by design.
 *    `supportsRefresh: false`. Do NOT flip this flag.
 *
 * 2. Token endpoint requires HTTP Basic auth (base64(clientId:clientSecret)),
 *    not body params. Handled via `authorizationMethod: 'header'`.
 *
 * 3. Token endpoint body must be JSON, not form-urlencoded.
 *    Handled via `bodyFormat: 'json'`.
 *
 * 4. Token response includes workspace_id, workspace_name, workspace_icon,
 *    bot_id, owner. externalAccountId = workspace_id (stable), accountLabel
 *    = workspace_name (display only).
 *
 * 5. Notion is capability-based, not scope-based. The authorize URL accepts
 *    an `owner=user` param for user-level installs; workspace-level installs
 *    omit it. We request `owner: user` because our tools act on individual
 *    user content, not shared workspace content.
 *
 * 6. Scopes are effectively fixed — we pass a single sentinel `workspace`
 *    scope so validateConnectorConfig() sees a non-empty array. The provider
 *    ignores it; real capabilities are configured in the Notion UI.
 */

import type { ConnectorAccountInfo, ConnectorConfig, OAuthTokenSet } from './types.js';
import { requireEnv } from './index.js';
import { oauthFetch } from '../fetch.js';

const CONNECTOR_KIND = 'notion';

export function createConfig(): ConnectorConfig {
  const clientId = requireEnv('OAUTH_NOTION_CLIENT_ID', CONNECTOR_KIND);
  const clientSecret = requireEnv('OAUTH_NOTION_CLIENT_SECRET', CONNECTOR_KIND);

  return {
    name: CONNECTOR_KIND,
    displayName: 'Notion',
    clientId,
    clientSecret,
    authorizationUrl: 'https://api.notion.com/v1/oauth/authorize',
    tokenUrl: 'https://api.notion.com/v1/oauth/token',
    // DO NOT set revocationUrl — Notion has no revocation endpoint.
    scopes: ['workspace'], // sentinel; see header note 6
    usePkce: true,
    extraAuthorizeParams: {
      owner: 'user', // see header note 5
    },
    supportsMultipleAccounts: true,
    supportsRefresh: false, // see header note 1
    authorizationMethod: 'header', // see header note 2
    bodyFormat: 'json', // see header note 3
    fetchAccountInfo: notionFetchAccountInfo,
  };
}

export async function notionFetchAccountInfo(tokens: OAuthTokenSet): Promise<ConnectorAccountInfo> {
  // Notion returns workspace info directly in the token response, but we fetch
  // /users/me anyway as a sanity check that the token actually works.
  await oauthFetch('https://api.notion.com/v1/users/me', {
    headers: {
      Authorization: `Bearer ${tokens.accessToken}`,
      'Notion-Version': '2022-06-28',
    },
    providerName: CONNECTOR_KIND,
  });

  const meta = (tokens as unknown as Record<string, unknown>).metadata as
    | { workspace_id?: string; workspace_name?: string; workspace_icon?: string; bot_id?: string }
    | undefined;

  const workspaceId = meta?.workspace_id;
  const workspaceName = meta?.workspace_name;

  if (!workspaceId || !workspaceName) {
    throw new Error('Notion token response missing workspace_id or workspace_name');
  }

  return {
    externalAccountId: workspaceId, // STABLE — see header note 4
    accountLabel: workspaceName,
    metadata: { workspace_icon: meta?.workspace_icon, bot_id: meta?.bot_id },
  };
}

The key thing: Notion's quirks are handled with three declarative fields (authorizationMethod, bodyFormat, supportsRefresh). There is no custom exchangeCode override, no bespoke HTTP code. The framework carries all of it.

Archetype 3: Shopify-shaped (per-shop URLs + comma-separated scopes)

Shopify is the awkward one. Every store has its own subdomain, so the authorization URL is different for every connection. The user has to tell us the shop domain before we even know which URL to redirect to.

typescript
// src/connectors/catalog/shopify.ts
/**
 * Shopify OAuth connector.
 *
 * Docs: https://shopify.dev/docs/apps/auth/oauth/getting-started
 *
 * KNOWN QUIRKS:
 *
 * 1. Authorization URL is per-shop: {shop}.myshopify.com/admin/oauth/authorize
 *    Handled via `${connectionConfig.shop}` template in authorizationUrl.
 *
 * 2. Scopes are COMMA-separated, not space-separated.
 *    Handled via `scopeSeparator: ','`.
 *
 * 3. externalAccountId = shop domain (the ".myshopify.com" domain, not the
 *    vanity domain the merchant may have set up).
 *
 * 4. HMAC validation on the callback is enforced inside the gateway routes,
 *    not here — see src/gateway/connector-routes.ts.
 *
 * SCOPE JUSTIFICATION:
 *   - read_products, write_products: product catalog read/write
 *   - read_orders: order history for analytics tools
 *   - read_customers: customer list for support tools
 */

import type { ConnectorAccountInfo, ConnectorConfig, OAuthTokenSet } from './types.js';
import { requireEnv } from './index.js';
import { oauthFetch } from '../fetch.js';

const CONNECTOR_KIND = 'shopify';

export function createConfig(): ConnectorConfig {
  const clientId = requireEnv('OAUTH_SHOPIFY_CLIENT_ID', CONNECTOR_KIND);
  const clientSecret = requireEnv('OAUTH_SHOPIFY_CLIENT_SECRET', CONNECTOR_KIND);

  return {
    name: CONNECTOR_KIND,
    displayName: 'Shopify',
    category: 'commerce',
    clientId,
    clientSecret,
    // Templated — ${connectionConfig.shop} is substituted at runtime.
    authorizationUrl: 'https://${connectionConfig.shop}.myshopify.com/admin/oauth/authorize',
    tokenUrl: 'https://${connectionConfig.shop}.myshopify.com/admin/oauth/access_token',
    scopes: ['read_products', 'write_products', 'read_orders', 'read_customers'],
    scopeSeparator: ',', // see header note 2
    usePkce: true,
    supportsMultipleAccounts: true,
    supportsRefresh: false,
    // Record keyed by field name — the framework renders one input per entry
    // on the connect page before redirecting to the authorize URL.
    connectionConfig: {
      shop: {
        type: 'string',
        title: 'Shop domain',
        description: 'Your .myshopify.com subdomain (e.g. "my-store" for my-store.myshopify.com).',
        example: 'my-store',
        pattern: '^[a-z0-9][a-z0-9-]*$',
        suffix: '.myshopify.com',
      },
    },
    fetchAccountInfo: shopifyFetchAccountInfo,
  };
}

export async function shopifyFetchAccountInfo(tokens: OAuthTokenSet): Promise<ConnectorAccountInfo> {
  const meta = (tokens as unknown as Record<string, unknown>).connectionConfig as
    | { shop?: string }
    | undefined;
  const shop = meta?.shop;
  if (!shop) throw new Error('Shopify token resolved without a connectionConfig.shop value');

  const data = await oauthFetch<{ shop: { id: number; name: string; domain: string } }>(
    `https://${shop}.myshopify.com/admin/api/2024-10/shop.json`,
    {
      headers: { 'X-Shopify-Access-Token': tokens.accessToken },
      providerName: CONNECTOR_KIND,
    },
  );

  return {
    externalAccountId: `${shop}.myshopify.com`, // STABLE — see header note 3
    accountLabel: data.shop.name,
    metadata: { shop_id: data.shop.id, domain: data.shop.domain },
  };
}

The template syntax ${connectionConfig.shop} is not a JavaScript template literal — it is a declarative placeholder the framework substitutes after the user submits the connection form. Keep the URL in single quotes as a plain string; inside backticks it would be evaluated at import time, when connectionConfig is not in scope.
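The substitution itself amounts to a simple string replace. Below is an illustrative sketch of what the framework does after the connection form is submitted — not the real implementation, which lives in the framework code:

```typescript
// Illustrative sketch of the placeholder substitution. The real version
// lives in the framework and runs after the user submits the connect form.
function interpolateConnectionConfig(
  template: string,
  connectionConfig: Record<string, string>,
): string {
  return template.replace(/\$\{connectionConfig\.(\w+)\}/g, (_match, key: string) => {
    const value = connectionConfig[key];
    if (value === undefined) {
      throw new Error(`Missing connectionConfig.${key} for template: ${template}`);
    }
    return value;
  });
}

console.log(interpolateConnectionConfig(
  'https://${connectionConfig.shop}.myshopify.com/admin/oauth/authorize',
  { shop: 'my-store' },
));
// → https://my-store.myshopify.com/admin/oauth/authorize
```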

Other archetypes you may hit

| Provider | Quirk | Field to use |
| --- | --- | --- |
| Slack | Token response nests the access token at authed_user.access_token | alternateAccessTokenResponsePath: 'authed_user.access_token' |
| Stripe Connect | Completely bespoke /oauth/token flow with grant_type=authorization_code but custom param names | exchangeCode override (last resort) |
| Airtable | Refresh tokens rotate on every use | Nothing special — the framework persists the new refresh token automatically |
| Linear | Userinfo is via GraphQL, not REST | Call the GraphQL endpoint inside fetchAccountInfo with oauthFetch() |
| HubSpot | Access tokens expire in 30 minutes | Just set supportsRefresh: true — the framework handles the refresh cadence |
| Microsoft | Tenant-scoped endpoints (common vs {tenant}) | Use common for multi-tenant, or add tenant to connectionConfig |

Step 3: Register the provider in the loader

Open src/connectors/catalog/index.ts and add a one-line entry to CONNECTOR_CATALOG:

typescript
// src/connectors/catalog/index.ts
import type { ConnectorConfig } from './types.js';

type ConnectorFactory = () => Promise<ConnectorConfig>;

const CONNECTOR_CATALOG: Record<string, ConnectorFactory> = {
  google: async () => (await import('./google.js')).createConfig(),
  notion: async () => (await import('./notion.js')).createConfig(),
  slack: async () => (await import('./slack.js')).createConfig(),
  // ...
  shopify: async () => (await import('./shopify.js')).createConfig(), // ← your new line
};

Three rules:

  1. The key must match createConfig().name. If they drift, the loader throws at startup and validate:connectors fails. The validator checks this explicitly.
  2. Use await import(), not a top-level import. Provider configs read env vars at load time, and we want that to happen lazily — one provider with a missing secret should not break every other provider.
  3. Keep the map alphabetical so diffs stay clean.

That is the entire registry change. getConnectorConfig(name) now resolves for your provider, the /v1/connectors/available endpoint picks it up, the dashboard shows a Connect card, and discovery starts annotating tools that require it.

Step 4: Add env vars

Every connector needs two env vars:

OAUTH_<NAME>_CLIENT_ID=
OAUTH_<NAME>_CLIENT_SECRET=

Where <NAME> is the uppercased, underscored form of your connector name: google → OAUTH_GOOGLE_CLIENT_ID, shopify → OAUTH_SHOPIFY_CLIENT_ID.
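The mapping is mechanical: hyphens become underscores, everything uppercases. A hypothetical helper to show the rule — the real lookup is requireEnv() in src/connectors/catalog/index.ts:

```typescript
// Hypothetical helper showing the naming rule only; the framework does this
// lookup via requireEnv() in src/connectors/catalog/index.ts.
function oauthEnvVar(
  connectorName: string,
  suffix: 'CLIENT_ID' | 'CLIENT_SECRET',
): string {
  return `OAUTH_${connectorName.replace(/-/g, '_').toUpperCase()}_${suffix}`;
}

console.log(oauthEnvVar('google', 'CLIENT_ID'));
// → OAUTH_GOOGLE_CLIENT_ID
console.log(oauthEnvVar('stripe-connect', 'CLIENT_SECRET'));
// → OAUTH_STRIPE_CONNECT_CLIENT_SECRET
```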

Local development

Add them to .env.local:

OAUTH_SHOPIFY_CLIENT_ID=your-local-client-id
OAUTH_SHOPIFY_CLIENT_SECRET=your-local-client-secret

Also add a template entry to .env.local.example so the next developer knows they exist:

# Shopify connector (https://partners.shopify.com)
OAUTH_SHOPIFY_CLIENT_ID=
OAUTH_SHOPIFY_CLIENT_SECRET=

Staging and production (Railway)

bash
# Staging
railway link --service ToolRouter --environment staging
railway variables --set "OAUTH_SHOPIFY_CLIENT_ID=..." --set "OAUTH_SHOPIFY_CLIENT_SECRET=..."

# Production
railway link --service ToolRouter --environment production
railway variables --set "OAUTH_SHOPIFY_CLIENT_ID=..." --set "OAUTH_SHOPIFY_CLIENT_SECRET=..."

Each environment gets its own OAuth app registered at the provider (see Step 1), so the client IDs and secrets are different in every environment. This is deliberate — token rotation in staging never touches production tokens.

How the registry picks them up

The requireEnv() helper in src/connectors/catalog/index.ts throws a clear error naming the missing variable if either env var is absent:

OAuth provider "shopify" is not configured: missing env var OAUTH_SHOPIFY_CLIENT_ID.
Set it in .env.local or in the deployment environment.

The listReadyConnectors() helper walks CONNECTOR_CATALOG and only returns providers whose env vars are present. The /v1/connectors/available endpoint calls this, so the dashboard Connect button only appears once both env vars exist. Registering the provider without setting env vars doesn't break anything — the card is just hidden until the env vars land.
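The readiness check reduces to: for each catalog entry, are all of its env vars set? A minimal sketch under that assumption — the real listReadyConnectors() derives the required vars from the catalog itself:

```typescript
// Minimal sketch of the readiness filter described above. Assumes we are
// handed each connector's required env vars; the real listReadyConnectors()
// derives them from CONNECTOR_CATALOG in src/connectors/catalog/index.ts.
function readyConnectors(
  required: Record<string, string[]>,
  env: Record<string, string | undefined>,
): string[] {
  return Object.entries(required)
    .filter(([, vars]) => vars.every((v) => Boolean(env[v])))
    .map(([name]) => name);
}

const env: Record<string, string | undefined> = {
  OAUTH_GOOGLE_CLIENT_ID: 'id',
  OAUTH_GOOGLE_CLIENT_SECRET: 'secret',
  OAUTH_SHOPIFY_CLIENT_ID: 'id', // secret missing → shopify stays hidden
};
console.log(readyConnectors(
  {
    google: ['OAUTH_GOOGLE_CLIENT_ID', 'OAUTH_GOOGLE_CLIENT_SECRET'],
    shopify: ['OAUTH_SHOPIFY_CLIENT_ID', 'OAUTH_SHOPIFY_CLIENT_SECRET'],
  },
  env,
));
// → [ 'google' ]
```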

Step 5: Write the test file

Create tests/connectors/<name>-provider.test.ts. Every connector test file starts with a call to the shared runConnectorContractTests() helper, which runs the standard battery of contract checks: config loads, required fields populated, scopes non-empty, PKCE enabled, expected params present, missing env vars throw clear errors. You then add provider-specific tests for fetchAccountInfo / revokeToken response shapes.

A full working example for the shopify connector:

typescript
// tests/connectors/shopify-provider.test.ts
import { afterAll, afterEach, beforeAll, describe, expect, it } from 'vitest';
import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';

import { shopifyFetchAccountInfo } from '../../src/connectors/catalog/shopify.js';
import type { OAuthTokenSet } from '../../src/connectors/catalog/types.js';
import { runConnectorContractTests } from './provider-contract.js';

const server = setupServer();

beforeAll(() => server.listen({ onUnhandledRequest: 'error' }));
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

const mockTokens = {
  accessToken: 'shpat_test-access-token',
  tokenType: 'Bearer',
  // connectionConfig is injected by the gateway at resolution time.
  connectionConfig: { shop: 'my-store' },
} as unknown as OAuthTokenSet;

// ─── Standard contract: every connector runs through this ───────────────
runConnectorContractTests({
  name: 'shopify',
  envVars: ['OAUTH_SHOPIFY_CLIENT_ID', 'OAUTH_SHOPIFY_CLIENT_SECRET'],
  setEnv: () => {
    process.env.OAUTH_SHOPIFY_CLIENT_ID = 'test-client-id';
    process.env.OAUTH_SHOPIFY_CLIENT_SECRET = 'test-client-secret';
  },
  expectedScopes: ['read_products', 'write_products', 'read_orders', 'read_customers'],
  expectedSupportsRefresh: false,
  expectedSupportsMultipleAccounts: true,
  expectedUsePkce: true,
});

// ─── Provider-specific tests: Shopify's unique response shapes ──────────

describe('shopifyFetchAccountInfo', () => {
  it('calls /admin/api/shop.json with the access token and maps the response', async () => {
    let receivedTokenHeader: string | null = null;
    server.use(
      http.get('https://my-store.myshopify.com/admin/api/2024-10/shop.json', ({ request }) => {
        receivedTokenHeader = request.headers.get('X-Shopify-Access-Token');
        return HttpResponse.json({
          shop: { id: 12345, name: 'My Store', domain: 'my-store.com' },
        });
      }),
    );

    const info = await shopifyFetchAccountInfo(mockTokens);

    expect(receivedTokenHeader).toBe('shpat_test-access-token');
    expect(info.externalAccountId).toBe('my-store.myshopify.com');
    expect(info.accountLabel).toBe('My Store');
    expect(info.metadata).toEqual({ shop_id: 12345, domain: 'my-store.com' });
  });

  it('throws when connectionConfig.shop is missing', async () => {
    const tokensWithoutShop = { accessToken: 'x', tokenType: 'Bearer' } as OAuthTokenSet;
    await expect(shopifyFetchAccountInfo(tokensWithoutShop)).rejects.toThrow(/connectionConfig\.shop/);
  });

  it('throws when the /shop.json response is non-2xx', async () => {
    server.use(
      http.get('https://my-store.myshopify.com/admin/api/2024-10/shop.json', () =>
        HttpResponse.json({ errors: 'Invalid API key' }, { status: 401 }),
      ),
    );

    await expect(shopifyFetchAccountInfo(mockTokens)).rejects.toThrow(/401/);
  });
});

What the contract test helper checks

Calling runConnectorContractTests({ name, envVars, setEnv, expectedScopes, ... }) generates these assertions for free:

| Assertion | Why |
| --- | --- |
| getConnectorConfig(name) returns a defined config | Catches typos in CONNECTOR_CATALOG |
| All required string fields are truthy | Catches empty displayName / clientId / URLs |
| authorizationUrl and tokenUrl are HTTPS | Catches http:// regressions |
| scopes is a non-empty array | Catches empty scope lists |
| usePkce matches expectedUsePkce (default true) | Catches accidental PKCE-off |
| supportsRefresh matches expectedSupportsRefresh | Catches regression flips |
| supportsMultipleAccounts matches expectedSupportsMultipleAccounts | Same |
| fetchAccountInfo is a function | Catches a missing hook |
| Every scope in expectedScopes is present | Regression guard for scope removal |
| Every key/value in expectedExtraAuthorizeParams is present | Regression guard for critical params (Google's prompt=consent) |
| For every env var: removing it throws an error containing the var name | Catches ambiguous "undefined is not a function" errors |

If any of these fail, you either forgot a field in the config or the provider deviates from the standard in a way that needs a new declarative field. Either way, fix the config rather than weakening the test — if an assertion makes sense as a regression guard, it is correct.

What you still have to write yourself

Provider-specific fetchAccountInfo tests — success, missing required fields, non-2xx. And revokeToken tests if you defined one. That is usually 30–60 extra lines of msw handlers and expect() calls.

Run the tests

```bash
npm run test -- tests/connectors/shopify-provider.test.ts
```

Expected: every assertion passes. If runConnectorContractTests() fails on a regression guard you didn't expect, the error message names the field and the expected value — fix the config, not the test.

Step 6: (Optional) postConnectionScript hook

Some providers need extra work right after the token exchange succeeds — before the connector row is written to Convex. The canonical example is Atlassian: the /oauth/token response doesn't include the cloud-id, so you have to make a follow-up call to /oauth/token/accessible-resources and stash the cloud-id in metadata.

If you need this, create src/connectors/hooks/<name>.ts:

```typescript
// src/connectors/hooks/atlassian.ts
/**
 * Atlassian post-connection hook.
 *
 * Fetches the Jira / Confluence cloud-id for the authorized workspace
 * and stashes it in metadata so skills can build API URLs without an
 * extra round-trip on every call.
 *
 * Docs: https://developer.atlassian.com/cloud/jira/platform/oauth-2-3lo-apps/
 */

import { oauthFetch } from '../fetch.js';
import type { ConnectorPostAuthContext } from '../catalog/types.js';

interface AccessibleResource {
  id: string;
  name: string;
  url: string;
  scopes: string[];
  avatarUrl: string;
}

export default async function atlassianPostConnection(
  ctx: ConnectorPostAuthContext,
): Promise<void> {
  const resources = await oauthFetch<AccessibleResource[]>(
    'https://api.atlassian.com/oauth/token/accessible-resources',
    {
      headers: { Authorization: `Bearer ${ctx.tokens.accessToken}` },
      providerName: 'atlassian',
    },
  );

  if (!Array.isArray(resources) || resources.length === 0) {
    throw new Error('Atlassian: no accessible resources returned — user may not have granted site access');
  }

  // Single-site user → use that site. Multi-site users pick via a follow-up
  // prompt in the frontend; for now we just pick the first and stash all IDs.
  const primary = resources[0];

  await ctx.updateMetadata({
    cloud_id: primary.id,
    site_name: primary.name,
    site_url: primary.url,
    available_sites: resources.map((r) => ({ id: r.id, name: r.name, url: r.url })),
  });
  await ctx.updateAccountLabel(primary.name);
}
```

Then reference it from the connector config:

```typescript
return {
  name: 'atlassian',
  // ...
  postConnectionScript: 'atlassian', // kebab-case filename without extension
  fetchAccountInfo: atlassianFetchAccountInfo,
};
```

The framework dynamically imports src/connectors/hooks/<postConnectionScript>.js, calls its default export with a ConnectorPostAuthContext (see src/connectors/catalog/types.ts), and the hook uses the updateMetadata / updateAccountLabel callbacks to write changes back to the newly-created connector row. Hook failures are logged but don't roll back the connector — the row is usable, only the enrichment is missing.

When you actually need this

Rarely. If any of these are true, you need a post-connection hook:

  • The provider's token response does not give you a stable connection ID and the only way to get one is a follow-up API call.
  • The provider exposes per-connection config you can only fetch after auth (workspace ID, subdomain, organization slug).
  • The provider requires you to register a webhook on their side as part of "installing the app" and that has to happen before the first tool call.

If none of those are true, skip this step. Put the extra API calls inside fetchAccountInfo instead.

Validator check

If postConnectionScript is set, validate:connectors checks that src/connectors/hooks/<name>.ts exists. Missing hook file → validator fails.

Step 7: Run the validator

```bash
npm run validate:connectors
```

The validator walks every connector in CONNECTOR_CATALOG, runs its createConfig() with mock env vars, and checks:

| Check | What it catches |
| --- | --- |
| Provider module loads | Missing file, missing createConfig export, syntax errors |
| createConfig() returns a config | Throwing on load, returning undefined |
| validateConnectorConfig(config) passes | Missing required fields, empty scopes, non-HTTPS URLs, non-kebab-case names |
| config.name === loader key | Rename drift (registry says shopify, config says shopifyy) |
| Every optional hook is a function when declared | Typo like fetchAccountInfo: somethingUndefined |
| postConnectionScript points to a real file | Typo in the hook name |
| Test file exists at tests/connectors/<name>-provider.test.ts | Forgetting the test file altogether |

On success:

```
Validating 10 provider(s)...
OK — all 10 provider(s) passed validation
```

On failure:

```
Validating 10 provider(s)...

FAILED — 2 validation error(s):

  [shopify] validateConnectorConfig rejected the config: OAuth provider "shopify": scopes must be a non-empty array
  [stripe] no test file found at tests/connectors/stripe-provider.test.ts (or tests/oauth/stripe-provider.test.ts)
```

Wire this into CI. The script exits non-zero on any failure.

Step 8: Smoke test

The local smoke test is the only way to be sure the provider works end-to-end before merging. Run it every time.

Start the gateway

```bash
npm run dev:api
```

Watch stderr for [connectors] mounted /v1/connectors/* — that confirms the connector routes registered.

Verify the provider is discoverable

```bash
curl -s http://localhost:3141/v1/connectors/available | jq
```

Expected: your new connector appears in the array with displayName, name, and whatever connectionConfig fields you declared. If it is missing, either:

  • the env vars for your connector are not set (the endpoint hides providers with missing config)
  • the provider threw during createConfig() — check stderr for the actual error

Open the dashboard

Visit http://localhost:3000/dashboard/connectors. The Connect card for your provider should be visible. Click Connect.

Expected flow:

  1. Browser redirects to the provider's authorize page
  2. You log in and approve the scopes
  3. Provider redirects back to /v1/connectors/<name>/callback
  4. Gateway exchanges the code, calls fetchAccountInfo, writes the connector row
  5. Browser ends up on /dashboard/connectors with a row showing your account label

Things that can go wrong:

| Symptom | Fix |
| --- | --- |
| "Redirect URI mismatch" error from the provider | Your redirect URI in the provider's console doesn't match {API_BASE}/v1/connectors/<name>/callback exactly. Trailing slashes, http vs https, localhost vs 127.0.0.1 all count. |
| Callback returns 500 with "invalid_grant" | PKCE verifier mismatch, usually because the /start and /callback requests went to different gateway instances, or the state row expired, check the oauth_state table's 10-minute TTL. |
| fetchAccountInfo throws "missing required fields" | The provider changed the userinfo response shape. Log the response body and update the parser. |
| Token exchange throws "http 400" | The provider rejected the body format. Check bodyFormat, authorizationMethod, and extraTokenParams. |
| Dashboard card is stuck on "Connecting..." | The gateway wrote the row but the frontend poll missed it. Reload the page. |

Disconnect and reconnect

Click Disconnect, then Connect again. The expected behaviour:

  • If supportsMultipleAccounts: false, the new connection overwrites the old row (same externalAccountId).
  • If supportsMultipleAccounts: true, the new connection either matches the old externalAccountId (deduped — overwrite) or is a new account (second row appears).

If disconnect + reconnect with the same provider account creates two rows, your externalAccountId is not stable — go back to Step 2 and fix fetchAccountInfo.

Refresh test (only if supportsRefresh: true)

In Convex, manually set the connector's expiresAt to a timestamp in the past. Then call any tool that uses the connector. The gateway should:

  1. Notice the token is stale
  2. Call /v1/connectors/<name>/refresh internally
  3. Exchange the refresh token for a fresh access token
  4. Persist the new token
  5. Hand the fresh token to the skill handler
  6. Skill succeeds

If step 3 throws "invalid_refresh_token", the refresh token rotation rule is wrong — some providers rotate on every refresh (Airtable), others do not (Google). The framework handles both, but only if your config is correct.

Discovery annotation

The discover pipeline is exposed via the MCP discover meta-tool (not a plain REST endpoint). The quickest way to verify discovery picks up your connector is to call it through the staging MCP you have wired into Claude Code — search for a tool that declares your connector as a requirement, and inspect the result.

What to look for:

  • When disconnected, the tool should come back with available: false, reason: 'connector_required', and connector_required: [{ connector: '<your-connector>', connect_url: 'https://.../connectors/<your-connector>?tool=<tool-name>', ... }].
  • When connected, the tool should come back fully available with no connector_required field.
  • Every discover response should also include a top-level connectors array listing the user's connected accounts. Your new connector should appear there once the user has connected it, with kind, displayName, accountLabel, isDefault, and scope fields populated.

If discovery never annotates the tool, the requirement's connector field probably doesn't match an entry in CONNECTOR_CATALOG, or createConfig() is throwing during discover (check stderr). The registration-time validator in src/core/registry.ts should have caught a mismatched name at gateway boot.

Provider quirk reference

The most common provider quirks and how to handle them declaratively:

| Quirk | Solution |
| --- | --- |
| Token endpoint needs HTTP Basic auth (clientId:clientSecret) | authorizationMethod: 'header' |
| Token endpoint body must be JSON (not form-urlencoded) | bodyFormat: 'json' |
| Scopes are comma-separated in the authorize URL | scopeSeparator: ',' |
| Per-shop / per-tenant authorization URL | authorizationUrl: 'https://${connectionConfig.shop}.example.com/oauth/authorize' + matching connectionConfig field |
| Token response nests the access token deep in the body | alternateAccessTokenResponsePath: 'authed_user.access_token' |
| Provider issues no refresh tokens | supportsRefresh: false |
| Provider only allows one connection per user | supportsMultipleAccounts: false |
| Provider requires extra query params on authorize (prompt, access_type, owner) | extraAuthorizeParams: { prompt: 'consent', ... } |
| Provider requires extra body params on token exchange | extraTokenParams: { grant_type: 'authorization_code', ... } |
| Cloud-id / site-id / workspace-id only available after auth | postConnectionScript: '<name>' + hook file |
| Completely bespoke code-exchange flow (last resort) | exchangeCode override |
| Completely bespoke refresh flow (last resort) | refreshTokens override |

If your provider's quirk is not on this table, you have one of two options: add a new declarative field to ConnectorConfig and handle it once in the framework, or override exchangeCode / refreshTokens. Prefer the former — the point of the framework is that every provider looks the same to anyone debugging a weird flow three months later.

PKCE and state

You do not handle PKCE or state yourself. The gateway routes at src/gateway/connector-routes.ts do both:

  • On /v1/connectors/<name>/start, the gateway generates a PKCE code-verifier, derives the code-challenge, generates a CSRF state, and writes a row to the oauth_state Convex table keyed by the state value. The state row has a 10-minute TTL.
  • On /v1/connectors/<name>/callback, the gateway reads the state row, verifies it, pulls the code-verifier, and exchanges the code.

usePkce: true is the default and what every modern provider expects. Set it to false only if the provider explicitly rejects PKCE (which as of 2026 is roughly none of them — Slack deprecated the non-PKCE flow in 2024).

The state parameter is not optional and cannot be disabled. Every flow gets a fresh state row.

Common pitfalls

The mistakes that have actually landed in review on real provider PRs:

  • Forgetting to add the provider to CONNECTOR_CATALOG. The file exists, createConfig() is exported, but the registry doesn't know about it. Discovery never annotates, the Connect card never shows. validate:connectors won't catch this because it walks CONNECTOR_CATALOG. Symptom: GET /v1/connectors/available doesn't include your provider.
  • Redirect URI typos. The one in the provider's console must match {API_BASE}/v1/connectors/<name>/callback character-for-character. http vs https, trailing slashes, port numbers, uppercase vs lowercase — all matter. Symptom: provider shows "redirect_uri_mismatch" before you ever hit the gateway.
  • Using raw fetch() instead of oauthFetch(). Your hook hangs for 60 seconds on a dead provider, the gateway blocks, other users' flows time out. oauthFetch() enforces a 10-second timeout. No exceptions.
  • Setting supportsRefresh: true for a provider that doesn't issue refresh tokens. The framework thinks it can refresh, the refresh call fails with invalid_grant the first time a token expires, and the user gets silently kicked. Double-check the provider's docs before setting this flag.
  • Using email / name as externalAccountId. Users change emails. Users change display names. externalAccountId must be something the provider guarantees is stable: Google sub, Notion workspace_id, Slack team_id, Shopify shop domain, Stripe stripe_user_id. Use the metadata, not the label.
  • Hardcoding scopes across all users. The default scope set is what every user gets. If a single tool needs extra scopes, that is a sign the tool should either (a) require the scopes as part of its requirement declaration or (b) belong to a different connector entirely.
  • Mixing connector env vars across environments. Local .env.local, staging Railway, and production Railway each need their own OAuth app and their own client IDs. Don't reuse. Separate OAuth apps is the only thing standing between a staging token leak and a production account compromise.
  • Committing the client secret. .env.local is gitignored; .env.local.example is committed. Put the template in .env.local.example with empty values, put the real values in .env.local. Never the other way around.
  • Forgetting to register the provider in MEDIA_PROVIDER_REQUIREMENTS. Wait — that is a different system. Media providers are providers, not connectors. Ignore this for connector PRs.
  • Test file at tests/oauth/ instead of tests/connectors/. The primary location is tests/connectors/. The legacy tests/oauth/ location is accepted by the validator as a fallback for older code. New code goes in tests/connectors/.
  • JSDoc header missing the scope justification. Every scope in the default set must earn its place. If you can't write one line about why the scope is there, remove it.

When do I need a flow override?

exchangeCode and refreshTokens are escape hatches. You almost never need them. The question to ask is: "does this provider's flow deviate from RFC 6749 in a way no declarative field can model?"

| Situation | Use declarative fields | Use an override |
| --- | --- | --- |
| Token endpoint wants HTTP Basic | authorizationMethod: 'header' | |
| Token endpoint wants JSON body | bodyFormat: 'json' | |
| Token endpoint wants extra params | extraTokenParams | |
| Token response is nested | alternateAccessTokenResponsePath | |
| Token response has custom fields we want to persist | Return them from fetchAccountInfo metadata | |
| Provider uses a completely different flow (grant_type=<custom>, signed JWT assertion, etc.) | | exchangeCode override |
| Refresh requires a signed JWT or bespoke signature | | refreshTokens override |

If you reach for an override, leave a JSDoc comment explaining exactly why the declarative fields don't work. The next person should be able to decide at a glance whether the override is still needed or whether the framework has since grown a declarative field that obviates it.

In the 10 providers currently scoped (Google, Notion, Slack, Microsoft, Linear, HubSpot, Airtable, Stripe, Shopify, Figma), exactly one — Stripe Connect — is expected to need an exchangeCode override, because Stripe Connect's OAuth flow is technically a different product from OAuth 2.0 proper.

Testing against a real OAuth app

Most providers will refuse to register http://localhost:3141/v1/connectors/<name>/callback as a redirect URI — they require HTTPS. You have three options for testing locally:

Option 1: ngrok

Easiest. brew install ngrok, then:

```bash
ngrok http 3141
```

You will get an https://<random>.ngrok-free.app URL. Register that as the redirect URI in the provider's dev console (https://<random>.ngrok-free.app/v1/connectors/<name>/callback). Start the gateway with TOOLROUTER_PUBLIC_API_URL=https://<random>.ngrok-free.app npm run dev:api so the gateway builds the callback URL from the ngrok domain instead of localhost.

The ngrok domain changes every restart unless you have a paid plan. Register a new redirect URI each time, or use the paid static-domain feature.

Option 2: Cloudflare Tunnel

Same idea, no account needed:

```bash
cloudflared tunnel --url http://localhost:3141
```

Prints a trycloudflare.com URL. Use it the same way as ngrok.

Option 3: Staging

Push your branch, deploy to staging, and test against the real staging URL. The staging redirect URI is already registered in your OAuth app, and staging has its own env vars. This is the honest way to run a smoke test — everything runs the same infrastructure as production.

The downside is the iteration loop is slower (push → Railway deploy → test). Use it for the final smoke test, not for every keystroke.

Checklist

Before merging a new connector PR:

Config:

  • [ ] src/connectors/catalog/<name>.ts exists with a createConfig(): ConnectorConfig export
  • [ ] JSDoc header lists every known quirk and justifies every default scope
  • [ ] clientId / clientSecret loaded via requireEnv() — never hardcoded
  • [ ] authorizationUrl and tokenUrl are HTTPS
  • [ ] scopes is non-empty
  • [ ] usePkce: true (or JSDoc explains why not)
  • [ ] supportsRefresh and supportsMultipleAccounts reflect actual provider behaviour
  • [ ] fetchAccountInfo uses oauthFetch() and returns a stable externalAccountId
  • [ ] revokeToken is defined OR omitted with a JSDoc note ("provider has no revocation endpoint")
  • [ ] Override hooks (exchangeCode, refreshTokens) only used if no declarative field fits

Registry:

  • [ ] Entry added to CONNECTOR_CATALOG in src/connectors/catalog/index.ts
  • [ ] CONNECTOR_CATALOG key matches createConfig().name exactly

Env vars:

  • [ ] OAUTH_<NAME>_CLIENT_ID / OAUTH_<NAME>_CLIENT_SECRET template added to .env.local.example
  • [ ] Local values set in .env.local
  • [ ] OAuth app registered in the provider's dev console with the right redirect URI
  • [ ] Staging Railway env vars set
  • [ ] Production Railway env vars set (only for final merge)

Tests:

  • [ ] tests/connectors/<name>-provider.test.ts exists
  • [ ] runConnectorContractTests() called with expected scopes and extra params
  • [ ] Provider-specific fetchAccountInfo tests for success + missing fields + non-2xx
  • [ ] revokeToken tests if defined
  • [ ] npm run test -- tests/connectors/<name>-provider.test.ts passes

Validation:

  • [ ] npm run validate:connectors passes
  • [ ] npx tsc --noEmit passes

Smoke test:

  • [ ] Connect flow works end-to-end on local dev (via ngrok, Cloudflare Tunnel, or staging)
  • [ ] Disconnect + reconnect with the same account dedupes (same externalAccountId)
  • [ ] Token refresh works on an expired token (only if supportsRefresh: true)
  • [ ] /v1/connectors/available lists the connector
  • [ ] Discovery annotates a tool with type: 'connector' requirement as connector_required when disconnected
  • [ ] A sample tool that uses context.getConnectorToken('<name>') runs successfully when connected

Hook (only if postConnectionScript is set):

  • [ ] src/connectors/hooks/<name>.ts exists and has a default async export taking ConnectorPostAuthContext
  • [ ] Hook uses oauthFetch() for any HTTP calls
  • [ ] Hook calls ctx.updateMetadata() / ctx.updateAccountLabel() to persist enrichment back to the connector row
  • [ ] Hook failure is acceptable — the connector row is still usable even if the hook throws (framework logs + continues)

If every box is checked, the connector is ready to ship.

  • Adding Providers — integrating a new upstream API you pay to call (not user-authorized)
  • Tool Authoring — how to declare type: 'connector' requirements and use context.getConnectorToken() inside a handler
  • Architecture — where connectors sit in the runtime
  • CLI — validation and testing commands