This is the complete guide to adding a new connector to ToolRouter. A connector lets a user authorize ToolRouter against one of their SaaS accounts — Notion workspace, Google Workspace, Slack team, Linear org, Shopify store — so tools can act on their behalf without the user ever typing or pasting a token.
Adding a connector is deliberately boring. Most providers need one new file, a two-line registry edit, two env vars, and a test file that is mostly runConnectorContractTests(). The framework carries PKCE, state, refresh locks, token persistence, discovery annotation, and the frontend card — you only describe the quirks that make the provider different.
What is a connector (and how is it different from a provider)?
ToolRouter has two independent integration layers. They sound similar and they should not be used interchangeably.
| | Provider | Connector |
|---|---|---|
| What it is | An upstream API the platform pays to call on behalf of all users | A SaaS account the user authorizes us to act on |
| Auth model | Platform-held API key | Per-user OAuth access token |
| Examples | Exa, fal.ai, Prodia, Serper, OpenRouter, ElevenLabs | Notion, Google Workspace, Slack, Linear, Shopify, HubSpot |
| Where config lives | src/tools/shared/<provider>-client.ts + src/core/provider-catalog.ts | src/connectors/catalog/<name>.ts + src/connectors/catalog/index.ts |
| How tools declare it | requirements: [{ type: 'secret', name: 'fal', ... }] | requirements: [{ type: 'connector', name: 'notion', connector: 'notion', ... }] |
| How handlers use it | resolveKey(context, 'fal', 'FAL_KEY') | context.getConnectorToken('notion') |
| Billed to | Platform (passed through to user via raw_cost) | Nothing — the user's own SaaS account |
| User-facing copy | "powered by [N] models" — but never name the provider | "Connect your Notion workspace" |
Quick rule of thumb: if a tool could run without a specific user (e.g. "search the public web"), it uses a provider. If a tool only makes sense against a specific person's account (e.g. "list my unread Slack DMs"), it uses a connector.
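To make the distinction concrete, here is a minimal sketch of the two requirement shapes from the table (hypothetical types; the real requirement definitions live in the ToolRouter source):

```typescript
// Hypothetical minimal shapes mirroring the table above, not the real
// framework types.
type Requirement =
  | { type: 'secret'; name: string }                        // provider: platform-held key
  | { type: 'connector'; name: string; connector: string }; // connector: per-user OAuth token

// Provider-backed: "search the public web" runs the same for every user.
const webSearch: Requirement[] = [{ type: 'secret', name: 'serper' }];

// Connector-backed: "list my unread Slack DMs" only makes sense per-account.
const listUnreadDms: Requirement[] = [
  { type: 'connector', name: 'slack', connector: 'slack' },
];
```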
Adding a new upstream API you pay for? Read Adding Providers instead. This guide is only for OAuth-based user connectors.
When should I add a new connector?
Add a connector when there is a tool (or a family of tools) that only makes sense against a user's own account, and the provider exposes OAuth 2.0 as a sanctioned integration path.
Good candidates:
- Knowledge bases — Notion, Confluence, Coda, Craft.
- Productivity suites — Google Workspace, Microsoft 365, iCloud Contacts+Calendar (not yet available).
- Chat / messaging — Slack, Discord, Teams, Intercom.
- Project management — Linear, Jira, Asana, Monday, ClickUp.
- CRM — HubSpot, Salesforce, Pipedrive, Attio.
- Commerce — Shopify, Stripe Connect, Square, BigCommerce.
- Design / marketing — Figma, Webflow, Airtable, Mailchimp.
Poor candidates (don't bother):
- APIs the platform should call centrally with its own key — these are providers, not connectors. Anything where the user "paying for their own usage" is the point, and the user isn't the resource owner, belongs in Adding Providers.
- Providers with no sanctioned OAuth flow — if the only way in is scraping an authenticated cookie, it isn't a connector. Write a provider with a credential requirement instead, or skip the integration.
- One-shot consumer apps — if the tool only needs one API call per user and there is no useful recurring access pattern, an API key requirement is simpler.
Capture the shape of the connector in a dated file under docs/plans/ before writing code, the same way you would for a tool. It keeps the provider quirks, scope list, and scope justification in one reviewable place.
Overview
Adding a connector touches 6 files:
| File | Purpose |
|---|---|
| src/connectors/catalog/<name>.ts | Provider config — the createConfig() factory with all the quirks |
| src/connectors/catalog/index.ts | Provider loader registry — one new entry in CONNECTOR_CATALOG |
| tests/connectors/<name>-provider.test.ts | Contract tests using runConnectorContractTests() plus provider-specific tests |
| .env.local.example | Template entry for OAUTH_<NAME>_CLIENT_ID / OAUTH_<NAME>_CLIENT_SECRET |
| .env.local | Local OAuth app credentials for dev |
| src/connectors/hooks/<name>.ts | Optional — post-connection script (Atlassian cloud-id, Shopify shop metadata, etc.) |
Everything else is automatic:
- PKCE state generation and verification
- Authorization URL building and redirect
- Token exchange, refresh, and persistence to Convex
- Discovery filtering (tools with unmet connector requirements are hidden)
- The /dashboard/connectors card UI
- The /v1/connectors/available endpoint
- Per-user token resolution inside skill handlers
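The PKCE handling the framework carries follows RFC 7636. For reference, a minimal sketch of verifier/challenge generation with Node's crypto module (illustrative only; oauth2-client.ts owns the real implementation):

```typescript
import { createHash, randomBytes } from 'node:crypto';

// RFC 7636 sketch: the code_verifier is random; the code_challenge is its
// SHA-256 hash, base64url-encoded, sent on the authorize redirect.
function generatePkcePair(): { verifier: string; challenge: string; method: 'S256' } {
  const verifier = randomBytes(32).toString('base64url'); // 43-char verifier
  const challenge = createHash('sha256').update(verifier).digest('base64url');
  return { verifier, challenge, method: 'S256' };
}
```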
The framework files you will reference but never modify:
| File | Purpose |
|---|---|
| src/connectors/catalog/types.ts | The ConnectorConfig interface — the contract every provider implements |
| src/connectors/fetch.ts | oauthFetch() — the shared HTTP helper every provider uses |
| src/connectors/oauth2-client.ts | Thin wrapper over simple-oauth2 that honors the declarative auth-quirk fields + handles PKCE, token exchange, and refresh |
| src/gateway/connector-routes.ts | HTTP routes /v1/connectors/* mounted by the gateway |
| src/connectors/resolve-token.ts | The resolveConnectorToken() helper wired into SkillContext |
| tests/connectors/provider-contract.ts | runConnectorContractTests() — the shared contract test helper |
How do I scaffold a new connector?
There is no CLI command yet. Copy the stub file at src/connectors/catalog/notion.ts (or whichever existing provider is shaped closest to yours), rename it, and edit. Every provider is ~60–120 lines of declarative config plus one fetchAccountInfo function.
Step 1: Register an OAuth app with the provider
Every connector needs a registered OAuth app in the provider's developer console. You do this once per environment (local, staging, production) — each environment gets its own OAuth app so tokens and redirect URIs don't cross-contaminate.
At minimum the provider's console will ask for:
| Field | Value |
|---|---|
| App name | ToolRouter (local) / ToolRouter (staging) / ToolRouter |
| Redirect URI | {API_BASE}/v1/connectors/{name}/callback — see below |
| Scopes | Whatever your createConfig().scopes array will list |
| Homepage / support URL | https://toolrouter.com |
| Privacy / terms URL | https://toolrouter.com/privacy / https://toolrouter.com/terms |
The redirect URI is the one piece that is easy to get wrong. It must match exactly, character-for-character, what the gateway will build at runtime. The format is always:
{API_BASE}/v1/connectors/{name}/callback

Where {name} is the kebab-case connector name (the key in CONNECTOR_CATALOG) and {API_BASE} is:
| Environment | API base |
|---|---|
| Local dev | http://localhost:3141 |
| Staging | https://toolrouter-staging-7bf8.up.railway.app |
| Production | https://api.toolrouter.com |
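The construction is mechanical; a hypothetical helper that mirrors what the gateway builds at runtime:

```typescript
// Hypothetical helper illustrating the redirect-URI format; the gateway
// builds the real value internally.
function buildRedirectUri(apiBase: string, name: string): string {
  // Trim any trailing slash so we never emit "//v1".
  return `${apiBase.replace(/\/+$/, '')}/v1/connectors/${name}/callback`;
}
```

Whatever this produces is the string that must match the registered URI character-for-character.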
For example, for a new notion connector you would register three redirect URIs:
http://localhost:3141/v1/connectors/notion/callback
https://toolrouter-staging-7bf8.up.railway.app/v1/connectors/notion/callback
https://api.toolrouter.com/v1/connectors/notion/callback

Developer console links for the providers ToolRouter currently targets:
| Provider | Console |
|---|---|
| Google Workspace | https://console.cloud.google.com/apis/credentials |
| Notion | https://www.notion.so/my-integrations |
| Slack | https://api.slack.com/apps |
| Microsoft 365 | https://entra.microsoft.com/ (App registrations) |
| Linear | https://linear.app/settings/api/applications |
| HubSpot | https://developers.hubspot.com/docs/api/oauth-quickstart-guide |
| Airtable | https://airtable.com/create/oauth |
| Stripe Connect | https://dashboard.stripe.com/settings/connect |
| Shopify | https://partners.shopify.com/ |
| Figma | https://www.figma.com/developers/apps |
Save the client ID and client secret — you will paste them into .env.local in Step 4.
Step 2: Create the provider config
Create src/connectors/catalog/<name>.ts. This file exports a single createConfig(): ConnectorConfig factory that returns a fully populated config object.
The contract
The interface every provider implements lives in src/connectors/catalog/types.ts. Read that file — it is the source of truth for the shape. Here is what every field does:
| Field | Type | Required | Purpose |
|---|---|---|---|
| name | string | Yes | Kebab-case connector name. Must match the key in CONNECTOR_CATALOG. |
| displayName | string | Yes | Human name shown on the dashboard card: "Google Workspace", "Notion". |
| category | ConnectorCategory | No | One of productivity, communication, crm, design, commerce, data, devtools, media, other. Used for dashboard grouping. |
| alias | string | No | Alternate name the catalog can also be looked up by. |
| clientId | string | Yes | OAuth client ID. Loaded via requireEnv('OAUTH_<NAME>_CLIENT_ID', name). |
| clientSecret | string | Yes | OAuth client secret. Loaded via requireEnv('OAUTH_<NAME>_CLIENT_SECRET', name). |
| authorizationUrl | string | Yes | Provider's authorize endpoint. Must be HTTPS. Supports ${connectionConfig.x} interpolation. |
| tokenUrl | string | Yes | Provider's token endpoint. Must be HTTPS. Supports ${connectionConfig.x} interpolation. |
| revocationUrl | string | No | Provider's revoke endpoint. Omit if the provider has none (LinkedIn, Notion). |
| scopes | string[] | Yes | Default scopes to request. Non-empty. |
| usePkce | boolean | Yes | true for every modern provider. Only set false if the provider rejects PKCE (LinkedIn rejects code_verifier when a client secret is also present). |
| extraAuthorizeParams | Record<string, string> | No | Extra query params appended to the authorization URL (Google's access_type: offline, Notion's owner: user). |
| extraTokenParams | Record<string, string> | No | Extra body params appended to the token exchange request. |
| supportsMultipleAccounts | boolean | Yes | Can the same user have more than one connection? (Multiple Google accounts, multiple Slack workspaces.) |
| supportsRefresh | boolean | Yes | Does the provider issue refresh tokens? Notion and Linear are false; Google is true. |
| fetchAccountInfo | function | Yes | Takes the fresh tokens, returns { externalAccountId, accountLabel, metadata }. See below. |
| revokeToken | function | No | Best-effort revocation on disconnect. |
| exchangeCode | function | No | Override the token exchange. Only for non-RFC-6749 flows. |
| refreshTokens | function | No | Override the refresh flow. Only for non-RFC-6749 flows. |
| authorizationMethod | 'body' \| 'header' | No | How credentials are sent to the token endpoint. body (default) for most providers, header (HTTP Basic) for Notion, Figma, Twitter. |
| bodyFormat | 'form' \| 'json' | No | Token endpoint body encoding. form (default) for most, json for Notion. |
| scopeSeparator | string | No | Separator for joining scopes in the authorize URL. Defaults to space. Shopify uses ,. |
| alternateAccessTokenResponsePath | string | No | Dot path into the token response for nested tokens. Slack uses authed_user.access_token. |
| connectionConfig | Record<string, ConnectionConfigField> | No | Per-connection config fields the user fills in before connecting. Keyed by field name. Used for per-tenant providers (Shopify shop, Atlassian site). |
| postConnectionScript | string | No | Name of a hook module in src/connectors/hooks/ to run after the connection succeeds (Atlassian cloud-id fetch, Shopify shop metadata). |
Every declarative field has an equivalent override. The rule is: use declarative fields first, overrides only as a last resort. 95% of providers should never touch exchangeCode or refreshTokens.
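To see how several of these fields interact, here is a rough sketch of how an authorize URL could be assembled from them (illustrative only; the real assembly lives in oauth2-client.ts, and only the field names are taken from the table above):

```typescript
// Illustrative sketch only: how scopes, scopeSeparator, and
// extraAuthorizeParams could combine into an authorize URL.
interface AuthorizeInput {
  authorizationUrl: string;
  clientId: string;
  redirectUri: string;
  state: string;
  scopes: string[];
  scopeSeparator?: string;
  extraAuthorizeParams?: Record<string, string>;
}

function buildAuthorizeUrl(input: AuthorizeInput): string {
  const url = new URL(input.authorizationUrl);
  url.searchParams.set('response_type', 'code');
  url.searchParams.set('client_id', input.clientId);
  url.searchParams.set('redirect_uri', input.redirectUri);
  url.searchParams.set('state', input.state);
  // scopeSeparator defaults to a space; Shopify overrides it to ','.
  url.searchParams.set('scope', input.scopes.join(input.scopeSeparator ?? ' '));
  // extraAuthorizeParams carries quirks like Google's prompt=consent.
  for (const [key, value] of Object.entries(input.extraAuthorizeParams ?? {})) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}
```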
Provider documentation header
Every provider file starts with a JSDoc header that pins down the quirks. The header is the first thing reviewers read — make it count.
/**
* <Provider> OAuth connector.
*
* Docs: https://...
*
* KNOWN QUIRKS (read before editing):
*
* 1. <describe the quirk and the regression it protects against>
* 2. <...>
*
* SCOPE JUSTIFICATION:
* - <scope>: why it's in the default set
* - <scope>: why it's in the default set
*/

This is not optional. Every connector the platform has shipped has burned a day of debugging on a provider quirk that would have been obvious from the header. If you catch a quirk during implementation, document it here.
Archetype 1: Standard OAuth 2.0 provider (Google-shaped)
Most providers are boring. They implement RFC 6749 to the letter, accept PKCE, return JSON from the token endpoint, and expose a /userinfo endpoint you can hit with the access token. Google is the reference implementation.
// src/connectors/catalog/google.ts
/**
* Google Workspace Connector.
*
* Docs: https://developers.google.com/identity/protocols/oauth2/web-server
*
* CRITICAL QUIRKS HANDLED HERE:
*
* 1. `access_type=offline` + `prompt=consent` MUST both be in
* extraAuthorizeParams. Google only returns a refresh_token on the user's
* FIRST authorization unless `prompt=consent` is set. If a user re-auths
* without `prompt=consent`, we get NO refresh token and can never refresh
* again — silent, unrecoverable footgun. Regression test in
* tests/connectors/google-provider.test.ts guards both params.
*
* 2. `include_granted_scopes=true` enables incremental authorization so a
* tool that adds a new scope later doesn't lose previously-granted ones.
*/
import { oauthFetch } from '../fetch.js';
import { requireEnv } from './index.js';
import type {
ConnectorAccountInfo,
ConnectorConfig,
OAuthTokenSet,
} from './types.js';
const CONNECTOR_KIND = 'google';
const USERINFO_URL = 'https://openidconnect.googleapis.com/v1/userinfo';
const REVOKE_URL = 'https://oauth2.googleapis.com/revoke';
const DEFAULT_SCOPES = [
'openid',
'email',
'profile',
'https://www.googleapis.com/auth/gmail.modify',
'https://www.googleapis.com/auth/calendar',
'https://www.googleapis.com/auth/drive',
'https://www.googleapis.com/auth/documents',
'https://www.googleapis.com/auth/spreadsheets',
];
export function createConfig(): ConnectorConfig {
const clientId = requireEnv('OAUTH_GOOGLE_CLIENT_ID', CONNECTOR_KIND);
const clientSecret = requireEnv('OAUTH_GOOGLE_CLIENT_SECRET', CONNECTOR_KIND);
return {
name: CONNECTOR_KIND,
displayName: 'Google Workspace',
category: 'productivity',
clientId,
clientSecret,
authorizationUrl: 'https://accounts.google.com/o/oauth2/v2/auth',
tokenUrl: 'https://oauth2.googleapis.com/token',
revocationUrl: REVOKE_URL,
scopes: DEFAULT_SCOPES,
usePkce: true,
extraAuthorizeParams: {
// DO NOT REMOVE — see file header note 1.
access_type: 'offline',
prompt: 'consent',
include_granted_scopes: 'true',
},
supportsMultipleAccounts: true,
supportsRefresh: true,
fetchAccountInfo: googleFetchAccountInfo,
revokeToken: googleRevokeToken,
};
}
export async function googleFetchAccountInfo(tokens: OAuthTokenSet): Promise<ConnectorAccountInfo> {
const data = await oauthFetch<{
sub?: string; email?: string; name?: string; picture?: string; email_verified?: boolean;
}>(USERINFO_URL, {
headers: { Authorization: `Bearer ${tokens.accessToken}` },
providerName: CONNECTOR_KIND,
});
if (!data.sub || !data.email) {
throw new Error('Google userinfo response missing required fields (sub, email)');
}
return {
externalAccountId: data.sub,
accountLabel: data.email,
metadata: { name: data.name, picture: data.picture, verified_email: data.email_verified },
};
}
export async function googleRevokeToken(tokens: OAuthTokenSet): Promise<void> {
await oauthFetch(REVOKE_URL, {
method: 'POST',
body: new URLSearchParams({ token: tokens.accessToken }),
providerName: CONNECTOR_KIND,
});
}

That is the full file. Roughly 70 lines, all of it declarative or calls into the shared oauthFetch() helper. The framework does everything else.
Archetype 2: Notion-shaped (HTTP Basic + JSON body + no refresh)
Notion breaks the standard flow in three small ways: the token endpoint requires HTTP Basic auth, the body is JSON (not form-encoded), and access tokens never expire so there are no refresh tokens. All three are handled declaratively.
// src/connectors/catalog/notion.ts
/**
* Notion OAuth connector.
*
* Docs: https://developers.notion.com/docs/authorization
*
* KNOWN QUIRKS:
*
* 1. NO refresh tokens — Notion access tokens never expire by design.
* `supportsRefresh: false`. Do NOT flip this flag.
*
* 2. Token endpoint requires HTTP Basic auth (base64(clientId:clientSecret)),
* not body params. Handled via `authorizationMethod: 'header'`.
*
* 3. Token endpoint body must be JSON, not form-urlencoded.
* Handled via `bodyFormat: 'json'`.
*
* 4. Token response includes workspace_id, workspace_name, workspace_icon,
* bot_id, owner. externalAccountId = workspace_id (stable), accountLabel
* = workspace_name (display only).
*
* 5. Notion is capability-based, not scope-based. The authorize URL accepts
* an `owner=user` param for user-level installs; workspace-level installs
* omit it. We request `owner: user` because our tools act on individual
* user content, not shared workspace content.
*
* 6. Scopes are effectively fixed — we pass a single sentinel `workspace`
* scope so validateConnectorConfig() sees a non-empty array. The provider
* ignores it; real capabilities are configured in the Notion UI.
*/
import type { ConnectorAccountInfo, ConnectorConfig, OAuthTokenSet } from './types.js';
import { requireEnv } from './index.js';
import { oauthFetch } from '../fetch.js';
const CONNECTOR_KIND = 'notion';
export function createConfig(): ConnectorConfig {
const clientId = requireEnv('OAUTH_NOTION_CLIENT_ID', CONNECTOR_KIND);
const clientSecret = requireEnv('OAUTH_NOTION_CLIENT_SECRET', CONNECTOR_KIND);
return {
name: CONNECTOR_KIND,
displayName: 'Notion',
clientId,
clientSecret,
authorizationUrl: 'https://api.notion.com/v1/oauth/authorize',
tokenUrl: 'https://api.notion.com/v1/oauth/token',
// DO NOT set revocationUrl — Notion has no revocation endpoint.
scopes: ['workspace'], // sentinel; see header note 6
usePkce: true,
extraAuthorizeParams: {
owner: 'user', // see header note 5
},
supportsMultipleAccounts: true,
supportsRefresh: false, // see header note 1
authorizationMethod: 'header', // see header note 2
bodyFormat: 'json', // see header note 3
fetchAccountInfo: notionFetchAccountInfo,
};
}
export async function notionFetchAccountInfo(tokens: OAuthTokenSet): Promise<ConnectorAccountInfo> {
// Notion returns workspace info directly in the token response, but we fetch
// /users/me anyway as a sanity check that the token actually works.
await oauthFetch('https://api.notion.com/v1/users/me', {
headers: {
Authorization: `Bearer ${tokens.accessToken}`,
'Notion-Version': '2022-06-28',
},
providerName: CONNECTOR_KIND,
});
const meta = (tokens as unknown as Record<string, unknown>).metadata as
| { workspace_id?: string; workspace_name?: string; workspace_icon?: string; bot_id?: string }
| undefined;
const workspaceId = meta?.workspace_id;
const workspaceName = meta?.workspace_name;
if (!workspaceId || !workspaceName) {
throw new Error('Notion token response missing workspace_id or workspace_name');
}
return {
externalAccountId: workspaceId, // STABLE — see header note 4
accountLabel: workspaceName,
metadata: { workspace_icon: meta?.workspace_icon, bot_id: meta?.bot_id },
};
}

The key thing: Notion's quirks are handled with three declarative fields (authorizationMethod, bodyFormat, supportsRefresh). There is no custom exchangeCode override, no bespoke HTTP code. The framework carries all of it.
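For reference, the Basic credential that authorizationMethod: 'header' implies is just base64(clientId:clientSecret); a sketch (illustrative only; the framework builds the real header):

```typescript
// Sketch of the HTTP Basic credential a 'header' authorizationMethod sends
// to the token endpoint. Illustrative only.
function basicAuthHeader(clientId: string, clientSecret: string): string {
  const encoded = Buffer.from(`${clientId}:${clientSecret}`, 'utf8').toString('base64');
  return `Basic ${encoded}`;
}
```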
Archetype 3: Shopify-shaped (per-shop URLs + comma-separated scopes)
Shopify is the awkward one. Every store has its own subdomain, so the authorization URL is different for every connection. The user has to tell us the shop domain before we even know which URL to redirect to.
// src/connectors/catalog/shopify.ts
/**
* Shopify OAuth connector.
*
* Docs: https://shopify.dev/docs/apps/auth/oauth/getting-started
*
* KNOWN QUIRKS:
*
* 1. Authorization URL is per-shop: {shop}.myshopify.com/admin/oauth/authorize
* Handled via `${connectionConfig.shop}` template in authorizationUrl.
*
* 2. Scopes are COMMA-separated, not space-separated.
* Handled via `scopeSeparator: ','`.
*
* 3. externalAccountId = shop domain (the ".myshopify.com" domain, not the
* vanity domain the merchant may have set up).
*
* 4. HMAC validation on the callback is enforced inside the gateway routes,
* not here — see src/gateway/connector-routes.ts.
*
* SCOPE JUSTIFICATION:
* - read_products, write_products: product catalog read/write
* - read_orders: order history for analytics tools
* - read_customers: customer list for support tools
*/
import type { ConnectorAccountInfo, ConnectorConfig, OAuthTokenSet } from './types.js';
import { requireEnv } from './index.js';
import { oauthFetch } from '../fetch.js';
const CONNECTOR_KIND = 'shopify';
export function createConfig(): ConnectorConfig {
const clientId = requireEnv('OAUTH_SHOPIFY_CLIENT_ID', CONNECTOR_KIND);
const clientSecret = requireEnv('OAUTH_SHOPIFY_CLIENT_SECRET', CONNECTOR_KIND);
return {
name: CONNECTOR_KIND,
displayName: 'Shopify',
category: 'commerce',
clientId,
clientSecret,
// Templated — ${connectionConfig.shop} is substituted at runtime.
authorizationUrl: 'https://${connectionConfig.shop}.myshopify.com/admin/oauth/authorize',
tokenUrl: 'https://${connectionConfig.shop}.myshopify.com/admin/oauth/access_token',
scopes: ['read_products', 'write_products', 'read_orders', 'read_customers'],
scopeSeparator: ',', // see header note 2
usePkce: true,
supportsMultipleAccounts: true,
supportsRefresh: false,
// Record keyed by field name — the framework renders one input per entry
// on the connect page before redirecting to the authorize URL.
connectionConfig: {
shop: {
type: 'string',
title: 'Shop domain',
description: 'Your .myshopify.com subdomain (e.g. "my-store" for my-store.myshopify.com).',
example: 'my-store',
pattern: '^[a-z0-9][a-z0-9-]*$',
suffix: '.myshopify.com',
},
},
fetchAccountInfo: shopifyFetchAccountInfo,
};
}
export async function shopifyFetchAccountInfo(tokens: OAuthTokenSet): Promise<ConnectorAccountInfo> {
const meta = (tokens as unknown as Record<string, unknown>).connectionConfig as
| { shop?: string }
| undefined;
const shop = meta?.shop;
if (!shop) throw new Error('Shopify token resolved without a connectionConfig.shop value');
const data = await oauthFetch<{ shop: { id: number; name: string; domain: string } }>(
`https://${shop}.myshopify.com/admin/api/2024-10/shop.json`,
{
headers: { 'X-Shopify-Access-Token': tokens.accessToken },
providerName: CONNECTOR_KIND,
},
);
return {
externalAccountId: `${shop}.myshopify.com`, // STABLE — see header note 3
accountLabel: data.shop.name,
metadata: { shop_id: data.shop.id, domain: data.shop.domain },
};
}

The template syntax ${connectionConfig.shop} is not a JavaScript template literal — it is a declarative placeholder the framework substitutes after the user submits the connection form. Keep it as a plain string or the template will be evaluated at import time with an undefined value.
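A sketch of what that substitution could look like (hypothetical; the framework owns the real logic):

```typescript
// Hypothetical sketch of ${connectionConfig.x} placeholder substitution,
// run after the user submits the connection form.
function interpolateUrl(template: string, connectionConfig: Record<string, string>): string {
  return template.replace(/\$\{connectionConfig\.([A-Za-z_]+)\}/g, (_match, key: string) => {
    const value = connectionConfig[key];
    if (value === undefined) {
      throw new Error(`Missing connectionConfig field: ${key}`);
    }
    return value;
  });
}
```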
Other archetypes you may hit
| Provider | Quirk | Field to use |
|---|---|---|
| Slack | Token response nests the access token at authed_user.access_token | alternateAccessTokenResponsePath: 'authed_user.access_token' |
| Stripe Connect | Completely bespoke /oauth/token flow with grant_type=authorization_code but custom param names | exchangeCode override (last resort) |
| Airtable | Refresh tokens rotate on every use | Nothing special — the framework persists the new refresh token automatically |
| Linear | Userinfo is via GraphQL, not REST | Call the GraphQL endpoint inside fetchAccountInfo with oauthFetch() |
| HubSpot | Access tokens expire in 30 minutes | Just set supportsRefresh: true — the framework handles the refresh cadence |
| Microsoft | Tenant-scoped endpoints (common vs {tenant}) | Use common for multi-tenant, or add tenant to connectionConfig |
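The Slack row's dot-path lookup is easy to picture; a hypothetical resolver for alternateAccessTokenResponsePath (the framework has its own):

```typescript
// Hypothetical sketch of dot-path resolution for
// alternateAccessTokenResponsePath.
function getByPath(obj: unknown, path: string): unknown {
  return path.split('.').reduce<unknown>((current, key) => {
    if (current && typeof current === 'object') {
      return (current as Record<string, unknown>)[key];
    }
    return undefined;
  }, obj);
}

// Slack nests the user token inside the OAuth response:
const slackResponse = { ok: true, authed_user: { access_token: 'xoxp-test-token' } };
```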
Step 3: Register the provider in the loader
Open src/connectors/catalog/index.ts and add a one-line entry to CONNECTOR_CATALOG:
// src/connectors/catalog/index.ts
import type { ConnectorConfig } from './types.js';
type ConnectorFactory = () => Promise<ConnectorConfig>;
const CONNECTOR_CATALOG: Record<string, ConnectorFactory> = {
google: async () => (await import('./google.js')).createConfig(),
notion: async () => (await import('./notion.js')).createConfig(),
slack: async () => (await import('./slack.js')).createConfig(),
// ...
shopify: async () => (await import('./shopify.js')).createConfig(), // ← your new line
};

Three rules:
- The key must match createConfig().name. If they drift, the loader throws at startup and validate:connectors fails. The validator checks this explicitly.
- Use await import(), not a top-level import. Provider configs read env vars at load time, and we want that to happen lazily — one provider with a missing secret should not break every other provider.
- Keep the map alphabetical so diffs stay clean.
That is the entire registry change. getConnectorConfig(name) now resolves for your provider, the /v1/connectors/available endpoint picks it up, the dashboard shows a Connect card, and discovery starts annotating tools that require it.
Step 4: Add env vars
Every connector needs two env vars:
OAUTH_<NAME>_CLIENT_ID=
OAUTH_<NAME>_CLIENT_SECRET=

Where <NAME> is the uppercased, underscored form of your connector name: google → OAUTH_GOOGLE_CLIENT_ID, shopify → OAUTH_SHOPIFY_CLIENT_ID.
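The naming rule is mechanical; a hypothetical helper, just to pin it down:

```typescript
// Hypothetical helper: connector name → env var prefix.
// Uppercase the name and turn dashes into underscores.
function envVarPrefix(connectorName: string): string {
  return `OAUTH_${connectorName.toUpperCase().replace(/-/g, '_')}`;
}
```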
Local development
Add them to .env.local:
OAUTH_SHOPIFY_CLIENT_ID=your-local-client-id
OAUTH_SHOPIFY_CLIENT_SECRET=your-local-client-secret

Also add a template entry to .env.local.example so the next developer knows they exist:
# Shopify connector (https://partners.shopify.com)
OAUTH_SHOPIFY_CLIENT_ID=
OAUTH_SHOPIFY_CLIENT_SECRET=

Staging and production (Railway)
# Staging
railway link --service ToolRouter --environment staging
railway variables --set "OAUTH_SHOPIFY_CLIENT_ID=..." --set "OAUTH_SHOPIFY_CLIENT_SECRET=..."
# Production
railway link --service ToolRouter --environment production
railway variables --set "OAUTH_SHOPIFY_CLIENT_ID=..." --set "OAUTH_SHOPIFY_CLIENT_SECRET=..."

Each environment gets its own OAuth app registered at the provider (see Step 1), so the client IDs and secrets are different in every environment. This is deliberate — token rotation in staging never touches production tokens.
How the registry picks them up
The requireEnv() helper in src/connectors/catalog/index.ts throws a clear error naming the missing variable if either env var is absent:
OAuth provider "shopify" is not configured: missing env var OAUTH_SHOPIFY_CLIENT_ID.
Set it in .env.local or in the deployment environment.

The listReadyConnectors() helper walks CONNECTOR_CATALOG and only returns providers whose env vars are present. The /v1/connectors/available endpoint calls this, so the dashboard Connect button only appears once both env vars exist. Registering the provider without setting env vars doesn't break anything — the card is just hidden until the env vars land.
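A simplified sketch of the requireEnv() behavior described above (read the real helper in src/connectors/catalog/index.ts):

```typescript
// Simplified sketch of requireEnv(); the real helper lives in
// src/connectors/catalog/index.ts.
function requireEnv(varName: string, providerName: string): string {
  const value = process.env[varName];
  if (!value) {
    throw new Error(
      `OAuth provider "${providerName}" is not configured: missing env var ${varName}. ` +
        'Set it in .env.local or in the deployment environment.',
    );
  }
  return value;
}
```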
Step 5: Write the test file
Create tests/connectors/<name>-provider.test.ts. Every connector test file starts with a call to the shared runConnectorContractTests() helper, which runs the standard battery of contract checks: config loads, required fields populated, scopes non-empty, PKCE enabled, expected params present, missing env vars throw clear errors. You then add provider-specific tests for fetchAccountInfo / revokeToken response shapes.
A full working example for the shopify connector:
// tests/connectors/shopify-provider.test.ts
import { afterAll, afterEach, beforeAll, describe, expect, it } from 'vitest';
import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';
import { shopifyFetchAccountInfo } from '../../src/connectors/catalog/shopify.js';
import type { OAuthTokenSet } from '../../src/connectors/catalog/types.js';
import { runConnectorContractTests } from './provider-contract.js';
const server = setupServer();
beforeAll(() => server.listen({ onUnhandledRequest: 'error' }));
afterEach(() => server.resetHandlers());
afterAll(() => server.close());
const mockTokens = {
accessToken: 'shpat_test-access-token',
tokenType: 'Bearer',
// connectionConfig is injected by the gateway at resolution time.
connectionConfig: { shop: 'my-store' },
} as unknown as OAuthTokenSet;
// ─── Standard contract: every connector runs through this ───────────────
runConnectorContractTests({
name: 'shopify',
envVars: ['OAUTH_SHOPIFY_CLIENT_ID', 'OAUTH_SHOPIFY_CLIENT_SECRET'],
setEnv: () => {
process.env.OAUTH_SHOPIFY_CLIENT_ID = 'test-client-id';
process.env.OAUTH_SHOPIFY_CLIENT_SECRET = 'test-client-secret';
},
expectedScopes: ['read_products', 'write_products', 'read_orders', 'read_customers'],
expectedSupportsRefresh: false,
expectedSupportsMultipleAccounts: true,
expectedUsePkce: true,
});
// ─── Provider-specific tests: Shopify's unique response shapes ──────────
describe('shopifyFetchAccountInfo', () => {
it('calls /admin/api/shop.json with the access token and maps the response', async () => {
let receivedTokenHeader: string | null = null;
server.use(
http.get('https://my-store.myshopify.com/admin/api/2024-10/shop.json', ({ request }) => {
receivedTokenHeader = request.headers.get('X-Shopify-Access-Token');
return HttpResponse.json({
shop: { id: 12345, name: 'My Store', domain: 'my-store.com' },
});
}),
);
const info = await shopifyFetchAccountInfo(mockTokens);
expect(receivedTokenHeader).toBe('shpat_test-access-token');
expect(info.externalAccountId).toBe('my-store.myshopify.com');
expect(info.accountLabel).toBe('My Store');
expect(info.metadata).toEqual({ shop_id: 12345, domain: 'my-store.com' });
});
it('throws when connectionConfig.shop is missing', async () => {
const tokensWithoutShop = { accessToken: 'x', tokenType: 'Bearer' } as OAuthTokenSet;
await expect(shopifyFetchAccountInfo(tokensWithoutShop)).rejects.toThrow(/connectionConfig\.shop/);
});
it('throws when the /shop.json response is non-2xx', async () => {
server.use(
http.get('https://my-store.myshopify.com/admin/api/2024-10/shop.json', () =>
HttpResponse.json({ errors: 'Invalid API key' }, { status: 401 }),
),
);
await expect(shopifyFetchAccountInfo(mockTokens)).rejects.toThrow(/401/);
});
});What the contract test helper checks
Calling runConnectorContractTests({ name, envVars, setEnv, expectedScopes, ... }) generates these assertions for free:
| Assertion | Why |
|---|---|
| getConnectorConfig(name) returns a defined config | Catches typos in CONNECTOR_CATALOG |
| All required string fields are truthy | Catches empty displayName / clientId / URLs |
| authorizationUrl and tokenUrl are HTTPS | Catches http:// regressions |
| scopes is a non-empty array | Catches empty scope lists |
| usePkce matches expectedUsePkce (default true) | Catches accidental PKCE-off |
| supportsRefresh matches expectedSupportsRefresh | Catches regression flips |
| supportsMultipleAccounts matches expectedSupportsMultipleAccounts | Same |
| fetchAccountInfo is a function | Catches missing hook |
| Every scope in expectedScopes is present | Regression guard for scope removal |
| Every key/value in expectedExtraAuthorizeParams is present | Regression guard for critical params (Google's prompt=consent) |
| For every env var: removing it throws an error containing the var name | Catches ambiguous "undefined is not a function" errors |
If any of these fail, you either forgot a field in the config or the provider deviates from the standard in a way that needs a new declarative field. Prefer the latter — if the test makes sense as a regression guard, it is correct.
What you still have to write yourself
Provider-specific fetchAccountInfo tests — success, missing required fields, non-2xx. And revokeToken tests if you defined one. That is usually 30–60 extra lines of msw handlers and expect() calls.
Run the tests
npm run test -- tests/connectors/shopify-provider.test.ts

Expected: every assertion passes. If runConnectorContractTests() fails on a regression guard you didn't expect, the error message names the field and the expected value — fix the config, not the test.
Step 6: (Optional) postConnectionScript hook
Some providers need extra work right after the token exchange succeeds — before the connector row is written to Convex. The canonical example is Atlassian: the /oauth/token response doesn't include the cloud-id, so you have to make a follow-up call to /oauth/token/accessible-resources and stash the cloud-id in metadata.
If you need this, create src/connectors/hooks/<name>.ts:
```ts
// src/connectors/hooks/atlassian.ts
/**
 * Atlassian post-connection hook.
 *
 * Fetches the Jira / Confluence cloud-id for the authorized workspace
 * and stashes it in metadata so skills can build API URLs without an
 * extra round-trip on every call.
 *
 * Docs: https://developer.atlassian.com/cloud/jira/platform/oauth-2-3lo-apps/
 */
import { oauthFetch } from '../fetch.js';
import type { ConnectorPostAuthContext } from '../catalog/types.js';

interface AccessibleResource {
  id: string;
  name: string;
  url: string;
  scopes: string[];
  avatarUrl: string;
}

export default async function atlassianPostConnection(
  ctx: ConnectorPostAuthContext,
): Promise<void> {
  const resources = await oauthFetch<AccessibleResource[]>(
    'https://api.atlassian.com/oauth/token/accessible-resources',
    {
      headers: { Authorization: `Bearer ${ctx.tokens.accessToken}` },
      providerName: 'atlassian',
    },
  );

  if (!Array.isArray(resources) || resources.length === 0) {
    throw new Error(
      'Atlassian: no accessible resources returned — user may not have granted site access',
    );
  }

  // Single-site user → use that site. Multi-site users pick via a follow-up
  // prompt in the frontend; for now we just pick the first and stash all IDs.
  const primary = resources[0];
  await ctx.updateMetadata({
    cloud_id: primary.id,
    site_name: primary.name,
    site_url: primary.url,
    available_sites: resources.map((r) => ({ id: r.id, name: r.name, url: r.url })),
  });
  await ctx.updateAccountLabel(primary.name);
}
```

Then reference it from the connector config:
```ts
return {
  name: 'atlassian',
  // ...
  postConnectionScript: 'atlassian', // kebab-case filename without extension
  fetchAccountInfo: atlassianFetchAccountInfo,
};
```

The framework dynamically imports `src/connectors/hooks/<postConnectionScript>.js`, calls its default export with a `ConnectorPostAuthContext` (see `src/connectors/catalog/types.ts`), and the hook uses the `updateMetadata` / `updateAccountLabel` callbacks to write changes back to the newly created connector row. Hook failures are logged but don't roll back the connector — the row is usable, only the enrichment is missing.
When you actually need this
Rarely. If any of these are true, you need a post-connection hook:
- The provider's token response does not give you a stable connection ID and the only way to get one is a follow-up API call.
- The provider exposes per-connection config you can only fetch after auth (workspace ID, subdomain, organization slug).
- The provider requires you to register a webhook on their side as part of "installing the app" and that has to happen before the first tool call.
If none of those are true, skip this step. Put the extra API calls inside fetchAccountInfo instead.
Validator check
If postConnectionScript is set, validate:connectors checks that src/connectors/hooks/<name>.ts exists. Missing hook file → validator fails.
Step 7: Run the validator
```bash
npm run validate:connectors
```

The validator walks every connector in `CONNECTOR_CATALOG`, runs its `createConfig()` with mock env vars, and checks:
| Check | What it catches |
|---|---|
| Provider module loads | Missing file, missing `createConfig` export, syntax errors |
| `createConfig()` returns a config | Throwing on load, returning `undefined` |
| `validateConnectorConfig(config)` passes | Missing required fields, empty scopes, non-HTTPS URLs, non-kebab-case names |
| `config.name` matches the loader key | Rename drift (registry says `shopify`, config says `shopifyy`) |
| Every optional hook is a function when declared | Typos like `fetchAccountInfo: somethingUndefined` |
| `postConnectionScript` points to a real file | Typo in the hook name |
| Test file exists at `tests/connectors/<name>-provider.test.ts` | Forgetting the test file altogether |
On success:

```
Validating 10 provider(s)...
OK — all 10 provider(s) passed validation
```

On failure:

```
Validating 10 provider(s)...
FAILED — 2 validation error(s):
  [shopify] validateConnectorConfig rejected the config: OAuth provider "shopify": scopes must be a non-empty array
  [stripe] no test file found at tests/connectors/stripe-provider.test.ts (or tests/oauth/stripe-provider.test.ts)
```

Wire this into CI. The script exits non-zero on any failure.
Step 8: Smoke test
The local smoke test is the only way to be sure the provider works end-to-end before merging. Run it every time.
Start the gateway
```bash
npm run dev:api
```

Watch stderr for `[connectors] mounted /v1/connectors/*` — that confirms the connector routes registered.
Verify the provider is discoverable
```bash
curl -s http://localhost:3141/v1/connectors/available | jq
```

Expected: your new connector appears in the array with `displayName`, `name`, and whatever `connectionConfig` fields you declared. If it is missing, either:

- the env vars for your connector are not set (the endpoint hides providers with missing config), or
- the provider threw during `createConfig()` — check stderr for the actual error
Open the dashboard
Visit http://localhost:3000/dashboard/connectors. The Connect card for your provider should be visible. Click Connect.
Expected flow:

1. Browser redirects to the provider's authorize page
2. You log in and approve the scopes
3. Provider redirects back to `/v1/connectors/<name>/callback`
4. Gateway exchanges the code, calls `fetchAccountInfo`, writes the connector row
5. Browser ends up on `/dashboard/connectors` with a row showing your account label
Things that can go wrong:
| Symptom | Fix |
|---|---|
| "Redirect URI mismatch" error from the provider | The redirect URI in the provider's console doesn't match `{API_BASE}/v1/connectors/<name>/callback` exactly. Trailing slashes, http vs https, localhost vs 127.0.0.1 all count. |
| Callback returns 500 with "invalid_grant" | PKCE verifier mismatch, usually because the /start and /callback requests went to different gateway instances. In dev this is a clock-skew issue — check the state table TTL. |
| `fetchAccountInfo` throws "missing required fields" | The provider changed the userinfo response shape. Log the response body and update the parser. |
| Token exchange throws "http 400" | The provider rejected the body format. Check `bodyFormat`, `authorizationMethod`, and `extraTokenParams`. |
| Dashboard card is stuck on "Connecting..." | The gateway wrote the row but the frontend poll missed it. Reload the page. |
Disconnect and reconnect
Click Disconnect, then Connect again. The expected behaviour:

- If `supportsMultipleAccounts: false`, the new connection overwrites the old row (same `externalAccountId`).
- If `supportsMultipleAccounts: true`, the new connection either matches the old `externalAccountId` (deduped — overwrite) or is a new account (a second row appears).

If disconnect + reconnect with the same provider account creates two rows, your `externalAccountId` is not stable — go back to Step 2 and fix `fetchAccountInfo`.
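The dedupe rule above can be sketched as a small upsert function. This is an illustrative sketch, not the real persistence code (which lives in Convex); the `ConnectorRow` shape and `upsertConnection` name are assumptions made for the example:

```typescript
// Hypothetical sketch of the disconnect/reconnect dedupe rule.
interface ConnectorRow {
  connector: string;
  externalAccountId: string;
  accountLabel: string;
}

function upsertConnection(
  rows: ConnectorRow[],
  incoming: ConnectorRow,
  supportsMultipleAccounts: boolean,
): ConnectorRow[] {
  const idx = rows.findIndex(
    (r) =>
      r.connector === incoming.connector &&
      // Single-account providers overwrite whatever row exists;
      // multi-account providers only overwrite a matching externalAccountId.
      (!supportsMultipleAccounts || r.externalAccountId === incoming.externalAccountId),
  );
  if (idx >= 0) {
    const next = rows.slice();
    next[idx] = incoming; // same account (or single-account provider): overwrite
    return next;
  }
  return [...rows, incoming]; // new externalAccountId: second row appears
}
```

The key point: if `externalAccountId` is unstable (e.g. an email), the `findIndex` match fails on reconnect and a duplicate row is created.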
Refresh test (only if `supportsRefresh: true`)
In Convex, manually set the connector's `expiresAt` to a timestamp in the past. Then call any tool that uses the connector. The gateway should:

1. Notice the token is stale
2. Call `/v1/connectors/<name>/refresh` internally
3. Exchange the refresh token for a fresh access token
4. Persist the new token
5. Hand the fresh token to the skill handler
6. Skill succeeds

If step 3 throws "invalid_refresh_token", the refresh-token rotation rule is wrong — some providers rotate on every refresh (Airtable), others do not (Google). The framework handles both, but only if your config is correct.
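The rotation rule is easy to state in code. The following is a sketch under assumed names (`Tokens`, `applyRefresh`, `isStale` are illustrative, not the framework's actual API); the one load-bearing line is the `??` fallback that keeps the old refresh token when the provider does not rotate:

```typescript
// Illustrative sketch of staleness detection and refresh-token rotation.
interface Tokens {
  accessToken: string;
  refreshToken: string;
  expiresAt: number; // epoch ms
}

interface RefreshResponse {
  access_token: string;
  refresh_token?: string; // rotating providers (e.g. Airtable) return a new one
  expires_in: number; // seconds
}

function isStale(t: Tokens, now: number, skewMs = 60_000): boolean {
  // Refresh slightly before actual expiry to absorb clock skew.
  return t.expiresAt - skewMs <= now;
}

function applyRefresh(old: Tokens, res: RefreshResponse, now: number): Tokens {
  return {
    accessToken: res.access_token,
    // Rotating provider: persist the new refresh token.
    // Non-rotating provider (Google-style): keep the old one.
    refreshToken: res.refresh_token ?? old.refreshToken,
    expiresAt: now + res.expires_in * 1000,
  };
}
```

Getting this backwards (discarding a rotated refresh token, or overwriting a still-valid one with `undefined`) is exactly what produces the "invalid_refresh_token" on the *second* refresh.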
Discovery annotation
The discover pipeline is exposed via the MCP discover meta-tool (not a plain REST endpoint). The quickest way to verify discovery picks up your connector is to call it through the staging MCP you have wired into Claude Code — search for a tool that declares your connector as a requirement, and inspect the result.
What to look for:
- When disconnected, the tool should come back with `available: false`, `reason: 'connector_required'`, and `connector_required: [{ connector: '<your-connector>', connect_url: 'https://.../connectors/<your-connector>?tool=<tool-name>', ... }]`.
- When connected, the tool should come back fully available with no `connector_required` field.
- Every discover response should also include a top-level `connectors` array listing the user's connected accounts. Your new connector should appear there once the user has connected it, with `kind`, `displayName`, `accountLabel`, `isDefault`, and `scope` fields populated.
If discovery never annotates the tool, the requirement's connector field probably doesn't match an entry in CONNECTOR_CATALOG, or createConfig() is throwing during discover (check stderr). The registration-time validator in src/core/registry.ts should have caught a mismatched name at gateway boot.
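To make the expected shape concrete, here is a hypothetical discover result for a still-disconnected user. The field names come from the discovery contract described above; the tool name, connector, and URL are made up for illustration and are not real endpoints:

```typescript
// Hypothetical discover annotation for a disconnected user (illustrative only).
const discoverResult = {
  tools: [
    {
      name: 'notion_search', // made-up tool name
      available: false,
      reason: 'connector_required',
      connector_required: [
        {
          connector: 'notion',
          // made-up base URL; the real connect_url points at the dashboard
          connect_url: 'https://app.example.com/connectors/notion?tool=notion_search',
        },
      ],
    },
  ],
  // Populated once the user has connected at least one account, e.g.:
  // { kind: 'notion', displayName: 'Notion', accountLabel: 'Acme', isDefault: true, scope: [...] }
  connectors: [] as Array<Record<string, unknown>>,
};
```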
Provider quirk reference
The most common provider quirks and how to handle them declaratively:
| Quirk | Solution |
|---|---|
| Token endpoint needs HTTP Basic auth (`clientId:clientSecret`) | `authorizationMethod: 'header'` |
| Token endpoint body must be JSON (not form-urlencoded) | `bodyFormat: 'json'` |
| Scopes are comma-separated in the authorize URL | `scopeSeparator: ','` |
| Per-shop / per-tenant authorization URL | `authorizationUrl: 'https://${connectionConfig.shop}.example.com/oauth/authorize'` + matching `connectionConfig` field |
| Token response nests the access token deep in the body | `alternateAccessTokenResponsePath: 'authed_user.access_token'` |
| Provider issues no refresh tokens | `supportsRefresh: false` |
| Provider only allows one connection per user | `supportsMultipleAccounts: false` |
| Provider requires extra query params on authorize (prompt, access_type, owner) | `extraAuthorizeParams: { prompt: 'consent', ... }` |
| Provider requires extra body params on token exchange | `extraTokenParams: { grant_type: 'authorization_code', ... }` |
| Cloud-id / site-id / workspace-id only available after auth | `postConnectionScript: '<name>'` + hook file |
| Completely bespoke code-exchange flow (last resort) | `exchangeCode` override |
| Completely bespoke refresh flow (last resort) | `refreshTokens` override |
If your provider's quirk is not in this table, you have two options: add a new declarative field to `ConnectorConfig` and handle it once in the framework, or override `exchangeCode` / `refreshTokens`. Prefer the former — the point of the framework is that every provider looks the same to anyone debugging a weird flow three months later.
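As an example of how cheap a declarative quirk is to handle once, here is a sketch of resolving a dotted `alternateAccessTokenResponsePath` against a nested token response. The `getByPath` helper is illustrative (the framework's actual implementation may differ); the Slack-style response shape is real, the token value is made up:

```typescript
// Sketch: resolve a dotted path like 'authed_user.access_token'
// against a token response object.
function getByPath(obj: unknown, path: string): unknown {
  return path.split('.').reduce<unknown>(
    (cur, key) =>
      cur !== null && typeof cur === 'object'
        ? (cur as Record<string, unknown>)[key]
        : undefined,
    obj,
  );
}

// Slack-style response: the user token is nested under authed_user.
const slackTokenResponse = {
  ok: true,
  authed_user: { id: 'U123', access_token: 'xoxp-example' }, // made-up token
};

const token = getByPath(slackTokenResponse, 'authed_user.access_token');
```

One generic helper covers every provider that nests its token, which is the whole argument for declarative fields over per-provider overrides.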
PKCE and state
You do not handle PKCE or state yourself. The gateway routes at `src/gateway/connector-routes.ts` do both:

- On `/v1/connectors/<name>/start`, the gateway generates a PKCE code-verifier, derives the code-challenge, generates a CSRF state, and writes a row to the `oauth_state` Convex table keyed by the state value. The state row has a 10-minute TTL.
- On `/v1/connectors/<name>/callback`, the gateway reads the state row, verifies it, pulls the code-verifier, and exchanges the code.

`usePkce: true` is the default and what every modern provider expects. Set it to `false` only if the provider explicitly rejects PKCE (which as of 2026 is roughly none of them — Slack deprecated the non-PKCE flow in 2024).
The state parameter is not optional and cannot be disabled. Every flow gets a fresh state row.
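For reference, the verifier/challenge pair the gateway generates on `/start` follows RFC 7636's S256 method. A minimal sketch (function names are assumptions; the real code lives in `src/gateway/connector-routes.ts`):

```typescript
import { createHash, randomBytes } from 'node:crypto';

// base64url = standard base64 with +/ swapped for -_ and padding stripped.
function base64url(buf: Buffer): string {
  return buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

// 32 random bytes encode to a 43-char base64url string,
// within RFC 7636's required 43-128 character range.
function makeVerifier(): string {
  return base64url(randomBytes(32));
}

// S256 challenge: base64url(SHA-256(verifier)).
function makeChallenge(verifier: string): string {
  return base64url(createHash('sha256').update(verifier).digest());
}
```

The verifier stays server-side in the `oauth_state` row; only the challenge goes into the authorize URL, which is why the `/start` and `/callback` requests must see the same state row.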
Common pitfalls
The mistakes that have actually landed in review on real provider PRs:
- Forgetting to add the provider to `CONNECTOR_CATALOG`. The file exists, `createConfig()` is exported, but the registry doesn't know about it. Discovery never annotates, the Connect card never shows. `validate:connectors` won't catch this, because it only walks `CONNECTOR_CATALOG`. Symptom: `GET /v1/connectors/available` doesn't include your provider.
- Redirect URI typos. The one in the provider's console must match `{API_BASE}/v1/connectors/<name>/callback` character for character. `http` vs `https`, trailing slashes, port numbers, uppercase vs lowercase — all matter. Symptom: the provider shows "redirect_uri_mismatch" before you ever hit the gateway.
- Using raw `fetch()` instead of `oauthFetch()`. Your hook hangs for 60 seconds on a dead provider, the gateway blocks, other users' flows time out. `oauthFetch()` enforces a 10-second timeout. No exceptions.
- Setting `supportsRefresh: true` for a provider that doesn't issue refresh tokens. The framework thinks it can refresh, the refresh call fails with `invalid_grant` the first time a token expires, and the user gets silently kicked. Double-check the provider's docs before setting this flag.
- Using email / name as `externalAccountId`. Users change emails. Users change display names. `externalAccountId` must be something the provider guarantees is stable: Google `sub`, Notion `workspace_id`, Slack `team_id`, Shopify shop domain, Stripe `stripe_user_id`. Use the metadata, not the label.
- Hardcoding scopes across all users. The default scope set is what every user gets. If a single tool needs extra scopes, that is a sign the tool should either (a) require the scopes as part of its requirement declaration or (b) belong to a different connector entirely.
- Mixing connector env vars across environments. Local `.env.local`, staging Railway, and production Railway each need their own OAuth app and their own client IDs. Don't reuse. Separate OAuth apps are the only thing standing between a staging token leak and a production account compromise.
- Committing the client secret. `.env.local` is gitignored; `.env.local.example` is committed. Put the template in `.env.local.example` with empty values, put the real values in `.env.local`. Never the other way around.
- Forgetting to register the provider in `MEDIA_PROVIDER_REQUIREMENTS`. Wait — that is a different system. Media providers are providers, not connectors. Ignore this for connector PRs.
- Test file at `tests/oauth/` instead of `tests/connectors/`. The primary location is `tests/connectors/`. The legacy `tests/oauth/` location is accepted by the validator as a fallback for older code. New code goes in `tests/connectors/`.
- JSDoc header missing the scope justification. Every scope in the default set must earn its place. If you can't write one line about why the scope is there, remove it.
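The timeout pitfall above is worth seeing in miniature. This is not `oauthFetch()`'s actual code, just an illustrative sketch of the rule it enforces (reject rather than hang when the upstream never answers); the `withTimeout` name is an assumption:

```typescript
// Sketch: race a promise against a deadline so a dead provider
// produces a fast, named error instead of a 60-second hang.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`${label}: timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

In production code you would also abort the underlying request (e.g. via `AbortController`) so the socket is actually released, not just the promise settled.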
When do I need a flow override?
exchangeCode and refreshTokens are escape hatches. You almost never need them. The question to ask is: "does this provider's flow deviate from RFC 6749 in a way no declarative field can model?"
| Situation | Use declarative fields | Use an override |
|---|---|---|
| Token endpoint wants HTTP Basic | authorizationMethod: 'header' | — |
| Token endpoint wants JSON body | bodyFormat: 'json' | — |
| Token endpoint wants extra params | extraTokenParams | — |
| Token response is nested | alternateAccessTokenResponsePath | — |
| Token response has custom fields we want to persist | Return them from fetchAccountInfo metadata | — |
| Provider uses a completely different flow (`grant_type=<custom>`, signed JWT assertion, etc.) | — | `exchangeCode` override |
| Refresh requires a signed JWT or bespoke signature | — | refreshTokens override |
If you reach for an override, leave a JSDoc comment explaining exactly why the declarative fields don't work. The next person should be able to decide at a glance whether the override is still needed or whether the framework has since grown a declarative field that obviates it.
In the 10 providers currently scoped (Google, Notion, Slack, Microsoft, Linear, HubSpot, Airtable, Stripe, Shopify, Figma), exactly one — Stripe Connect — is expected to need an exchangeCode override, because Stripe Connect's OAuth flow is technically a different product from OAuth 2.0 proper.
Testing against a real OAuth app
Most providers will refuse to register http://localhost:3141/v1/connectors/<name>/callback as a redirect URI — they require HTTPS. You have three options for testing locally:
Option 1: ngrok
Easiest. `brew install ngrok`, then:

```bash
ngrok http 3141
```

You will get an `https://<random>.ngrok-free.app` URL. Register that as the redirect URI in the provider's dev console (`https://<random>.ngrok-free.app/v1/connectors/<name>/callback`). Start the gateway with `TOOLROUTER_PUBLIC_API_URL=https://<random>.ngrok-free.app npm run dev:api` so the gateway builds the callback URL from the ngrok domain instead of localhost.
The ngrok domain changes every restart unless you have a paid plan. Register a new redirect URI each time, or use the paid static-domain feature.
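Since redirect-URI mismatches are almost always a trailing-slash or scheme problem, it helps to see how a callback URL might be derived from the public base URL. This is a hypothetical helper (the gateway's real derivation may differ), shown to make the normalization explicit:

```typescript
// Hypothetical sketch: build the callback URL from a public base URL,
// normalizing trailing slashes — the classic cause of "redirect_uri_mismatch".
function callbackUrl(publicApiUrl: string, connectorName: string): string {
  const base = publicApiUrl.replace(/\/+$/, ''); // strip trailing slashes
  return `${base}/v1/connectors/${connectorName}/callback`;
}
```

Whatever this produces is what must be registered character for character in the provider's console.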
Option 2: Cloudflare Tunnel
Same idea, no account needed:
```bash
cloudflared tunnel --url http://localhost:3141
```

Prints a trycloudflare.com URL. Use it the same way as ngrok.
Option 3: Staging
Push your branch, deploy to staging, and test against the real staging URL. The staging redirect URI is already registered in your OAuth app, and staging has its own env vars. This is the honest way to run a smoke test — everything runs the same infrastructure as production.
The downside is the iteration loop is slower (push → Railway deploy → test). Use it for the final smoke test, not for every keystroke.
Checklist
Before merging a new connector PR:
Config:
- [ ] `src/connectors/catalog/<name>.ts` exists with a `createConfig(): ConnectorConfig` export
- [ ] JSDoc header lists every known quirk and justifies every default scope
- [ ] `clientId` / `clientSecret` loaded via `requireEnv()` — never hardcoded
- [ ] `authorizationUrl` and `tokenUrl` are HTTPS
- [ ] `scopes` is non-empty
- [ ] `usePkce: true` (or JSDoc explains why not)
- [ ] `supportsRefresh` and `supportsMultipleAccounts` reflect actual provider behaviour
- [ ] `fetchAccountInfo` uses `oauthFetch()` and returns a stable `externalAccountId`
- [ ] `revokeToken` is defined OR omitted with a JSDoc note ("provider has no revocation endpoint")
- [ ] Override hooks (`exchangeCode`, `refreshTokens`) only used if no declarative field fits
Registry:
- [ ] Entry added to `CONNECTOR_CATALOG` in `src/connectors/catalog/index.ts`
- [ ] `CONNECTOR_CATALOG` key matches `createConfig().name` exactly
Env vars:
- [ ] `OAUTH_<NAME>_CLIENT_ID` / `OAUTH_<NAME>_CLIENT_SECRET` template added to `.env.local.example`
- [ ] Local values set in `.env.local`
- [ ] OAuth app registered in the provider's dev console with the right redirect URI
- [ ] Staging Railway env vars set
- [ ] Production Railway env vars set (only for final merge)
Tests:
- [ ] `tests/connectors/<name>-provider.test.ts` exists
- [ ] `runConnectorContractTests()` called with expected scopes and extra params
- [ ] Provider-specific `fetchAccountInfo` tests for success + missing fields + non-2xx
- [ ] `revokeToken` tests if defined
- [ ] `npm run test -- tests/connectors/<name>-provider.test.ts` passes
Validation:
- [ ] `npm run validate:connectors` passes
- [ ] `npx tsc --noEmit` passes
Smoke test:
- [ ] Connect flow works end-to-end on local dev (via ngrok, Cloudflare Tunnel, or staging)
- [ ] Disconnect + reconnect with the same account dedupes (same `externalAccountId`)
- [ ] Token refresh works on an expired token (only if `supportsRefresh: true`)
- [ ] `/v1/connectors/available` lists the connector
- [ ] Discovery annotates a tool with a `type: 'connector'` requirement as `connector_required` when disconnected
- [ ] A sample tool that uses `context.getConnectorToken('<name>')` runs successfully when connected
Hook (only if `postConnectionScript` is set):
- [ ] `src/connectors/hooks/<name>.ts` exists and has a default async export taking `ConnectorPostAuthContext`
- [ ] Hook uses `oauthFetch()` for any HTTP calls
- [ ] Hook calls `ctx.updateMetadata()` / `ctx.updateAccountLabel()` to persist enrichment back to the connector row
- [ ] Hook failure is acceptable — the connector row is still usable even if the hook throws (framework logs + continues)
If every box is checked, the connector is ready to ship.
Read next
- Adding Providers — integrating a new upstream API you pay to call (not user-authorized)
- Tool Authoring — how to declare `type: 'connector'` requirements and use `context.getConnectorToken()` inside a handler
- CLI — validation and testing commands