# Build Supaflow Connectors with AI: Introducing Connector Dev Skills

Most people who want to build a Supaflow connector already have deep domain expertise. They know the Asana API inside out, or they understand exactly how Stripe's pagination works, or they have spent years working with Dynamics 365. What they do not know -- and should not have to learn from scratch -- is the Supaflow connector SDK: how schema discovery works, how incremental sync state is managed, what lifecycle methods to implement, and what contracts the pipeline engine expects.
That is the gap this project fills. We codified everything we know about the Supaflow connector SDK into an AI skill framework that any coding agent can follow -- so you can focus on your domain expertise while the skill handles the SDK plumbing.
It is open source and available now on GitHub: [supaflow-labs/supaflow-connector-dev-skills](https://github.com/supaflow-labs/supaflow-connector-dev-skills).
## What It Does
The skill teaches an AI agent (Claude Code, OpenAI Codex, or similar) how to build a Supaflow connector using a phased workflow with gate checks. The agent cannot advance to the next phase until automated verification passes.
What the skill knows: The Supaflow connector SDK -- project structure, authentication patterns, schema discovery contracts, read/write lifecycle, incremental sync, type mapping, and all the platform rules that must be followed.
What the skill does not know: Your domain. It has no built-in knowledge of any specific API, database, or service. You bring the domain expertise -- which endpoints to call, how pagination works, what objects and fields exist, how authentication is structured for your system -- and the skill translates that into a working connector that follows Supaflow's contracts.
It supports three connector modes:
- Source connectors -- read data from APIs and databases (phases 1-6)
- Warehouse destinations -- load data into Snowflake, Postgres, etc. (phases 1-4, then phase 7)
- Activation targets -- push data to downstream APIs like Salesforce or HubSpot (phases 1-4, then phase 8)
And two implementation tracks:
- REST API connectors -- for HTTP-based data sources (phases 1-6 for sources; phases 1-4 plus phase 7 or 8 for warehouse destinations and activation targets)
- JDBC connectors -- a streamlined track using Supaflow's `BaseJdbcConnector`, with phases 3-5 replaced by a single JDBC guide
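To give a flavor of why the JDBC track is shorter, here is a rough sketch of the shape of a connector built on a base class. The stub below is illustrative only -- `BaseJdbcConnectorStub` and its methods are placeholders I invented for this sketch, not the real `BaseJdbcConnector` API:

```java
public class JdbcTrackSketch {
    // Stand-in for the SDK's BaseJdbcConnector (the real API may differ).
    // The base class owns connection handling, discovery, and reads; the
    // subclass only supplies database-specific details.
    static abstract class BaseJdbcConnectorStub {
        abstract String jdbcUrl();
        abstract String discoveryQuery(); // how the base class lists objects

        String describe() {
            return jdbcUrl() + " :: " + discoveryQuery();
        }
    }

    // A concrete connector shrinks to a handful of overrides.
    static class PostgresConnector extends BaseJdbcConnectorStub {
        @Override String jdbcUrl() {
            return "jdbc:postgresql://localhost:5432/app";
        }
        @Override String discoveryQuery() {
            return "SELECT table_name FROM information_schema.tables "
                 + "WHERE table_schema = 'public'";
        }
    }

    public static void main(String[] args) {
        System.out.println(new PostgresConnector().describe());
    }
}
```

Because the shared base class already implements the read lifecycle, phases 3-5 collapse into filling in these database-specific hooks.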
## The 8 Phases
Each phase has a defined scope, prerequisites, and gate checks that must pass before the agent moves on.
| Phase | What Gets Built | Key Gates |
|---|---|---|
| 1. Project Setup | Maven module, shell class, shade plugin | Build config, no committed artifacts |
| 2. Connector Identity | Name, properties, capabilities | Annotations, capability declarations |
| 3. Connection & Auth | Connection management, OAuth/API key/basic auth | Token handling, connection tests |
| 4. Schema Discovery | ObjectMetadata, FieldMetadata, cursor fields | originalDataType set, cursor fields locked |
| 5. Read Operations | Data reading, SyncState, incremental sync | CutoffTime pattern, cancellation checks |
| 6. Integration Testing | Live API tests with real credentials | Tests exist and compile |
| 7. Warehouse Destination | stage(), load(), DDL generation, MERGE | Staging, load modes, schema DDL |
| 8. Activation Target | API push, field mapping, error handling | Activation mapping, merge keys |
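To illustrate what a phase-4 gate checks, here is a sketch of schema discovery with the `originalDataType` rule enforced. The record types below are minimal stand-ins I defined for this example, not the SDK's actual `ObjectMetadata` and `FieldMetadata` classes:

```java
import java.util.List;

public class SchemaDiscoverySketch {
    // Hypothetical minimal shapes of the SDK's metadata descriptors.
    record FieldMetadata(String name, String mappedType,
                         String originalDataType, boolean isCursor) {}
    record ObjectMetadata(String objectName, List<FieldMetadata> fields) {}

    // Phase-4 style discovery: every field records the source system's
    // native type string alongside the mapped type.
    static ObjectMetadata discoverCustomers() {
        return new ObjectMetadata("customers", List.of(
            new FieldMetadata("id", "STRING", "varchar(36)", false),
            // cursor field used later for incremental sync
            new FieldMetadata("created", "TIMESTAMP", "timestamptz", true)
        ));
    }

    // Gate-style check: fail if any field is missing originalDataType.
    static boolean passesTypeGate(ObjectMetadata meta) {
        return meta.fields().stream()
            .allMatch(f -> f.originalDataType() != null
                        && !f.originalDataType().isBlank());
    }

    public static void main(String[] args) {
        System.out.println("type gate passed: "
            + passesTypeGate(discoverCustomers()));
    }
}
```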
## Automated Verification
The included `verify_connector.sh` script runs 27 automated checks against your connector code. It auto-detects whether you are building a source, destination, or hybrid connector and adjusts checks accordingly.
Some of the things it catches:
- Calling `processor.close()` manually (the executor manages this lifecycle)
- Missing `originalDataType` on `FieldMetadata` (breaks type mapping)
- Ignoring `SyncState` in read operations (breaks incremental sync)
- Missing cancellation checks in long-running loops
- Declared capabilities that do not match implemented methods
- Missing integration tests
The agent runs this script at each phase gate. Zero errors means the gate passes.
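Two of those checks -- honoring `SyncState` and checking for cancellation -- both live in the read loop. Here is a hedged sketch of a compliant loop, using placeholder types and a simple cursor rather than the real SDK's classes:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class ReadLoopSketch {
    // Placeholder for the SDK's cancellation signal (the real mechanism may differ).
    static final AtomicBoolean cancelled = new AtomicBoolean(false);

    // Reads records newer than the last synced cursor value, checking
    // for cancellation on every iteration of the long-running loop.
    static int readIncremental(List<Long> createdTimestamps, long lastSyncedCursor) {
        int emitted = 0;
        for (long created : createdTimestamps) {
            if (cancelled.get()) {
                break; // cancellation check: stop promptly when asked
            }
            if (created <= lastSyncedCursor) {
                continue; // honor sync state: skip rows already synced
            }
            emitted++; // stand-in for emitting the record downstream
        }
        return emitted;
    }

    public static void main(String[] args) {
        // Four records, cursor at 2: only records 3 and 4 are emitted.
        System.out.println(readIncremental(List.of(1L, 2L, 3L, 4L), 2L)); // prints 2
    }
}
```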
## Anti-Pattern Catalog
The framework includes documentation of 20+ connector anti-patterns we have encountered, each with wrong code, correct code, and an explanation of why it matters. A few examples:
**Never call `processor.close()`** -- the pipeline executor manages the processor lifecycle. Calling it manually causes double-close errors.

**Always set `originalDataType`** -- every `FieldMetadata` must record the source system's native type string. Without it, type mapping and schema evolution break silently.

**Always use the CutoffTime pattern** -- for incremental sync, use `CutoffTimeSyncUtils` to compute the time boundary. Naive timestamp tracking leads to data loss during overlapping sync windows.
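The idea behind the cutoff pattern can be sketched in a few lines. This is an illustrative reimplementation of the concept, not the real `CutoffTimeSyncUtils` API: each sync stops at a boundary slightly before "now", so records committed while the sync is running are picked up by the next window instead of being silently skipped:

```java
import java.time.Duration;
import java.time.Instant;

public class CutoffTimeSketch {
    // Never sync all the way up to the current instant: records committed
    // during the sync window could be missed. Stop at now - gracePeriod
    // and only advance the stored cursor to that cutoff.
    static Instant computeCutoff(Instant now, Duration gracePeriod) {
        return now.minus(gracePeriod);
    }

    // A record is in scope for this sync if lastCursor < created <= cutoff.
    static boolean inWindow(Instant created, Instant lastCursor, Instant cutoff) {
        return created.isAfter(lastCursor) && !created.isAfter(cutoff);
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2024-01-01T12:00:00Z");
        Instant cutoff = computeCutoff(now, Duration.ofMinutes(5)); // 11:55
        Instant lastCursor = Instant.parse("2024-01-01T11:00:00Z");

        // Created two minutes ago: inside the grace period, deferred to next sync.
        System.out.println(inWindow(Instant.parse("2024-01-01T11:58:00Z"),
                                    lastCursor, cutoff)); // false
        // Created mid-window: synced now.
        System.out.println(inWindow(Instant.parse("2024-01-01T11:30:00Z"),
                                    lastCursor, cutoff)); // true
    }
}
```

The grace period here is an assumed parameter; the point is that the cursor advances to the cutoff, never to "now".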
## How to Use It
### With Claude Code
```bash
git clone https://github.com/supaflow-labs/supaflow-connector-dev-skills.git

# Link the skill into Claude Code
mkdir -p ~/.claude/skills
ln -s "$(pwd)/supaflow-connector-dev-skills/skills/build-supaflow-connector" \
  ~/.claude/skills/build-supaflow-connector
```
Then invoke with:
```text
/build-supaflow-connector
platform_root: /path/to/supaflow-platform
connector_name: stripe
connector_mode: source
auth_type: API key
api_surface: objects=customers,invoices; pagination=cursor; cursor_fields=created
test_credentials: STRIPE_API_KEY
```
Notice the `api_surface` parameter -- that is where your domain knowledge goes. You tell the agent what objects exist, how pagination works, and which fields serve as cursors. The skill handles the rest: project scaffolding, SDK contracts, lifecycle methods, and verification.
### With OpenAI Codex
```bash
mkdir -p ~/.codex/skills
rsync -a supaflow-connector-dev-skills/skills/build-supaflow-connector/ \
  ~/.codex/skills/build-supaflow-connector/
```
Invoke with `$build-supaflow-connector` or describe what you want in natural language.
### Verification Only
Already have a connector and want to check it against the framework's rules?
```bash
bash scripts/verify_connector.sh <connector-name> /path/to/supaflow-platform
```
## What Is Included
The repository contains over 10,000 lines of structured documentation:
- `SKILL.md` -- the main skill entry point with phase orchestration logic
- 8 phase guides -- detailed implementation instructions with code examples
- JDBC connector guide -- streamlined track for database connectors
- Anti-patterns catalog -- 20+ documented mistakes with correct alternatives
- Verification script -- 27 automated quality checks
- Portability guide -- instructions for adapting to different platform paths and agent environments
## Why Open Source
We want connector development to be accessible to anyone with domain expertise, regardless of whether they know the Supaflow SDK. If you understand your system's API -- the endpoints, the auth flow, the data model -- you should be able to build a production-grade connector without reading thousands of lines of platform internals first. The skill framework bridges that gap.
Whether you are building a connector for an internal system or contributing to the Supaflow ecosystem, the skill gives you the same guardrails and quality checks we use ourselves. Fork it, extend it, or use it as a reference for building similar skill frameworks for your own platform.
The repository is MIT licensed: github.com/supaflow-labs/supaflow-connector-dev-skills
If you have questions or want to contribute, open an issue on the repo or reach out at support@supa-flow.io.
