Getting Started with Supaflow
Welcome to Supaflow! This guide will walk you through building your first data pipeline in minutes.
What You'll Build
By the end of this guide, you'll have:
- ✅ Connected a data source (where data comes from)
- ✅ Connected a destination (where data goes)
- ✅ Created a pipeline to move data
- ✅ Run your first sync
- ✅ Scheduled automatic syncs
Time to complete: 10-15 minutes
When you sign in, Supaflow creates a default workspace for you. All your sources, destinations, and pipelines live in this workspace. You can create additional workspaces later for different environments (dev, staging, production).
Step 1: Connect a Destination
Set up where your data will be loaded (your data warehouse).
To start:
- Navigate to Destinations in the sidebar
- Click + New Destination
- Select your destination type (e.g., Snowflake, PostgreSQL, BigQuery)
- Fill in connection details:
  - Name - Display name for this connection
  - API Name - Auto-generated identifier
  - Authentication credentials (varies by connector)
- Click Test & Save
What happens:
- Connection is tested in the background
- Status shows "Provisioning" then changes to "Active"
- Schema discovery runs automatically
- You're ready to use this destination in pipelines
📺 Watch how to connect Snowflake as a destination in under 1 minute
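Connection details vary by connector. As an illustration only (these field names are typical for a Snowflake connection, not Supaflow's exact form), a Snowflake destination usually needs credentials along these lines:

```
account: acme-xy12345      # Snowflake account identifier
user: SUPAFLOW_SVC         # service user that will load data
password: ********
warehouse: LOAD_WH         # compute warehouse used for loads
database: ANALYTICS
schema: RAW
role: SUPAFLOW_ROLE
```

Using a dedicated service user and role for the loader keeps its permissions scoped to the schemas it actually writes to.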
Step 2: Connect a Source
Set up where your data comes from.
To start:
- Navigate to Sources in the sidebar
- Click + New Source
- Select your source type (e.g., Salesforce, HubSpot, PostgreSQL)
- Choose an authentication method:
  - OAuth - Click the Authorize button (Salesforce, HubSpot)
  - Basic Auth - Enter credentials directly
- Fill in connection details
- Click Test & Save
What happens:
- Connection is tested
- For OAuth sources, you'll authorize access and return to Supaflow
- Status changes to "Active"
- Schema discovery runs automatically
- You're ready to create a pipeline
📺 Watch how to connect Salesforce as a source in under 1 minute
Step 3: Create Your First Pipeline
Create a pipeline to move data from source to destination.
To start:
Option 1: Via Pipelines (Quick)
- Navigate to Pipelines in the sidebar
- Click + Create Pipeline
- Select a project (or create a new one)
- Pipeline wizard opens
Option 2: Via Projects (Organized)
- Navigate to Projects in the sidebar
- Select or create a project
- Click + Create Pipeline
- Pipeline wizard opens with project context
Projects organize related pipelines within your workspace. Each project connects to a warehouse and can contain multiple pipelines. Learn more in Projects.
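Putting the pieces together, the hierarchy looks roughly like this (the workspace, project, and pipeline names are made up for illustration):

```
Workspace (e.g. production)
└── Project (connects to one warehouse)
    ├── Pipeline: salesforce_to_snowflake
    └── Pipeline: hubspot_to_snowflake
```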
Pipeline Wizard Steps
Step 1: Choose Source
- Select from your active sources
- Shows connector icon and name
Step 2: Configure Pipeline
- Choose destination (pre-selected if creating from project)
- Configure sync settings (Ingestion Mode, Error Handling, etc.)
- System generates a unique destination prefix
Step 3: Choose Objects to Sync
- Browse catalogs → schemas → tables
- Select which tables to sync
- Choose specific fields (or select all)
- Selections save automatically
Step 4: Review & Save
- Review pipeline name (auto-generated, editable)
- Review selected objects and settings
- Click Create Pipeline
What happens:
- Pipeline is created and appears in your pipelines list
- Ready to run your first sync
📺 Watch how to build a complete Salesforce to Snowflake pipeline in minutes
Step 4: Run Your First Sync
Execute your pipeline to move data.
To start:
- Navigate to Pipelines in the sidebar
- Find your pipeline
- Click ••• → Sync Now
What happens:
- Success message appears with "View activity" link
- Pipeline sync begins immediately
- Activity is created to track execution
Monitor Your Sync
To view progress:
- Click the View activity link (or navigate to Activities in sidebar)
- Your pipeline run appears at the top of the activities list
- Click the activity row to view details
Activity detail page shows:
- Summary cards - Objects synced, rows ingested/loaded
- Object-level details - Status and row counts for each table
- Ingestion mode - Historical (full sync) or Incremental (changes only)
- Stage breakdown - Ingest and Load metrics for each object
Status updates:
- Queued → Running → Completed (or Failed)
- Auto-refresh keeps status current
- Duration timer shows elapsed time
When complete:
- Status shows "Completed"
- Data is available in your destination
- Check your destination database to verify synced tables
Notes:
- The first run is always Historical (full data sync)
- Subsequent runs are Incremental (only changes)
- The first run takes longer due to schema creation
- Click any activity to see detailed progress
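The Historical/Incremental distinction works like a watermark: the first sync copies everything, and later syncs only pull rows changed since the last run. Here is a minimal conceptual sketch of that pattern — not Supaflow's actual implementation; the `updated_at` field and watermark logic are assumptions:

```python
from datetime import datetime

def sync(rows, last_watermark=None):
    """Return the rows to load and the new watermark.

    Historical sync: last_watermark is None, so every row is loaded.
    Incremental sync: only rows updated after last_watermark are loaded.
    """
    if last_watermark is None:
        selected = list(rows)  # full historical sync
    else:
        selected = [r for r in rows if r["updated_at"] > last_watermark]
    # Advance the watermark to the newest change we have seen
    new_watermark = max((r["updated_at"] for r in rows), default=last_watermark)
    return selected, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1)},
    {"id": 2, "updated_at": datetime(2024, 1, 5)},
]

# First run: Historical, loads everything
loaded, wm = sync(rows)
assert [r["id"] for r in loaded] == [1, 2]

# Row 2 changes and row 3 arrives; the next run is Incremental
rows[1]["updated_at"] = datetime(2024, 1, 10)
rows.append({"id": 3, "updated_at": datetime(2024, 1, 8)})
loaded, wm = sync(rows, last_watermark=wm)
assert sorted(r["id"] for r in loaded) == [2, 3]
```

This is also why the first run is slower: it moves the entire history, while every later run touches only the changed rows.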
Step 5: Schedule Automatic Syncs
Keep your data fresh with scheduled syncs.
To start:
- Navigate to Schedules in the sidebar
- Click + Create Schedule
- Fill in the schedule form:
Select what to run:
- Click dropdown and choose your pipeline
Choose frequency:
- Every few minutes (PRO) - 5, 10, 15, or 30 minutes
- Hourly (PRO) - Every 1-12 hours
- Daily - Specific time each day
- Weekly - Specific day(s) and time
- Monthly - Specific day of month
- Custom cron expression (PRO) - Any pattern
Configure timing:
- Select time of day
- Choose days of week (for minute/hourly/daily schedules)
- Schedule summary shows next 5 run times
Name & Save:
- Schedule name is auto-generated (editable)
- Click Create schedule
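If you pick the custom cron option, schedulers of this kind typically accept standard five-field cron syntax (minute, hour, day of month, month, day of week) — whether Supaflow supports every extension shown here is an assumption. A few illustrative patterns:

```
*/15 * * * *    # every 15 minutes
0 * * * *       # at the top of every hour
0 6 * * *       # daily at 06:00
0 6 * * 1-5     # weekdays at 06:00
0 2 1 * *       # at 02:00 on the 1st of each month
```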
What happens:
- Schedule appears in schedules list
- Shows "Next Run" time
- Creates activities automatically at scheduled times
- View execution history in Activities page
Schedules require a trial or paid plan. PRO users get access to advanced frequencies (every few minutes, hourly, custom cron expressions).
📺 Watch how to orchestrate end-to-end data pipeline workflows with scheduling
Step 6: Invite Your Team (Optional)
Collaborate with team members on your workspace.
To start:
- Navigate to Settings in the sidebar
- Click Organization in the settings menu
- Scroll to the Members section
- Click + Invite Member
- Enter an email address and select a role:
  - Owner - Full workspace access
  - Editor - Create/edit pipelines and resources
  - Viewer - Read-only access
- Click Send Invite
What happens:
- Invitation sent via email
- Member appears as "Pending" until they accept
- When accepted, they gain access based on their role
What's Next?
Now that you've built your first pipeline, explore more Supaflow features:
Multi-Step Workflows
- Orchestrations - Chain pipelines and tasks together with dependencies
- Tasks - Run SQL transformations between pipeline steps
Reverse ETL
- Activation Pipelines - Push data from warehouse back to SaaS apps like Salesforce
Environments & Deployment
- Snowflake Native App - Deploy Supaflow directly inside your Snowflake account
- Workspaces - Create dev, staging, and production environments
- Deployments - Promote pipelines across workspaces (dev → stage → prod)
Monitoring & Operations
- Activities - Monitor pipeline runs with detailed execution metrics
- Usage - Track credits and resource consumption
Getting Help
Need assistance? We're here to help:
- Documentation - Browse our guides for detailed instructions
- Support - Email support@supa-flow.io
Happy data pipelining! 🚀