
PostgreSQL Connector

Extract data from PostgreSQL databases (version 9.6+) or use PostgreSQL as a destination. Supports SSL, custom JDBC connection properties, and automatic discovery of every schema, table, and column.

Source · Destination · Bronze

Why Supaflow

All connectors included

No per-connector fees. Every connector is available on every plan.

Pay for compute, not rows

Credit-based pricing. No per-row charges, no monthly-active-rows (MAR) surprises.

One platform

Ingestion, dbt Core transformation, reverse ETL, and orchestration in a single workspace.

Capabilities

Username/Password Authentication with SSL

Connect with database credentials using any of the standard SSL modes: disable, allow, prefer, require, verify-ca, or verify-full. Additional JDBC connection parameters are also supported.
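As an illustration, a JDBC-style connection URL combining an SSL mode with an extra connection parameter might look like the following (host, database name, and certificate path are placeholders):

```
jdbc:postgresql://db.example.com:5432/appdb?sslmode=verify-full&sslrootcert=/etc/ssl/certs/ca.pem
```

With verify-full, the client checks both the server certificate chain and that the hostname matches the certificate.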

Automatic Schema Discovery

Discovers all schemas, tables, and columns in the connected database. The schema refresh interval is configurable from 0 minutes (refresh on every run) to 10080 minutes (weekly).
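Conceptually, discovery amounts to reading the standard information_schema catalog. A hand-rolled equivalent of what the connector surfaces would be a query along these lines:

```sql
-- List every table and column visible to the connecting user,
-- excluding PostgreSQL's own system schemas.
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name, ordinal_position;
```

Only objects the connecting user can read appear in the result, which is why discovery respects the permission grants described below.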

Incremental Sync

Tracks cursor positions using timestamp columns so subsequent runs fetch only changed rows. Full refresh mode is also available for tables without reliable timestamps.
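As a sketch (the table and timestamp column names are hypothetical), an incremental run filters on the cursor value saved from the previous run, while a full refresh simply omits the predicate:

```sql
-- Incremental pull: fetch only rows whose cursor column advanced
-- past the value stored after the last successful sync.
SELECT *
FROM sales.orders
WHERE updated_at > '2024-06-01 00:00:00+00'  -- last saved cursor value
ORDER BY updated_at;
```

For this to be reliable, the cursor column should be set on every insert and update, e.g. via an application convention or a trigger.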

Granular Permissions

Works with a read-only database user. Grant SELECT on individual tables, entire schemas, or the full database depending on your security requirements.
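For example, assuming a read-only role named supaflow_reader (the name is illustrative), grants can be scoped to a single table or to a whole schema:

```sql
-- Single table:
GRANT SELECT ON analytics.daily_revenue TO supaflow_reader;

-- Whole schema, including tables created later:
GRANT USAGE ON SCHEMA analytics TO supaflow_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO supaflow_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics
  GRANT SELECT ON TABLES TO supaflow_reader;
```

The ALTER DEFAULT PRIVILEGES statement ensures tables added to the schema in the future are readable without re-running grants.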

Supported Objects

Database Objects

Tables

All tables the connecting user has SELECT permission on.

Views

Standard views accessible to the connecting user.

Schemas

All schemas the user has USAGE permission on (e.g., public, analytics, sales).

How It Works

1. Prepare your PostgreSQL database

Create a dedicated read-only user with CONNECT and SELECT privileges on target schemas. If your database has IP restrictions, add 18.214.240.61 to pg_hba.conf and ensure listen_addresses allows external connections.
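A minimal setup script, assuming a database named appdb and a role named supaflow_reader (both placeholders), could look like:

```sql
-- Dedicated read-only role for the connector:
CREATE ROLE supaflow_reader LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE appdb TO supaflow_reader;
GRANT USAGE ON SCHEMA public TO supaflow_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO supaflow_reader;

-- If access is IP-restricted, a pg_hba.conf entry such as:
--   hostssl  appdb  supaflow_reader  18.214.240.61/32  scram-sha-256
-- (use md5 on pre-10 servers), plus listen_addresses set to '*' or a
-- specific interface in postgresql.conf, allows the connection.
```

Changes to pg_hba.conf take effect after a configuration reload; changing listen_addresses requires a server restart.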

2. Enter connection details

Provide the database host, port (default 5432), database name, username, and password. Configure SSL mode based on your security requirements.

3. Test and save

Click Test & Save to verify the connection. Supaflow runs a connectivity check and discovers available schemas, tables, and columns.

Use Cases

Replicate operational data to a warehouse

Sync PostgreSQL tables into Snowflake or another warehouse for analytics without running reporting queries against your production database.

Cross-database consolidation

Combine data from multiple PostgreSQL databases into a single warehouse for unified reporting across services.

Reverse ETL back to PostgreSQL

Write enriched or aggregated data from your warehouse back into PostgreSQL for use by operational applications.

Frequently Asked Questions

Which PostgreSQL versions are supported?
PostgreSQL 9.6 and higher. This includes managed services like Amazon RDS, Google Cloud SQL, Azure Database for PostgreSQL, and Supabase.
Do I need to whitelist an IP address?
If your database restricts inbound connections, add 18.214.240.61 to your firewall rules or pg_hba.conf. For cloud-hosted databases, add the IP in the networking settings of your provider.
Can I limit which tables are accessible?
Yes. Grant SELECT only on specific tables or schemas. The connecting user will only see objects they have permission to read.
Does Supaflow support PostgreSQL CDC (Change Data Capture)?
Supaflow supports incremental sync using timestamp-based cursor fields. For full CDC with logical replication, contact us about our roadmap.
Can I replicate PostgreSQL to Snowflake natively?
Yes. Supaflow runs as a Snowflake Native App, so your PostgreSQL data is loaded directly into Snowflake without passing through a third-party cloud. See our Snowflake Native ETL guide.
What about schema changes in my PostgreSQL database?
Supaflow detects schema changes automatically. New columns are added to the destination, type widening is applied, and new tables appear in schema discovery for you to enable.

Need a connector we don't support yet?

Build one with AI-powered Connector Dev Skills.

Learn More About the Connector SDK