Oracle Transportation Management Source
Connect Oracle Transportation Management (OTM) as a source to sync shipments, orders, releases, and logistics data.
Prerequisites
Before you begin, ensure you have:
- ✅ An active Oracle OTM instance (Cloud or On-Premises)
- ✅ Integration user credentials with API access
- ✅ For Basic Auth: OTM username and password
- ✅ For OAuth: IDCS application with Client Credentials grant
- ✅ API permissions for Export APIs and Data Integration APIs
- ✅ Network connectivity to your OTM server
Configuration
Step 1: Server Connection
Server URL*: OTM server base URL
Example: https://your-instance.otmgtm.region.ocs.oraclecloud.com
Format: https://[subdomain].otmgtm.[region].ocs.oraclecloud.com
Your OTM server URL is the base URL you use to access the OTM application. It typically follows the pattern:
- Cloud: https://[company]-[env].otmgtm.[region].ocs.oraclecloud.com
- Example: https://acme-prod.otmgtm.eu-frankfurt-1.ocs.oraclecloud.com
Do not include paths like /GC3/ or /logisticsRestApi/ - just the base URL.
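If you want to check the value before pasting it, a small helper like the one below (a sketch, not part of Supaflow) can strip any copied-along path down to the base URL:

```python
from urllib.parse import urlparse

def normalize_otm_url(url: str) -> str:
    """Reduce a pasted OTM URL to the bare base URL (scheme + host)."""
    parsed = urlparse(url.strip())
    if parsed.scheme != "https" or not parsed.netloc:
        raise ValueError(f"Expected an https base URL, got: {url!r}")
    # Drop any path like /GC3/ or /logisticsRestApi/ that was copied along.
    return f"https://{parsed.netloc}"

print(normalize_otm_url("https://acme-prod.otmgtm.eu-frankfurt-1.ocs.oraclecloud.com/GC3/"))
# https://acme-prod.otmgtm.eu-frankfurt-1.ocs.oraclecloud.com
```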
Step 2: Authentication
Authentication Type*: Choose the authentication method. Basic Auth uses username/password; OAuth uses IDCS token-based authentication.
Options:
- basic (Default) - Direct username/password authentication
- oauth - OAuth 2.0 Client Credentials via IDCS
Default: basic
Option A: Basic Authentication (Default)
Username*: OTM integration user username
Example: DOMAIN.USERNAME
Format: Typically includes domain prefix
Password*: OTM integration user password
Stored encrypted
1. Create Integration User in OTM:
- Log in to OTM as administrator
- Go to Administration → User Management
- Create a new user with integration permissions
- Username format is typically DOMAIN.USERNAME (e.g., DEFAULT.SUPAFLOW_API)
2. Grant Required Permissions:
- Data Export: Access to Export Request APIs
- Data Integration: Read access to logistics objects (Shipment, Order, Release, etc.)
- Object Permissions: Read access to tables you want to sync
3. Test Credentials:
- Try logging into the OTM web interface with these credentials
- Verify the user can access the objects you want to sync
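You can also verify credentials from outside the browser. The sketch below builds the HTTP Basic Authorization header; the commented request against the Export API endpoint (the path appears later in this guide) and the example user are illustrative only:

```python
import base64
import urllib.request  # only needed for the live check in the comments below

def basic_auth_header(username: str, password: str) -> dict:
    """Build the Authorization header for HTTP Basic authentication."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Illustrative check (replace server URL and credentials with your own):
# req = urllib.request.Request(
#     "https://acme-prod.otmgtm.eu-frankfurt-1.ocs.oraclecloud.com"
#     "/logisticsRestApi/data-int/v1/exportRequests/",
#     headers=basic_auth_header("DEFAULT.SUPAFLOW_API", "your-password"),
# )
# urllib.request.urlopen(req)  # HTTP 200 means auth and API access work
```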
Option B: OAuth 2.0 Authentication
Client ID*: OAuth2 Client ID from IDCS application
Copy from IDCS Confidential Application
Client Secret*: OAuth2 Client Secret from IDCS application
Stored encrypted
Token URL*: IDCS token endpoint URL
Example: https://idcs-xxx.identity.oraclecloud.com/oauth2/v1/token
Format: https://[idcs-tenant].identity.oraclecloud.com/oauth2/v1/token
Scope: OAuth scope for OTM API access
Leave empty for default scope
Example: urn:opc:resource:consumer::all
Oracle OTM uses Oracle Identity Cloud Service (IDCS) for OAuth authentication.
1. Create IDCS Application:
- Log in to Oracle Identity Cloud Service console
- Navigate to Applications → Add
- Select Confidential Application
- Enter application name (e.g., "Supaflow OTM Integration")
- Click Next
2. Configure Client Credentials:
- Select Configure this application as a client now
- Under Allowed Grant Types, select Client Credentials
- Under Grant the client access to Identity Cloud Service Admin APIs, select:
- Authenticator Client (if available)
- Or add the OTM resource scope manually
- Click Next through remaining screens
- Click Finish
3. Copy Credentials:
- After creating the application, copy the Client ID and Client Secret
- Find your IDCS Tenant name from the IDCS URL (e.g., idcs-abc123)
- Construct Token URL: https://[idcs-tenant].identity.oraclecloud.com/oauth2/v1/token
4. Add OTM Resource:
- If OTM is registered as a resource server in IDCS, add the appropriate scope
- Typical format: urn:opc:resource:consumer::all or an OTM-specific scope
- Consult your Oracle administrator for the correct scope
OAuth tokens are automatically managed by Supaflow:
- Access Token: Automatically refreshed when expired
- Token Expiry: Tracked internally (typically 1 hour)
- No manual intervention needed after initial setup
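For reference, a standard OAuth 2.0 client-credentials request to the IDCS token endpoint can be assembled as in the sketch below; the tenant name, client credentials, and scope are placeholders, and the helper is illustrative rather than part of Supaflow:

```python
import base64
from urllib.parse import urlencode

def build_token_request(tenant: str, client_id: str, client_secret: str, scope: str = ""):
    """Assemble the URL, headers, and form body of an IDCS
    client-credentials token request."""
    url = f"https://{tenant}.identity.oraclecloud.com/oauth2/v1/token"
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"grant_type": "client_credentials"}
    if scope:  # leave empty to use the default scope
        body["scope"] = scope
    return url, headers, urlencode(body)

url, headers, body = build_token_request(
    "idcs-abc123", "my-client-id", "my-secret", "urn:opc:resource:consumer::all"
)
# POST `body` to `url` with `headers`; the JSON response contains
# access_token and expires_in (typically 3600 seconds).
```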
Step 3: Export Settings
Export Mode*: Sync mode returns data directly (recommended for most use cases). Async mode exports to Object Storage for very large datasets (>100K rows).
Options:
- sync (Default) - Direct synchronous data export via REST API
- async - Asynchronous export to Oracle Object Storage
Default: sync
Sync Mode (Recommended):
- Data returned directly in API response
- Suitable for datasets up to ~100,000 rows
- Faster for small to medium datasets
- No additional Oracle Cloud resources needed
Async Mode:
- Data exported to Oracle Object Storage
- Required for very large datasets (>100K rows)
- Supaflow downloads from Object Storage
- Requires Object Storage bucket and Pre-Authenticated Request (PAR) URL
Option: Async Mode Configuration
Target System URL: Object Storage bucket Pre-Authenticated Request (PAR) URL for async exports. Required if Export Mode = async.
Example: https://objectstorage.region.oraclecloud.com/.../bucket.../o/
Format: Must end with /o/ for object uploads
If using async export mode, you need to provide an Oracle Object Storage PAR URL:
1. Create Object Storage Bucket:
- Log in to Oracle Cloud Infrastructure (OCI) console
- Navigate to Storage → Object Storage → Buckets
- Click Create Bucket
- Enter bucket name (e.g., supaflow-otm-exports)
- Select Standard storage tier
- Click Create
2. Create Pre-Authenticated Request (PAR):
- Open the bucket you created
- Click Pre-Authenticated Requests in the left menu
- Click Create Pre-Authenticated Request
- Configure:
- Name: "Supaflow Write Access"
- Access Type: Permit object writes or Read and write
- Expiration: Set to future date (e.g., 1 year from now)
- Prefix: Leave empty or use a prefix like supaflow/
- Click Create
- IMPORTANT: Copy the PAR URL immediately - it's only shown once
- PAR URL format:
https://objectstorage.region.oraclecloud.com/p/{unique-token}/n/{namespace}/b/{bucket-name}/o/
3. Enter PAR URL in Supaflow:
- Paste the complete PAR URL into the Target System URL field
- Ensure the URL ends with /o/
- Save and test connection
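To catch a malformed PAR URL before saving, a quick shape check like the following can help (a sketch; the example region, token, namespace, and bucket names are placeholders):

```python
import re

# Shape of an Object Storage PAR URL as described above; region, token,
# namespace, and bucket are placeholders.
PAR_URL_RE = re.compile(
    r"^https://objectstorage\.[\w.-]+\.oraclecloud\.com"
    r"/p/[^/]+/n/[^/]+/b/[^/]+/o/$"
)

def is_valid_par_url(url: str) -> bool:
    """Check the PAR URL shape, including the required trailing /o/."""
    return bool(PAR_URL_RE.match(url.strip()))

print(is_valid_par_url(
    "https://objectstorage.eu-frankfurt-1.oraclecloud.com"
    "/p/SOME-TOKEN/n/mynamespace/b/supaflow-otm-exports/o/"
))  # True
print(is_valid_par_url(
    "https://objectstorage.eu-frankfurt-1.oraclecloud.com/p/T/n/ns/b/bkt"
))  # False: missing the trailing /o/
```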
Step 4: Advanced Settings (Optional)
Schema Refresh Interval: Interval in minutes for schema metadata refresh
Default: 60
Range: -1 to 10080
Options:
- 0 = Refresh schema before every pipeline execution (recommended for frequent schema changes)
- 60 = Refresh every hour (recommended for stable schemas)
- -1 = Disable automatic schema refresh (use for very stable schemas)
Incremental Lookback: How many seconds to look back when resuming incremental syncs to capture late-arriving updates
Default: 0
Range: 0 to 86400 (24 hours)
Recommended: 300-600 seconds in production
- Set to 300-600 (5-10 minutes) if you have late-arriving logistics updates
- Use 0 if data arrives in real-time with accurate timestamps
- Higher values ensure completeness but may cause duplicate processing
- OTM systems often have delayed updates due to integration workflows
Step 5: Test & Save
After configuring your authentication and settings, click Test & Save to verify your connection and save the source.
Available Objects
Supaflow syncs the following OTM objects via Export APIs:
Core Tables
| Object | Primary Key | Description |
|---|---|---|
| SHIPMENT | SHIPMENT_GID | Shipment records containing logistics and transportation data |
| LOCATION | LOCATION_GID | Location records containing facility and address information |
| ITEM | ITEM_GID | Item records containing product and material master data |
| CONTACT | CONTACT_GID | Contact records containing person and organization contact information |
Order Tables
| Object | Primary Key | Description |
|---|---|---|
| ORDER_RELEASE | ORDER_RELEASE_GID | Order release records representing customer orders for fulfillment |
| ORDER_RELEASE_STATUS | ORDER_RELEASE_GID | Order release status history and tracking information |
Shipment Child Tables
| Object | Primary Key | Description |
|---|---|---|
| SHIPMENT_COST | SHIPMENT_COST_GID | Shipment cost records for freight charges and accessorials |
| SHIPMENT_COST_QUAL | SHIPMENT_COST_GID | Shipment cost qualifiers providing additional cost attributes |
| SHIPMENT_COST_REF | SHIPMENT_COST_GID | Shipment cost reference numbers and external identifiers |
| SHIPMENT_COST_REMARK | SHIPMENT_COST_GID | Shipment cost remarks and notes |
| SHIPMENT_STATUS | SHIPMENT_GID | Shipment status history and milestone tracking |
| SHIPMENT_STOP | SHIPMENT_GID | Shipment stop locations for pickup and delivery points |
| SHIP_UNIT | SHIP_UNIT_GID | Ship unit records representing handling units and containers |
Reference Tables
| Object | Primary Key | Description |
|---|---|---|
| ADJUSTMENT_REASON | ADJUSTMENT_REASON_GID | Adjustment reason codes for inventory and cost adjustments |
| ALLOCATION | ALLOCATION_GID | Allocation records for inventory and capacity reservations |
| AUDIT_TRAIL | AUDIT_TRAIL_GID | Audit trail records tracking data changes and user actions |
All objects use UPDATE_DATE as the cursor field for incremental sync. Objects are discovered dynamically via the Export API - if a table has no data, it will not appear in schema discovery but will become available once data exists.
Incremental Sync
The OTM connector supports time-based incremental sync using cursor fields:
- Cursor field: UPDATE_DATE or LAST_UPDATE_DATE (varies by object)
- Sync mode: Time-range based with lookback support
- Strategy: Cutoff time approach to handle late-arriving data
How it works:
- Each sync captures a cutoff time (current time when sync starts)
- Fetches all records updated between last cursor and cutoff time
- Next sync starts from the previous cutoff time
- Lookback time ensures late updates are captured
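The cutoff-and-lookback logic above can be sketched as follows (illustrative only; the function name and dates are invented for the example):

```python
from datetime import datetime, timedelta, timezone

def incremental_window(last_cutoff: datetime, lookback_seconds: int, now=None):
    """Compute the [start, end] time range for the next incremental sync.

    The lower bound is the previous cutoff minus the lookback, so updates
    that arrived late (after the previous sync ran) are still captured.
    """
    cutoff = now or datetime.now(timezone.utc)  # new cutoff for this run
    start = last_cutoff - timedelta(seconds=lookback_seconds)
    return start, cutoff

# Example: last sync cut off at 12:00 UTC, lookback of 300 s (5 minutes):
prev = datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc)
now = datetime(2024, 1, 15, 13, 0, tzinfo=timezone.utc)
start, end = incremental_window(prev, 300, now=now)
# start == 11:55 UTC, end == 13:00 UTC; rows updated in that range are
# fetched, and 13:00 becomes the cutoff for the following run.
```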
Troubleshooting
Common issues and their solutions:
Basic Auth connection failed
Problem:
- "Authentication failed" error
- "401 Unauthorized" response
- Connection test fails
Solutions:
- Verify credentials:
- Check username format includes domain (e.g., DEFAULT.USERNAME)
- Ensure password is correct (no extra spaces)
- Try logging into the OTM web interface with these credentials
- Check user permissions:
- User must have Data Integration role
- User must have Export permissions
- Verify user is not locked or disabled
- Check server URL:
- Ensure URL is correct and accessible
- URL should be the base URL only (no /GC3/ suffix)
- Try accessing {serverUrl}/GC3/ in a browser
- Network connectivity:
- Verify OTM server is accessible from Supaflow
- Check if corporate firewall blocks the connection
- Contact Oracle support if server appears down
OAuth authentication failed
Problem:
- "Invalid OAuth credentials" error
- "Failed to obtain access token" error
- Token request returns 401/403
Solutions:
- Verify IDCS credentials:
- Client ID and Client Secret are correct
- Credentials copied from IDCS application without extra spaces
- IDCS application is Activated (not Deactivated)
- Check Token URL:
- Token URL format: https://[tenant].identity.oraclecloud.com/oauth2/v1/token
- Replace [tenant] with your actual IDCS tenant name
- No typos in the URL
- Verify grant type:
- IDCS application must have Client Credentials grant enabled
- Check under Configuration → Client Configuration
- Check OAuth scope:
- If scope is required, ensure it matches OTM resource scope
- Try leaving scope blank (uses default)
- Consult Oracle administrator for correct scope
- IDCS application status:
- Application must be Activated
- Check application is not expired
- Verify application has not been revoked
No objects appearing
Problem:
- Successfully connected but no OTM objects show up
- Schema is empty
- "No objects found" warning
Solutions:
- Check user permissions:
- User must have Read access to OTM objects
- Verify Data Export permissions in OTM
- Check Data Integration role is assigned
- Verify Export API access:
- User can access /logisticsRestApi/data-int/v1/exportRequests/
- Try making a manual API call with user credentials
- Check Oracle documentation for required permissions
- Check OTM configuration:
- Ensure OTM instance has objects configured
- Verify objects are not hidden by security policy
- Contact Oracle administrator to verify instance setup
- Refresh schema:
- Set Schema Refresh Interval to 0
- Save and test connection again
- Wait a few minutes for schema discovery to complete
Async export not working
Problem:
- Async mode fails with "Target System URL" error
- Data not appearing in Object Storage
- Connection test succeeds but exports fail
Solutions:
- Verify PAR URL:
- PAR URL must end with /o/
- URL format: https://objectstorage.{region}.oraclecloud.com/p/{token}/n/{namespace}/b/{bucket}/o/
- No extra spaces or characters in URL
- Check PAR expiration:
- Pre-Authenticated Request may have expired
- Go to OCI → Object Storage → Bucket → Pre-Authenticated Requests
- Check expiration date and create new PAR if expired
- Check PAR permissions:
- PAR must have Write or Read and Write access
- Read-only PARs will not work for exports
- Recreate PAR with correct permissions
- Verify Object Storage bucket:
- Bucket exists and is in the correct region
- Bucket is not archived or deleted
- Check bucket quota is not exceeded
- Network connectivity:
- OTM server can reach Oracle Object Storage
- No firewall blocking outbound HTTPS to Object Storage
- Verify region-specific Object Storage endpoints are accessible
Data export timeout
Problem:
- Export times out during sync
- "Request timeout" error
- Slow response from OTM
Solutions:
- Use async mode:
- Switch from sync to async export mode
- Async mode handles large datasets better
- Set up Object Storage PAR URL
- Check OTM server load:
- OTM may be experiencing high load
- Try sync during off-peak hours (evening/weekend)
- Contact Oracle support if server is consistently slow
- Reduce data volume:
- Select fewer objects to sync
- Use incremental sync instead of full sync
- Add filters to reduce row count
- Check network latency:
- Ping OTM server to check connectivity
- High latency may cause timeouts
- Consider using async mode to avoid timeout issues
Incremental sync not capturing updates
Problem:
- Incremental sync misses recent updates
- Full sync works but incremental doesn't
- Data appears stale
Solutions:
- Check cursor field:
- Verify the object has an UPDATE_DATE or LAST_UPDATE_DATE field
- Cursor field must be populated for all records
- Check sample records in OTM to verify timestamps
- Increase lookback time:
- Set Incremental Lookback to 300-600 seconds
- Helps catch late-arriving updates
- OTM integration workflows may delay updates
- Verify server time offset:
- OTM server clock may differ from Supaflow
- Connector automatically calculates offset
- Check sync logs for "server time offset" message
- Check for clock skew:
- Large clock differences can cause missed data
- Verify OTM server time is accurate
- Contact Oracle administrator if server time is wrong
- Resync to verify:
- Run a full resync to get all data
- Compare with incremental sync results
- Review sync state and cursor values in logs
High error count during sync
Problem:
- Many rows fail to process
- "Failed to process row" errors in logs
- Sync completes but with warnings
Solutions:
- Check data quality:
- OTM may have malformed data in some rows
- Review error logs for specific row failures
- Check if primary key values are present
- Verify field types:
- Type mismatches can cause processing errors
- Check if field values match expected types
- Schema inference may have guessed wrong types
- Review permissions:
- Some rows may have restricted access
- User permissions may filter certain records
- Verify user can access all required data
- Contact support:
- Provide sync job ID and error logs
- Include sample row data that fails
- Email support@supa-flow.io with details
Schema refresh not working
Problem:
- New objects not appearing
- Schema seems outdated
- Recently added fields missing
Solutions:
- Force schema refresh:
- Set Schema Refresh Interval to 0
- Save and test connection
- Wait a few minutes for discovery to complete
- Check object permissions:
- User may not have access to new objects
- Verify permissions were updated in OTM
- Check security policies haven't changed
- Verify object exists:
- Log into OTM and check object is configured
- Try exporting data manually via OTM UI
- Ensure object is not disabled
- Clear cached schema:
- Delete and recreate the source
- Complete connection test again
- New schema will be discovered
Support
Need help? Contact us at support@supa-flow.io