Oracle Transportation Management Integration: The Complete Guide

· 18 min read
Puneet Gupta
Founder, Supaflow

Oracle Transportation Management integration is one of the more deceptive data engineering problems in the logistics stack. OTM exposes a well-documented REST API and sync looks straightforward on paper -- until you run it against real data and discover that the metadata catalog returns 400 on half the tables you need, "empty" timestamps arrive as literal 0, and sync-mode responses silently truncate at 1 MB.

This guide walks through how OTM data integration actually works in production: the objects that matter, the five quirks that break naive pipelines, the right way to handle incremental sync with OTM's server-side clock, and how to move OTM data into Snowflake or any cloud warehouse reliably.

What Oracle Transportation Management Stores (and Why You Want It Out)

Oracle Transportation Management (OTM), sometimes branded as Oracle OTM Cloud or Oracle Logistics Cloud, is a transportation execution and planning platform used by shippers, carriers, and third-party logistics providers (3PLs). It tracks shipments, orders, costs, routes, stops, and allocations across an organization's physical supply chain.

The operational data is valuable, but OTM is not a reporting system. Teams who want to analyze transportation spend, on-time performance, carrier mix, or landed cost need the data in a warehouse -- Snowflake, Databricks, BigQuery, or Redshift -- where it can join with order management, finance, and demand planning data.

Three audiences typically lead the OTM integration project:

  • Supply-chain analytics teams building dashboards on shipment volume, dwell time, and freight cost per unit
  • Finance teams reconciling freight accruals against invoiced charges (this is almost always where the project gets funded)
  • Operations and customer-visibility teams feeding real-time shipment status into customer portals or data products

The shape of what you extract changes based on the audience, but the underlying APIs and quirks are the same.

How OTM Exposes Data: Three Paths, One Pipeline

OTM has three extraction surfaces. Understanding when each applies is the first thing an integration engineer needs to get right.

1. Export API (the right default)

The Export API -- documented under /logisticsRestApi/data-int/v1/exportRequests/ for Basic Auth or /logisticsRestApi/data/v1/exportRequests/ for OAuth -- is a synchronous REST endpoint that returns CSV-formatted query results directly in the response body. It is the right tool for most pipelines and handles single-pull volumes up to roughly 100,000 rows comfortably.

2. Data Integration API

Used for metadata discovery: schema, field types, primary keys, relationships. Like the Export API, the path differs by auth type -- /logisticsRestApi/resources-int/ for Basic and /logisticsRestApi/resources/ for OAuth. We'll cover its one major caveat below.

3. Asynchronous exports via Object Storage

For tables larger than 100,000 rows per pull or for full historical backfills, OTM can export CSV files directly to an Oracle Cloud Object Storage bucket via a Pre-Authenticated Request (PAR) URL. The connector downloads and parses the CSV after the export completes. Required when single-call sync mode times out or hits the response size limit.

A production OTM pipeline uses the Data Integration API to discover objects, the Export API for most tables, and the async Object Storage path only for the handful of tables that exceed the sync-mode ceiling.
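Because the endpoint path differs by auth mode, it helps to resolve it in one place. The sketch below is illustrative, not Supaflow's implementation; the two path variants are the ones documented above, and `base_url` is whatever your OTM instance's base URL is.

```python
def export_url(base_url: str, use_oauth: bool) -> str:
    """Return the Export API endpoint for the chosen auth mode.

    Basic Auth uses the -int path variants; OAuth uses the plain ones.
    """
    path = ("/logisticsRestApi/data/v1/exportRequests/" if use_oauth
            else "/logisticsRestApi/data-int/v1/exportRequests/")
    return base_url.rstrip("/") + path
```

The same pattern applies to the metadata paths (`/resources/` vs `/resources-int/`).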

The Core OTM Objects Most Teams Actually Need

Although OTM's full data dictionary -- more on that below -- exposes thousands of tables, only a small number are universally relevant across transportation analytics use cases. The 16 tables below are the ones most integration projects replicate first, grouped by business domain.

Core master data

Object | Primary Key | What it stores
--- | --- | ---
SHIPMENT | SHIPMENT_GID | Shipment records with lane, carrier, equipment, and execution data
LOCATION | LOCATION_GID | Facility, address, and geography master
ITEM | ITEM_GID | Product and material master
CONTACT | CONTACT_GID | Person and organization contacts attached to shipments and orders

Orders

Object | Primary Key | What it stores
--- | --- | ---
ORDER_RELEASE | ORDER_RELEASE_GID | Customer orders released to transportation planning
ORDER_RELEASE_STATUS | ORDER_RELEASE_GID | Status history and milestone tracking for each order release

Shipment children

Object | Primary Key | What it stores
--- | --- | ---
SHIPMENT_COST | SHIPMENT_COST_GID | Freight charges, accessorials, and cost lines
SHIPMENT_COST_QUAL | SHIPMENT_COST_GID | Qualifier attributes on cost lines
SHIPMENT_COST_REF | SHIPMENT_COST_GID | External cost reference numbers
SHIPMENT_COST_REMARK | SHIPMENT_COST_GID | Free-form notes on cost lines
SHIPMENT_STATUS | SHIPMENT_GID | Shipment status history and milestones
SHIPMENT_STOP | SHIPMENT_GID | Pickup and delivery stops within a shipment
SHIP_UNIT | SHIP_UNIT_GID | Handling units (pallets, containers) within a shipment

Reference and operational

Object | Primary Key | What it stores
--- | --- | ---
ADJUSTMENT_REASON | ADJUSTMENT_REASON_GID | Adjustment reason codes
ALLOCATION | ALLOCATION_GID | Inventory and capacity allocations
AUDIT_TRAIL | AUDIT_TRAIL_GID | User action and data change history

Every table uses a GID-suffixed primary key (SHIPMENT_GID, ORDER_RELEASE_GID, ALLOCATION_GID, CONTACT_GID, and so on). GIDs are globally unique VARCHAR identifiers scoped by domain, not surrogate integers -- a detail that matters when you model relationships in your warehouse.

The Full Universe: 2,290 Tables in the OTM Data Dictionary

The 16 tables above cover most analytics needs, but OTM's official data dictionary, published by Oracle with each release, catalogs 2,290 tables across every operational area of the platform (based on the February 2025 snapshot in our connector reference; Oracle publishes an updated dictionary with every Transportation and Global Trade Management release, most recently 26B). They span:

  • Accessorials, rating, and tariff management (ACCESSORIAL_COST, RATE_GEO, RATE_OFFERING_LANE, and hundreds more)
  • Carriers, equipment, and fleet (SERVPROV, EQUIPMENT_TYPE, DRIVER, POWER_UNIT)
  • Load planning and routing (BULK_PLAN, LANE, ROUTING)
  • Freight settlement (VOUCHER, INVOICE, GL_CODE)
  • Customs and international (CUSTOMS_DECLARATION, HS_CODE, COMMODITY)
  • Customer integrations (dozens of X_* prefixed extension tables)

If your use case touches any of these domains, the table exists. Supaflow's OTM connector can discover and sync any of the 2,290 tables -- not just the core 16 -- using the Export API and data-driven schema inference.

Reference guides for individual OTM tables at /reference/otm/[table-name] are being published incrementally -- check back over the coming weeks. In the meantime, the canonical source is Oracle's OTM data dictionary shipped with each release.

The Five Quirks That Break OTM Pipelines

These are the specific failure modes that separate a demo-quality OTM integration from one that survives six months in production. Every one of them is something we hit in testing against real OTM Cloud instances and had to engineer around.

Quirk 1: The metadata catalog returns 400 for many tables

OTM's Data Integration API (/resources/ or /resources-int/) exposes per-object metadata -- field names, types, nullability, PK flags. It is the correct first stop for schema discovery. But for a non-trivial fraction of tables -- we've confirmed this for SHIPMENT_COST_REMARK, several SHIPMENT_COST_* variants, and many extension tables -- the metadata endpoint returns HTTP 400 with no useful error body. The object exists, the data exports fine, but metadata is simply unavailable.

How to handle it: fall back to data-driven type inference. Pull a sample of rows from the Export API, examine the column values, and infer types from the data itself. Our connector samples 1,000 rows, infers types per column, and then cross-validates against the metadata catalog only where metadata is available. A naive pipeline that treats a 400 as a fatal error will silently skip tables the user expected to appear.
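The fallback inference can be sketched as a pure function over sampled values. The rules below are illustrative of the approach, not the connector's exact logic; note that the "0" temporal sentinel (Quirk 2) is excluded before inference so it cannot tip a datetime column toward NUMBER.

```python
import re

# Populated OTM temporals are consistently ISO 8601 with a Z suffix.
ISO_TS = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z$")
NUMERIC = re.compile(r"^-?\d+(\.\d+)?$")

def infer_column_type(samples: list[str]) -> str:
    """Infer a canonical type from sampled CSV values for a column
    whose metadata endpoint returned 400."""
    # Drop empties and the "0" sentinel before looking at the data.
    values = [v for v in samples if v not in ("", "0")]
    if not values:
        return "STRING"  # nothing to go on: safest default
    if all(ISO_TS.match(v) for v in values):
        return "INSTANT"
    if all(NUMERIC.match(v) for v in values):
        return "NUMBER"
    return "STRING"
```

In production you would sample on the order of 1,000 rows per table and cross-validate against the metadata catalog wherever metadata is available.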

Quirk 2: Empty temporal fields come back as literal 0

OTM's Export API returns the string "0" -- not null, not empty string -- for unset date and datetime fields. If your type coercion treats 0 as a numeric epoch, you get rows stamped 1970-01-01 across every optional timestamp column.

How to handle it: normalize "0" to null before canonical type conversion. This is a one-line fix once you know about it, and a multi-day debugging session if you don't. We verified the pattern empirically across 750,000+ real temporal values from production OTM instances; populated datetime fields are consistently ISO 8601 with a Z suffix, and unset fields are consistently the string "0".

Quirk 3: date fields are actually datetimes with Z suffix

OTM's metadata reports two temporal types: date and dateType (datetime). In practice, the Export API returns both in the same ISO 8601 format with a trailing Z. A column declared as date in the metadata will return values like 2026-04-14T00:00:00Z.

How to handle it: always parse OTM temporal fields as INSTANT (timezone-aware instant in time), never LOCALDATETIME. Our connector explicitly maps both date and dateType metadata hints to INSTANT and ignores the nominal date type. This prevents silent timezone bugs when the data lands in Snowflake or Postgres.

Quirk 4: Sync-mode responses truncate at roughly 1 MB

The synchronous Export API imposes a response size limit around 1 MB. If your query returns more data than that, the response is truncated but does not surface an error -- you get a valid-looking response with fewer rows than the full query would produce.

How to handle it: paginate until you get zero rows back, not until the response size is less than your requested page size. The naive termination condition (rows_returned < page_size) fires early on every truncated response and loses data silently. Our connector paginates until the Export API returns an empty result, which is the only reliable signal that the query has drained.
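The termination condition is the whole fix, so it is worth spelling out. In this sketch `fetch_page` stands in for the real Export API call; the offset/limit parameter names are assumptions for illustration, not OTM's documented ones.

```python
from typing import Callable, Iterator

def paginate(fetch_page: Callable[[int, int], list[dict]],
             page_size: int = 5000) -> Iterator[dict]:
    """Yield rows until the API returns an empty page.

    Terminating on len(rows) < page_size would stop early on any
    truncated (~1 MB) response and silently drop data; an empty
    page is the only reliable drain signal.
    """
    offset = 0
    while True:
        rows = fetch_page(offset, page_size)
        if not rows:
            return
        yield from rows
        offset += len(rows)  # advance by what we actually received
```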

Quirk 5: Column naming leaks types

Several column suffixes in OTM have semantic meaning that type inference alone cannot recover. Columns suffixed _TEXT, _COMMENT, _REMARK, or _NOTE sometimes contain numeric-looking content ("12345") that, under pure data-driven type inference, gets miscast to NUMBER. When real text arrives ("RF-12/3 NOTES"), the load fails.

How to handle it: apply a name-based type override as a post-inference step. Any column whose name ends in _TEXT, _COMMENT, _REMARK, or _NOTE is forced to STRING regardless of sampled content. This is the kind of rule you can only know by hitting the failure in production.
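As a post-inference step, the override is a few lines. A sketch, applied after data-driven inference has produced a column-to-type map:

```python
# Suffixes that signal free-form text regardless of sampled content.
TEXT_SUFFIXES = ("_TEXT", "_COMMENT", "_REMARK", "_NOTE")

def apply_name_overrides(inferred: dict[str, str]) -> dict[str, str]:
    """Force STRING for columns whose names signal free-form text,
    even if every sampled value looked numeric."""
    return {
        col: "STRING" if col.upper().endswith(TEXT_SUFFIXES) else typ
        for col, typ in inferred.items()
    }
```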

Authentication: Basic vs OAuth via IDCS

OTM supports two authentication methods. Both work; the choice is mostly about your organization's Oracle Identity Cloud Service (IDCS) posture.

Basic authentication

Username and password for an OTM integration user -- username format is typically DOMAIN.USERNAME (e.g., DEFAULT.SUPAFLOW_API). The integration user needs Data Export and Data Integration roles plus read access on every object you intend to sync. Basic auth hits the /data-int/ and /resources-int/ path variants.

When to pick it: fast setup for internal or proof-of-concept pipelines. Simplest to rotate if a credential is compromised.

OAuth 2.0 Client Credentials via IDCS

A confidential application registered in Oracle IDCS with the Client Credentials grant. You provide client ID, client secret, the IDCS token URL (https://[tenant].identity.oraclecloud.com/oauth2/v1/token), and optionally a scope. OAuth hits the /data/ and /resources/ path variants.

When to pick it: production deployments, especially if your organization standardizes on IDCS for all service-to-service authentication. Also preferable if you want rotation and audit to flow through IDCS.

One implementation detail worth noting: IDCS tokens typically expire after 1 hour. A long-running sync or a multi-page export can span token expiry mid-pull. Our connector refreshes the token proactively with a 60-second safety margin before expires_in, and calls the refresh synchronously before every API operation (schema discovery, read, pagination). Naive OAuth implementations that refresh reactively on 401 tend to fail mid-export on larger tables.
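The proactive-refresh logic can be sketched as a small cache. The only IDCS response fields assumed here are the standard OAuth `access_token` and `expires_in`; the injected `fetch_token` stands in for the real client-credentials call, and the injected clock exists so the behavior is testable.

```python
import time
from typing import Callable

class TokenCache:
    MARGIN = 60  # refresh this many seconds before expiry

    def __init__(self, fetch_token: Callable[[], dict],
                 clock: Callable[[], float] = time.time):
        self._fetch, self._clock = fetch_token, clock
        self._token, self._expires_at = None, 0.0

    def get(self) -> str:
        """Return a valid token, refreshing proactively so a token
        cannot expire in the middle of a multi-page export."""
        if self._token is None or \
                self._clock() >= self._expires_at - self.MARGIN:
            resp = self._fetch()
            self._token = resp["access_token"]
            self._expires_at = self._clock() + resp["expires_in"]
        return self._token
```

Calling `get()` before every API operation (discovery, read, pagination) gives the same guarantee as refreshing reactively on 401, without the mid-export failure mode.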

Incremental Sync: Doing It Right with OTM's Server Clock

Every OTM table exposes an UPDATE_DATE (or LAST_UPDATE_DATE) column that serves as the incremental cursor. The naive approach -- "pull everything updated since the last cursor value" -- works for about two weeks in production before missing data.

Three things go wrong:

1. Server clock skew

The OTM server's wall clock can drift by seconds or minutes from wherever your integration runs. If you compare cursor values to now() on the ingestion side, you'll either miss recently-updated rows (clock is behind) or attempt to filter into the future (clock is ahead). Our connector measures the offset once during the connection test by reading OTM's Date HTTP response header (RFC 1123) and applies it to every subsequent cutoff calculation.

2. Late-arriving updates

OTM workflows include upstream integrations -- EDI processing, external rating, carrier confirmations -- that can stamp UPDATE_DATE values several minutes after the row actually settled. A tight cutoff window will miss these records forever.

The fix is a lookback window: on each sync, re-scan the N seconds immediately before the last cursor. Supaflow's OTM connector supports configurable lookback via the Incremental Lookback setting, with a recommended range of 300-600 seconds in production. The lookback applies only to subsequent syncs; the first full backfill uses the cutoff time directly.

3. Identical-timestamp pagination

Large batch updates in OTM can stamp thousands of rows with the same UPDATE_DATE value down to the second. If you paginate by cursor value alone, you'll either re-read the same batch on every sync or miss rows in the tail. The correct approach is to track the record count at the maximum cursor timestamp alongside the cursor value, so you can detect whether a given timestamp has been fully drained. Our connector persists cursor_value and records_at_max_timestamp together in sync state.

A production OTM incremental cursor is this triple, not a single timestamp:

(cutoff_time_in_server_frame, records_at_cutoff_timestamp, lookback_seconds)

Skip any one of the three and you'll hit a data quality issue within weeks.
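The triple can be modeled directly as sync state. Field names below are illustrative, not the connector's actual schema; the first-sync behavior follows the rule stated above (backfills use the cutoff directly, later syncs re-scan the lookback window).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class OtmCursor:
    cutoff_time: datetime       # already translated to the server's clock frame
    records_at_cutoff: int      # rows already seen at the max timestamp
    lookback_seconds: int = 300

    def next_window_start(self, first_sync: bool = False) -> datetime:
        """Where the next incremental pull should begin."""
        if first_sync:
            return self.cutoff_time
        return self.cutoff_time - timedelta(seconds=self.lookback_seconds)
```

`records_at_cutoff` is what lets the sync detect whether a batch of rows sharing one `UPDATE_DATE` value has been fully drained, instead of re-reading or skipping it.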

Sync vs Async Export Mode: Picking the Right Path

The OTM connector supports two export modes. The choice is purely about data volume per pull.

Sync mode (recommended default) returns data directly in the HTTP response. It is faster, requires no additional Oracle Cloud resources, and handles up to ~100,000 rows per pull before hitting the response size ceiling.

Async mode writes the export to an Oracle Cloud Object Storage bucket via a Pre-Authenticated Request (PAR) URL. The connector polls the bucket, downloads the CSV, and parses it. Required for tables with >100,000 rows per sync window or for full historical backfills of large transactional tables (SHIPMENT, ORDER_RELEASE_STATUS, AUDIT_TRAIL are the usual suspects).

Setting up async mode requires creating a bucket in OCI Object Storage, generating a PAR URL with Read and Write access, and pasting the URL into the Target System URL field in the connector config. The PAR URL format is:

https://objectstorage.{region}.oraclecloud.com/p/{token}/n/{namespace}/b/{bucket}/o/

The URL must end with /o/. PARs expire; set the expiration well into the future (1 year is typical) and document the renewal date somewhere humans will see it.
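A trailing-slash mistake here fails at sync time, not config time, so a sanity check before saving the config is cheap insurance. An illustrative check against the URL shape shown above:

```python
def validate_par_url(url: str) -> bool:
    """Loose sanity check for an OCI Pre-Authenticated Request URL.

    Verifies the objectstorage host, the /p/ token and /b/ bucket
    segments, and the required trailing /o/.
    """
    return (url.startswith("https://objectstorage.")
            and "/p/" in url and "/b/" in url
            and url.endswith("/o/"))
```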

Our connector supports cancellation checks every 1,000 rows during processing and every 1,000 records during CSV parsing, so a cancelled or failed job does not leave a runaway export consuming OTM API quota.

From OTM to Snowflake (or Any Warehouse)

Extracting OTM data is only half the pipeline. The other half is landing it in a warehouse with correct types, primary key constraints, and a sync schedule that matches your analytics freshness requirements.

Supaflow syncs OTM data into:

  • Snowflake — natively, with column types mapped to Snowflake's canonical forms (VARCHAR, NUMBER, TIMESTAMP_TZ, BOOLEAN). Composite primary keys supported. See the Snowflake destination docs.
  • Databricks, BigQuery, Redshift, and Postgres via the same extraction logic and mode-specific type mappings.
  • Iceberg tables on S3 or Object Storage for teams standardizing on open table formats.

Incremental merges into Snowflake use the connector's cursor state to upsert only the rows touched in each sync window, and composite primary keys (a handful of OTM tables have 2- or 3-field PKs) are preserved end-to-end.

Common OTM Integration Errors

The three failure categories we see most often in support tickets, with fixes:

"401 Unauthorized" on the first sync after setup

  • Verify the username format includes the domain prefix (DEFAULT.USERNAME, not just USERNAME)
  • For OAuth, confirm the IDCS application is Activated (not Deactivated) and the Client Credentials grant is enabled
  • Confirm the server URL is the base URL with no /GC3/ suffix -- the connector appends the path itself

"No objects found" after a successful connection test

  • The integration user needs Data Export and Data Integration roles plus read access on each object individually. OTM role assignments are object-level, not scope-level.
  • Set the schema refresh interval to 0 and re-run the connection test; this forces a fresh discovery.

Incremental sync "stops" after a week

  • Almost always a cursor or clock issue. Check the sync logs for the server time offset message -- if it's more than a few seconds, the connector is compensating but consider a one-time full resync to catch any rows that slipped through before the offset was measured.
  • Increase the lookback window to 600 seconds if your OTM instance has EDI or rating workflows that stamp updates asynchronously.

A fuller troubleshooting matrix for connection, schema, and sync issues is in the Oracle Transportation Management source docs.

Where Supaflow Fits

Supaflow's Oracle Transportation Management connector is purpose-built around every quirk listed above. It ships with:

  • Full 2,290-table coverage via data-driven schema inference where the metadata catalog fails
  • Both sync and async export modes -- sync by default, with async via an Oracle Cloud Object Storage PAR URL for large tables or full backfills
  • Server-clock-offset-aware incremental cursors with configurable lookback
  • IDCS OAuth with proactive token refresh and Basic Auth support
  • Composite primary key preservation end-to-end to the warehouse
  • Connector-based pricing -- a single price for the OTM connector regardless of row volume, not per-row or MAR billing

For teams currently hand-rolling OTM integration with Airflow + requests or paying row-based billing to a generic ELT vendor that never debugged the five quirks above, the switch usually pays for itself in eliminated on-call pages.

To try it: book a 15-minute demo or start a free trial and connect your OTM instance in under 10 minutes.

FAQ

What is Oracle Transportation Management (OTM)?

Oracle Transportation Management is a cloud-based transportation execution and planning platform sold by Oracle, used by shippers, carriers, and third-party logistics providers to manage shipments, orders, routing, carrier selection, freight costs, and visibility. It is part of Oracle's broader Logistics Cloud suite.

Can I extract OTM data without an integration user?

No. The Export API and Data Integration API both require authenticated requests. Your OTM administrator needs to create a dedicated integration user (Basic Auth) or register a confidential application in IDCS (OAuth Client Credentials grant) and assign Data Export and Data Integration roles along with object-level read permissions.

Does OTM support incremental sync or only full exports?

OTM supports incremental sync via the UPDATE_DATE (or LAST_UPDATE_DATE) cursor field on every object. Correct implementation requires handling server-side clock skew, late-arriving updates, and identical-timestamp pagination -- see the Incremental Sync section above for details.

How many tables are in the Oracle Transportation Management data dictionary?

Oracle's published OTM data dictionary lists approximately 2,290 tables (as of the February 2025 release). Most production pipelines sync 10-30 of these tables based on their analytics use case. Supaflow can sync any of the 2,290 tables.

What is the difference between OTM sync mode and async mode?

Sync mode returns CSV data directly in the HTTP response and is faster for tables under ~100,000 rows per pull. Async mode writes data to an Oracle Cloud Object Storage bucket via a Pre-Authenticated Request URL, which is required for larger tables or full backfills that exceed the sync-mode response size limit.

Can I use OTM's REST API without Supaflow?

Yes -- the REST API is documented by Oracle and can be called directly. In practice, building a production pipeline requires handling the five quirks covered in this guide (metadata 400s, "0" temporal sentinels, Z-suffixed date fields, 1 MB sync-mode truncation, name-based column type overrides), plus server-clock-aware incremental sync, token refresh, and async export orchestration. Supaflow packages all of that as a managed connector.

Does Supaflow support OTM on-premises or only OTM Cloud?

Both. The connector works against Oracle OTM Cloud (the otmgtm.[region].ocs.oraclecloud.com endpoints) and on-premises OTM installations reachable via HTTPS. Authentication options and available APIs are the same.

How often can OTM data be synced?

OTM imposes no hard schedule limit; the practical minimum depends on your OTM server's load and your data freshness requirements. Most customers schedule OTM syncs at 15-60 minute intervals for operational tables (SHIPMENT_STATUS, ORDER_RELEASE_STATUS) and daily for reference data (LOCATION, ITEM, CONTACT).
