Core Concepts

Understanding Nexla's data model and architecture is essential for effective SDK usage. This section covers the fundamental concepts that underpin the Nexla platform.

Learn More

For detailed technical concepts and implementation details, see the Core Concepts Guide page in our Nexla SDK guides.

SDK Architecture

The Nexla SDK is built with modern Python practices and provides a clean, typed interface to the Nexla platform:

  • Client: NexlaClient orchestrates authentication and request lifecycle
  • Credentials: CredentialsResource provides reusable access to external systems
  • Resources: High-level managers for flows, sources, destinations, nexsets, lookups, users, organizations, teams, projects, notifications, and metrics
  • Models: Pydantic request/response schemas keep payloads type-safe and serializable
  • Errors: A rich hierarchy surfaces context such as operation, resource type, and retry hints
  • Telemetry & Pagination: Optional OpenTelemetry spans wrap HTTP calls, and every resource inherits consistent pagination helpers
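The layering above can be sketched in miniature. Everything below is a simplified, hypothetical stand-in (the real SDK uses Pydantic models and HTTP calls where this sketch uses in-memory stubs; only the name NexlaClient comes from the SDK itself):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Iterator, List

@dataclass
class BaseResource:
    """Every resource manager inherits consistent pagination helpers."""
    fetch_page: Callable[[int, int], List[Any]]  # (page, per_page) -> items

    def iter_all(self, per_page: int = 100) -> Iterator[Any]:
        # Walk pages until an empty page signals the end of the collection.
        page = 1
        while True:
            items = self.fetch_page(page, per_page)
            if not items:
                return
            yield from items
            page += 1

@dataclass
class StubClient:
    """Stand-in for NexlaClient: owns auth state and resource managers."""
    api_key: str
    flows: BaseResource = field(init=False)

    def __post_init__(self) -> None:
        # Fake a server-side collection of five flows, served two per page.
        data = [{"id": i} for i in range(5)]
        self.flows = BaseResource(
            fetch_page=lambda page, per: data[(page - 1) * per : page * per]
        )

client = StubClient(api_key="example-key")
print([f["id"] for f in client.flows.iter_all(per_page=2)])  # [0, 1, 2, 3, 4]
```

The point of the sketch is the shape, not the names: one client object owning per-resource managers that share pagination behavior through a common base.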

Flow Topology

The diagram below shows how data moves through the Nexla platform and how the components relate to one another:

flowchart LR
    Credentials["Credential\n(/data_credentials)"]
    Source["Source\n(/data_sources)"]
    Transform["Transform Step\n(Nexset.transform)"]
    Nexset["Nexset (Dataset)\n(/data_sets)"]
    Destination["Destination\n(/data_sinks)"]
    Monitor["Metrics & Runs\n(/metrics, flow runs)"]

    Credentials -->|data_credentials_id| Source
    Source -->|flow graph| Transform
    Transform --> Nexset
    Nexset --> Destination
    Source -.->|detected data_sets| Nexset
    Nexset --> Monitor
    Destination --> Monitor
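The same topology can be expressed as a small adjacency map, which is handy for reasoning about what sits downstream of what. Node names mirror the diagram above, not SDK identifiers:

```python
# Edges from the flow topology diagram; Source -> Nexset covers both the
# solid flow-graph edge and the dashed "detected data_sets" edge.
TOPOLOGY = {
    "Credentials": ["Source"],
    "Source": ["Transform", "Nexset"],
    "Transform": ["Nexset"],
    "Nexset": ["Destination", "Monitor"],
    "Destination": ["Monitor"],
    "Monitor": [],
}

def reachable(start: str) -> set:
    """All nodes downstream of `start` (iterative depth-first walk)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in TOPOLOGY[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable("Credentials")))
```

Everything is reachable from Credentials, while Monitor is a pure sink: metrics observe the pipeline but feed nothing back into it.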

Key Components

The Nexla platform consists of several core components that work together to create data pipelines:

Credentials

Secure connection details for external systems. Credentials can be probed for structure and sample data using the client.credentials.probe_* methods.
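Conceptually, a structure probe looks at a handful of sample records and reports field names and inferred types. The function below is a hypothetical illustration of that idea, not the SDK's probe_* API:

```python
def infer_schema(sample_records):
    """Map each field name to the sorted set of value types seen for it."""
    schema = {}
    for record in sample_records:
        for key, value in record.items():
            schema.setdefault(key, set()).add(type(value).__name__)
    return {k: sorted(v) for k, v in schema.items()}

sample = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},
]
print(infer_schema(sample))
# {'id': ['int'], 'email': ['NoneType', 'str']}
```

Note how a nullable field surfaces as a mix of types; real probes face the same ambiguity, which is why inspecting sample data before wiring up a source is worthwhile.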

Sources

Sources ingest data through a connector (identified by source_type or connector_type) and emit discovered nexsets. They are the entry points for data into your pipelines.

Transforms

Transforms live on nexsets (defined via NexsetCreate.transform) and specify the schema-mapping logic applied before data is published downstream.

Destinations

Destinations subscribe to nexsets and push data to external sinks (identified by sink_type). They are the exit points for your processed data.

Flows

Flows stitch nodes together (source → nexset(s) → destination) and expose run metrics via client.flows.list(include_run_metrics=True) and the client.metrics.* methods.
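Once run metrics are in hand, summarizing them is ordinary Python. The record shape below (record_count, error_count) is hypothetical; consult the SDK's models for the real field names:

```python
def summarize_runs(runs):
    """Aggregate a list of per-run metric records into one summary dict."""
    records = sum(r.get("record_count", 0) for r in runs)
    errors = sum(r.get("error_count", 0) for r in runs)
    return {
        "runs": len(runs),
        "records": records,
        "errors": errors,
        "error_rate": errors / records if records else 0.0,
    }

runs = [
    {"record_count": 900, "error_count": 0},
    {"record_count": 100, "error_count": 10},
]
print(summarize_runs(runs))
```

A summary like this is a natural input for alerting: flag any flow whose error_rate crosses a threshold rather than eyeballing per-run counts.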

Nexsets

Datasets that represent structured data flowing through your pipelines. They can be created from sources or derived from other nexsets through transformations.
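Deriving one nexset from another is, at its core, applying a field mapping to every record. The mapping syntax below is illustrative only, not the SDK's transform DSL:

```python
def apply_mapping(records, mapping):
    """Derive new records; `mapping` maps output field -> input field."""
    return [
        {out: rec.get(src) for out, src in mapping.items()}
        for rec in records
    ]

parent = [{"first": "Ada", "last": "Lovelace"}]
derived = apply_mapping(parent, {"given_name": "first", "family_name": "last"})
print(derived)  # [{'given_name': 'Ada', 'family_name': 'Lovelace'}]
```

The parent records are untouched: a derived nexset is a new view of the data, which is what lets several transformations branch off one upstream dataset.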

Data Flow Patterns

The Nexla platform supports various data flow patterns depending on your use case. Here are the most common patterns:

Simple Pipeline

Source → Nexset → Destination

Complex Pipeline

Source → Nexset → Transform → Derived Nexset → Destination

Multi-Source Pipeline

Source A → Nexset A ↘
                      → Combined Nexset → Destination
Source B → Nexset B ↗
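The multi-source pattern amounts to merging records from two upstream nexsets on a shared key. A minimal sketch, purely illustrative of the pattern rather than any SDK call:

```python
def combine_on_key(a, b, key):
    """Merge two record lists into one, joining records that share `key`."""
    by_key = {rec[key]: dict(rec) for rec in a}
    for rec in b:
        # Records from b extend a matching record from a, or stand alone.
        by_key.setdefault(rec[key], {}).update(rec)
    return list(by_key.values())

nexset_a = [{"order_id": 1, "amount": 9.5}]
nexset_b = [{"order_id": 1, "status": "shipped"}]
print(combine_on_key(nexset_a, nexset_b, "order_id"))
# [{'order_id': 1, 'amount': 9.5, 'status': 'shipped'}]
```

Overlapping fields resolve in favor of the second input here; a real pipeline would make that precedence an explicit design decision.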

Resource Relationships

Understanding how the different components interact with each other is crucial for designing effective data pipelines. Here are the key relationships between Nexla resources:

  • Credentials are referenced by sources and destinations for authentication
  • Sources automatically discover and create nexsets
  • Nexsets can be transformed to create derived datasets
  • Destinations subscribe to nexsets for data output
  • Flows orchestrate the entire pipeline from source to destination
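These relationships imply a creation order: a resource can only be created after everything it references exists. A topological sort over the dependencies makes that concrete (the edge list below restates the bullets above; it is illustrative, not an SDK structure):

```python
# Each resource type lists the types it references and therefore needs first.
DEPENDS_ON = {
    "credential": [],
    "source": ["credential"],
    "nexset": ["source"],
    "destination": ["credential", "nexset"],
    "flow": ["source", "nexset", "destination"],
}

def creation_order(deps):
    """Depth-first topological sort: dependencies before dependents."""
    order, done = [], set()

    def visit(node):
        if node in done:
            return
        for dep in deps[node]:
            visit(dep)
        done.add(node)
        order.append(node)

    for node in deps:
        visit(node)
    return order

print(creation_order(DEPENDS_ON))
# ['credential', 'source', 'nexset', 'destination', 'flow']
```

The same ordering, reversed, tells you what must be deleted or paused first when tearing a pipeline down.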

Next Steps