📄️ FlexFlow Data Flows
Nexla’s FlexFlow data flow type is recommended for use in most workflows. This flow type uses the Kafka engine to facilitate seamless high-throughput movement of data from any source to any destination.
📄️ DB-CDC Data Flows
DB-CDC (Database–Change Data Capture) data flows use CDC to replicate tables across databases and/or cloud warehouses. This flow type runs on the Kafka engine and is designed for use when data is stored in multiple locations and any changes to the data in one store need to be duplicated into another store.
📄️ Spark ETL Data Flows
Spark ETL data flows are designed for rapidly modifying large volumes of data stored in cloud databases or Databricks and moving the transformed data into another cloud storage or Databricks location. This flow type uses the Apache Spark engine and is ideal for large-scale data processing where minimizing data-movement latency is critical.
📄️ Replication Data Flows
Replication data flows are designed for use in workflows that require high-speed movement of unmodified files between storage systems. They can also be used to conduct high-speed cloning of individual tables between cloud data warehouses. This flow type is best suited for use when both retaining file structure and transferring data as quickly as possible are critical.