SeaTunnel is a next-generation, high-performance, distributed tool for massive data integration.
Apache Paimon is a lake format that enables building a Realtime Lakehouse Architecture with Flink and Spark for both streaming and batch operations.
Pravega - Streaming as a new software-defined storage primitive
Concurrent and multi-stage data ingestion and data processing with Elixir
ingestr is a CLI tool that seamlessly copies data between databases with a single command.
The Data Engineering Book - a data engineering book written by Thai people, for Thai people
OpenKit Java Reference Implementation
Use SQL to build ELT pipelines on a data lakehouse.
OpenKit .NET Reference Implementation
The Data Integration Library project provides a library of generic components based on a multi-stage architecture for data ingress and egress.
Product scraping from the Walmart Canada website, with subsequent cleaning and integration of data from a different store.
Orbital automates integration between data sources (APIs, databases, queues, and functions): BFFs, API composition, and ETL pipelines that adapt as your specs change.
Sample code for the AWS Big Data Blog post "Building a scalable streaming data processor with Amazon Kinesis Data Streams on AWS Fargate"
Enables custom tracing of Python applications in Dynatrace
Enables custom tracing of Java applications in Dynatrace
A Python library that enables ML teams to share, load, and transform data in a collaborative, flexible, and efficient way 🌰