What is a data pipeline?
A data pipeline is created for data analytics purposes and has the following components (a minimal end-to-end sketch follows the list):
Data sources - These can be internal or external and may be structured (e.g., the result of a database call), semi-structured (e.g., XML, JSON, and CSV files, or a Google Sheets file), or unstructured (e.g., text documents or images).
Ingestion process - This is the means by which data is moved from the source into the pipeline (e.g., API call, secure file transfer).
Transformations - In most cases, data needs to be transformed from its raw input format into the format in which it is stored. A pipeline may include several transformations.
Data quality / cleansing - Data is checked for quality at various points in the pipeline. Quality checks typically include at least validation of data types and formats, as well as conformance with master data.
Enrichment - Data items may be enriched with additional fields, such as values looked up from reference data.
Storage - Data is stored at various points in the pipeline, usually at least in a landing zone and in a structured store (such as a data warehouse).
End users - See below for a discussion of these.
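To make the stages concrete, here is a minimal sketch in Python using only the standard library. Everything in it is a hypothetical stand-in: the RAW_CSV string plays the role of a semi-structured source, the COUNTRY_NAMES dictionary plays the role of master/reference data, and an in-memory SQLite database plays the role of the structured store.

```python
import csv
import io
import sqlite3

# Hypothetical raw input standing in for a semi-structured source (a CSV feed).
RAW_CSV = """order_id,country_code,amount
1001,US,250.00
1002,DE,99.50
1003,XX,not-a-number
"""

# Hypothetical master/reference data used for validation and enrichment.
COUNTRY_NAMES = {"US": "United States", "DE": "Germany"}


def ingest(raw):
    """Ingestion: move raw data from the source into the pipeline as rows."""
    return list(csv.DictReader(io.StringIO(raw)))


def validate(rows):
    """Data quality: keep rows whose types, formats, and reference keys check out."""
    clean, rejected = [], []
    for row in rows:
        try:
            float(row["amount"])  # type/format validation
            if row["country_code"] not in COUNTRY_NAMES:
                raise ValueError("country_code not in master data")
            clean.append(row)
        except ValueError:
            rejected.append(row)  # quarantined for later inspection
    return clean, rejected


def transform(rows):
    """Transformation: convert raw strings into the storage schema's types."""
    return [
        {
            "order_id": int(r["order_id"]),
            "country_code": r["country_code"],
            "amount": float(r["amount"]),
        }
        for r in rows
    ]


def enrich(rows):
    """Enrichment: add fields looked up from reference data."""
    for r in rows:
        r["country_name"] = COUNTRY_NAMES[r["country_code"]]
    return rows


def store(rows, conn):
    """Storage: load the conformed rows into a structured store."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER, country_code TEXT, amount REAL, country_name TEXT)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :country_code, :amount, :country_name)",
        rows,
    )
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")  # stand-in for a data warehouse
    clean, rejected = validate(ingest(RAW_CSV))
    store(enrich(transform(clean)), conn)
    print(conn.execute("SELECT * FROM orders").fetchall())
    print(f"rejected {len(rejected)} row(s)")
```

Running this loads the two valid rows into the store and quarantines the malformed one. In a real pipeline each function would typically be a separately scheduled step, with intermediate results persisted (e.g., in a landing zone) rather than passed in memory.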