
Data Streaming Framework

Abzooba Admin – Presales Team


Traditional extract, transform, load (ETL) solutions have, by necessity, evolved into real-time data streaming solutions as digital businesses have increased both the speed of executing transactions and the need to share larger volumes of data across systems faster.

Traditional ETL solutions fall short in two key areas:

  1. Real-time data ingestion
  2. Real-time data delivery

All the activities of the transformation phase of ETL, such as data cleansing, enrichment, and processing, need to run more frequently as the number of data sources and the volume of data skyrocket. Traditional batch ETL processes also obstruct real-time business insights, because data reaches machine learning and AI algorithms only after each batch completes.

Real time data streaming using a framework like Apache Kafka offers the following advantages over batch processing ETL:

  1. Automatically extract, transform, and load data as continuous, real-time streams
  2. Enhance operational efforts and reduce work
  3. Deliver data from up-to-date sources, whether it comes from hundreds of millions of daily events across different devices, locations, cloud platforms, or physical servers
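The first advantage above, continuous extract-transform-load, can be sketched as a consume-transform-produce loop. This is a minimal illustration using Python's `queue` module as a stand-in for Kafka topics; the topic names and the transform logic are illustrative assumptions, not part of the framework described here.

```python
import json
import queue

# Stand-ins for Kafka topics: in a real deployment these would be a
# Kafka consumer on a raw-events topic and a producer to a clean-events topic.
source_topic = queue.Queue()
sink_topic = queue.Queue()

def transform(event: dict) -> dict:
    """Clean and enrich a single event as it streams through."""
    return {
        "user": event["user"].strip().lower(),    # cleansing
        "amount": float(event["amount"]),         # type normalization
        "source": event.get("source", "unknown")  # enrichment with a default
    }

def run_stream_etl():
    """Extract, transform, and load events until the source drains."""
    while not source_topic.empty():
        raw = source_topic.get()
        sink_topic.put(transform(json.loads(raw)))

source_topic.put(json.dumps({"user": "  Alice ", "amount": "9.99"}))
run_stream_etl()
print(sink_topic.get())  # {'user': 'alice', 'amount': 9.99, 'source': 'unknown'}
```

In production the loop would run indefinitely against a live consumer, processing each event the moment it arrives rather than waiting for a batch window.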

Solution Overview

Abzooba has developed a real-time streaming framework to facilitate the ingestion and processing of streaming unstructured data.

The framework allows ingestion from several sources such as:

  1. Social Media sources
  2. Paid subscription sources
  3. Publications and News Articles, etc.

The framework comes with prebuilt processors to facilitate the following tasks:

  1. Clean & Ingest
  2. Remodel & Enrich
  3. Implement machine learning pipelines, including a recommender pipeline

The data streaming pipeline architecture additionally offers scalability in the following ways:

  1. Grow both horizontally and vertically to support a high number of users and a large volume of content
  2. Support multiple data science projects simultaneously

Architecture Overview

The key pillars of the architecture of the solution are as follows:

  1. Event Sourcing pipeline – Kafka
  2. Real-time streaming
  3. Ingestion and transformation of data
  4. Analytics processes, triggered directly from the pipeline
  5. Storage tiers:
     - Hot: Redis, in-memory shared cache
     - Warm: SQL database, document database
     - Cold: S3, data lake
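The hot/warm/cold split above implies a tiered read path: look in the fastest tier first and fall back to slower, cheaper tiers on a miss. A minimal sketch, using plain dictionaries to stand in for Redis, a SQL database, and S3 (the tier contents and promotion policy are assumptions for illustration):

```python
# Three storage tiers, fastest first; keys and values are illustrative.
hot = {}                       # e.g. Redis in-memory cache
warm = {"doc-1": "article"}    # e.g. SQL / document database
cold = {"doc-2": "archive"}    # e.g. S3 / data lake

def read(key: str):
    """Check hot -> warm -> cold; promote hits into the hot tier."""
    for tier in (hot, warm, cold):
        if key in tier:
            hot[key] = tier[key]  # promote so repeat reads stay cheap
            return tier[key]
    return None

print(read("doc-2"))   # "archive" (fetched from cold, now cached in hot)
print("doc-2" in hot)  # True
```

This is how the tunable cost-versus-performance tradeoff mentioned below typically plays out: frequently accessed data stays in the expensive hot tier, while the bulk sits in cheap cold storage.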

The architecture was envisioned with the following considerations in mind:

  1. Right sized tradeoff, Cost vs. Performance (tunable)
  2. Horizontal and Vertical on-demand scaling of workloads
  3. Cloud agnostic platform, hybrid cloud capability
  4. Modularity, pluggability, and upgradeability
  5. Event Driven Architecture (EDA)

Event Driven Architecture enables organizations to fully leverage the power of real time data streaming frameworks like Apache Kafka in the following ways:

  1. EDA is particularly well suited to the loosely coupled structure of complex engineered systems. Components can remain autonomous, being capable of coupling and decoupling into different networks in response to different events. Thus, components can be used and reused by many different networks
  2. Since events are recorded as they occur, enterprises have access to all the data and context they need to make the best decisions
  3. EDA is scalable because it is implemented using a modern distributed, and fault-tolerant architecture. Loosely coupled computing nodes work together to form a cohesive event-processing engine with unlimited scale. This decoupled and distributed environment gives the organization the power and peace of mind to handle any set of workloads
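The loose coupling described in the first point can be sketched with a minimal in-process event bus: components subscribe to event types, and the publisher neither knows nor cares which components are listening. All names here are illustrative, not part of the framework itself.

```python
from collections import defaultdict

# Minimal event bus: handlers couple and decouple per event type,
# so components stay autonomous and reusable across networks.
_handlers = defaultdict(list)

def subscribe(event_type, handler):
    _handlers[event_type].append(handler)

def unsubscribe(event_type, handler):
    _handlers[event_type].remove(handler)

def publish(event_type, payload):
    """Record the event and fan it out to every subscribed component."""
    for handler in list(_handlers[event_type]):
        handler(payload)

seen = []
subscribe("article.ingested", lambda p: seen.append(("enricher", p)))
subscribe("article.ingested", lambda p: seen.append(("indexer", p)))
publish("article.ingested", {"id": 42})
print(seen)  # [('enricher', {'id': 42}), ('indexer', {'id': 42})]
```

In a distributed deployment, Kafka plays the role of the bus: topics replace the `_handlers` dictionary, and consumer groups replace the handler lists, which is what makes the pattern fault-tolerant and horizontally scalable.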

Ingestion & Streaming Unstructured Data:

  1. Events are first ingested from raw data sources and fed into a Kafka stream to be processed by the standardization module
  2. The standardization module standardizes the data, since it is picked up from multiple sources in multiple formats
  3. Standardized data is then analyzed using Cognitive Data Enrichment, which comprises Named Entity Recognition, Document Relevancy, and Topic Mapping processes; the outputs from these processes are then fed to corresponding Kafka streams (K1, K2, …, K10)
  4. The Document Assembler combines all three outputs from the Kafka streams and pushes the document for further enrichment
  5. Enriched data is received by the Cognitive Content Coordinator process, which persists the data in S3, a SQL database, and Redis
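The fan-out/fan-in in steps 3 and 4 can be sketched as three enrichment processes each producing a partial result, joined back together by the assembler. The enrichment logic below is a hypothetical placeholder; only the shape of the flow mirrors the description above.

```python
# Three enrichment processes; each would emit to its own Kafka stream (K1..K10).
# The logic here is placeholder code standing in for the real NLP models.
def named_entities(doc):
    return {"entities": doc["text"].split()[:1]}

def relevancy(doc):
    return {"relevant": len(doc["text"]) > 10}

def topic_mapping(doc):
    return {"topics": ["streaming"]}

def assemble(doc):
    """Combine all three enrichment outputs into one enriched document."""
    enriched = dict(doc)
    for process in (named_entities, relevancy, topic_mapping):
        enriched.update(process(doc))  # in production: read from the streams
    return enriched

doc = {"id": "a1", "text": "Kafka streams real-time data"}
print(assemble(doc)["topics"])  # ['streaming']
```

Keying every partial result by document id is what lets the assembler join outputs that arrive from independent streams at different times.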

Machine Learning Pipeline (Recommender pipeline for article similarity):

  1. In parallel, the normalized data is ingested and processed by the Document Similarity and Document Clustering processes
  2. The article is updated with the similarity score and clustering information as those results are received by the content coordinator
  3. The Cognitive API process is invoked by the user interface to fetch recommendations for users
  4. Once the data is received by the Recommender, it is cached in memory (Redis) for fast recommendations
  5. The Cognitive API captures the clickstream data and previous recommendations for the user and records them in Kafka streams, which are in turn ingested by the Cognitive User Coordinator process
  6. The Cognitive User Coordinator persists the data in the storage layers for faster recommendations
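The in-memory caching in step 4 follows a standard compute-once, serve-many pattern. A minimal sketch, using a dictionary in place of Redis; the scoring function is a hypothetical stand-in for the similarity and clustering output:

```python
cache = {}  # stand-in for the Redis recommendation cache

def compute_recommendations(user_id: str) -> list:
    """Placeholder for the recommender's similarity/clustering output."""
    return [f"article-{i}" for i in range(3)]

def get_recommendations(user_id: str) -> list:
    """Serve from cache when possible; compute and cache on a miss."""
    if user_id not in cache:
        cache[user_id] = compute_recommendations(user_id)
    return cache[user_id]

print(get_recommendations("u1"))  # ['article-0', 'article-1', 'article-2']
print("u1" in cache)              # True
```

In the architecture above, the user coordinator would also invalidate or refresh these cache entries as new clickstream events arrive on the Kafka streams.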


Design Highlights

  1. Data is persisted in three different layers, according to the relevancy and needs of the processes currently executing, delivering cost benefits without impacting performance
  2. Processes have been split based on computational and persistence requirements, freeing CPU and memory for the processes that need them most
  3. Processes like the content and user coordinators manage persistence and latency, thereby reducing memory and CPU usage
  4. EDA provides options to scale the solution according to the volume of articles to be processed and the number of end users


We look forward to receiving your feedback. You can send questions or suggestions to contact@abzooba.com.