
14 Top Data Pipeline Key Terms Explained

Here are some key terms commonly used in data pipelines:

1. Data Sources
Definition: Points where data originates (e.g., databases, APIs, files, IoT devices).
Examples: Relational databases (PostgreSQL, MySQL), APIs, cloud storage (S3), streaming data (Kafka), and on-premise systems.

2. Data Ingestion
Definition: The process of importing or collecting raw data from various sources into a system for processing or storage.
Methods: Batch ingestion, real-time/streaming ingestion.

3. Data Transformation
Definition: Modifying, cleaning, or enriching data to make it usable for analysis or storage.
Examples: Data cleaning (removing duplicates, fixing missing values), data enrichment (joining with other data sources), ETL (Extract, Transform, Load), ELT (Extract, Load, Transform).

4. Data Storage
Definition: Locations where data is stored after ingestion and transformation.
Types:
Data Lakes: Store raw, unstructured, or semi-structured data (e.g., S3, Azure Data Lake).
Data Warehous...
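The stages above fit together as one flow, and a toy example makes that flow easier to see. Below is a minimal batch-ETL sketch in plain Java; everything in it is illustrative, not from any particular tool: the in-memory list stands in for a source such as PostgreSQL or S3, and printing stands in for loading into a warehouse.

// MiniPipeline.java: the four stages from the list above, in miniature.
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class MiniPipeline {
    public static void main(String[] args) {
        // 1 + 2. Data source and batch ingestion: read the raw records
        // (an in-memory list stands in for PostgreSQL, S3, Kafka, etc.).
        List<String> raw = List.of("alice@example.com", "BOB@EXAMPLE.COM",
                "alice@example.com", "  carol@example.com ");

        // 3. Transformation: clean (trim, normalize case) and deduplicate,
        // preserving arrival order with a LinkedHashSet.
        Set<String> cleaned = raw.stream()
                .map(String::trim)
                .map(String::toLowerCase)
                .collect(Collectors.toCollection(LinkedHashSet::new));

        // 4. Storage: load into the target store (printing stands in for
        // a warehouse or data-lake write).
        cleaned.forEach(System.out::println);
    }
}

In a real pipeline each stage would be a separate, monitored step (ingestion job, transformation job, load job), but the shape of the flow is exactly this.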

Apache Storm Architecture Tutorial Flowchart

There are two main reasons why Apache Storm is so popular. The first is that it can connect to many data sources. The second is that it is scalable. A further advantage is that it is fault-tolerant, which means guaranteed data processing.

In Hadoop, MapReduce jobs do the data analytics; in Storm, the topology is the real data processor. The coordination between Nimbus and the Supervisors is carried out by ZooKeeper.

Hadoop jobs are similar to a Storm topology, but jobs run on a defined schedule, whereas in Storm the topology runs forever. A topology consists of many worker processes spread across many machines. A topology is a pre-defined design for getting an end product from your data. It comprises two parts: spouts and bolts. The spout is the funnel of the topology.

Two nodes in Storm
Master Node: similar to the Hadoop JobTracker. It runs a daemon called Nimbus.
Worker Node: runs a daemon called Supervisor. The Supervisor listens for the work assigne...
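To make the spout/bolt split concrete, here is a minimal topology sketch, assuming the Storm 2.x Java API (package org.apache.storm); the class names SentenceSpout and SplitBolt are illustrative, not from this post. The spout funnels sentences into the topology, the bolt splits them into words, TopologyBuilder wires the two together, and LocalCluster runs everything in-process the way Supervisor-managed workers would on a real cluster.

// SimpleTopology.java: one spout feeding one bolt, wired by TopologyBuilder.
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class SimpleTopology {

    // Spout: the funnel that feeds raw data into the topology.
    public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;

        @Override
        public void open(Map<String, Object> conf, TopologyContext context,
                         SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            // A real spout would pull from Kafka, a queue, an API, etc.
            collector.emit(new Values("the quick brown fox"));
            Utils.sleep(1000);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Bolt: a processing step; here it splits each sentence into words.
    public static class SplitBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            for (String word : tuple.getStringByField("sentence").split(" ")) {
                collector.emit(new Values(word));
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout(), 1);
        builder.setBolt("splitter", new SplitBolt(), 2).shuffleGrouping("sentences");

        // Run in-process for ten seconds; a production topology would be
        // submitted to Nimbus with StormSubmitter and would run forever.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("demo", new Config(), builder.createTopology());
        Thread.sleep(10_000);
        cluster.shutdown();
    }
}

Note how the parallelism hints in setSpout and setBolt map onto the worker processes described above: Nimbus assigns the spout and bolt instances to Supervisors, which run them across the machines of the cluster.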