
Hadoop MapReduce Dataflow Tutorial

Here are the six stages of MapReduce, a processing model that is central to Hadoop. Traditionally, a whole file had to be read once and then divided manually, which is inconvenient. Hadoop instead provides the facility to read files of any size line by line, presenting each line as a key-value pair whose key is the line's byte offset.

This quick tutorial explains the dataflow in Hadoop MapReduce.


1. Dataflow Diagram



In this post, you will learn how a MapReduce job in Hadoop divides its input and processes it.


2. MapReduce Stages


MapReduce receives input and processes it in six stages, described below. Knowing these stages is helpful for interviews and projects alike.


MapReduce Stage-1


The file is taken as input for processing. Any file consists of a group of lines, and each of these lines will become a key-value pair of data. The whole file can be read out this way.

MapReduce Stage-2


In the next step, the file is split. Splitting divides the file into key-value pairs: the key is the byte offset of the line within the file, and the value is the content of the line. Each line is read individually, so there is no need to split the data manually. An illustration follows.
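As a rough illustration (the file contents here are made up), suppose the input file holds the two lines below. With Hadoop's default TextInputFormat, each line reaches the mapper as an (offset, line) pair, where the key is the byte offset of the line's first character:

    Input file:         Key-value pairs sent to the mapper:
    hello world         (0,  "hello world")
    hello hadoop        (12, "hello hadoop")

The second offset is 12 because "hello world" plus its newline occupies 12 bytes.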

MapReduce Stage-3


The further step is to process the value of each line: each word, separated by spaces, is emitted together with the count 1 against its own key. This is the "mapping" logic that the programmer writes; a sketch of such a mapper appears below.
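Here is a minimal mapper sketch in Java using the standard Hadoop MapReduce API (the class name WordCountMapper is just an illustrative choice). For every word in a line, it emits the pair (word, 1):

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Input key: byte offset of the line; input value: the line itself.
    // Output: (word, 1) for every word in the line.
    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(line.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);  // emit (word, 1)
            }
        }
    }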

MapReduce Stage-4


After that, shuffling is performed, and with this, each key gets associated with the group of numbers emitted for it in the mapping stage. Now the data becomes a key (a string) whose value is a list of numbers, and this goes as input to the reducer. A before-and-after view follows.
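Continuing the made-up two-line example, the shuffle groups the mappers' output by key (and sorts the keys):

    Mapper output:   ("hello", 1), ("world", 1), ("hello", 1), ("hadoop", 1)
    Reducer input:   ("hadoop", [1]), ("hello", [1, 1]), ("world", [1])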

MapReduce Stage-5


In the reducer phase, the numbers are added up: each key ends up associated with a final count, the sum of all the numbers in its list, which leads to the final result. A reducer sketch appears below.
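A matching reducer sketch, again using the standard Hadoop API (WordCountReducer is an assumed name), simply sums the list of counts for each word:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Input: a word plus the list of 1s the mappers emitted for it.
    // Output: (word, total count).
    public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable total = new IntWritable();

        @Override
        protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : counts) {
                sum += count.get();  // add up all the 1s for this word
            }
            total.set(sum);
            context.write(word, total);
        }
    }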

MapReduce Stage-6


The output of the reducer phase is the final result: the count of each individual word. This holds regardless of the size of the file used for processing.
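To tie the stages together, a hypothetical driver class can wire the mapper and reducer sketches above into a job; the input and output paths are taken from the command line:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCountMapper.class);
            job.setReducerClass(WordCountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input file or directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory (must not already exist)
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

For the made-up two-line input above, the output directory would contain:

    hadoop  1
    hello   2
    world   1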


Keep Reading
  1. Big Data and Hadoop: Learn by Example
