
10 Tricky Apache Storm Interview Questions

Apache Storm is a real-time computation system and a flagship project of the Apache Software Foundation. It can process in-stream data and integrates easily with traditional databases. The tricky and highly useful interview questions in this post are given for your quick reference. A commonly cited benchmark for Storm is a million tuples processed per second per node.

Tricky Interview Questions

1) What are the real uses of Storm?

A) Storm is used for real-time analytics, online machine learning, continuous computation, distributed RPC, and ETL.

2) What are the different layers available on top of Storm?
  • Flux
  • SQL
  • Streams API
  • Trident 
3) What is the real use of the SQL API on top of Storm?

A) It lets you run SQL queries against streaming data.
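Storm's SQL layer compiles a query into a topology that filters and projects tuples as they arrive. As a plain-Python sketch of what a streaming `SELECT ... WHERE` does conceptually (this is not Storm's SQL API, and the field names are hypothetical):

```python
# Plain-Python sketch of a streaming "SELECT id, temp FROM readings WHERE temp > 30".
# Conceptual illustration only, not Storm SQL; data and names are hypothetical.

def streaming_select(stream, predicate, projection):
    """Apply a WHERE-style filter and a SELECT-style projection, tuple by tuple."""
    for tup in stream:
        if predicate(tup):
            yield projection(tup)

readings = [
    {"id": 1, "temp": 25.0},
    {"id": 2, "temp": 31.5},
    {"id": 3, "temp": 40.2},
]

hot = list(streaming_select(
    readings,
    predicate=lambda t: t["temp"] > 30,
    projection=lambda t: (t["id"], t["temp"]),
))
print(hot)  # [(2, 31.5), (3, 40.2)]
```

The key point is that the query runs continuously over tuples rather than over a stored table.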

4) What are the most popular integrations with Storm?
  1. HDFS
  2. Cassandra
  3. JDBC
  4. Hive
  5. HBase
5) What container and resource-manager integrations are possible with Storm?
  1. YARN
  2. Docker
  3. Mesos
6) What is Local Mode?

A) Local Mode runs a topology in-process on a single machine, which is useful for developing and testing before deploying to a cluster.
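In Local Mode, the whole topology (spouts and bolts) executes inside one process instead of across a cluster. A plain-Python sketch of that idea, wiring a spout to two bolts in-process (component names are hypothetical; real Storm local mode uses the JVM `LocalCluster` class):

```python
# Conceptual sketch of local-mode execution: spout and bolts run in one process.
# Hypothetical names; not the Storm API.

def word_spout():
    """Spout: emits a finite stream of sentences (real spouts are unbounded)."""
    yield "storm processes streams"
    yield "storm is fast"

def split_bolt(sentences):
    """Bolt: splits each sentence into words."""
    for sentence in sentences:
        for word in sentence.split():
            yield word

def count_bolt(words):
    """Bolt: counts word occurrences."""
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts

# "Submitting" the topology locally: just wire the components together in-process.
counts = count_bolt(split_bolt(word_spout()))
print(counts["storm"])  # 2
```

In cluster mode the same components would be distributed across worker nodes; local mode keeps the wiring identical but collapses it into a single process.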

7) Where are all the events stored in Storm?
A) Storm's event logger mechanism records all events.

8) What are the serializable data types in Storm?
A) Out of the box, Storm can serialize primitive types, strings, byte arrays, ArrayLists, HashMaps, and HashSets.
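As an illustration of that default set, here is a small plain-Python checker (a hypothetical helper, not a Storm API) that maps the Java types above to a membership test:

```python
# Hypothetical helper illustrating Storm's default-serializable set.
# Java types mapped to Python for illustration: primitives/strings -> int/float/bool/str,
# byte arrays -> bytes, ArrayList -> list, HashMap -> dict, HashSet -> set.

DEFAULT_SERIALIZABLE = (int, float, bool, str, bytes, list, dict, set)

def is_default_serializable(value):
    """Return True if the value maps to a type Storm serializes by default."""
    return isinstance(value, DEFAULT_SERIALIZABLE)

print(is_default_serializable([1, 2, 3]))  # True
print(is_default_serializable(object()))   # False: would need a custom serializer
```

Types outside this set require registering a custom serializer with Storm.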

9) What are hooks in Storm?
A) Hooks let you place custom code inside Storm that runs automatically on many events, such as when a tuple is emitted or acknowledged.
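A hook is essentially a set of callbacks that Storm fires on task events. A plain-Python sketch of that callback pattern (the event names "emit" and "ack" here are hypothetical illustrations, not Storm's actual hook interface):

```python
# Conceptual sketch of Storm-style hooks: registered callbacks fired on events.
# Event names are hypothetical; not the Storm ITaskHook API.

class HookRegistry:
    def __init__(self):
        self.hooks = {}

    def register(self, event, callback):
        """Attach custom code to an event."""
        self.hooks.setdefault(event, []).append(callback)

    def fire(self, event, payload):
        """Run every callback registered for this event."""
        for callback in self.hooks.get(event, []):
            callback(payload)

seen = []
registry = HookRegistry()
registry.register("emit", lambda tup: seen.append(("emit", tup)))
registry.register("ack", lambda tup: seen.append(("ack", tup)))

registry.fire("emit", ("word", "storm"))
registry.fire("ack", ("word", "storm"))
print(seen)  # [('emit', ('word', 'storm')), ('ack', ('word', 'storm'))]
```

The custom code runs every time the matching event occurs, which is what makes hooks useful for metrics and debugging.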

10) What is the joining of streams?
A) Streams from different sources can be joined on a particular join condition, typically a common field.
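A join matches tuples from two streams on a shared field. A minimal plain-Python sketch of an inner join on a key field (hypothetical data; not Storm's JoinBolt, which additionally requires windowing because real streams are unbounded):

```python
# Conceptual inner join of two keyed streams on "user_id" (hypothetical data).

clicks = [{"user_id": 1, "page": "/home"}, {"user_id": 2, "page": "/about"}]
purchases = [{"user_id": 1, "item": "book"}]

def join_streams(left, right, key):
    """Inner-join two batches of tuples on a shared key field."""
    index = {}
    for tup in right:
        index.setdefault(tup[key], []).append(tup)
    for tup in left:
        for match in index.get(tup[key], []):
            yield {**tup, **match}

joined = list(join_streams(clicks, purchases, "user_id"))
print(joined)  # [{'user_id': 1, 'page': '/home', 'item': 'book'}]
```

In a streaming system the join is computed over time windows of each stream rather than over complete batches as shown here.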


Apache Spark vs Apache Storm vs Tableau

  • Storm is a super-fast stream processing engine for big data analytics
  • Tableau is a data warehousing presentation and visualization tool
  • Spark is a cluster computing engine with built-in fault tolerance
