
Big Data Quiz 1: Hadoop Top Interview Questions

In this post, I have given a quiz on Big Data with answers. This is the part-1 set of questions for your quick reference.
Q.1) How does Hadoop achieve scaling in terms of storage?
A. By increasing the hard disk capacity of the machine
B. By increasing the RAM capacity of the machine
C. By increasing both the hard disk and RAM capacity of the machine
D. By increasing the hard disk capacity of the machine and by adding more machines

Q.2) How is fault tolerance with respect to data achieved in Hadoop?
A. By breaking the data into smaller blocks and distributing these smaller blocks across several machines
B. By adding extra nodes
C. By breaking the data into smaller blocks, copying each block several times, and distributing these replicas across several machines. This way, even if a machine fails, a replica of its data is still present on some other machine
D. None of these
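
If you want to see this block replication for yourself, the fsck report lists every block of a file and the DataNodes holding its replicas. A quick sketch, assuming a running cluster and a file at the hypothetical HDFS path /user/srini/sample:

    # Report files, blocks, and the DataNode locations of each replica
    hadoop fsck /user/srini/sample -files -blocks -locations

Each block line in the output shows the replica count and the machines holding a copy of that block.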

Q.3) On which parameters does Hadoop scale up?
A. Storage only
B. Performance only
C. Storage and performance both
D. Storage, performance and IO bandwidth

Q.4) What is the scalability limit of Hadoop?
A. NameNode’s RAM
B. NameNode’s hard disk
C. Both Hard disk and RAM of the NameNode
D. Hadoop can scale up to any limit

Q.5) How does Hadoop read data faster?
A. Hadoop uses high-end machines which have lower disk latency
B. Hadoop minimizes disk seeks by reading the full block of data at once
C. By adding more machines to the cluster, so that it can read the data faster
D. By increasing the hard disk size of the machine where data is stored

Q.6) What is HDFS?
A. HDFS is a regular file system like any other file system, and you can perform any operations on HDFS
B.  HDFS is a layered file system on top of your native file system, and you can do all the operations you want
C.  HDFS is a layered file system which modifies the local file system in such a way that you can perform any operations
D.  HDFS is a layered file system on top of your local file system; it does not modify the local file system, and there are some restrictions on the operations you can perform

Q.7) When you put a file on HDFS, what does it do?
A.  The file is broken into blocks, each block is replicated, the replicas are distributed across the machines, and the NameNode updates its metadata
B.  The file is replicated and distributed across several machines, and the NameNode updates its metadata
C.  The file is broken into blocks, each block is replicated and distributed across machines, and the DataNodes update their metadata
D.  The file is kept as it is on the machines, along with the replicas

Q.8) When you put files on HDFS, where does HDFS store its blocks?
A.  On HDFS
B.  On the NameNode’s local file system
C.  On the DataNodes’ local file systems
D.  Blocks are placed on both the NameNode’s and the DataNodes’ local file systems, so that if a DataNode goes down, the NameNode can replicate the data from its own local file system
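
As a side note, you can print the local directories a DataNode uses for block storage. This assumes Hadoop 2.x or later, where the hdfs getconf utility is available; on older 1.x releases the property was named dfs.data.dir:

    # Print the DataNode's local storage directories for HDFS blocks
    hdfs getconf -confKey dfs.datanode.data.dir

Under these directories, the blocks appear as ordinary blk_* files on each DataNode’s local file system.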
  
Q.9) What if the NameNode goes down?
A. The Secondary NameNode takes charge and starts serving the DataNodes
B. The NameNode is a single point of failure; the administrator has to manually start the NameNode. Till then, HDFS is inaccessible.
C. The Secondary NameNode asks one of the DataNodes to take up the role of the NameNode, so that there is no interruption in service
D.  None of these
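
For reference, manually restarting a failed NameNode looks like the sketch below. This assumes a classic non-HA cluster and that the Hadoop sbin directory is on the PATH, as in stock Hadoop 2.x tarballs; newer HA setups with a standby NameNode avoid this single point of failure:

    # Manually restart the NameNode daemon on the master machine
    hadoop-daemon.sh start namenode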

Q.10) Does Hadoop efficiently solve every kind of problem?
A. Yes, it is like any other framework and is capable of solving any problem efficiently
B. Hadoop can solve those problems very efficiently where the data records are independent of each other
C. Hadoop can solve only data-intensive problems efficiently
D. Hadoop can solve only computational intensive problems efficiently

Q.11) If a file is broken into blocks and distributed across machines, how do you read the file back?
A.  You search each of the DataNodes, ask each DataNode for its list of blocks, and then check each of the blocks and read the appropriate ones
B.  You ask the NameNode, and since the NameNode has the meta information, it reads the data from the DataNodes and gives the file back to you
C.  You ask the NameNode, and since the NameNode has the meta information, it gives you the list of DataNodes hosting the blocks; you then go to each of those DataNodes and read the blocks
D.  You directly read the files from HDFS
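
To read a file back in practice, the client only asks the NameNode for block locations; the actual bytes come from the DataNodes. From the command line, both of the following trigger that read path (assuming the hypothetical file sample_hdfs exists in your HDFS home directory):

    # Print the file's contents to the terminal
    hadoop fs -cat sample_hdfs

    # Copy the file from HDFS back to the local file system
    hadoop fs -get sample_hdfs /tmp/sample_copy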

Q.12) What is the command to copy a file from the client’s local machine to HDFS? Assume a file named “sample” is present in the “/usr/local” directory, and the client wants to copy it to HDFS under the name “sample_hdfs”.
A. hadoop fs -cp /usr/local/sample sample_hdfs
B. hadoop fs -copyFromLocal /usr/local/sample sample_hdfs
C. hadoop fs -get sample_hdfs /usr/local/sample
D. hadoop fs -put sample_hdfs /usr/local/sample
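
For hands-on practice with the same hypothetical /usr/local/sample file, note that -copyFromLocal and -put behave the same way for this case, and that a relative destination such as sample_hdfs lands in your HDFS home directory:

    # Copy a local file to HDFS, then verify it arrived
    hadoop fs -copyFromLocal /usr/local/sample sample_hdfs
    hadoop fs -ls sample_hdfs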

Q.13) Will the following command execute successfully, or will it throw an exception: “hadoop fs -setrep 0 sample”, where “sample” is a file present on HDFS?
A.  This command will not throw any exception
B.  This command might throw an exception when the size of the sample file is greater than the block size
C.  This command will throw an exception, as you cannot set the replication factor to 0
D.  This command will throw an exception only when the size of the sample file is less than the block size
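
To experiment safely with replication on the same hypothetical sample file, set a positive replication factor instead; the -w flag makes the command wait until the replicas are actually in place:

    # Set the replication factor of an HDFS file to 2 and wait for it to take effect
    hadoop fs -setrep -w 2 sample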

Q.14) There are two files, file_1 and file_2, on HDFS under the directory “foo”. What is the result of the command hadoop fs -getmerge foo foo?
A.  It will create a directory “foo” on the local file system, and file_1 and file_2 will be copied into this directory
B.  It will create a file “foo” on the local file system with the contents of file_1 and file_2 merged into this file
C.  This will throw an exception, as the getmerge command works only on files, not on directories
D.  This command will throw an exception, as both the source and destination are the same. They need to be different if this operation needs to be performed.
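
You can try this one directly; a sketch assuming an HDFS directory foo containing file_1 and file_2, using a destination name different from the source so the result is easier to inspect:

    # Merge all files under the HDFS directory foo into one local file
    hadoop fs -getmerge foo merged_foo
    cat merged_foo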
