
Hadoop: How to find which file is healthy

Hadoop provides a file system health check utility called "fsck". It checks the health of all the files under a given path; pointed at '/' (the root), it checks every file in HDFS.
  • bin/hadoop fsck / - checks the health of all the files in HDFS
  • bin/hadoop fsck /test/ - checks the health of the files under the given path
By default, the fsck utility does not act on under-replicated or over-replicated blocks; Hadoop heals those blocks itself.
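As a quick sketch (the /test/ path is just a placeholder), the two forms look like this from the shell; the grep trick for hiding the dot-per-file output is a common idiom, not part of fsck itself:

    # Check every file in HDFS, starting from the root
    bin/hadoop fsck /

    # Check only the files under a sample directory
    bin/hadoop fsck /test/

    # Filter out the dots so only problem reports and the summary remain
    bin/hadoop fsck / | grep -Ev '^\.+$'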

How to find which file is healthy

  • It prints a dot ('.') for each healthy file.
  • For each file that is not healthy, it prints a message, and it also reports under-replicated blocks, over-replicated blocks, mis-replicated blocks, and corrupt blocks.
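For illustration only, a clean run looks roughly like the abridged output below; the field names match what fsck prints, but the figures here are made up:

    $ bin/hadoop fsck /
    ....................Status: HEALTHY
     Total size:    104857600 B
     Total files:   20
     Total blocks (validated):  20
     Corrupt blocks:            0
     Under-replicated blocks:   0 (0.0 %)
     Over-replicated blocks:    0 (0.0 %)
    The filesystem under path '/' is HEALTHY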

How to delete corrupted blocks

  • bin/hadoop fsck <path> -delete
  • It deletes the corrupted files found under the path.
  • bin/hadoop fsck <path> -move
  • It moves the corrupted files to the /lost+found directory.
  • Other options we can use with fsck (combined in the sketch after this list):
    • -files - prints each file being checked
    • -blocks - prints the block report for each file
    • -locations - prints the DataNode locations of every block
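Putting it together, a sketch of a typical cleanup session; /test/ is a placeholder path, and since -delete is irreversible, -move is usually tried first:

    # Print per-file detail: the files checked, their blocks, and where each block lives
    bin/hadoop fsck /test/ -files -blocks -locations

    # Move files containing corrupt blocks into /lost+found
    bin/hadoop fsck /test/ -move

    # Or delete the corrupted files outright
    bin/hadoop fsck /test/ -delete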
