
The Story: Why Hadoop Data Costs Less Than the Traditional ETL Path

Traditional data warehouse

That isn’t to say that Hadoop can’t be used for structured data that is readily available in raw format, because it can. In addition, when you consider where data should be stored, you need to understand how data is stored today and what features characterize your persistence options.
  • Consider your experience with storing data in a traditional data warehouse. Typically, this data goes through a lot of rigor to make it into the warehouse.
  • Builders and consumers of warehouses have it etched in their minds that the data they are looking at in their warehouses must shine with respect to quality; consequently, it’s cleaned up via cleansing, enrichment, matching, glossary, metadata, master data management, modeling, and other services before it’s ready for analysis (a minimal sketch of one such cleansing step follows this list).
  • Obviously, this can be an expensive process. Because of that expense, it’s clear that the data that lands in the warehouse is deemed not just high value but broadly useful: it’s going to go places and will be used in reports and dashboards where the accuracy of that data is key.
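
To make that rigor concrete, here is a minimal, hypothetical sketch of the kind of cleansing, enrichment, and validation a single record might go through before a warehouse load. The field names and rules are illustrative assumptions, not taken from any particular ETL product.

from datetime import datetime, timezone

def cleanse_customer_record(raw: dict) -> dict:
    """Standardize one raw customer record before a warehouse load (illustrative only)."""
    record = {
        # Cleansing: trim whitespace and normalize case so later matching/dedup works.
        "customer_id": str(raw.get("customer_id", "")).strip(),
        "name": " ".join(str(raw.get("name", "")).split()).title(),
        "email": str(raw.get("email", "")).strip().lower(),
    }
    # Enrichment: stamp the record with load metadata for lineage.
    record["load_ts"] = datetime.now(timezone.utc).isoformat()
    # Validation: flag records that can't be matched or trusted downstream.
    record["valid"] = bool(record["customer_id"]) and "@" in record["email"]
    return record

raw_rows = [
    {"customer_id": " 42 ", "name": "  ada   LOVELACE ", "email": "Ada@Example.COM"},
    {"customer_id": "", "name": "unknown", "email": "not-an-email"},
]
print([cleanse_customer_record(r) for r in raw_rows])

Multiply that effort by every column, every source system, and every quality rule, and the expense described above becomes easy to see.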
Big Data in Hadoop

Big Data repositories rarely undergo (at least initially) the full quality-control rigor of data being ingested into a warehouse. Not only is prepping data for some of the newer analytic methods characterized by Hadoop use cases cost prohibitive (which we talk about in the next chapter), but the data isn’t likely to be distributed in the way data warehouse data is. We could say that data warehouse data is trusted enough to be “public,” while Hadoop data isn’t as trusted (“public” here can mean vastly distributed within the company, not for external consumption), and although this will likely change in the future, today this is something that experience suggests characterizes these repositories.

In a warehouse, specific pieces of data have been stored based on their perceived value, and therefore any information beyond those pre-selected pieces is unavailable. This is in contrast to a Hadoop-based repository, where the entire business entity is likely to be stored and the fidelity of the Tweet, transaction, Facebook post, and more is kept intact.
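
As a minimal sketch of that idea (assuming a local folder standing in for an HDFS or object-store landing zone, with made-up paths and field names), raw events can be landed as-is in newline-delimited JSON, partitioned by date, so every field is still available when a new question comes up later:

import json
import os
from datetime import date

def land_raw_event(event: dict, base_dir: str = "landing/tweets") -> str:
    """Append one raw event, untouched, to today's partition (illustrative paths)."""
    partition = os.path.join(base_dir, f"dt={date.today().isoformat()}")
    os.makedirs(partition, exist_ok=True)
    path = os.path.join(partition, "events.jsonl")
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")  # full fidelity: no fields dropped or reshaped
    return path

land_raw_event({"user": "example", "text": "hello", "lang": "en",
                "field_nobody_asked_about_yet": 123})

Contrast this with the warehouse path above, where only pre-selected, cleansed fields survive the load.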

Data in Hadoop might seem of low value today, or its value not yet quantified, but it can in fact be the key to questions yet unasked. IT departments pick and choose high-value data and put it through rigorous cleansing and transformation processes because they know that data has a high known value per byte (a relative measure, of course).

ETL and Big Data

Why else would a company put that data through so many quality control processes? 

Of course, since the value per byte is high, the business is willing to store it on relatively higher-cost infrastructure to enable interactive, often public, navigation by end-user communities, and the CIO is willing to invest in cleansing the data to increase its value per byte.
  • With Big Data, you should consider looking at this problem from the opposite view: with all the volume and velocity of today’s data, there’s just no way that you can afford to spend the time and resources required to cleanse and document every piece of data properly, because it’s just not economical.

What’s more, how do you know if this Big Data is even valuable? 

Are you going to go to your CIO and ask her to increase her capital expenditure (CAPEX) and operational expenditure (OPEX) fourfold to quadruple the size of your warehouse on a hunch?

For this reason, we like to characterize the initial, nonanalyzed raw Big Data as having a low value per byte; until it’s proven otherwise, you can’t afford to take the path to the warehouse. However, given the vast amount of data, the potential for great insight (and therefore greater competitive advantage in your own market) is quite high if you can analyze all of that data.
  • Now consider the idea of cost per compute, which follows the same pattern as the value-per-byte ratio. If you consider the focus on quality data in traditional systems we outlined earlier, you can conclude that the cost per compute in a traditional data warehouse is relatively high (which is fine, because it’s a proven and known higher value per byte), versus the cost of Hadoop, which is low; the toy calculation below illustrates the contrast.
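
Here is a toy calculation with entirely made-up numbers (none of them come from this text), only to illustrate the contrast: a warehouse holds a small amount of curated, high-value-per-byte data on expensive infrastructure, while Hadoop holds far more raw data on cheap infrastructure.

def cost_per_tb(total_cost: float, terabytes: float) -> float:
    return total_cost / terabytes

# Hypothetical figures, for illustration only.
warehouse = cost_per_tb(total_cost=2_000_000, terabytes=50)   # curated, cleansed data
hadoop = cost_per_tb(total_cost=300_000, terabytes=2_000)     # raw, uncleansed data

print(f"Hypothetical warehouse cost per TB: ${warehouse:,.0f}")  # $40,000
print(f"Hypothetical Hadoop cost per TB:    ${hadoop:,.0f}")     # $150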
Of course, other factors can indicate that certain data might be of high value yet never make its way into the warehouse, or there may be a desire to move it out of the warehouse onto a lower-cost platform; either way, you might need to cleanse some of that data in Hadoop, and IBM can do that (a key differentiator).

For example, unstructured data can’t be easily stored in a warehouse.

Indeed, some warehouses are built with a predefined corpus of questions in mind. Although such a warehouse provides some degree of freedom for query and mining, it may be constrained by what is in the schema (most unstructured data isn’t found there) and often by a performance envelope that can be a functional or operational hard limit. Again, as we’ll reiterate often in this book, we are not saying a Hadoop platform such as IBM InfoSphere BigInsights is a replacement for your warehouse; instead, it’s a complement.
  • A Big Data platform lets you store all of the data in its native business object format and get value out of it through massive parallelism on readily available components. For your interactive navigational needs, you’ll continue to pick and choose sources and cleanse that data and keep it in warehouses. But you can get more value out of analyzing more data (that may even initially seem unrelated) in order to paint a more robust picture of the issue at hand. 
Indeed, data might sit in Hadoop for a while, and once its value is proven and sustainable, it might migrate its way into the warehouse.
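
As a minimal sketch of that schema-on-read, parallel-analysis idea (assuming a local PySpark installation and the hypothetical newline-delimited JSON landing zone sketched earlier, rather than any specific BigInsights tooling), you could scan the raw, uncleansed events directly and let Spark infer the structure at query time:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-tweet-scan").getOrCreate()

# Schema-on-read: nothing was cleansed or modeled up front.
raw = spark.read.json("landing/tweets/dt=*/events.jsonl")

top_languages = raw.groupBy("lang").count().orderBy(F.desc("count"))
top_languages.show()

spark.stop()

Nothing had to be modeled or cleansed before the question could be asked; if the answer proves valuable, that is when the curation effort (and the warehouse load) is justified.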
