Hadoop is designed to process large volumes of data by connecting many commodity computers together to work in parallel.
The theoretical 1000-CPU machine described earlier would cost a very large amount of money, far more than 1,000 single-CPU machines or 250 quad-core machines.
Hadoop will tie these smaller and more reasonably priced machines together into a single cost-effective compute cluster.
HADOOP VS. EXISTING TECHNIQUES
Performing computation on large volumes of data has been done before, usually in a distributed setting.
What makes Hadoop unique is its simplified programming model, which allows the user to quickly write and test distributed systems, and its efficient, automatic distribution of data and work across machines, which in turn exploits the underlying parallelism of the CPU cores.
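To make this concrete, below is a minimal sketch of a job driver written against Hadoop's standard org.apache.hadoop.mapreduce API. The names WordCountDriver, WordCountMapper (sketched later in this article, after the discussion of record-oriented data) and WordCountReducer (a hypothetical class that sums per-word counts, not shown) are illustrative placeholders; the point is how little code is needed to launch a fully distributed computation.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);      // user-supplied map logic
        job.setReducerClass(WordCountReducer.class);    // user-supplied reduce logic
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS
        // The framework handles splitting the input, scheduling map tasks near
        // the data, shuffling intermediate results, and re-running failed tasks.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Everything else (splitting the data, scheduling tasks near it, recovering from failures) is handled by the framework rather than by user code, which is the contrast with the Condor-plus-MPI approach discussed next.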
Grid scheduling of computers can be done with existing systems such as Condor. But Condor does not automatically distribute data: a separate SAN must be managed in addition to the compute cluster.
Furthermore, collaboration between multiple compute nodes must be managed with a communication system such as MPI.
This programming model is challenging to work with and can lead to the introduction of subtle errors.
DATA DISTRIBUTION
In a Hadoop cluster, data is distributed to all the nodes of the cluster as it is being loaded in.
The Hadoop Distributed File System (HDFS) will split large data files into chunks which are managed by different nodes in the cluster.
In addition to this, each chunk is replicated across several machines so that a single machine failure does not result in any data being unavailable.
An active monitoring system then re-replicates the data in response to system failures, which could otherwise leave blocks only partially replicated.
Even though the file chunks are replicated and distributed across several machines, they form a single namespace, so their contents are universally accessible.
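Both the chunk (block) size and the number of replicas are ordinary HDFS settings (dfs.blocksize and dfs.replication). As a rough illustration of how files are split and replicated, the sketch below uses the standard org.apache.hadoop.fs.FileSystem API to print, for each block of a file, the hosts holding a replica; the file path passed on the command line is a placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Inspect how HDFS has split and replicated a file: for each block,
// list the hosts that hold a copy of that block.
public class ShowBlockLocations {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path(args[0]);             // e.g. a large file already loaded into HDFS
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
        for (int i = 0; i < blocks.length; i++) {
            System.out.printf("block %d: offset=%d length=%d hosts=%s%n",
                    i, blocks[i].getOffset(), blocks[i].getLength(),
                    String.join(",", blocks[i].getHosts()));
        }
    }
}

On a typical cluster with default settings, each block of a large file is reported on a different set of (by default) three hosts, which is the replication behavior described above.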
Data is conceptually record-oriented in the Hadoop programming framework. Individual input files are broken into lines or into other formats specific to the application logic.
Each process running on a node in the cluster then processes a subset of these records.
The Hadoop framework then uses knowledge from the distributed file system to schedule these processes close to where the data/records reside.
Since files are spread across the distributed file system as chunks, each compute process running on a node operates on a subset of the data.
The data a node operates on is chosen based on its locality to that node: most data is read straight from the local disk into the CPU, alleviating strain on network bandwidth and preventing unnecessary network transfers.
This strategy of moving computation to the data, instead of moving the data to the computation, allows Hadoop to achieve high data locality, which in turn results in high performance.
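As a concrete, purely illustrative example of this record-oriented model, the mapper below (the WordCountMapper referenced in the earlier driver sketch) is written against the standard org.apache.hadoop.mapreduce.Mapper API. With the default TextInputFormat, each record is one line of the input, keyed by its byte offset in the file, and each map task handles the records of one chunk.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Each map task receives a split of the input and is handed its records
// one at a time; here a record is a line of text keyed by its byte offset.
public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Process a single record: emit (word, 1) for every token on the line.
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

Because the framework tries to run each such map task on a node that already stores the chunk it reads, most of this record processing happens against local disk rather than over the network.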