Hadoop and Big Data

Big data is, for the most part, unstructured data, and it cannot be processed with traditional tools and methods.

The volume, velocity, and variety of big data will bring most technologies to their knees.

Hadoop was developed because it offered the most pragmatic way for companies to manage huge volumes of data.

Hadoop was originally built by Doug Cutting, then an engineer at Yahoo!, and is now an open source project managed by the Apache Software Foundation.

Hadoop is an Apache framework designed to solve big data problems. Its two core components are:

  • Hadoop Distributed File System (HDFS): A reliable, high-bandwidth, low-cost data storage cluster that facilitates the management of related files across machines.
  • MapReduce engine: A high-performance parallel/distributed data-processing implementation of the MapReduce algorithm (a minimal word-count sketch follows this list).
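
To make the MapReduce side concrete, here is a minimal sketch of the classic word-count job written against the Hadoop MapReduce Java API. The class name WordCount and the input/output paths are illustrative, not something from this post: the mapper emits a (word, 1) pair for every word it sees, and the reducer sums the counts for each word.

// Minimal word-count sketch using the Hadoop MapReduce Java API.
// Class name and paths are illustrative placeholders.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Assuming the class is packaged into a JAR, it can be submitted to a cluster with something like: hadoop jar wordcount.jar WordCount /user/me/input /user/me/output, where both paths are directories in HDFS.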

Hadoop is designed to process huge amounts of structured and unstructured data (terabytes to petabytes) and is implemented on racks of commodity servers as a Hadoop cluster.

I will add more in my next post. 
