All about the Hadoop ecosystem and its popular technologies

You have come this far because you already know about Hadoop. So, if you want to solve your big data problems, you need the right platform.

This platform is most popularly called the Hadoop ecosystem.

See the Hadoop ecosystem image below; I have taken it from Edureka for your quick reference. This ecosystem comprises 14 technologies. My intention here is to explain what each technology in this platform is used for.

The Hadoop ecosystem diagram below explains the overall architecture well. For beginners it is a good image to refer to and learn from quickly.

Hadoop Ecosystem (image: Edureka)

The following is the list of technologies involved:

  1. HDFS -> Hadoop Distributed File System
  2. YARN -> Yet Another Resource Negotiator
  3. MapReduce -> Data processing using programming
  4. Spark -> In-memory Data Processing
  5. Pig, Hive -> Data Processing Services using Query (SQL-like)
  6. HBase -> NoSQL Database
  7. Mahout, Spark MLlib -> Machine Learning
  8. Apache Drill -> SQL on Hadoop
  9. Zookeeper -> Managing Cluster
  10. Oozie -> Job Scheduling
  11. Flume, Sqoop -> Data Ingesting Services
  12. Solr & Lucene -> Searching & Indexing
  13. Ambari -> Provision, Monitor and Maintain cluster
  14. Hadoop -> The overall framework on which the ecosystem runs


Let us see in detail…

  • HDFS
    • HDFS is the storage layer, which makes it possible to store different types of large data sets (i.e. structured, semi-structured and unstructured data). (A small Java sketch using the HDFS API appears after this list.)
  • YARN
    • YARN handles resource management for the cluster; its two main components are the ResourceManager and the NodeManager.
  • MAP REDUCE
    • MapReduce is a software framework which helps in writing applications that process large data sets using distributed and parallel algorithms inside the Hadoop environment. (See the word-count sketch after this list.)
  • SPARK
    • Apache Spark is a framework for real-time, in-memory data analytics in a distributed computing environment. (See the Spark word-count sketch after this list.)
  • HIVE
    • Facebook created Hive for people who are fluent in SQL, so Hive makes them feel at home while working in the Hadoop ecosystem; queries are written in an SQL-like language called HiveQL. (See the JDBC sketch after this list.)
  • PIG
    • Pig has two parts: Pig Latin, the language, and the Pig runtime, the execution environment. You can think of them the way you think of Java and the JVM.
    • Pig Latin has an SQL-like command structure.
  • HBASE
    • HBase is a NoSQL database that runs on top of HDFS. (See the HBase client sketch after this list.)
  • MAHOUT, SPARK MLlib
    • Mahout provides an environment for creating machine learning applications which are scalable.
    • Spark MLlib is Spark's machine learning library and serves the same purpose on top of Spark (see item 7 in the list above).
  • DRILL
    • It is an open-source counterpart of Google Dremel.
    • It supports many kinds of NoSQL databases and file systems, which is a powerful feature of Drill. For example: Azure Blob Storage, Google Cloud Storage, HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Swift, NAS and local files.
  • ZOOKEEPER
    • Apache ZooKeeper is the coordinator for Hadoop jobs that involve a combination of various services in the Hadoop ecosystem; it handles configuration, naming and synchronization between them. (See the ZooKeeper sketch after this list.)
  • OOZIE
    • Consider Apache Oozie as a clock and alarm service inside the Hadoop ecosystem. For Hadoop jobs, Oozie acts as a scheduler.
  • FLUME
    • Flume is a service which helps in ingesting unstructured and semi-structured data into HDFS.
  • SQOOP
    • Flume only handles unstructured or semi-structured data.
    • Sqoop, by contrast, can import as well as export structured data between RDBMSs or enterprise data warehouses and HDFS.
  • SOLR & LUCENE
    • Apache Solr and Apache Lucene are the two services used for searching and indexing in the Hadoop ecosystem.
  • AMBARI
    • Ambari is an Apache Software Foundation project which aims at making the Hadoop ecosystem more manageable; it helps to provision, monitor and maintain the cluster.
  • HADOOP
    • Hadoop itself is the overall framework on which the rest of this ecosystem runs.
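
To make a few of these services more concrete, here are some small Java sketches. First, HDFS: a minimal example that copies a local file into HDFS using Hadoop's FileSystem API. The local and HDFS paths are placeholders; adjust them to your cluster and make sure the Hadoop client configuration (core-site.xml) is on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCopyExample {
        public static void main(String[] args) throws Exception {
            // Reads core-site.xml / hdfs-site.xml from the classpath to locate the cluster
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Placeholder paths: copy a local file into an HDFS directory
            fs.copyFromLocalFile(new Path("/tmp/sales.csv"), new Path("/data/raw/sales.csv"));

            // List what is now stored under /data/raw
            for (FileStatus status : fs.listStatus(new Path("/data/raw"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }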
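
Next, MapReduce: the classic word-count job written against the org.apache.hadoop.mapreduce API, essentially the standard Hadoop example. The mapper emits (word, 1) pairs, the reducer sums them, and the input and output HDFS directories are supplied as command-line arguments.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: split each line into words and emit (word, 1)
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce phase: sum the counts for each word
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);   // combine on the map side to cut shuffle traffic
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory (must not exist yet)
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

You would package this as a jar and run it with something like hadoop jar wordcount.jar WordCount /input /output, where both paths are placeholders.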
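
For Spark, here is the same word count using the Java RDD API, which keeps the intermediate data in memory where possible. The HDFS input and output paths are placeholders.

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SparkWordCount {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("SparkWordCount");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // Placeholder HDFS input path
                JavaRDD<String> lines = sc.textFile("hdfs:///data/raw/input.txt");

                JavaPairRDD<String, Integer> counts = lines
                        .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator()) // line -> words
                        .mapToPair(word -> new Tuple2<>(word, 1))                      // word -> (word, 1)
                        .reduceByKey(Integer::sum);                                    // sum counts per word

                // Placeholder HDFS output path
                counts.saveAsTextFile("hdfs:///data/out/wordcount");
            }
        }
    }

You would submit it with something like spark-submit --class SparkWordCount wordcount.jar.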
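
For Hive, one common way to reach it from Java is the HiveServer2 JDBC driver. This sketch assumes HiveServer2 is listening on localhost:10000 and that a table named page_visits already exists; the host, user and table name are all placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC driver (from the hive-jdbc dependency)
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            try (Connection con = DriverManager.getConnection(
                         "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = con.createStatement();
                 // HiveQL looks like SQL; page_visits is a placeholder table
                 ResultSet rs = stmt.executeQuery(
                         "SELECT page, COUNT(*) AS hits FROM page_visits GROUP BY page")) {
                while (rs.next()) {
                    System.out.println(rs.getString("page") + " -> " + rs.getLong("hits"));
                }
            }
        }
    }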
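
For HBase, here is a small sketch with the Java client API that writes one cell and reads it back. It assumes a table named users with a column family named info has already been created (for example from the HBase shell: create 'users', 'info'); those names are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml from the classpath

            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("users"))) {

                // Write: row key "user100", column info:name
                Put put = new Put(Bytes.toBytes("user100"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Srini"));
                table.put(put);

                // Read the same cell back
                Result result = table.get(new Get(Bytes.toBytes("user100")));
                byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
                System.out.println("name = " + Bytes.toString(name));
            }
        }
    }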
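
Finally, ZooKeeper: a tiny sketch that creates a znode and reads it back, which is the kind of shared-state primitive the other services build their coordination on. The connection string localhost:2181 and the znode path /demo-config are placeholders.

    import java.util.concurrent.CountDownLatch;

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ZooKeeperExample {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);

            // Connect to a local ZooKeeper server and wait until the session is established
            ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await();

            // Create a persistent znode holding a small piece of configuration
            zk.create("/demo-config", "v1".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

            // Read it back
            byte[] data = zk.getData("/demo-config", false, null);
            System.out.println(new String(data));

            zk.close();
        }
    }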

Top 3 Hadoop platforms

  1. IBM
  2. Cloudera
  3. AWS

Another seven platforms are also popular. You can read about them here.

Author: Srini

Experienced software developer with skills in development, coding, testing and debugging. Good data analytics skills (data warehousing and BI), plus mainframe experience.
