Beyond HDFS to store massive Hadoop data

The file system that is most popular in the Hadoop ecosystem is HDFS, the Hadoop Distributed File System. You may ask whether storing data in HDFS is expensive in real-world deployments. Before I get there, I want to share how HDFS works in the context of Hadoop.

Some common storage formats that Hadoop supports include the following (a small sketch of writing one of them follows the list):

  • Plain text storage (e.g., CSV, TSV files)
  • Sequence Files
  • Avro
  • Parquet
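
As a small illustration, here is a minimal sketch (in Java, using the standard Hadoop I/O API) of writing one of these formats, a SequenceFile; the HDFS path and record contents are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path path = new Path("/data/example.seq");   // hypothetical HDFS path

            // Option-based createWriter API (Hadoop 2.x and later)
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(path),
                    SequenceFile.Writer.keyClass(IntWritable.class),
                    SequenceFile.Writer.valueClass(Text.class))) {
                for (int i = 0; i < 100; i++) {
                    writer.append(new IntWritable(i), new Text("record-" + i));
                }
            }
        }
    }

Avro and Parquet have their own writer libraries (avro-mapred, parquet-mr) and are usually produced through Hive, Pig, or Spark rather than written record by record.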


HDFS can store any type of data in binary form, including text, images, and audio files. HDFS was originally developed to be used by MapReduce and still serves that role, so file formats that fit MapReduce or Hive workloads are usually chosen.

One challenge with implementing a system like HDFS is achieving availability and scalability at the same time. You may have more data than can fit on a single physical machine's disk, so it is necessary to distribute the data among multiple machines. HDFS does this automatically and transparently while providing a developer-friendly interface (see the sketch after the list below). HDFS achieves these two main points:

  • High scalability

  • High availability
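
To a client application, that distribution is invisible. A minimal sketch of writing a file through the Hadoop FileSystem API (the NameNode address and file path are hypothetical) looks just like writing to a local file:

    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // fs.defaultFS would normally point at the NameNode, e.g. hdfs://namenode:8020
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/user/example/events.log");  // hypothetical path
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("one line of data\n".getBytes(StandardCharsets.UTF_8));
            }
            // Behind the scenes HDFS splits the file into blocks, replicates each
            // block (dfs.replication, 3 by default) and places the replicas on
            // different DataNodes, which provides both scalability and availability.
        }
    }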

HDFS snapshot

A snapshot is a copy of data in the filesystem at a point in time. A snapshot can be taken of a subtree or of the entire filesystem. Snapshots are typically used to back up data for protection against failures or for disaster recovery, and snapshot data is read-only, because a snapshot would be meaningless if it could be modified after it is created.

HDFS snapshots were designed to copy data efficiently; their main benefits include the following (a usage sketch follows the list):

  • Creating a snapshot takes constant time, O(1), excluding inode lookup time, because it does not copy the actual data but only records a reference.

  • Additional memory is used only when the original data is modified. The size of additional memory is proportional to the number of modifications.

  • Modifications are recorded as a collection in reverse chronological order, so the current data does not have to be rewritten; the snapshot data is computed by subtracting the modifications from the current data.
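
As a usage sketch (the directory path and snapshot name below are hypothetical), a snapshot can be created either with the hdfs dfsadmin / hdfs dfs commands or programmatically through the FileSystem API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class SnapshotExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path dir = new Path("/user/example/important-data");  // hypothetical subtree

            // An administrator must first mark the directory as snapshottable
            // (the equivalent of `hdfs dfsadmin -allowSnapshot /user/example/important-data`).
            if (fs instanceof DistributedFileSystem) {
                ((DistributedFileSystem) fs).allowSnapshot(dir);
            }

            // Creating the snapshot is O(1): no data blocks are copied.
            Path snapshot = fs.createSnapshot(dir, "backup-20160101");
            System.out.println("Snapshot created at " + snapshot);

            // Snapshots are read-only; they can only be removed, e.g.
            // fs.deleteSnapshot(dir, "backup-20160101");
        }
    }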

Cloud storage alternatives to HDFS

In practice, creating and operating an HDFS cluster in the Hadoop ecosystem is expensive, so you can go for cloud storage instead:

Amazon EMR: Amazon Elastic MapReduce is a cloud service for Hadoop. It provides an easy way to create Hadoop clusters on EC2 instances and to access HDFS or S3. You can also run major distributions on Amazon EMR, such as the Hortonworks Data Platform and MapR.

The launching process is automated and simplified by Amazon EMR, and HDFS can be used to store intermediate data generated while running a job on an Amazon EMR cluster. Only the input and final output are put on S3, which is the best practice for EMR storage (see the sketch below).
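
A minimal MapReduce driver sketch of that layout (the bucket names are hypothetical, and the mapper and reducer classes are omitted) reads from and writes to S3 while intermediate data stays on the cluster's HDFS:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class EmrJobDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "log-analysis-on-emr");
            job.setJarByClass(EmrJobDriver.class);
            // Mapper, Reducer, and key/value types omitted; this sketch only
            // shows where the data lives.

            // Input and final output live on S3 (hypothetical bucket names) ...
            FileInputFormat.addInputPath(job, new Path("s3://my-input-bucket/logs/"));
            FileOutputFormat.setOutputPath(job, new Path("s3://my-output-bucket/report/"));
            // ... while intermediate data generated during the job stays on the
            // cluster's HDFS, which EMR provisions on the core nodes automatically.

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }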

Treasure Data Service: Treasure Data is a fully managed cloud data platform. You can easily import any type of data into a storage system managed by Treasure Data, which uses HDFS and S3 internally but encapsulates their details, so you do not have to pay attention to the underlying storage systems.

Treasure Data mainly uses Hive and Presto as its analytics platform, so you can write SQL to analyze whatever has been imported into the Treasure Data storage service. Treasure Data uses HDFS and S3 as its backend and takes advantage of each. If you do not want to operate HDFS yourself, Treasure Data can be a good choice.

Azure Blob Storage: Azure Blob Storage is a cloud storage service provided by Microsoft. The combination of Azure Blob Storage and HDInsight provides a full-featured, HDFS-compatible storage system. A user who is accustomed to HDFS can use Azure Blob Storage seamlessly, and many Hadoop ecosystem components can operate directly on the data that Azure Blob Storage manages. Azure Blob Storage is optimized for use by a computation layer such as HDInsight, and it provides various types of interfaces, such as PowerShell and, of course, the Hadoop HDFS commands. Developers who are already comfortable with Hadoop can get started easily with Azure Blob Storage (see the sketch below).
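
As a small illustration of that HDFS-compatible interface, the sketch below lists files in a container through the standard Hadoop FileSystem API using the wasb:// scheme from the hadoop-azure module; the account name, container, key, and path are hypothetical placeholders:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AzureBlobListExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical storage account and access key.
            conf.set("fs.azure.account.key.mystorageaccount.blob.core.windows.net",
                     "<storage-account-key>");

            // The wasb:// scheme exposes Blob Storage through the same
            // FileSystem API that HDFS clients already use.
            FileSystem fs = FileSystem.get(
                    new URI("wasb://mycontainer@mystorageaccount.blob.core.windows.net/"),
                    conf);

            for (FileStatus status : fs.listStatus(new Path("/data"))) {
                System.out.println(status.getPath());
            }
        }
    }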
