Hadoop Tutorial: Part 4 - Write Operations in HDFS



In the previous tutorial we looked at read operations in HDFS; now let's move on and see how a write happens in HDFS.


(Image source: Hadoop: The Definitive Guide)
Here we consider the case where the client creates a new file, writes data to it, and then closes the file.
Writing data to HDFS involves seven steps:

  • Step 1: The client creates the file by calling create() on DistributedFileSystem (a client-side code sketch follows this list).
  • Step 2: DistributedFileSystem makes an RPC call to the namenode to create a new file in the filesystem’s namespace, with no blocks associated with it. The namenode performs various checks to make sure the file doesn't already exist and that the client has the right permissions to create the file. If these checks pass, the namenode makes a record of the new file; otherwise, file creation fails and the client is thrown an IOException. DistributedFileSystem returns an FSDataOutputStream for the client to start writing data to. Just as in the read case, FSDataOutputStream wraps a DFSOutputStream, which handles communication with the datanodes and the namenode.
  • Step 3: As the client writes data, DFSOutputStream splits it into packets, which it writes to an internal queue called the data queue. The data queue is consumed by the DataStreamer, which is responsible for asking the namenode to allocate new blocks by picking a list of suitable datanodes to store the replicas. The list of datanodes forms a pipeline, and here we’ll assume the replication level is three, so there are three nodes in the pipeline. The DataStreamer streams the packets to the first datanode in the pipeline, which stores the packet and forwards it to the second datanode in the pipeline.
  • Step 4: Similarly, the second datanode stores the packet and forwards it to the third (and last) datanode in the pipeline.
  • Step 5: DFSOutputStream also maintains an internal queue of packets that are waiting to be acknowledged by datanodes, called the ack queue. A packet is removed from the ack queue only when it has been acknowledged by all the datanodes in the pipeline.
  • Step 6: When the client has finished writing data, it calls close() on the stream. 
  • Step 7: This action flushes all the remaining packets to the datanode pipeline and waits for acknowledgments before contacting the namenode to signal that the file is complete. The namenode already knows which blocks the file is made up of (via DataStreamer asking for block allocations), so it only has to wait for the blocks to be minimally replicated before returning successfully.
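To make these steps concrete from the client's point of view, here is a minimal sketch of a create-write-close sequence using the Java FileSystem API. The local source path, the hdfs:// destination URI, and the class name are placeholders for illustration only:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        String localSrc = args[0];   // e.g. a local file such as /tmp/input.txt (placeholder)
        String dst = args[1];        // e.g. hdfs://namenode:8020/user/demo/output.txt (placeholder)

        Configuration conf = new Configuration();
        // For an hdfs:// URI this returns a DistributedFileSystem instance
        FileSystem fs = FileSystem.get(URI.create(dst), conf);

        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));

        // Steps 1-2: create() asks the namenode to record the new file;
        // the returned FSDataOutputStream wraps a DFSOutputStream
        FSDataOutputStream out = fs.create(new Path(dst));

        // Steps 3-5: as bytes are written, DFSOutputStream splits them into packets
        // and the DataStreamer pushes the packets through the datanode pipeline
        IOUtils.copyBytes(in, out, 4096, false);

        // Steps 6-7: close() flushes the remaining packets, waits for acks,
        // and tells the namenode that the file is complete
        out.close();
        in.close();
    }
}

All the heavy lifting in steps 3 to 5 happens inside DFSOutputStream; the client code only ever sees a plain output stream.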

So what happens when a datanode fails while data is being written to it?


If such a failure occurs, the following actions are taken, and they are transparent to the client writing the data.
  • First, the pipeline is closed, and any packets in the ack queue are added to the front of the data queue so that datanodes that are downstream from the failed node will not miss any packets.
  • The current block on the good datanodes is given a new identity, which is communicated to the namenode, so that the partial block on the failed datanode will be deleted if the failed datanode recovers later on.
  • The failed datanode is removed from the pipeline, and the remainder of the block’s data is written to the two good datanodes in the pipeline.
  • The namenode notices that the block is under-replicated, and it arranges for a further replica to be created on another node. Subsequent blocks are then treated as normal. (You can observe the resulting replica placement from the client side; see the sketch after this list.)
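If you want to see where a file's block replicas actually ended up, the client API can list the datanodes holding each block. A small sketch, again with a placeholder path:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockReport {
    public static void main(String[] args) throws Exception {
        String uri = "hdfs://namenode:8020/user/demo/output.txt";  // placeholder
        FileSystem fs = FileSystem.get(URI.create(uri), new Configuration());

        FileStatus status = fs.getFileStatus(new Path(uri));
        System.out.println("Target replication factor: " + status.getReplication());

        // One BlockLocation per block, naming the datanodes that hold a replica of it
        for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
            System.out.printf("offset=%d hosts=%s%n",
                    block.getOffset(), String.join(",", block.getHosts()));
        }
    }
}

After a datanode failure you would expect the host list for each affected block to be brought back up to the target replication factor over time.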

It’s possible, but unlikely, that multiple datanodes fail while a block is being written. As long as dfs.replication.min replicas (which defaults to one) are written, the write will succeed, and the block will be asynchronously replicated across the cluster until its target replication factor is reached (dfs.replication, which defaults to three).
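The target replication factor itself is just a client-side setting. A minimal sketch of how it could be overridden, using the same placeholder path as above: dfs.replication can be set in the configuration before the file is created, or changed later for an existing file with setReplication().

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        String uri = "hdfs://namenode:8020/user/demo/output.txt";  // placeholder

        // Files created by this client will ask for 3 replicas per block
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 3);

        FileSystem fs = FileSystem.get(URI.create(uri), conf);

        // Change the target replication of an existing file; the namenode
        // schedules the extra copies (or removals) asynchronously
        fs.setReplication(new Path(uri), (short) 2);
    }
}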

Hence these are the processes followed by the client, the namenode, and the datanodes while writing a file to HDFS. Now that we know about HDFS, the replication factor, read operations, and write operations, we will next move on to MapReduce, the second major part of Hadoop. In between, we will also look at important questions that may have been left out while explaining HDFS; I will be covering those in detail. Till then, keep reading :)

Let me know in the comment section if you have any doubts about anything, and I will be really glad to answer your questions :)



If you like what you just read and want to continue learning about Big Data, you can subscribe to our email updates and like our Facebook page.



