International Journal of Scientific and Research Publications

IJSRP, Volume 5, Issue 7, July 2015 Edition [ISSN 2250-3153]

Internals of Hadoop Application Framework and Distributed File System
      Saminath.V, Sangeetha.M.S
Abstract: Hadoop is an open-source framework for storing and processing Big Data in a distributed environment. Big Data refers to large volumes of complex structured and unstructured data. Hadoop stores data across clusters of geographically distributed machines and distributes workloads using parallel computing. MapReduce is a Java-based software framework for analyzing large-scale data using a distributed data-processing model. HDFS, another Hadoop component, stores large volumes of data; modeled on the Google File System, it supports immense amounts of data spread over distributed data nodes, with each node maintaining redundant copies to avoid data loss. This paper explains HDFS, the job-node cluster environment, the layered component stack of the Hadoop framework, and various application development on Hadoop.
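The MapReduce model mentioned in the abstract can be illustrated with a minimal single-process sketch (not the Hadoop API itself): a map phase emits (key, value) pairs, a shuffle stage groups values by key, and a reduce phase aggregates each group, as in the classic word-count example.

```python
from collections import defaultdict

# Minimal sketch of the MapReduce model: map -> shuffle -> reduce.
# Illustrative only; real Hadoop distributes these phases across
# cluster nodes and runs them via the Java MapReduce framework.

def map_phase(document):
    # Mapper: emit (word, 1) for every word in the input record.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group all emitted values by key,
    # mimicking what Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data is big", "hadoop stores big data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 3, 'data': 2, 'is': 1, 'hadoop': 1, 'stores': 1}
```

In a real cluster, each mapper runs on the node holding its HDFS data block, so computation moves to the data rather than the reverse.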

Saminath.V, Sangeetha.M.S (2015); Internals of Hadoop Application Framework and Distributed File System; Int J Sci Res Publ 5(7) (ISSN: 2250-3153).