Now that we are done with the Hadoop installation, let's start working on its shell commands. If you know Unix shell commands, it will be really easy for you to understand them: the Hadoop commands are quite similar, but not every Unix command works in Hadoop.
You have to prefix every Hadoop command with hadoop fs, which stands for the Hadoop filesystem; the Hadoop File System shell is invoked by bin/hadoop fs <args>. All the FS shell commands take path URIs as arguments, and a URI has the form scheme://authority/path. Take this URL as an example:
http://bigdataplanet.info/p/what-is-big-data.html
Here http is the scheme, bigdataplanet.info is the authority, and /p/what-is-big-data.html is the path, pointing to the article what-is-big-data.html.
Similarly, for HDFS the scheme is hdfs, and for the local filesystem the scheme is file.
The scheme and authority are optional: if you leave them out, the defaults specified in the configuration file are used.
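For instance, assuming fs.default.name in your configuration file points to hdfs://localhost:9000 (your host and port may differ), the following two commands list the same directory:
hadoop fs -ls hdfs://localhost:9000/user/deepak
hadoop fs -ls /user/deepak
Tip: running hadoop fs -help prints the usage of all the shell commands, which is handy while you are learning them.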
So let's get started and see the commands:
mkdir
Creates a directory at the given path(s):
hadoop fs -mkdir <paths>
Ex: hadoop fs -mkdir /user/deepak/dir1
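Since mkdir accepts more than one path, you can create several directories in one go, for example:
hadoop fs -mkdir /user/deepak/dir1 /user/deepak/dir2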
ls
Lists the files at the given path:
hadoop fs -ls <args>
Ex: hadoop fs -ls /user/deepak
lsr
Recursive version of ls; similar to ls -R in the Unix shell.
hadoop fs -lsr <args>
Ex: hadoop fs -lsr /user/deepak
touchz
Creates a zero-length file at the given path:
hadoop fs -touchz <path[filename]>
Ex: hadoop fs -touchz /user/deepak/dir1/abc.txt
cat
Same as the Unix cat command; displays the contents of a file:
hadoop fs -cat <path[filename]>
Ex: hadoop fs -cat /user/deepak/dir1/abc.txt
cp
Copy files from source to destination. This command also allows multiple sources, in which case the destination must be a directory.
hadoop fs -cp <source> <dest>
Ex: hadoop fs -cp /user/deepak/dir1/abc.txt /user/deepak/dir2
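With multiple sources, just list them all before the destination directory; for example (assuming the second file exists):
hadoop fs -cp /user/deepak/dir1/abc.txt /user/deepak/dir1/def.txt /user/deepak/dir2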
put
Copies a single source, or multiple sources, from the local filesystem to the destination filesystem. It can also read input from stdin and write it to the destination filesystem, as shown in the second example below.
hadoop fs -put <source:localFile> <destination>
Ex: hadoop fs -put /home/hduser/def.txt /user/deepak/dir1
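To read from stdin, pass - as the source. For example, the following writes whatever you type (until Ctrl+D) into a new HDFS file (fromstdin.txt is just an illustrative name):
hadoop fs -put - /user/deepak/dir1/fromstdin.txt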
get
Copies files from HDFS to the local filesystem.
hadoop fs -get <source> <dest:localFileSystem>
Ex: hadoop fs -get /user/deepak/dir1/abc.txt /home/hduser
copyFromLocal
Similar to put except that the source is limited to local files.
hadoop fs -copyFromLocal <src:localFileSystem> <dest:Hdfs>
Ex: hadoop fs -copyFromLocal /home/hduser/def.txt /user/deepak/dir1
copyToLocal
Similar to get except that the destination is limited to local files.
hadoop fs -copyToLocal <src:Hdfs> <dest:localFileSystem>
Ex: hadoop fs -copyToLocal /user/deepak/dir1/abc.txt /home/hduser
mv
Moves a file from source to destination, except that moving files across filesystems is not permitted.
hadoop fs -mv <src> <dest>
Ex: hadoop fs -mv /user/deepak/dir1/abc.txt /user/deepak/dir2
rm
Removes the files specified as arguments. Deletes a directory only when it is empty.
hadoop fs -rm <arg>
Ex: hadoop fs -rm /user/deepak/dir1/abc.txt
rmr
Recursive version of rm; deletes a directory along with its contents.
hadoop fs -rmr <arg>
Ex: hadoop fs -rmr /user/deepak/
stat
Returns the stat information for the path.
hadoop fs -stat <path>
Ex: hadoop fs -stat /user/deepak/dir1
tail
Similar to the Unix tail command; displays the last kilobyte of the file.
hadoop fs -tail <path[filename]>
Ex: hadoop fs -tail /user/deepak/dir1/abc.txt
test
test comes with the following options:
-e checks whether the file exists; returns 0 if true.
-z checks whether the file is zero length; returns 0 if true.
-d returns 1 if the path is a directory, else returns 0.
hadoop fs -test -[ezd] <path>
Ex: hadoop fs -test -e /user/deepak/dir1/abc.txt
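Since test only sets an exit status, it is mainly useful inside shell scripts; a minimal sketch:
hadoop fs -test -e /user/deepak/dir1/abc.txt && echo "abc.txt exists"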
text
Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.
hadoop fs -text <src>
Ex: hadoop fs -text /user/deepak/dir1/abc.txt
du
Displays the aggregate length of files under the path, or the length of the file itself if the path is a file.
hadoop fs -du <path>
Ex: hadoop fs -du /user/deepak/dir1/abc.txt
dus
Displays a summary of file lengths.
hadoop fs -dus <args>
Ex: hadoop fs -dus /user/deepak/dir1/abc.txt
expunge
Empty the trash.
hadoop fs -expunge
chgrp
Change group association of files. With -R, make the change recursively through the directory structure. The user must be the owner of files, or else a super-user.
hadoop fs -chgrp [-R] GROUP <path>
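For example, assuming a group named hadoop exists on your cluster:
Ex: hadoop fs -chgrp -R hadoop /user/deepak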
chmod
Change the permissions of files. With -R, make the change recursively through the directory structure. The user must be the owner of the file, or else a super-user.
hadoop fs -chmod [-R] <MODE[,MODE] | OCTALMODE> <path>
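For example, to recursively give the owner full access and everyone else read and execute access:
Ex: hadoop fs -chmod -R 755 /user/deepak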
chown
Change the owner of files. With -R, make the change recursively through the directory structure. The user must be a super-user.
hadoop fs -chown [-R] [OWNER][:[GROUP]] <path>
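For example, assuming a user deepak and a group hadoop exist:
Ex: hadoop fs -chown -R deepak:hadoop /user/deepak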
getmerge
Takes a source directory and a destination file as input and concatenates files in src into the destination local file. Optionally addnl can be set to enable adding a newline character at the end of each file.
hadoop fs -getmerge <src> <localdst> [addnl]
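For example, the following concatenates everything under dir1 into a single local file, with a newline appended after each file (merged.txt is just an illustrative name):
Ex: hadoop fs -getmerge /user/deepak/dir1 /home/hduser/merged.txt addnl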
setrep
Changes the replication factor of a file. The -R option recursively changes the replication factor of all files within a directory.
hadoop fs -setrep [-R] <rep> <path>
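For example, to set the replication factor of a single file to 3:
Ex: hadoop fs -setrep 3 /user/deepak/dir1/abc.txt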
Well, these are all the Hadoop commands you will need for manipulating HDFS. Try them out and let me know if you face any problems in the comments below.
If you like what you just read and want to continue your learning on BIGDATA, you can subscribe to our email updates and like our Facebook page.