The Geek Diary

How to run Hadoop without using SSH

by admin

The start-all.sh and stop-all.sh scripts in the hadoop/bin directory use SSH to launch some of the Hadoop daemons. If SSH is not available on the server, follow the steps below to run Hadoop without it.

The goal is to replace every call to "hadoop-daemons.sh" with "hadoop-daemon.sh". The "hadoop-daemons.sh" wrapper simply runs "hadoop-daemon.sh" on each host through SSH, whereas "hadoop-daemon.sh" launches the daemon locally.

1. Modify start-dfs.sh script:

from:

${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
${bin}/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
${bin}/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode

to:

${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode

2. Modify stop-dfs.sh script:

from:

${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode
${bin}/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop datanode
${bin}/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters stop secondarynamenode

to:

${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode
${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop datanode
${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR --hosts masters stop secondarynamenode

3. Modify start-mapred.sh script:

from:

${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR start jobtracker
${bin}/hadoop-daemons.sh --config $HADOOP_CONF_DIR start tasktracker

to:

${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR start jobtracker
${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR start tasktracker

4. Modify stop-mapred.sh script:

from:

${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop jobtracker
${bin}/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop tasktracker

to:

${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop jobtracker
${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop tasktracker
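
The four manual edits above can also be scripted with sed. The sketch below is illustrative: it rewrites a demo file in /tmp, but on a real cluster you would point sed at start-dfs.sh, stop-dfs.sh, start-mapred.sh and stop-mapred.sh in hadoop/bin (the exact path depends on your installation).

```shell
# Create a demo file containing one line as it appears in start-dfs.sh.
# (Single quotes keep ${bin} and $HADOOP_CONF_DIR literal.)
printf '%s\n' \
  '${bin}/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode' \
  > /tmp/start-dfs-demo.sh

# hadoop-daemons.sh is the SSH wrapper; hadoop-daemon.sh launches locally.
# -i.bak edits the file in place and keeps a .bak backup
# (this form works with both GNU and BSD sed).
sed -i.bak 's/hadoop-daemons\.sh/hadoop-daemon.sh/g' /tmp/start-dfs-demo.sh

cat /tmp/start-dfs-demo.sh
# -> ${bin}/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode
```

Because the pattern anchors on the trailing "s.sh", lines that already call hadoop-daemon.sh (such as the namenode and jobtracker lines) are left untouched, so the command is safe to run against all four scripts.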

Note that after this change, start-all.sh and stop-all.sh will only start/stop the daemons on the local server; they will no longer reach any other Hadoop nodes remotely. Daemons on all remote slaves must be started and stopped manually, directly on those servers.
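
On each remote slave, that manual start/stop uses the same hadoop-daemon.sh script shown above. A sketch, assuming a classic (MRv1) cluster and that $HADOOP_HOME points at the Hadoop installation on the slave:

```shell
# Run these directly on each slave host to start its daemons:
$HADOOP_HOME/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode
$HADOOP_HOME/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start tasktracker

# And to stop them:
$HADOOP_HOME/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop tasktracker
$HADOOP_HOME/bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop datanode
```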

Filed Under: Hadoop

