The Geek Diary

How To Modify Hadoop Log Level

by admin

By default, Hadoop's log level is set to INFO. For most deployments this is too verbose: it generates huge log files even in environments with low to moderate traffic. Note that changing the root logger in Hadoop's log4j.properties file alone will not change the log level, because the startup scripts override it.
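To see why editing the file has no effect, note that in stock Hadoop distributions conf/log4j.properties defines the root logger through a variable that the daemon scripts set on the JVM command line. The relevant lines typically look like this (exact contents vary by Hadoop version):

```
# conf/log4j.properties (typical stock contents; varies by version)
hadoop.root.logger=INFO,console
log4j.rootLogger=${hadoop.root.logger}, EventCounter
```

At startup, the scripts pass -Dhadoop.root.logger=${HADOOP_ROOT_LOGGER} to the JVM, so the value exported in hadoop-daemon.sh wins over whatever is written in the properties file. That is why the level must be changed in the script itself.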

Follow the steps below to change Hadoop's log level.

1. Shut down Hadoop if it is still running.

2. Open the [hadoop_home]/bin/hadoop-daemon.sh file. Look for the following line:

export HADOOP_ROOT_LOGGER="INFO,DRFA"

3. Change the log level in that line (WARN is usually sufficient if the goal is to limit log file size rather than to debug an issue).

export HADOOP_ROOT_LOGGER="WARN,DRFA"

4. Save the file and start Hadoop.
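The edit in step 3 can also be done non-interactively with sed. The sketch below demonstrates the substitution against a throwaway copy of the file so it is safe to run anywhere; the /tmp/hadoop-demo path is purely illustrative, and on a real cluster you would point sed at [hadoop_home]/bin/hadoop-daemon.sh (after stopping Hadoop and backing up the file).

```shell
# Create a throwaway stand-in for hadoop-daemon.sh containing the
# default logger line (illustrative path, not a real Hadoop install).
mkdir -p /tmp/hadoop-demo/bin
printf 'export HADOOP_ROOT_LOGGER="INFO,DRFA"\n' > /tmp/hadoop-demo/bin/hadoop-daemon.sh

# Keep a backup, then lower the root logger from INFO to WARN in place.
cp /tmp/hadoop-demo/bin/hadoop-daemon.sh /tmp/hadoop-demo/bin/hadoop-daemon.sh.bak
sed -i 's/HADOOP_ROOT_LOGGER="INFO,DRFA"/HADOOP_ROOT_LOGGER="WARN,DRFA"/' \
    /tmp/hadoop-demo/bin/hadoop-daemon.sh

# Show the result of the edit.
grep HADOOP_ROOT_LOGGER /tmp/hadoop-demo/bin/hadoop-daemon.sh
```

On a real cluster, remember to stop Hadoop before the edit and start it again afterwards, as in steps 1 and 4.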

Filed Under: Hadoop



© 2025 · The Geek Diary
