How To Modify Hadoop Log Level

by admin

By default, Hadoop’s log level is set to INFO. This is too verbose for most deployments, as it generates huge log files even in an environment with low to moderate traffic. Note that changing the root logger in Hadoop’s log4j.properties file alone will not change the log level, because the daemon startup scripts export the HADOOP_ROOT_LOGGER environment variable, which takes precedence over the value in the properties file.
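For reference, the relevant defaults in [hadoop_home]/conf/log4j.properties look roughly like the snippet below (exact contents vary between Hadoop versions):

# Default values; the startup scripts override hadoop.root.logger
# on the command line with -Dhadoop.root.logger=$HADOOP_ROOT_LOGGER
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log

# The root logger takes its level from hadoop.root.logger above
log4j.rootLogger=${hadoop.root.logger}, EventCounter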

Follow the steps below to change Hadoop’s log level.

1. Shut down Hadoop if it is still running.
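On an installation managed with the bundled control scripts, stopping everything looks like the sketch below (stop-all.sh assumes the classic Hadoop 1.x layout matching the paths used in this article; newer releases split it into stop-dfs.sh and stop-yarn.sh under sbin/):

cd [hadoop_home]
bin/stop-all.sh    # stops the HDFS and MapReduce daemons on all configured nodes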

2. Open the [hadoop_home]/bin/hadoop-daemon.sh file. Look for the following line:

export HADOOP_ROOT_LOGGER="INFO,DRFA"
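If you would rather not scroll through the script, a quick grep should locate the line (same path as above):

grep -n HADOOP_ROOT_LOGGER [hadoop_home]/bin/hadoop-daemon.sh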

3. Change the level in that line to a more or less verbose one as needed. WARN is usually sufficient if the goal is to limit the size of the log file rather than to debug an issue, since it suppresses the INFO messages that make up the bulk of the output. Keep the DRFA appender name as-is; it refers to the DailyRollingFileAppender defined in log4j.properties.

export HADOOP_ROOT_LOGGER="WARN,DRFA"
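The edit can also be scripted; the following is a sketch using GNU sed, assuming the line appears exactly as shown above (take a backup first and verify the quoting in your copy of the script):

cp [hadoop_home]/bin/hadoop-daemon.sh [hadoop_home]/bin/hadoop-daemon.sh.bak
sed -i 's/HADOOP_ROOT_LOGGER="INFO,DRFA"/HADOOP_ROOT_LOGGER="WARN,DRFA"/' [hadoop_home]/bin/hadoop-daemon.sh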

4. Save the file and start Hadoop.
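To confirm the change took effect, restart the daemons and watch one of the daemon logs; after the restart, new entries should be at WARN level or above. The log file name below is only an example, as it depends on the user, hostname, and daemon:

cd [hadoop_home]
bin/start-all.sh
tail -f logs/hadoop-hadoop-namenode-node1.log    # INFO lines should no longer appear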
