Before taking the HDPCA exam, you can get a feel for it by working through the HDPCA practice exam on AWS. The practice environment closely mirrors the actual exam and presents six tasks for you to complete. The recommended AWS instance type is m3.2xlarge, which provides 30 GB of memory and 8 vCPUs; if you are on a budget, you can opt for spot instances instead.
The HDPCA practice exam AWS machine is set up as follows:
- A five-node HDP cluster named horton is installed with various HDP components.
- You are logged in to an Ubuntu instance as a user named horton. As the horton user, you can SSH without a password into any node in the cluster as the root user; the root password is hadoop.
- The five nodes in the cluster are CentOS servers named namenode, resourcemanager, hiveserver, node1 and node2.
- node1 is reachable from the other nodes but is not yet part of the cluster. node2 is part of the cluster but has only the Metrics Monitor component installed.
- Ambari is available at http://namenode:8080. The username and password for Ambari are both admin.
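Before starting the tasks, it is worth confirming the environment matches the description above. A quick sketch, using only the hostnames and credentials listed:

```shell
# Verify passwordless SSH from the horton user to each cluster node as root
for host in namenode resourcemanager hiveserver node1 node2; do
  ssh -o BatchMode=yes root@"$host" hostname
done

# Confirm the Ambari server responds (admin/admin, as noted above)
curl -s -u admin:admin http://namenode:8080/api/v1/clusters
```

If any ssh call prompts for a password, the passwordless setup is broken and should be fixed before attempting the tasks.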
TASK 01: Start Services
Start all of the installed services on the cluster if they are not currently running.
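The simplest route is the Ambari UI (Actions > Start All), but the same thing can be done through the Ambari REST API. A minimal sketch, assuming the cluster name is horton and Ambari is at namenode:8080 as described above:

```shell
# Ask Ambari to transition every service in cluster "horton" to STARTED;
# services that are already running are left untouched.
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  'http://namenode:8080/api/v1/clusters/horton/services'
```

Ambari queues the start operations as a background request; watch its progress in the Ambari UI's operations panel.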
TASK 02: Commission a New Node
Add node1 to the horton cluster, given the following details:
- The required SSH private key is in the file id_rsa in the /home/horton/Desktop folder.
- Install the DataNode, NodeManager, and Client services on node1
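In the exam environment this task is done through Ambari's Add Host wizard, which needs the private key above. A quick sanity check of the key before launching the wizard (paths and hostnames as given in the task):

```shell
# Confirm the provided key grants root access to node1 before using it
# in Ambari's Add Host wizard:
ssh -i /home/horton/Desktop/id_rsa -o BatchMode=yes root@node1 hostname

# In the Ambari UI: Hosts > Actions > + Add New Hosts, enter node1's
# hostname, paste the contents of id_rsa as the SSH private key, then
# select the DataNode, NodeManager, and Client components for node1.
```

If the ssh test fails, the wizard's host registration step will fail for the same reason, so fix connectivity first.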
TASK 03: Install Services
Add Storm and Kafka to the cluster. Install all of the services on the hiveserver node, and install the Supervisor process on node2.
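Storm and Kafka are added through Ambari's Add Service wizard (Services > Actions > + Add Service), where you assign the masters to hiveserver and place a Supervisor on node2. After the wizard completes, you can confirm the new services registered via the REST API:

```shell
# List the services Ambari manages in cluster "horton"; STORM and KAFKA
# should appear once the Add Service wizard has finished.
curl -s -u admin:admin 'http://namenode:8080/api/v1/clusters/horton/services'
```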
TASK 04: Rack Awareness
Configure rack awareness, with namenode and hiveserver on rack 01 and the other nodes on rack 02.
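In Ambari this is done per host (Hosts > Actions > Set Rack), or equivalently through the hosts REST endpoint. A sketch assuming rack paths of /rack01 and /rack02 (the exact path names are a choice; only the grouping matters):

```shell
# Assign namenode and hiveserver to rack 01 ...
for host in namenode hiveserver; do
  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"Hosts":{"rack_info":"/rack01"}}' \
    "http://namenode:8080/api/v1/clusters/horton/hosts/$host"
done

# ... and the remaining nodes to rack 02.
for host in resourcemanager node1 node2; do
  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"Hosts":{"rack_info":"/rack02"}}' \
    "http://namenode:8080/api/v1/clusters/horton/hosts/$host"
done
```

Restart the affected HDFS services afterwards so the NameNode picks up the new topology.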
TASK 05: ResourceManager High Availability
Set up the ResourceManager to be highly available by configuring an additional ResourceManager on node2.
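This is driven by Ambari's Enable ResourceManager HA wizard (Services > YARN > Actions), where you pick node2 as the additional ResourceManager host. Once the wizard and the resulting restarts finish, you can verify failover state from any cluster node; this sketch assumes the default HA ids rm1 and rm2:

```shell
# One ResourceManager should report "active" and the other "standby".
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```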
TASK 06: Decommission a Node
Decommission node1 as both a DataNode and a NodeManager.
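Decommissioning is done from the host page in Ambari (Hosts > node1 > Decommission on the DataNode and NodeManager components). You can then confirm the result from the command line:

```shell
# Check the DataNode's state as reported by the NameNode:
sudo -u hdfs hdfs dfsadmin -report

# List all NodeManagers, including decommissioned ones:
yarn node -list -all
```

node1's DataNode should show as decommissioned in the dfsadmin report, and its NodeManager should appear in a DECOMMISSIONED state in the node list.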