Cloudera Enterprise 5.15.x

Deploying MapReduce v2 (YARN) on a Cluster

  Important:
  • Follow these command-line instructions on systems that do not use Cloudera Manager.
  • This information applies specifically to CDH 5.15.0. See Cloudera Documentation for information specific to other releases.

This section describes configuration tasks for YARN clusters only, and is specifically tailored for administrators who have installed YARN from packages.

  Important:

Do the following tasks after you have configured and deployed HDFS:

Continue reading:

  1. About MapReduce v2 (YARN)
  2. Step 1: Configure Properties for YARN Clusters
  3. Step 2: Configure YARN daemons
  4. Step 3: Configure the JobHistory Server
  5. Step 4: Configure the Staging Directory
  6. Step 5: If Necessary, Deploy your Custom Configuration to your Entire Cluster
  7. Step 6: If Necessary, Start HDFS on Every Host in the Cluster
  8. Step 7: If Necessary, Create the HDFS /tmp Directory
  9. Step 8: Create the history Directory and Set Permissions
  10. Step 9: Start YARN and the MapReduce JobHistory Server
  11. Step 10: Create a Home Directory for each MapReduce User
  12. Step 11: Configure the Hadoop Daemons to Run at Startup
  Note: Running Services

Use the service command to start, stop, and restart CDH components, instead of running scripts in /etc/init.d directly. The service command creates a predictable environment by setting the current working directory to / and removing most environment variables (passing only LANG and TERM). With /etc/init.d, existing environment variables remain in force and can produce unpredictable results. When you install CDH from packages, service is installed as part of the Linux Standard Base (LSB).
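
For example, on the host that runs the ResourceManager (the service names shown here are the same ones used later in this guide), you could check, restart, or stop the daemon as follows:

$ sudo service hadoop-yarn-resourcemanager status
$ sudo service hadoop-yarn-resourcemanager restart
$ sudo service hadoop-yarn-resourcemanager stop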

About MapReduce v2 (YARN)

The default installation in CDH 5 is MapReduce 2.x (MRv2) built on the YARN framework. In this document we usually refer to this new version as YARN. The fundamental idea of MRv2's YARN architecture is to split the two primary responsibilities of the JobTracker (resource management and job scheduling/monitoring) into separate daemons: a global ResourceManager (RM) and per-application ApplicationMasters (AM). With MRv2, the ResourceManager and per-host NodeManagers (NM) form the data-computation framework. The ResourceManager service effectively replaces the functions of the JobTracker, and NodeManagers run on worker hosts instead of TaskTracker daemons. The per-application ApplicationMaster is, in effect, a framework-specific library tasked with negotiating resources from the ResourceManager and working with the NodeManagers to run and monitor the tasks. For details of the new architecture, see Apache Hadoop NextGen MapReduce (YARN).

See also Selecting Appropriate JAR files for Your Jobs.

  Important:

Make sure you are not trying to run MRv1 and YARN on the same set of hosts at the same time. This is not recommended, especially in a cluster that is not managed by Cloudera Manager; it will degrade performance and may result in an unstable cluster deployment.

  • If you have installed YARN from packages, follow the instructions below to deploy it. (To deploy MRv1 instead, see Deploying MapReduce v1 (MRv1) on a Cluster.)
  • If you have installed CDH 5 from tarballs, the default deployment is YARN. Keep in mind that the instructions on this page are tailored for a deployment following installation from packages.

Step 1: Configure Properties for YARN Clusters

  Note:

Edit these files in the custom directory you created when you copied the Hadoop configuration. When you have finished, you will push this configuration to all the hosts in the cluster; see Step 5.

Property: mapreduce.framework.name
Configuration file: mapred-site.xml
Description: If you plan on running YARN, you must set this property to the value of yarn.

Sample Configuration:

mapred-site.xml:

<property>
 <name>mapreduce.framework.name</name>
 <value>yarn</value>
</property>

Step 2: Configure YARN daemons

Configure the following services: ResourceManager (on a dedicated host) and NodeManager (on every host where you plan to run MapReduce v2 jobs).

The following table shows the most important properties that you must configure for your cluster in yarn-site.xml.

Property: yarn.nodemanager.aux-services
Recommended value: mapreduce_shuffle
Description: Shuffle service that needs to be set for MapReduce applications.

Property: yarn.resourcemanager.hostname
Recommended value: resourcemanager.company.com
Description: The host that runs the ResourceManager. The following properties are set to their default ports on this host: yarn.resourcemanager.address, yarn.resourcemanager.admin.address, yarn.resourcemanager.scheduler.address, yarn.resourcemanager.resource-tracker.address, and yarn.resourcemanager.webapp.address.

Property: yarn.application.classpath
Recommended value: $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/*, $HADOOP_COMMON_HOME/lib/*, $HADOOP_HDFS_HOME/*, $HADOOP_HDFS_HOME/lib/*, $HADOOP_MAPRED_HOME/*, $HADOOP_MAPRED_HOME/lib/*, $HADOOP_YARN_HOME/*, $HADOOP_YARN_HOME/lib/*
Description: Classpath for typical applications.

Property: yarn.log-aggregation-enable
Recommended value: true
Description: Enables aggregation of container logs into the directory configured in yarn.nodemanager.remote-app-log-dir.

Next, you need to specify, create, and assign the correct permissions to the local directories where you want the YARN daemons to store data.

You specify the directories by configuring the following properties in the yarn-site.xml file on all cluster hosts:

Property: yarn.nodemanager.local-dirs
Description: Specifies the URIs of the directories where the NodeManager stores its localized files. All of the files required for running a particular YARN application are put here for the duration of the application run. Cloudera recommends that this property specify a directory on each of the JBOD mount points; for example, file:///data/1/yarn/local through file:///data/N/yarn/local.

Property: yarn.nodemanager.log-dirs
Description: Specifies the URIs of the directories where the NodeManager stores container log files. Cloudera recommends that this property specify a directory on each of the JBOD mount points; for example, file:///data/1/yarn/logs through file:///data/N/yarn/logs.

Property: yarn.nodemanager.remote-app-log-dir
Description: Specifies the URI of the directory where logs are aggregated. Set the value to either hdfs://namenode-host.company.com:8020/var/log/hadoop-yarn/apps, using the fully qualified domain name of your NameNode host, or hdfs:/var/log/hadoop-yarn/apps.

Here is an example configuration:

yarn-site.xml:

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>resourcemanager.company.com</value>
  </property>   
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
        $HADOOP_CONF_DIR,
        $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
        $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
        $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
        $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
    </value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///data/1/yarn/local,file:///data/2/yarn/local,file:///data/3/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:///data/1/yarn/logs,file:///data/2/yarn/logs,file:///data/3/yarn/logs</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value> 
  </property>  
  <property>
    <description>Where to aggregate logs</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>hdfs://<namenode-host.company.com>:8020/var/log/hadoop-yarn/apps</value>
  </property>

After specifying these directories in the yarn-site.xml file, you must create the directories and assign the correct file permissions to them on each host in your cluster.

In the following instructions, local path examples are used to represent Hadoop parameters. Change the path examples to match your configuration.

To configure local storage directories for use by YARN:

  1. Create the yarn.nodemanager.local-dirs local directories:
    $ sudo mkdir -p /data/1/yarn/local /data/2/yarn/local /data/3/yarn/local
  2. Create the yarn.nodemanager.log-dirs local directories:
    $ sudo mkdir -p /data/1/yarn/logs /data/2/yarn/logs /data/3/yarn/logs
  3. Configure the owner of the yarn.nodemanager.local-dirs directories to be the yarn user:
    $ sudo chown -R yarn:yarn /data/1/yarn/local /data/2/yarn/local /data/3/yarn/local
  4. Configure the owner of the yarn.nodemanager.log-dirs directories to be the yarn user:
    $ sudo chown -R yarn:yarn /data/1/yarn/logs /data/2/yarn/logs /data/3/yarn/logs

Here is a summary of the correct owner and permissions of the local directories:

Directory                      Owner        Permissions
yarn.nodemanager.local-dirs    yarn:yarn    drwxr-xr-x
yarn.nodemanager.log-dirs      yarn:yarn    drwxr-xr-x
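
To confirm that ownership and permissions match this table, you can list the directories; the paths below are the example mount points used earlier, so substitute your own:

$ ls -ld /data/1/yarn/local /data/2/yarn/local /data/3/yarn/local
$ ls -ld /data/1/yarn/logs /data/2/yarn/logs /data/3/yarn/logs

Each directory should be owned by yarn:yarn and show drwxr-xr-x.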

Step 3: Configure the JobHistory Server

If you have decided to run YARN on your cluster instead of MRv1, you should also run the MapReduce JobHistory Server. The following table shows the most important properties that you must configure in mapred-site.xml.

Property: mapreduce.jobhistory.address
Recommended value: historyserver.company.com:10020
Description: The address (host:port) of the JobHistory Server.

Property: mapreduce.jobhistory.webapp.address
Recommended value: historyserver.company.com:19888
Description: The address (host:port) of the JobHistory Server web application.
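
For example, the corresponding mapred-site.xml entries might look like the following; historyserver.company.com is the placeholder host name from the table, so substitute the host where you run the JobHistory Server:

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>historyserver.company.com:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>historyserver.company.com:19888</value>
</property>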

In addition, make sure proxying is enabled for the mapred user; configure the following properties in core-site.xml:

Property: hadoop.proxyuser.mapred.groups
Recommended value: *
Description: Allows the mapred user to move files belonging to users in these groups.

Property: hadoop.proxyuser.mapred.hosts
Recommended value: *
Description: Allows the mapred user to move files belonging to users on these hosts.
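
A matching core-site.xml fragment might look like this:

<property>
  <name>hadoop.proxyuser.mapred.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.mapred.hosts</name>
  <value>*</value>
</property>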

Step 4: Configure the Staging Directory

YARN requires a staging directory for temporary files created by running jobs. By default it creates /tmp/hadoop-yarn/staging with restrictive permissions that may prevent your users from running jobs. To forestall this, you should configure and create the staging directory yourself; in the example that follows we use /user:

  1. Configure yarn.app.mapreduce.am.staging-dir in mapred-site.xml:
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/user</value>
    </property>
  2. Once HDFS is up and running, you will create this directory and a history subdirectory under it (see Step 8).

Alternatively, you can do the following:

  1. Configure mapreduce.jobhistory.intermediate-done-dir and mapreduce.jobhistory.done-dir in mapred-site.xml.
  2. Create these two directories.
  3. Set permissions on mapreduce.jobhistory.intermediate-done-dir to 1777.
  4. Set permissions on mapreduce.jobhistory.done-dir to 750.

If you configure mapreduce.jobhistory.intermediate-done-dir and mapreduce.jobhistory.done-dir as above, you can skip Step 8.
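
As a sketch of this alternative, the mapred-site.xml entries and HDFS commands might look like the following; the /user/history/done_intermediate and /user/history/done paths are illustrative only, and the chown mirrors the mapred:hadoop ownership used in Step 8:

<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/user/history/done_intermediate</value>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/user/history/done</value>
</property>

$ sudo -u hdfs hadoop fs -mkdir -p /user/history/done_intermediate /user/history/done
$ sudo -u hdfs hadoop fs -chown -R mapred:hadoop /user/history
$ sudo -u hdfs hadoop fs -chmod 1777 /user/history/done_intermediate
$ sudo -u hdfs hadoop fs -chmod 750 /user/history/done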

Step 5: If Necessary, Deploy your Custom Configuration to your Entire Cluster

Deploy the configuration if you have not already done so.
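
For example, if you created a custom configuration directory named /etc/hadoop/conf.my_cluster (the name is only an example), one common approach is to copy it to each host and point the hadoop-conf alternative at it; on Debian/Ubuntu systems use update-alternatives instead of alternatives, and run the alternatives commands on every host that received the directory:

$ sudo scp -r /etc/hadoop/conf.my_cluster myhost.mycompany.com:/etc/hadoop/conf.my_cluster
$ sudo alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
$ sudo alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster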

Step 6: If Necessary, Start HDFS on Every Host in the Cluster

Start HDFS if you have not already done so.
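
With a package installation, the HDFS daemons are also managed through the service command; for example, run the command that matches the role installed on each host:

$ sudo service hadoop-hdfs-namenode start              # on the NameNode host
$ sudo service hadoop-hdfs-secondarynamenode start     # on the Secondary NameNode host, if you use one
$ sudo service hadoop-hdfs-datanode start              # on each DataNode host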

Step 7: If Necessary, Create the HDFS /tmp Directory

Create the /tmp Directory if you have not already done so.

  Important:

If you do not create /tmp with the permissions as shown below, you might have problems with CDH components later. Specifically, if you do not create /tmp yourself, another process might create it automatically with restrictive permissions that prevent your other applications from using it.
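
A typical sequence for creating /tmp with the expected permissions is:

$ sudo -u hdfs hadoop fs -mkdir /tmp
$ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp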

Step 8: Create the history Directory and Set Permissions

This is a subdirectory of the staging directory you configured in Step 4. In this example we're using /user/history. Create it and set permissions as follows:

sudo -u hdfs hadoop fs -mkdir -p /user/history
sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
sudo -u hdfs hadoop fs -chown mapred:hadoop /user/history

Step 9: Start YARN and the MapReduce JobHistory Server

To start YARN, start the ResourceManager and NodeManager services:

  Note:

Make sure you always start ResourceManager before starting NodeManager services.

On the ResourceManager system:

$ sudo service hadoop-yarn-resourcemanager start

On each NodeManager system (typically the same ones where DataNode service runs):

$ sudo service hadoop-yarn-nodemanager start

To start the MapReduce JobHistory Server:

On the MapReduce JobHistory Server system:

$ sudo service hadoop-mapreduce-historyserver start
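
To confirm that the daemons are running, you can check for their Java processes and list the NodeManagers that have registered with the ResourceManager; the ResourceManager web UI normally listens on port 8088, and the JobHistory Server web UI on port 19888 as configured in Step 3:

$ sudo jps            # look for ResourceManager, NodeManager, and JobHistoryServer processes
$ yarn node -list     # on a host with the client configuration; lists active NodeManagers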

Step 10: Create a Home Directory for each MapReduce User

Create a home directory in HDFS for each MapReduce user. For example:

$ sudo -u hdfs hadoop fs -mkdir  /user/<user>
$ sudo -u hdfs hadoop fs -chown <user> /user/<user>

where <user> is the Linux username of each user.

Alternatively, you can log in as each Linux user (or write a script to do so) and create the home directory as follows:

sudo -u hdfs hadoop fs -mkdir /user/$USER
sudo -u hdfs hadoop fs -chown $USER /user/$USER
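
If you have many users, a small script along these lines loops over a list of Linux usernames (the names here are illustrative):

for u in alice bob carol; do
  sudo -u hdfs hadoop fs -mkdir -p /user/$u
  sudo -u hdfs hadoop fs -chown $u /user/$u
done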

Step 11: Configure the Hadoop Daemons to Run at Startup
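
A minimal sketch for RHEL-compatible systems uses chkconfig with the service names shown above; on Debian/Ubuntu systems, use update-rc.d or your init system's equivalent instead:

$ sudo chkconfig hadoop-yarn-resourcemanager on        # on the ResourceManager host
$ sudo chkconfig hadoop-yarn-nodemanager on            # on each NodeManager host
$ sudo chkconfig hadoop-mapreduce-historyserver on     # on the JobHistory Server host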
