Configuring TLS/SSL for HttpFS
- This configuration can be completed using either Cloudera Manager or the command line.
- This information applies specifically to CDH 5.15.0. If you use an earlier version of CDH, see the documentation for that version located at Cloudera Documentation.
Using Cloudera Manager
Minimum Required Role: Configurator (also provided by Cluster Administrator, Full Administrator)
- Go to the HDFS service.
- Click the Configuration tab.
- Select Scope > HttpFS.
- Select Category > Security.
- Edit the following TLS/SSL properties according to your cluster configuration:
Table 1. HttpFS TLS/SSL Properties
- Use TLS/SSL: Use TLS/SSL for HttpFS.
- HttpFS Keystore File: Location of the keystore file used by the HttpFS role for TLS/SSL. Default: /var/run/hadoop-httpfs/.keystore. Note that the default location for the keystore file is on non-persistent disk. (A keytool sketch for creating such a keystore follows these steps.)
- HttpFS Keystore Password: Password of the keystore used by the HttpFS role for TLS/SSL. If the keystore password has a percent sign, it must be escaped. For example, for a password that is pass%word, use pass%%word.
- HttpFS TLS/SSL Certificate Trust Store File: The location on disk of the truststore, in .jks format, used to confirm the authenticity of TLS/SSL servers that HttpFS might connect to. This is used when HttpFS is the client in a TLS/SSL connection.
- HttpFS TLS/SSL Certificate Trust Store Password: The password for the HttpFS TLS/SSL Certificate Trust Store File. This password is not required to access the truststore, so this field can be left blank. If the truststore password has a percent sign, it must be escaped. For example, for a password that is pass%word, use pass%%word.
- Click Save Changes.
- Restart the HDFS service.
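The keystore named in the HttpFS Keystore File property must already exist on the HttpFS host. A minimal keytool sketch for generating one with a self-signed certificate is shown below; the alias, distinguished name, and passwords are placeholders rather than values required by Cloudera, and because /var/run is typically non-persistent you would normally point the property at a path that survives reboots.
# Generate a Java keystore containing a self-signed certificate for the HttpFS host.
# The alias, DN, and passwords are placeholders; substitute your own values.
keytool -genkeypair -alias httpfs -keyalg RSA -keysize 2048 \
    -dname "CN=<httpfs_server_hostname>,OU=IT,O=Example" \
    -keystore /var/run/hadoop-httpfs/.keystore \
    -storepass password -keypass password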
Connect to the HttpFS Web UI using TLS/SSL (HTTPS)
Use https://<httpfs_server_hostname>:14000/webhdfs/v1/, though most browsers should automatically redirect you if you use http://<httpfs_server_hostname>:14000/webhdfs/v1/.
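To check the HTTPS endpoint from a shell rather than a browser, a WebHDFS call with curl works. This is only a sketch: it assumes simple (pseudo) authentication with an illustrative user name, and -k skips certificate verification, which is reasonable only for a quick test against a self-signed certificate.
# Quick test of the TLS/SSL endpoint; -k disables certificate verification.
curl -k "https://<httpfs_server_hostname>:14000/webhdfs/v1/?op=GETHOMEDIRECTORY&user.name=hdfs"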
Using the Command Line
- Stop HttpFS by running
sudo /sbin/service hadoop-httpfs stop
- To enable TLS/SSL, use the alternatives command to change which configuration the HttpFS server uses.
Note: The alternatives command is only available on RHEL systems. For SLES, Ubuntu, and Debian systems, the command is update-alternatives.
For RHEL systems, to use TLS/SSL:
alternatives --set hadoop-httpfs-tomcat-conf /etc/hadoop-httpfs/tomcat-conf.https
Important: The HTTPFS_SSL_KEYSTORE_PASS variable must be the same as the password used when creating the keystore file. If you used a password other than password, you must change the value of the HTTPFS_SSL_KEYSTORE_PASS variable in /etc/hadoop-httpfs/conf/httpfs-env.sh accordingly (a sketch of this file follows these steps).
- Start HttpFS by running
sudo /sbin/service hadoop-httpfs start
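For reference, the keystore settings mentioned in the Important note above live in /etc/hadoop-httpfs/conf/httpfs-env.sh. A minimal excerpt is sketched below, assuming the HTTPFS_SSL_KEYSTORE_FILE and HTTPFS_SSL_KEYSTORE_PASS variable names used by the bundled script and a placeholder path and password.
# /etc/hadoop-httpfs/conf/httpfs-env.sh (excerpt)
# The password must match the one used when the keystore was created.
export HTTPFS_SSL_KEYSTORE_FILE=/var/run/hadoop-httpfs/.keystore
export HTTPFS_SSL_KEYSTORE_PASS=password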
Connect to the HttpFS Web UI using TLS/SSL (HTTPS)
Use https://<httpfs_server_hostname>:14000/webhdfs/v1/, though most browsers should automatically redirect you if you use http://<httpfs_server_hostname>:14000/webhdfs/v1/.
If you are using a self-signed certificate, your browser warns you that it cannot verify the certificate, or displays a similar message. You will probably have to add an exception for the certificate.
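For command-line clients, an alternative to adding a browser exception is to trust the HttpFS certificate explicitly instead of disabling verification. A sketch, reusing the placeholder alias, keystore path, and password from the keytool example earlier:
# Export the HttpFS certificate in PEM format, then pass it to curl as a trusted CA.
keytool -exportcert -alias httpfs -keystore /var/run/hadoop-httpfs/.keystore \
    -storepass password -rfc -file httpfs-cert.pem
curl --cacert httpfs-cert.pem "https://<httpfs_server_hostname>:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"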