Any help would be appreciated.
While working through the Sandbox tutorial, I get "500 Connection refused" when I try to open the HDFS Files view through Ambari. Looking at the services, I noticed the NameNode is stopped, and when I try to start it, the operation fails with the following output:
stderr: /var/lib/ambari-agent/data/errors-219.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 317, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 82, in start
namenode(action="start", rolling_restart=rolling_restart, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 86, in namenode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
environment=hadoop_env_exports
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 260, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.out
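The Fail message only says the start command returned 1 and where the daemon was logging, not why it died, so I assume the actual error is in the NameNode log files it mentions. The .out path below is copied verbatim from the error above; the matching .log name is my guess based on the usual hadoop-daemon.sh naming:

# .out path copied from the error above; the .log name is an assumption
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.out
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.log

Is that the right place to look for the underlying cause? For completeness, the stdout from the same start attempt follows: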
stdout: /var/lib/ambari-agent/data/output-219.txt
2015-12-04 22:24:15,985 - Group['hadoop'] {}
2015-12-04 22:24:15,986 - Group['users'] {}
2015-12-04 22:24:15,986 - Group['zeppelin'] {}
2015-12-04 22:24:15,986 - Group['knox'] {}
2015-12-04 22:24:15,986 - Group['ranger'] {}
2015-12-04 22:24:15,987 - Group['spark'] {}
2015-12-04 22:24:15,987 - User['oozie'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-04 22:24:15,987 - User['hive'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,987 - User['zeppelin'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,988 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-04 22:24:15,989 - User['flume'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,989 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,990 - User['knox'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,990 - User['ranger'] {'gid': 'hadoop', 'groups': ['ranger']}
2015-12-04 22:24:15,990 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,991 - User['spark'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,991 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,992 - User['hbase'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,992 - User['tez'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-04 22:24:15,993 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,993 - User['kafka'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,993 - User['falcon'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-04 22:24:15,994 - User['sqoop'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,994 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,995 - User['hcat'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,995 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,996 - User['atlas'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-04 22:24:15,997 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-04 22:24:15,999 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2015-12-04 22:24:16,004 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2015-12-04 22:24:16,005 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2015-12-04 22:24:16,007 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-04 22:24:16,008 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2015-12-04 22:24:16,012 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2015-12-04 22:24:16,012 - Group['hdfs'] {'ignore_failures': False}
2015-12-04 22:24:16,013 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2015-12-04 22:24:16,013 - Directory['/etc/hadoop'] {'mode': 0755}
2015-12-04 22:24:16,025 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(…), 'owner': 'hdfs', 'group': 'hadoop'}
2015-12-04 22:24:16,025 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2015-12-04 22:24:16,039 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2015-12-04 22:24:16,045 - Skipping Execute[('setenforce', '0')] due to not_if
2015-12-04 22:24:16,046 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-12-04 22:24:16,048 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2015-12-04 22:24:16,048 - Changing owner for /var/run/hadoop from 507 to root
2015-12-04 22:24:16,048 - Changing group for /var/run/hadoop from 503 to root
2015-12-04 22:24:16,048 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2015-12-04 22:24:16,054 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2015-12-04 22:24:16,055 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2015-12-04 22:24:16,056 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': …, 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-12-04 22:24:16,062 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-12-04 22:24:16,063 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-12-04 22:24:16,064 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2015-12-04 22:24:16,068 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2015-12-04 22:24:16,072 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2015-12-04 22:24:16,221 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-12-04 22:24:16,225 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2015-12-04 22:24:16,226 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': …}
2015-12-04 22:24:16,234 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2015-12-04 22:24:16,235 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,242 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': …}
2015-12-04 22:24:16,248 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2015-12-04 22:24:16,249 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,254 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-12-04 22:24:16,260 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': …}
2015-12-04 22:24:16,267 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2015-12-04 22:24:16,268 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,273 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': …}
2015-12-04 22:24:16,279 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2015-12-04 22:24:16,280 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,286 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {'final': {'dfs.support.append': 'true', 'dfs.datanode.data.dir': 'true', 'dfs.namenode.http-address': 'true', 'dfs.namenode.name.dir': 'true', 'dfs.webhdfs.enabled': 'true', 'dfs.datanode.failed.volumes.tolerated': 'true'}}, 'configurations': …}
2015-12-04 22:24:16,292 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2015-12-04 22:24:16,292 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,328 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': …}
2015-12-04 22:24:16,335 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2015-12-04 22:24:16,335 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,356 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2015-12-04 22:24:16,357 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'recursive': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2015-12-04 22:24:16,358 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources//mysql-jdbc-driver.jar'), 'mode': 0644}
2015-12-04 22:24:16,358 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources//mysql-jdbc-driver.jar, because /var/lib/ambari-agent/tmp/mysql-jdbc-driver.jar already exists
2015-12-04 22:24:16,364 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/hdp/current/hadoop-client/lib/mysql-connector-java.jar')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2015-12-04 22:24:16,378 - File['/usr/hdp/current/hadoop-client/lib/mysql-connector-java.jar'] {'mode': 0644}
2015-12-04 22:24:16,444 - amb_ranger_admin user already exists, using existing user from configurations.
2015-12-04 22:24:16,465 - Hdfs Repository exist
2015-12-04 22:24:16,467 - File['/usr/hdp/current/hadoop-client/conf/ranger-security.xml'] {'content': InlineTemplate(…), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-12-04 22:24:16,469 - Writing File['/usr/hdp/current/hadoop-client/conf/ranger-security.xml'] because contents don't match
2015-12-04 22:24:16,469 - Directory['/etc/ranger/Sandbox_hadoop'] {'owner': 'hdfs', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0775}
2015-12-04 22:24:16,469 - Directory['/etc/ranger/Sandbox_hadoop/policycache'] {'owner': 'hdfs', 'recursive': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2015-12-04 22:24:16,470 - File['/etc/ranger/Sandbox_hadoop/policycache/hdfs_Sandbox_hadoop.json'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-12-04 22:24:16,470 - XmlConfig['ranger-hdfs-audit.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': …}
2015-12-04 22:24:16,477 - Generating config: /usr/hdp/current/hadoop-client/conf/ranger-hdfs-audit.xml
2015-12-04 22:24:16,477 - File['/usr/hdp/current/hadoop-client/conf/ranger-hdfs-audit.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,489 - XmlConfig['ranger-hdfs-security.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': …}
2015-12-04 22:24:16,495 - Generating config: /usr/hdp/current/hadoop-client/conf/ranger-hdfs-security.xml
2015-12-04 22:24:16,495 - File['/usr/hdp/current/hadoop-client/conf/ranger-hdfs-security.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,502 - XmlConfig['ranger-policymgr-ssl.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': …}
2015-12-04 22:24:16,508 - Generating config: /usr/hdp/current/hadoop-client/conf/ranger-policymgr-ssl.xml
2015-12-04 22:24:16,508 - File['/usr/hdp/current/hadoop-client/conf/ranger-policymgr-ssl.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2015-12-04 22:24:16,514 - Execute[('/usr/hdp/2.3.2.0-2950/ranger-hdfs-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.3.2.0-2950/ranger-hdfs-plugin/install/lib/*', '-f', '/etc/ranger/Sandbox_hadoop/cred.jceks', '-k', 'auditDBCred', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64'}, 'sudo': True}
Using Java:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/bin/java
Alias auditDBCred created successfully!
2015-12-04 22:24:17,262 - Execute[('/usr/hdp/2.3.2.0-2950/ranger-hdfs-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.3.2.0-2950/ranger-hdfs-plugin/install/lib/*', '-f', '/etc/ranger/Sandbox_hadoop/cred.jceks', '-k', 'sslKeyStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64'}, 'sudo': True}
Using Java:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/bin/java
Alias sslKeyStore created successfully!
2015-12-04 22:24:18,024 - Execute[('/usr/hdp/2.3.2.0-2950/ranger-hdfs-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.3.2.0-2950/ranger-hdfs-plugin/install/lib/*', '-f', '/etc/ranger/Sandbox_hadoop/cred.jceks', '-k', 'sslTrustStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64'}, 'sudo': True}
Using Java:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/bin/java
Alias sslTrustStore created successfully!
2015-12-04 22:24:18,712 - File['/etc/ranger/Sandbox_hadoop/cred.jceks'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0640}
/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2015-12-04 22:24:18,713 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'recursive': True}
2015-12-04 22:24:18,715 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2015-12-04 22:24:18,716 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2015-12-04 22:24:18,716 - Changing owner for /var/run/hadoop from 0 to hdfs
2015-12-04 22:24:18,717 - Changing group for /var/run/hadoop from 0 to hadoop
2015-12-04 22:24:18,717 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-12-04 22:24:18,717 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-12-04 22:24:18,718 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2015-12-04 22:24:18,728 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2015-12-04 22:24:18,728 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
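If it helps, my next idea was to rerun the same start command Ambari issues (copied from the final Execute[…] line above) directly on the sandbox as root, so any failure prints to my terminal instead of being hidden behind Ambari. This is just a sketch of that, assuming a plain root su behaves close enough to the ambari-sudo.sh wrapper, with HADOOP_LIBEXEC_DIR taken from the 'environment' in that log line:

# Start command copied from the Execute[...] log line above, minus the
# ambari-sudo.sh wrapper (assumption: plain root su is equivalent here).
su hdfs -l -s /bin/bash -c 'export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec ; ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'

Has anyone hit this on the Sandbox, and what should I check next?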