
Error starting namenode from Ambari.


Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 317, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 216, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 82, in start
    namenode(action="start", rolling_restart=rolling_restart, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 86, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
    environment=hadoop_env_exports
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 260, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-pp-hdp-m.out
stdout: /var/lib/ambari-agent/data/output-56.txt

2015-12-03 03:08:31,485 - Group['hadoop'] {}
2015-12-03 03:08:31,488 - Group['users'] {}
2015-12-03 03:08:31,489 - User['zookeeper'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2015-12-03 03:08:31,490 - User['ams'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2015-12-03 03:08:31,491 - User['ambari-qa'] {'gid': 'hadoop', 'groups': [u'users']}
2015-12-03 03:08:31,492 - User['hdfs'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2015-12-03 03:08:31,493 - User['yarn'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2015-12-03 03:08:31,494 - User['mapred'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2015-12-03 03:08:31,495 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-03 03:08:31,497 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2015-12-03 03:08:31,566 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2015-12-03 03:08:31,568 - Group['hdfs'] {'ignore_failures': False}
2015-12-03 03:08:31,569 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']}
2015-12-03 03:08:31,570 - Directory['/etc/hadoop'] {'mode': 0755}
2015-12-03 03:08:31,590 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2015-12-03 03:08:31,591 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2015-12-03 03:08:31,621 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2015-12-03 03:08:31,659 - Skipping Execute[('setenforce', '0')] due to not_if
2015-12-03 03:08:31,660 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-12-03 03:08:31,667 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2015-12-03 03:08:31,668 - Changing owner for /var/run/hadoop from 1004 to root
2015-12-03 03:08:31,668 - Changing group for /var/run/hadoop from 1001 to root
2015-12-03 03:08:31,668 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2015-12-03 03:08:31,676 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2015-12-03 03:08:31,679 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2015-12-03 03:08:31,679 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-12-03 03:08:31,692 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-12-03 03:08:31,693 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-12-03 03:08:31,693 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2015-12-03 03:08:31,700 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2015-12-03 03:08:31,717 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2015-12-03 03:08:32,020 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-12-03 03:08:32,031 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2015-12-03 03:08:32,033 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-12-03 03:08:32,046 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2015-12-03 03:08:32,047 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-03 03:08:32,062 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-12-03 03:08:32,072 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2015-12-03 03:08:32,073 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-03 03:08:32,080 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-12-03 03:08:32,081 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2015-12-03 03:08:32,091 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2015-12-03 03:08:32,091 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-03 03:08:32,098 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-12-03 03:08:32,108 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2015-12-03 03:08:32,109 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-03 03:08:32,117 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-12-03 03:08:32,127 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2015-12-03 03:08:32,127 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-12-03 03:08:32,175 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2015-12-03 03:08:32,186 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2015-12-03 03:08:32,187 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2015-12-03 03:08:32,211 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2015-12-03 03:08:32,212 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'recursive': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2015-12-03 03:08:32,213 - Ranger admin not installed
/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2015-12-03 03:08:32,213 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'recursive': True}
2015-12-03 03:08:32,215 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2015-12-03 03:08:32,216 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2015-12-03 03:08:32,217 - Changing owner for /var/run/hadoop from 0 to hdfs
2015-12-03 03:08:32,217 - Changing group for /var/run/hadoop from 0 to hadoop
2015-12-03 03:08:32,217 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-12-03 03:08:32,217 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-12-03 03:08:32,218 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2015-12-03 03:08:32,258 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2015-12-03 03:08:32,260 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
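The Execute step at the end is where it fails: hadoop-daemon.sh exits with status 1, and Ambari only reports that exit code. The actual reason is normally in the NameNode logs under /var/log/hadoop/hdfs, i.e. the .out file named in the error message and the matching .log file. A minimal way to dig further on the NameNode host (the .log file name and the manual re-run below are assumptions based on the standard hadoop-daemon.sh naming and the Execute[...] line above, not something shown in this output):

# Check the daemon output file named in the error message
tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-pp-hdp-m.out

# The detailed exception usually lands in the matching .log file
# (file name assumed from the standard hadoop-daemon.sh convention)
tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-pp-hdp-m.log

# Optionally re-run the exact command Ambari issued (copied from the Execute[...] line above)
# to watch the failure interactively as the hdfs user
ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'

Whatever exception appears at the bottom of those logs (for example a port already in use, a permissions problem on /hadoop/hdfs/namenode, or a corrupt fsimage/edits) is the real thing to fix; the traceback above is just Ambari wrapping the non-zero exit code.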

