Channel: Hortonworks » All Replies

Reply To: ambari agent install fails with ssh error

Oops… the listing above shows an sshKey file. I put that there manually during my testing; the prior run is below.

root@ambari:/var/run/ambari-server/bootstrap/5# ls
bootstrap.err bootstrap.out horton1.laptop.local.done horton1.laptop.local.log
root@ambari:/var/run/ambari-server/bootstrap/5# more bootstrap.err
INFO:root:BootStrapping hosts ['horton1.laptop.local'] using /usr/lib/python2.6/site-packages/ambari_server cluster primary OS: ubuntu14 with user 'root' sshKey File /var/run/ambari-server/bootstrap/5/sshKey password File null using tmp dir /var/run/ambari-server/bootstrap/5 ambari: ambari.laptop.local; server_port: 8080; ambari version: 1.7.0
INFO:root:Executing parallel bootstrap
ERROR:root:ERROR: Bootstrap of host horton1.laptop.local fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,password).

STDOUT:
Permission denied (publickey,password).

INFO:root:Finished parallel bootstrap
root@ambari:/var/run/ambari-server/bootstrap/5#
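A quick way to narrow this down is to try the exact key Ambari is using by hand (host and paths below are taken from the log above). If this still prompts for a password or is refused, the matching public key is missing from root's authorized_keys on the target:

chmod 600 /var/run/ambari-server/bootstrap/5/sshKey
ssh -i /var/run/ambari-server/bootstrap/5/sshKey root@horton1.laptop.local 'echo key accepted'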


Falcon fails to identify S3N storage

Great tutorial: Incremental Backup of Data from HDP to Azure using Falcon for Disaster Recovery and Burst Capacity.

I tried to replace the Azure blob with an S3-compatible storage path, but cleansedEmailFeed is unable to upload the files:

2015-02-14 15:40:59,790 WARN JavaActionExecutor:546 - SERVER[sandbox.hortonworks.com] USER[ambari-qa] GROUP[-] TOKEN[] APP[FALCON_FEED_RETENTION_cleansedEmailFeed] JOB[0000014-150214112053995-oozie-oozi-W] ACTION[0000014-150214112053995-oozie-oozi-W@eviction] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, org.apache.falcon.FalconException: Exception creating FileSystem:No FileSystem for scheme: s3n
2015-02-14 15:40:59,790 WARN JavaActionExecutor:546 - SERVER[sandbox.hortonworks.com] USER[ambari-qa] GROUP[-] TOKEN[] APP[FALCON_FEED_RETENTION_cleansedEmailFeed] JOB[0000014-150214112053995-oozie-oozi-W] ACTION[0000014-150214112053995-oozie-oozi-W@eviction] Launcher exception: org.apache.falcon.FalconException: Exception creating FileSystem:No FileSystem for scheme: s3n

Any help would be appreciated.
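A possible lead, offered as an untested sketch: on Hadoop 2.6 (the HDP 2.2 line) the s3n classes live in the hadoop-aws jar, which is not on the Oozie launcher classpath by default, so Falcon's eviction action cannot instantiate the filesystem. The paths, sharelib layout, and property names below are assumptions to verify against your build:

# locate the jars that provide the s3n filesystem
find /usr/hdp -name 'hadoop-aws*.jar' -o -name 'jets3t*.jar'
# copy them into the Oozie sharelib (substitute the real timestamped lib_ directory)
sudo -u oozie hdfs dfs -put <path-to-hadoop-aws.jar> /user/oozie/share/lib/lib_<timestamp>/oozie/
oozie admin -oozie http://sandbox.hortonworks.com:11000/oozie -sharelibupdate
# then register the filesystem and credentials in core-site.xml via Ambari:
#   fs.s3n.impl               = org.apache.hadoop.fs.s3native.NativeS3FileSystem
#   fs.s3n.awsAccessKeyId     = <your access key>
#   fs.s3n.awsSecretAccessKey = <your secret key>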

Reply To: Failed to create cluster with Ambari's REST API

Reply To: No repository for ambari 1.7

It looks like you are using Red Hat Satellite. The .repo file and its base URLs are correct, so maybe the .repo is not being used at all and you need the repos registered in RHN:

“This system is receiving updates from RHN Classic or Red Hat Satellite.”
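To see which source the host actually resolves packages from, something along these lines may help (which Satellite-side command applies depends on whether the box uses RHN Classic or Subscription Manager):

yum repolist all
# RHN Classic / Satellite 5:
rhn-channel --list
# Subscription Manager:
subscription-manager repos --list-enabled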

Reply To: Ambari-Server sync-ldap not working

Hi, is your Ambari Server set up for HTTPS? And/or have you changed the Ambari Server port?
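For reference, a sketch of what to check (paths and property names are the Ambari 1.7 defaults; adjust if yours differ):

grep -E 'client.api.port|client.api.ssl.port|api.ssl' /etc/ambari-server/conf/ambari.properties
ambari-server sync-ldap --existing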

If you can't beat 'em, out-app 'em?

Is Microsoft looking to Android vendors like Cyanogen and Samsung to give the Redmond

Will HDP run on Ubuntu 14.04?

I am having a heck of a time getting things installed on Ubuntu 12.04. There are missing packages, missing dependencies, etc. There are so many issues that I've spent hours just getting to page 7 of the Ambari install guide. I was going to install on 14.04 but thought better of it and downgraded to 12.04. Now that seems like a bad idea.

Is anybody successfully running on the latest version of Ubuntu?

Reply To: Install/Startup Guide for VirtualBox

Problem installing HDP2.2

Hi team,
I am not able to install HDP-2.2 using local repositories on CentOS 6. It throws the following error, related to the hdp-select package. Please advise: what am I doing wrong?

stderr: /var/lib/ambari-agent/data/errors-28.txt

2015-02-14 23:32:23,316 - Error while executing command 'install':
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 34, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py", line 63, in install_packages
Package(packages)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 40, in action_install
self.install_package(package_name)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 36, in install_package
shell.checked_call(cmd)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 36, in checked_call
return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in _call
raise Fail(err_msg)
Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. Error: Nothing to do

stdout: /var/lib/ambari-agent/data/output-28.txt

2015-02-14 23:32:22,085 - Group['hadoop'] {'ignore_failures': False}
2015-02-14 23:32:22,086 - Modifying group hadoop
2015-02-14 23:32:22,115 - Group['nobody'] {'ignore_failures': False}
2015-02-14 23:32:22,116 - Modifying group nobody
2015-02-14 23:32:22,142 - Group['users'] {'ignore_failures': False}
2015-02-14 23:32:22,142 - Modifying group users
2015-02-14 23:32:22,163 - Group['nagios'] {'ignore_failures': False}
2015-02-14 23:32:22,164 - Modifying group nagios
2015-02-14 23:32:22,187 - Group['knox'] {'ignore_failures': False}
2015-02-14 23:32:22,188 - Modifying group knox
2015-02-14 23:32:22,210 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'nobody']}
2015-02-14 23:32:22,211 - Modifying user nobody
2015-02-14 23:32:22,221 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,222 - Modifying user hive
2015-02-14 23:32:22,233 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-02-14 23:32:22,233 - Modifying user oozie
2015-02-14 23:32:22,245 - User['nagios'] {'gid': 'nagios', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,245 - Modifying user nagios
2015-02-14 23:32:22,257 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-02-14 23:32:22,258 - Modifying user ambari-qa
2015-02-14 23:32:22,268 - User['flume'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,269 - Modifying user flume
2015-02-14 23:32:22,280 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,280 - Modifying user hdfs
2015-02-14 23:32:22,293 - User['knox'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,294 - Modifying user knox
2015-02-14 23:32:22,305 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,305 - Modifying user storm
2015-02-14 23:32:22,316 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,316 - Modifying user mapred
2015-02-14 23:32:22,327 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,328 - Modifying user hbase
2015-02-14 23:32:22,339 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-02-14 23:32:22,340 - Modifying user tez
2015-02-14 23:32:22,351 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,352 - Modifying user zookeeper
2015-02-14 23:32:22,364 - User['kafka'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,365 - Modifying user kafka
2015-02-14 23:32:22,377 - User['falcon'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,378 - Modifying user falcon
2015-02-14 23:32:22,394 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,395 - Modifying user sqoop
2015-02-14 23:32:22,412 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,412 - Modifying user yarn
2015-02-14 23:32:22,430 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-14 23:32:22,430 - Modifying user hcat
2015-02-14 23:32:22,445 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-02-14 23:32:22,447 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
2015-02-14 23:32:22,458 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] due to not_if
2015-02-14 23:32:22,459 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-02-14 23:32:22,460 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase 2>/dev/null'] {'not_if': 'test $(id -u hbase) -gt 1000'}
2015-02-14 23:32:22,469 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase 2>/dev/null'] due to not_if
2015-02-14 23:32:22,470 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-02-14 23:32:22,470 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2015-02-14 23:32:22,479 - Skipping Link['/etc/hadoop/conf'] due to not_if
2015-02-14 23:32:22,491 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs'}
2015-02-14 23:32:22,505 - Repository['HDP-2.2'] {'base_url': 'http://mn/install/essentials/HDP/centos6/', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': 'repo_suse_rhel.j2', 'repo_file_name': 'HDP', 'mirror_list': None}
2015-02-14 23:32:22,513 - File['/etc/yum.repos.d/HDP.repo'] {'content': Template('repo_suse_rhel.j2')}
2015-02-14 23:32:22,514 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://mn/install/essentials/HDP-UTILS-1.1.0.17/repos/centos6', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': 'repo_suse_rhel.j2', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2015-02-14 23:32:22,516 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': Template('repo_suse_rhel.j2')}
2015-02-14 23:32:22,517 - Package['unzip'] {}
2015-02-14 23:32:22,643 - Skipping installing existent package unzip
2015-02-14 23:32:22,643 - Package['curl'] {}
2015-02-14 23:32:22,819 - Skipping installing existent package curl
2015-02-14 23:32:22,820 - Package['hdp-select'] {}
2015-02-14 23:32:22,965 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2015-02-14 23:32:23,316 - Error while executing command 'install':
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 34, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py", line 63, in install_packages
Package(packages)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 40, in action_install
self.install_package(package_name)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 36, in install_package
shell.checked_call(cmd)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 36, in checked_call
return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in _call
raise Fail(err_msg)
Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. Error: Nothing to do

Reply To: Install fails on brand new 2012 R2 server

Reply To: Problem installing HDP2.2

During the Install Wizard, on the step where you Select Stack, did you expand the Advanced Repository Options and enter the Base URLs for your local repositories?

Is there an HDP.repo file in /etc/yum.repos.d? Can you paste the contents?
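A few commands that would gather that information in one go ("Error: Nothing to do" from yum usually means no enabled repository provides the package):

cat /etc/yum.repos.d/HDP.repo
yum clean all
yum repolist enabled
yum list available hdp-select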

Reply To: historyserver fails to jara

Hi.

I'm having the same issue. I've checked the whole filesystem, and I can find the library:

find / -iname "leveldbjn*jar"
/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/hadoop-yarn/lib/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/hadoop/client/leveldbjni-all.jar
/usr/hdp/2.2.0.0-2041/hadoop/client/leveldbjni-all-1.8.jar

But the TimeLine server still fails with the same error:
15/02/16 14:07:50 INFO applicationhistoryservice.ApplicationHistoryServer: registered UNIX signal handlers for [TERM, HUP, INT]
15/02/16 14:07:51 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
15/02/16 14:07:51 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
15/02/16 14:07:51 INFO impl.MetricsSystemImpl: ApplicationHistoryServer metrics system started
15/02/16 14:07:51 FATAL applicationhistoryservice.ApplicationHistoryServer: Error starting ApplicationHistoryServer
java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, no leveldbjni in java.library.path, /tmp/libleveldbjni-64-1-6808737154577847504.8: /tmp/libleveldbjni-64-1-6808737154577847504.8: failed to map segment from shared object: Operation not permitted]
at org.fusesource.hawtjni.runtime.Library.doLoad(Library.java:182)
at org.fusesource.hawtjni.runtime.Library.load(Library.java:140)
at org.fusesource.leveldbjni.JniDBFactory.<clinit>(JniDBFactory.java:48)
at org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore.serviceInit(LeveldbTimelineStore.java:202)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:99)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:157)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:167)
15/02/16 14:07:51 INFO util.ExitUtil: Exiting with status -1
15/02/16 14:07:51 INFO impl.MetricsSystemImpl: Stopping ApplicationHistoryServer metrics system…
15/02/16 14:07:51 INFO impl.MetricsSystemImpl: ApplicationHistoryServer metrics system stopped.
15/02/16 14:07:51 INFO impl.MetricsSystemImpl: ApplicationHistoryServer metrics system shutdown complete.
15/02/16 14:07:51 INFO applicationhistoryservice.ApplicationHistoryServer: SHUTDOWN_MSG:
Store

Any help would be appreciated.
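For anyone else landing here: the tail of that error, "failed to map segment from shared object: Operation not permitted", typically means /tmp is mounted noexec, so the JVM cannot execute the native library that leveldbjni extracts there. A sketch of how to confirm and work around it (the YARN_TIMELINESERVER_OPTS variable is an assumption based on the stock yarn launcher script):

mount | grep ' /tmp '
# either remount /tmp with exec ...
mount -o remount,exec /tmp
# ... or give the timeline server an executable tmpdir, e.g. in yarn-env.sh:
mkdir -p /var/tmp/yarn-ats && chown yarn:hadoop /var/tmp/yarn-ats
export YARN_TIMELINESERVER_OPTS="$YARN_TIMELINESERVER_OPTS -Djava.io.tmpdir=/var/tmp/yarn-ats"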

How to install Kafka?

Reply To: HA Namenodes and DNS?

Hi Alex,

The settings which reference the service name of the NameNode are in hdfs-site.xml. I would expect that your clients outside the cluster need this file to know what to connect to. If you're connecting over WebHDFS, then you would need to use hadoop-httpfs, which acts as a proxy in front of the NameNodes and routes traffic accordingly.

Let me know if you have any questions,

Dave
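To illustrate Dave's point, these are the standard HDFS HA keys a client-side hdfs-site.xml has to resolve; hdfs getconf can confirm what a client actually sees (<nameservice> is a placeholder for your own service name):

hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.<nameservice>
hdfs getconf -confKey dfs.client.failover.proxy.provider.<nameservice>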


Help me to create a HiveQL table

This is the line in my log file that I need to parse to create a table in HiveQL:
28-Jan-2015 23:50:14.784 client 172.31.1.145#39324: UDP: query: imap.XXXX.com IN AAAA response: NOERROR +A imap.XXXX.com. 10 IN CNAME casarray1.XXXX.com.;
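Not a complete answer, but a sketch that may get you started: Hive's RegexSerDe can split that line into columns. The regex below matches only the sample line shown (all fields as strings, which RegexSerDe requires), and the table location is a placeholder:

hive <<'EOF'
CREATE EXTERNAL TABLE dns_query_log (
  log_date    STRING,
  log_time    STRING,
  client_ip   STRING,
  client_port STRING,
  proto       STRING,
  query_name  STRING,
  query_type  STRING,
  rcode       STRING,
  answer      STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "^(\\S+) (\\S+) client (\\S+)#(\\d+): (\\w+): query: (\\S+) IN (\\S+) response: (\\S+) (.*)$"
)
STORED AS TEXTFILE
LOCATION '/tmp/dns_logs';
EOF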

Reply To: Will HDP run on Ubuntu 14.04?

Ambari, yes… clients, not so far. Follow my thread as well: I tried a 12.04 client but got a version mismatch with the Ambari server, which is on 14.04. One more day of hitting my head against the wall.

Reply To: Install fails on brand new 2012 R2 server

We have to use our own JDK/JRE and cannot install the Oracle JDK/JRE.

Has Hortonworks tested its installation process on Windows using the IBM JRE and concluded that it is or is not supported?

Reply To: Filebrowse Hue

How can I do that, please?
I tried this, but I am still unable to upload:

[root@m1 ~]# sudo hadoop fs -chmod -R 777 /tmp
sudo: /etc/sudo.conf is world writable
sudo: /usr/libexec/sudoers.so must be only be writable by owner
sudo: fatal error, unable to load plugins
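A side note that may unblock you: those messages mean sudo itself is broken by the file modes, and since the shell is already root, sudo is not needed for the HDFS command anyway. A cautious sketch (the modes are the usual defaults; compare against a healthy machine first):

chmod 644 /etc/sudo.conf
chmod 644 /usr/libexec/sudoers.so
# run the chmod as the HDFS superuser rather than the OS root user:
su - hdfs -c 'hadoop fs -chmod -R 777 /tmp'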

Thank you Dave!

Ambari agent installation failing

Hi,

I'm trying to register 9 hosts using Ambari for HDP 2.2. Out of the 9, 8 got registered, but for one I'm getting the error below:

==========================
Creating target directory…
==========================

Command start time 2015-02-16 17:10:11

Warning: Permanently added 'l1039lab.sss.se.scania.com' (RSA) to the list of known hosts.
Connection to l1039lab.sss.se.scania.com closed.
SSH command execution finished
host=l1039lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-16 17:10:11

==========================
Copying common functions script…
==========================

Command start time 2015-02-16 17:10:11

scp /usr/lib/python2.6/site-packages/ambari_commons
host=l1039lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-16 17:10:12

==========================
Copying OS type check script…
==========================

Command start time 2015-02-16 17:10:12

scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=l1039lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-16 17:10:12

==========================
Running OS type check…
==========================

Command start time 2015-02-16 17:10:12
Cluster primary/cluster OS type is redhat6 and local/current OS type is redhat6

Connection to l1039lab.sss.se.scania.com closed.
SSH command execution finished
host=l1039lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-16 17:10:13

==========================
Checking 'sudo' package on remote host…
==========================

Command start time 2015-02-16 17:10:13
sudo-1.8.6p3-12.el6.x86_64

Connection to l1039lab.sss.se.scania.com closed.
SSH command execution finished
host=l1039lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-16 17:10:13

==========================
Copying repo file to 'tmp' folder…
==========================

Command start time 2015-02-16 17:10:13

scp /etc/yum.repos.d/ambari.repo
host=l1039lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-16 17:10:14

==========================
Moving file to repo dir…
==========================

Command start time 2015-02-16 17:10:14

Connection to l1039lab.sss.se.scania.com closed.
SSH command execution finished
host=l1039lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-16 17:10:14

==========================
Copying setup script file…
==========================

Command start time 2015-02-16 17:10:14

scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=l1039lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-16 17:10:15

==========================
Running setup agent script…
==========================

Command start time 2015-02-16 17:10:15
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
/bin/sh: /usr/sbin/ambari-agent: No such file or directory
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Restarting ambari-agent
Verifying Python version compatibility…
Using python /usr/bin/python2.6
ambari-agent is not running. No PID found at /var/run/ambari-agent/ambari-agent.pid
Verifying Python version compatibility…
Using python /usr/bin/python2.6
Checking for previously running Ambari Agent…
Starting ambari-agent
Verifying ambari-agent process status…
ERROR: ambari-agent start failed. For more details, see /var/log/ambari-agent/ambari-agent.out:
====================
from Controller import AGENT_AUTO_RESTART_EXIT_CODE
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 35, in <module>
from Heartbeat import Heartbeat
File "/usr/lib/python2.6/site-packages/ambari_agent/Heartbeat.py", line 29, in <module>
from HostInfo import HostInfo
File "/usr/lib/python2.6/site-packages/ambari_agent/HostInfo.py", line 32, in <module>
from PackagesAnalyzer import PackagesAnalyzer
File "/usr/lib/python2.6/site-packages/ambari_agent/PackagesAnalyzer.py", line 28, in <module>
from ambari_commons import OSCheck, OSConst, Firewall
ImportError: No module named ambari_commons
====================
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory

Connection to l1039lab.sss.se.scania.com closed.
SSH command execution finished
host=l1039lab.sss.se.scania.com, exitcode=255
Command end time 2015-02-16 17:10:38

ERROR: Bootstrap of host l1039lab.sss.se.scania.com fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: tcgetattr: Invalid argument
Connection to l1039lab.sss.se.scania.com closed.

STDOUT: This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
/bin/sh: /usr/sbin/ambari-agent: No such file or directory
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Restarting ambari-agent
Verifying Python version compatibility…
Using python /usr/bin/python2.6
ambari-agent is not running. No PID found at /var/run/ambari-agent/ambari-agent.pid
Verifying Python version compatibility…
Using python /usr/bin/python2.6
Checking for previously running Ambari Agent…
Starting ambari-agent
Verifying ambari-agent process status…
ERROR: ambari-agent start failed. For more details, see /var/log/ambari-agent/ambari-agent.out:
====================
from Controller import AGENT_AUTO_RESTART_EXIT_CODE
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 35, in <module>
from Heartbeat import Heartbeat
File "/usr/lib/python2.6/site-packages/ambari_agent/Heartbeat.py", line 29, in <module>
from HostInfo import HostInfo
File "/usr/lib/python2.6/site-packages/ambari_agent/HostInfo.py", line 32, in <module>
from PackagesAnalyzer import PackagesAnalyzer
File "/usr/lib/python2.6/site-packages/ambari_agent/PackagesAnalyzer.py", line 28, in <module>
from ambari_commons import OSCheck, OSConst, Firewall
ImportError: No module named ambari_commons
====================
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory
tail: cannot open `/var/log/ambari-agent/ambari-agent.log' for reading: No such file or directory

Connection to l1039lab.sss.se.scania.com closed.

Note:

This machine previously had an Ambari SERVER installation ONLY (i.e. no HDP, etc.). I referred to a link and also manually deleted the directories containing the word 'ambari'.

What should I do to cleanly uninstall Ambari and resolve the above issue so that the machine can be registered?

Thanks and regards!
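A rough, untested sketch of a cleanup for that one host (RHEL 6 paths; the ImportError above suggests stale ambari_* Python packages survived the earlier removal):

ambari-agent stop 2>/dev/null
yum erase -y ambari-agent
rm -rf /usr/lib/python2.6/site-packages/ambari_agent \
       /usr/lib/python2.6/site-packages/ambari_commons \
       /var/lib/ambari-agent /var/log/ambari-agent /var/run/ambari-agent
yum install -y ambari-agent
# point the fresh agent at the Ambari server (<ambari-server-fqdn> is a placeholder):
sed -i 's/^hostname=.*/hostname=<ambari-server-fqdn>/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start

After the cleanup, retrying host registration from the Ambari UI should also work instead of installing and pointing the agent manually.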
