Hello Steve,
I am currently also working on this solution. Can you tell me how you configured the services in the Knox topology XML?
Did you only use the HIVE service, or is there a special one for Phoenix?
Thanks for the help!
Hello,
I have managed to get httpFS / Job view to work, but Hue Beeswax & HCatalog no longer work after the migration to hue 2.6.1-2 on my secured cluster.
It behaves as if no Hue principal were being used, only a default "krbtgt/LOCALDOMAIN".
However, I am using the same hue.ini as before, with hue_principal & hue_keytab set to the proper values.
Any idea what could be wrong?
The error trace is:
[04/May/2015 09:19:04 +0000] middleware INFO Processing exception: Could not start SASL: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server krbtgt/LOCALDOMAIN@HADOOP.DEV not found in Kerberos database) (code THRIFTTRANSPORT): TTransportException('Could not start SASL: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server krbtgt/LOCALDOMAIN@HADOOP.DEV not found in Kerberos database)',):
Traceback (most recent call last):
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py", line 100, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/usr/lib/hue/apps/hcatalog/src/hcatalog/views.py", line 54, in index
    database = _get_current_db(request, _get_last_database(request))
  File "/usr/lib/hue/apps/hcatalog/src/hcatalog/views.py", line 724, in _get_last_database
    dbs = dbms.get(request.user).get_databases()
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/dbms.py", line 110, in get_databases
    return self.client.get_databases()
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 744, in get_databases
    return [table[col] for table in self._client.get_databases()]
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 443, in get_databases
    res = self.call(self._client.GetSchemas, req)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 406, in call
    session = self.open_session(self.user)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 380, in open_session
    res = self._client.OpenSession(req)
  File "/usr/lib/hue/desktop/core/src/desktop/lib/thrift_util.py", line 329, in wrapper
    raise StructuredThriftTransportException(e, error_code=502)
StructuredThriftTransportException: Could not start SASL: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server krbtgt/LOCALDOMAIN@HADOOP.DEV not found in Kerberos database) (code THRIFTTRANSPORT): TTransportException('Could not start SASL: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server krbtgt/LOCALDOMAIN@HADOOP.DEV not found in Kerberos database)',)
Hi Jon,
My ha.zookeeper.quorum config seems fine. Meanwhile, one of my two NameNodes has come up as Active, so I can still work without the standby NameNode.
Maybe I’ll find a solution later. For now, I can continue my tests.
Thank you anyway for your help.
Orlando
I have a complex text file to parse and load for analysis.
I started off with a simple Hive query to parse a text file and load as a table in HDFS.
name_area.txt
arun:salem
anand:vnr
Cheeli:guntur
Hive Query
CREATE TABLE test(
name STRING,
area STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES ("input.regex" = "^(.*):(.*)$","output.format.string" = "%1$s %2$s")
LOCATION '/user/name_area.txt';
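As a quick local sanity check, independent of Hive itself, the regex from the SERDEPROPERTIES above can be run against the sample lines with plain Python:

```python
import re

# Same pattern as in "input.regex" above.
pattern = re.compile(r"^(.*):(.*)$")

# Sample lines from name_area.txt.
for line in ["arun:salem", "anand:vnr", "Cheeli:guntur"]:
    m = pattern.match(line)
    print(m.group(1), m.group(2))  # name, area
```

If the pattern splits each line correctly here, the SerDe definition itself is fine. Note also that Hive's LOCATION clause expects an HDFS directory rather than a single file, so the usual approach is to put name_area.txt inside a directory and point LOCATION at that directory.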
The file is copied to HDFS.
When I execute the query, I get the following exception.
NoReverseMatch at /beeswax/execute/6
Reverse for ‘execute_parameterized_query’ with arguments ‘(6,)’ and keyword arguments ‘{}’ not found.
Request Method: POST
Request URL: http://192.168.58.128:8000/beeswax/execute/6
Django Version: 1.2.3
Exception Type: NoReverseMatch
Exception Value:
Reverse for ‘execute_parameterized_query’ with arguments ‘(6,)’ and keyword arguments ‘{}’ not found.
Exception Location: /usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/urlresolvers.py in reverse, line 297
Python Executable: /usr/bin/python2.6
Python Version: 2.6.6
Python Path: [… long sys.path listing omitted …]
Server time: Fri, 24 Apr 2015 07:37:07 -0700
Appreciate your help on this.
Hi,
I am planning to use Falcon in our project to get source and target file names (for ETL) dynamically by using the feed name.
I have a few questions. It would be very helpful if someone could guide me.
1. I have set up Oozie workflows which I plan to run through Falcon jobs. All of these Oozie workflows should run one after another. I know that I can schedule a job on a Falcon process, but how can I set dependencies between these jobs so that they run sequentially? I don’t want to schedule all my Falcon jobs independently.
2. Is there an option in the feed file to start a job once the feed is received for the day? Once all my source files are received I want to start the Falcon job; I don’t want to schedule it at a specific time.
3. How do I tag metadata (the schema of the file) in the feed file?
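Regarding question 2, Falcon's usual mechanism for "run when the data arrives" is to declare the feed as an input of the process; the process instance then waits for that feed instance to be available rather than firing at a fixed wall-clock time. A rough, unverified sketch of the process entity (all names, dates, and paths here are made-up placeholders; check the schema shipped with your Falcon version):

```xml
<!-- Hypothetical entity; names, dates and paths are placeholders. -->
<process name="daily-etl" xmlns="uri:falcon:process:0.1">
  <clusters>
    <cluster name="primary-cluster">
      <validity start="2015-05-01T00:00Z" end="2016-05-01T00:00Z"/>
    </cluster>
  </clusters>
  <parallel>1</parallel>
  <order>FIFO</order>
  <frequency>days(1)</frequency>
  <inputs>
    <!-- The instance only runs once this feed instance has landed. -->
    <input name="rawInput" feed="source-feed" start="today(0,0)" end="today(0,0)"/>
  </inputs>
  <workflow engine="oozie" path="/apps/etl/workflow.xml"/>
</process>
```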
I am facing a similar issue. Were you able to solve it?
SELinux is disabled, yet when starting the services from Ambari I still get the following error.
(The services get installed, but refuse to start, and thereafter prevent the check scripts from running.)
The error logs have the following:
Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py", line 32, in hook
    setup_hadoop()
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py", line 32, in setup_hadoop
    sudo=True,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 274, in action_run
    raise ex
Fail: Execution of 'setenforce 0' returned 1. setenforce: SELinux is disabled
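For what it's worth, Ambari's before-START hook guards this call with an `only_if: test -f /selinux/enforce` check, so the failure suggests `/selinux/enforce` still exists even though SELinux is disabled. A minimal Python sketch of that guard (the function name is ours; the `/selinux/enforce` path is the real kernel interface):

```python
import os

def should_run_setenforce(selinux_root="/selinux"):
    """Mirror Ambari's only_if guard: attempt `setenforce 0` only when
    the SELinux filesystem exposes an `enforce` control file."""
    return os.path.isfile(os.path.join(selinux_root, "enforce"))
```

On newer kernels the SELinux filesystem is mounted at /sys/fs/selinux rather than /selinux, which is one reason the guard and the actual SELinux state can disagree.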
Hi,
I managed to install a 5-node cluster, basically following this blog: https://martin.atlassian.net/wiki/pages/viewpage.action?pageId=34832444
Ambari did the job of deploying the software to my nodes. Nodes and services are running.
However, I expected to see a GUI, like in the sandbox. I am not sure whether this is expected or whether there is a mistake in my installation.
For example, in the sandbox I can create a table by uploading a file via the web UI. In my installation I don’t find such an option.
I also cannot find the UIs for Pig, Job Browser, Beeswax, etc.
In the sandbox they are all accessible on the same port (8000). What do I have to do to accomplish this in my own installation?
Thanks,
Br,
Tobe
William,
Make sure you’ve set your “Ranger DB Host”, and “External URL” to the FQDN of the appropriate hosts. The Ranger DB Host needs to be the FQDN of the MySQL or Oracle server that you’re using for audit logging, and the External URL needs to be set to the FQDN of the server running the Ranger Admin service.
I am having the same issue. I get:
AttributeError at /about/
‘str’ object has no attribute ‘get’
Request Method: GET
Request URL: http://<MYIPAddress>:8000/about/
Django Version: 1.2.3
Exception Type: AttributeError
Exception Value:
‘str’ object has no attribute ‘get’
Exception Location: /usr/lib/hue/desktop/core/src/desktop/lib/conf.py in _get_data_and_presence, line 124
Python Executable: /usr/bin/python2.6
Python Version: 2.6.6
Python Path: [… long sys.path listing omitted …]
I upgraded from Ambari 1.6 to 2.0.
The YARN App Timeline Server is not starting because of the following errors.
stderr: /var/lib/ambari-agent/data/errors-1417.txt
2015-05-04 11:02:19,254 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 58, in start
    service('timelineserver', action='start')
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service.py", line 59, in service
    initial_wait=5
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 274, in action_run
    raise ex
Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1' returned 1.
stdout: /var/lib/ambari-agent/data/output-1417.txt
2015-05-04 11:01:49,315 – u”Directory[‘/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/’]” {‘recursive': True}
2015-05-04 11:01:49,491 – u”File[‘/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//UnlimitedJCEPolicyJDK7.zip’]” {‘content': DownloadSource(‘http://hwnn:8080/resources//UnlimitedJCEPolicyJDK7.zip’)}
2015-05-04 11:01:49,582 – Not downloading the file from http://hwnn:8080/resources//UnlimitedJCEPolicyJDK7.zip, because /var/lib/ambari-agent/data/tmp/UnlimitedJCEPolicyJDK7.zip already exists
2015-05-04 11:01:49,723 – u”Group[‘hadoop’]” {‘ignore_failures': False}
2015-05-04 11:01:49,723 – Modifying group hadoop
2015-05-04 11:01:49,767 – u”Group[‘nobody’]” {‘ignore_failures': False}
2015-05-04 11:01:49,768 – Modifying group nobody
2015-05-04 11:01:49,811 – u”Group[‘users’]” {‘ignore_failures': False}
2015-05-04 11:01:49,812 – Modifying group users
2015-05-04 11:01:49,857 – u”User[‘hive’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:49,857 – Modifying user hive
2015-05-04 11:01:49,903 – u”User[‘oozie’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’users’]}
2015-05-04 11:01:49,904 – Modifying user oozie
2015-05-04 11:01:49,950 – u”User[‘nobody’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’nobody’]}
2015-05-04 11:01:49,951 – Modifying user nobody
2015-05-04 11:01:49,994 – u”User[‘ambari-qa’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’users’]}
2015-05-04 11:01:49,995 – Modifying user ambari-qa
2015-05-04 11:01:50,039 – u”User[‘hdfs’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,039 – Modifying user hdfs
2015-05-04 11:01:50,086 – u”User[‘storm’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,086 – Modifying user storm
2015-05-04 11:01:50,130 – u”User[‘mapred’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,131 – Modifying user mapred
2015-05-04 11:01:50,177 – u”User[‘hbase’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,178 – Modifying user hbase
2015-05-04 11:01:50,224 – u”User[‘tez’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’users’]}
2015-05-04 11:01:50,224 – Modifying user tez
2015-05-04 11:01:50,268 – u”User[‘zookeeper’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,269 – Modifying user zookeeper
2015-05-04 11:01:50,315 – u”User[‘falcon’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,315 – Modifying user falcon
2015-05-04 11:01:50,359 – u”User[‘sqoop’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,360 – Modifying user sqoop
2015-05-04 11:01:50,407 – u”User[‘yarn’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,407 – Modifying user yarn
2015-05-04 11:01:50,453 – u”User[‘hcat’]” {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-05-04 11:01:50,453 – Modifying user hcat
2015-05-04 11:01:50,500 – u”File[‘/var/lib/ambari-agent/data/tmp/changeUid.sh’]” {‘content': StaticFile(‘changeToSecureUid.sh’), ‘mode': 0555}
2015-05-04 11:01:50,789 – u”Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa’]” {‘not_if': ‘(test $(id -u ambari-qa) -gt 1000) || (false)’}
2015-05-04 11:01:50,834 – Skipping u”Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa’]” due to not_if
2015-05-04 11:01:50,834 – u”Directory[‘/hadoop/hbase’]” {‘owner': ‘hbase’, ‘recursive': True, ‘mode': 0775, ‘cd_access': ‘a’}
2015-05-04 11:01:51,155 – u”File[‘/var/lib/ambari-agent/data/tmp/changeUid.sh’]” {‘content': StaticFile(‘changeToSecureUid.sh’), ‘mode': 0555}
2015-05-04 11:01:51,430 – u”Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase’]” {‘not_if': ‘(test $(id -u hbase) -gt 1000) || (false)’}
2015-05-04 11:01:51,473 – Skipping u”Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase’]” due to not_if
2015-05-04 11:01:51,473 – u”Group[‘hdfs’]” {‘ignore_failures': False}
2015-05-04 11:01:51,474 – Modifying group hdfs
2015-05-04 11:01:51,516 – u”User[‘hdfs’]” {‘ignore_failures': False, ‘groups': [u’hadoop’, ‘hadoop’, ‘hdfs’, u’hdfs’]}
2015-05-04 11:01:51,517 – Modifying user hdfs
2015-05-04 11:01:51,560 – u”Directory[‘/etc/hadoop’]” {‘mode': 0755}
2015-05-04 11:01:51,711 – u”Directory[‘/etc/hadoop/conf.empty’]” {‘owner': ‘root’, ‘group': ‘hadoop’, ‘recursive': True}
2015-05-04 11:01:51,859 – u”Link[‘/etc/hadoop/conf’]” {‘not_if': ‘ls /etc/hadoop/conf’, ‘to': ‘/etc/hadoop/conf.empty’}
2015-05-04 11:01:51,905 – Skipping u”Link[‘/etc/hadoop/conf’]” due to not_if
2015-05-04 11:01:51,916 – u”File[‘/etc/hadoop/conf/hadoop-env.sh’]” {‘content': InlineTemplate(…), ‘owner': ‘hdfs’, ‘group': ‘hadoop’}
2015-05-04 11:01:52,164 – u”Execute[‘(‘setenforce’, ‘0’)’]” {‘sudo': True, ‘only_if': ‘test -f /selinux/enforce’}
2015-05-04 11:01:52,272 – u”Directory[‘/var/log/hadoop’]” {‘owner': ‘root’, ‘mode': 0775, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:52,673 – u”Directory[‘/var/run/hadoop’]” {‘owner': ‘root’, ‘group': ‘root’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:53,080 – u”Directory[‘/tmp/hadoop-hdfs’]” {‘owner': ‘hdfs’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:53,416 – u”File[‘/etc/hadoop/conf/commons-logging.properties’]” {‘content': Template(‘commons-logging.properties.j2′), ‘owner': ‘hdfs’}
2015-05-04 11:01:53,644 – u”File[‘/etc/hadoop/conf/health_check’]” {‘content': Template(‘health_check-v2.j2′), ‘owner': ‘hdfs’}
2015-05-04 11:01:53,870 – u”File[‘/etc/hadoop/conf/log4j.properties’]” {‘content': ‘…’, ‘owner': ‘hdfs’, ‘group': ‘hadoop’, ‘mode': 0644}
2015-05-04 11:01:54,116 – u”File[‘/etc/hadoop/conf/hadoop-metrics2.properties’]” {‘content': Template(‘hadoop-metrics2.properties.j2′), ‘owner': ‘hdfs’}
2015-05-04 11:01:54,345 – u”File[‘/etc/hadoop/conf/task-log4j.properties’]” {‘content': StaticFile(‘task-log4j.properties’), ‘mode': 0755}
2015-05-04 11:01:54,637 – u”File[‘/etc/hadoop/conf/configuration.xsl’]” {‘owner': ‘hdfs’, ‘group': ‘hadoop’}
2015-05-04 11:01:55,004 – u”Directory[‘/var/run/hadoop-yarn’]” {‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:55,509 – u”Directory[‘/var/run/hadoop-yarn/yarn’]” {‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:55,995 – u”Directory[‘/var/log/hadoop-yarn/yarn’]” {‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:56,490 – u”Directory[‘/var/run/hadoop-mapreduce’]” {‘owner': ‘mapred’, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:56,900 – u”Directory[‘/var/run/hadoop-mapreduce/mapred’]” {‘owner': ‘mapred’, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:57,395 – u”Directory[‘/var/log/hadoop-mapreduce’]” {‘owner': ‘mapred’, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:57,797 – u”Directory[‘/var/log/hadoop-mapreduce/mapred’]” {‘owner': ‘mapred’, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:58,284 – u”Directory[‘/var/log/hadoop-yarn’]” {‘owner': ‘yarn’, ‘ignore_failures': True, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:01:58,688 – u”XmlConfig[‘core-site.xml’]” {‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘mode': 0644, ‘configuration_attributes': {}, ‘owner': ‘hdfs’, ‘configurations': …}
2015-05-04 11:01:58,707 – Generating config: /etc/hadoop/conf/core-site.xml
2015-05-04 11:01:58,707 – u”File[‘/etc/hadoop/conf/core-site.xml’]” {‘owner': ‘hdfs’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': 0644, ‘encoding': ‘UTF-8′}
2015-05-04 11:01:58,908 – Writing u”File[‘/etc/hadoop/conf/core-site.xml’]” because contents don’t match
2015-05-04 11:01:59,085 – u”XmlConfig[‘mapred-site.xml’]” {‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘mode': 0644, ‘configuration_attributes': {}, ‘owner': ‘yarn’, ‘configurations': …}
2015-05-04 11:01:59,139 – Generating config: /etc/hadoop/conf/mapred-site.xml
2015-05-04 11:01:59,139 – u”File[‘/etc/hadoop/conf/mapred-site.xml’]” {‘owner': ‘yarn’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': 0644, ‘encoding': ‘UTF-8′}
2015-05-04 11:01:59,637 – Writing u”File[‘/etc/hadoop/conf/mapred-site.xml’]” because contents don’t match
2015-05-04 11:01:59,863 – Changing owner for /etc/hadoop/conf/mapred-site.xml from 1009 to yarn
2015-05-04 11:01:59,996 – u”XmlConfig[‘yarn-site.xml’]” {‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘mode': 0644, ‘configuration_attributes': {}, ‘owner': ‘yarn’, ‘configurations': …}
2015-05-04 11:02:00,041 – Generating config: /etc/hadoop/conf/yarn-site.xml
2015-05-04 11:02:00,041 – u”File[‘/etc/hadoop/conf/yarn-site.xml’]” {‘owner': ‘yarn’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': 0644, ‘encoding': ‘UTF-8′}
2015-05-04 11:02:00,539 – Writing u”File[‘/etc/hadoop/conf/yarn-site.xml’]” because contents don’t match
2015-05-04 11:02:00,836 – u”XmlConfig[‘capacity-scheduler.xml’]” {‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘mode': 0644, ‘configuration_attributes': {}, ‘owner': ‘yarn’, ‘configurations': …}
2015-05-04 11:02:00,852 – Generating config: /etc/hadoop/conf/capacity-scheduler.xml
2015-05-04 11:02:00,853 – u”File[‘/etc/hadoop/conf/capacity-scheduler.xml’]” {‘owner': ‘yarn’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': 0644, ‘encoding': ‘UTF-8′}
2015-05-04 11:02:01,147 – Writing u”File[‘/etc/hadoop/conf/capacity-scheduler.xml’]” because contents don’t match
2015-05-04 11:02:01,458 – Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 1008 to yarn
2015-05-04 11:02:01,502 – u”Directory[‘/var/log/hadoop-yarn/timeline’]” {‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘recursive': True, ‘cd_access': ‘a’}
2015-05-04 11:02:02,096 – u”File[‘/etc/hadoop/conf/yarn.exclude’]” {‘owner': ‘yarn’, ‘group': ‘hadoop’}
2015-05-04 11:02:02,511 – u”File[‘/etc/security/limits.d/yarn.conf’]” {‘content': Template(‘yarn.conf.j2′), ‘mode': 0644}
2015-05-04 11:02:02,979 – u”File[‘/etc/security/limits.d/mapreduce.conf’]” {‘content': Template(‘mapreduce.conf.j2′), ‘mode': 0644}
2015-05-04 11:02:03,578 – u”File[‘/etc/hadoop/conf/yarn-env.sh’]” {‘content': InlineTemplate(…), ‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘mode': 0755}
2015-05-04 11:02:04,200 – u”File[‘/etc/hadoop/conf/mapred-env.sh’]” {‘content': InlineTemplate(…), ‘owner': ‘hdfs’}
2015-05-04 11:02:05,088 – u”File[‘/etc/hadoop/conf/taskcontroller.cfg’]” {‘content': Template(‘taskcontroller.cfg.j2′), ‘owner': ‘hdfs’}
2015-05-04 11:02:05,600 – u”XmlConfig[‘mapred-site.xml’]” {‘owner': ‘mapred’, ‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘configuration_attributes': {}, ‘configurations': …}
2015-05-04 11:02:05,930 – Generating config: /etc/hadoop/conf/mapred-site.xml
2015-05-04 11:02:05,931 – u”File[‘/etc/hadoop/conf/mapred-site.xml’]” {‘owner': ‘mapred’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': None, ‘encoding': ‘UTF-8′}
2015-05-04 11:02:07,147 – Writing u”File[‘/etc/hadoop/conf/mapred-site.xml’]” because contents don’t match
2015-05-04 11:02:08,301 – Changing owner for /etc/hadoop/conf/mapred-site.xml from 1007 to mapred
2015-05-04 11:02:08,960 – u”XmlConfig[‘capacity-scheduler.xml’]” {‘owner': ‘hdfs’, ‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘configuration_attributes': {}, ‘configurations': …}
2015-05-04 11:02:09,001 – Generating config: /etc/hadoop/conf/capacity-scheduler.xml
2015-05-04 11:02:09,002 – u”File[‘/etc/hadoop/conf/capacity-scheduler.xml’]” {‘owner': ‘hdfs’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': None, ‘encoding': ‘UTF-8′}
2015-05-04 11:02:10,579 – Writing u”File[‘/etc/hadoop/conf/capacity-scheduler.xml’]” because contents don’t match
2015-05-04 11:02:10,949 – Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 1007 to hdfs
2015-05-04 11:02:11,081 – u”File[‘/etc/hadoop/conf/ssl-client.xml.example’]” {‘owner': ‘mapred’, ‘group': ‘hadoop’}
2015-05-04 11:02:11,618 – u”File[‘/etc/hadoop/conf/ssl-server.xml.example’]” {‘owner': ‘mapred’, ‘group': ‘hadoop’}
2015-05-04 11:02:12,126 - u"File['/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid']" {'action': ['delete'], 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1'}
2015-05-04 11:02:12,357 - Deleting u"File['/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid']"
2015-05-04 11:02:12,445 - u"Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start timelineserver']" {'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
2015-05-04 11:02:13,996 - u"Execute['ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1']" {'initial_wait': 5, 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
2015-05-04 11:02:19,254 - Error while executing command 'start':
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 58, in start
service('timelineserver', action='start')
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service.py", line 59, in service
initial_wait=5
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 274, in action_run
raise ex
Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1' returned 1.
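The failing command above is just Ambari's liveness check: "the pid file exists AND the process it names is still running" (note the backticks around `cat`, which the blog formatting stripped from the log). A self-contained sketch of that check, using a temporary pid file instead of the real timeline-server path:

```shell
# Sketch of the check Ambari runs; /var/run/hadoop-yarn/... is replaced
# with a temp file so the snippet runs anywhere.
PIDFILE=$(mktemp)
echo $$ > "$PIDFILE"   # pretend the current shell is the timeline server

# pid file exists AND the recorded process is alive
if ls "$PIDFILE" >/dev/null 2>&1 && ps -p `cat "$PIDFILE"` >/dev/null 2>&1; then
  echo "process alive"     # → printed here, since $$ is our own live shell
else
  echo "process dead or pid file missing"
fi
rm -f "$PIDFILE"
```

In other words, "returned 1" means the timeline server died right after being launched; a common next step (an assumption about your layout, not from the log) is to read the ATS logs under /var/log/hadoop-yarn/yarn/ for the real startup error.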
Did anyone try this? I know vanilla Hadoop works on CentOS 7, so any problems would probably come from Ambari itself.
ERROR sqoop.ConnFactory: Sqoop could not found specified connection manager class org.apache.sqoop.teradata.TeradataConnManager. Please check that you’ve specified the class correctly.
Stdoutput 15/05/05 16:29:50 ERROR tool.BaseSqoopTool: Got error creating database manager: java.io.IOException: java.lang.ClassNotFoundException: org.apache.sqoop.teradata.TeradataConnManager
Stdoutput at org.apache.sqoop.ConnFactory.getManager(ConnFactory.java:166)
The following command runs just fine from the command line but fails via Oozie. I cannot seem to get Oozie to see the jars (my guess). My job.properties file has oozie.use.system.libpath=true, oozie-site.xml has oozie.service.WorkflowAppService.system.libpath = /user/${user.name}/share/lib, and I am running the command as user oozie. I have added the jars to hdfs://nn/user/oozie/share/lib/sqoop but no luck. Help?
WORKFLOW.XML relevant snippet
<start to="SyncFDMMaster"/>
<action name="SyncFDMMaster">
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
<property>
<name>sqoop.connection.factories</name>
<value>org.apache.sqoop.teradata.TeradataConnManager</value>
</property>
</configuration>
<exec>/bin/bash</exec>
<argument>syncmaster.sh</argument>
<file>syncmaster.sh</file>
job.properties
nameNode=hdfs://xxxx:8020
jobTracker=xxxx:8050
queueName=workflows
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/oozie/jobs/l48_fdm_mastersync
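One likely cause of the ClassNotFoundException above: the jars are in HDFS, but Oozie has not picked up the sharelib change, or the shell action's launcher never sees them. A hedged sketch of the usual remedies (jar name is hypothetical; paths follow the job.properties shown):

```shell
# Sketch -- jar names and hosts are assumptions, adjust to your cluster.
# 1) Put the Teradata connector jars into the sqoop sharelib directory:
#      hadoop fs -put teradata-connector-*.jar /user/oozie/share/lib/sqoop/
# 2) Make Oozie re-scan the sharelib (Oozie 4.1+), or restart Oozie:
#      oozie admin -oozie http://oozie-host:11000/oozie -sharelibupdate
# 3) For a shell action, an alternative is to ship the jars with the
#    action itself, via extra <file> elements next to syncmaster.sh.

# Locally checkable piece: the full HDFS path the jar should land at,
# built from the nameNode value in job.properties:
NAMENODE=hdfs://xxxx:8020
SHARELIB=/user/oozie/share/lib/sqoop
JAR=teradata-connector-1.3.jar   # hypothetical name
echo "${NAMENODE}${SHARELIB}/${JAR}"
```

Also worth noting: a shell action runs your script as a plain process on a worker node, so `sqoop` invoked inside syncmaster.sh uses the node's local Sqoop classpath, not the Oozie sharelib; the connector jars may need to be on every worker in that case.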
I've missed the stack upgrade from HDP 2.1 to 2.2 for Ambari…
Done that; unfortunately Ambari is now even more broken than before, as it is unable to even stop half of the services. Still trying to figure out why…
Meanwhile I found out that it is Hue I was talking about, and that I have to install it manually, as Ambari did not.
Hi Paul,
Thanks for your reply! Changing the DB host to the FQDN did the trick.
I also changed the Ranger external URL to the FQDN, and now it works great in Ambari 2.0.
In detail:
Ambari > Ranger > Configs > DB Settings > Ranger DB host: set this to the FQDN.
Ambari > Ranger > Configs > Ranger Settings > External URL: set this to the FQDN.
I am facing the same error. What should the hdp.version property be set to? I tried multiple values, but the hadoop-lzo-0.6.0.${hdp.version}.jar file is not present in any folder under /usr/hdp, and that seems to be the cause of the error.
Any suggestions on how to proceed?
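For what it's worth, hdp.version is normally the full HDP build string for the installed stack, which `hdp-select versions` reports on an HDP node (and which also appears as a directory name under /usr/hdp). A small sketch of how the jar name resolves, using an example version string that may not match your cluster:

```shell
# `hdp-select versions` on an HDP node prints the installed stack
# version(s), e.g. 2.2.4.2-2 (example value, not necessarily yours).
HDP_VERSION=2.2.4.2-2
echo "hadoop-lzo-0.6.0.${HDP_VERSION}.jar"
# → hadoop-lzo-0.6.0.2.2.4.2-2.jar
```

If no hadoop-lzo jar exists anywhere under /usr/hdp, the LZO packages are probably simply not installed; installing the stack's LZO package (or removing the LZO entries from the relevant classpath/codec properties) would be the thing to try.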
Hello again,
I have created a new table with a similar schema in the same database, and everything is working fine.
However, when I try to move the data from the old table to the new one, I get the same error…
Any hint would be very appreciated.
Thanks.
Orlando
The issue was resolved after modifying the /etc/hadoop/conf/mapred-site.xml file as specified at the link below:
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
Hi !
Now that Ganglia has been replaced with Ambari Metrics (AMS) in HDP 2.2.4.2, how do we monitor the Hadoop cluster? Could you show a simple example of how to create a graph for, say, the HBase blockcache evictions metric? If there's no way and we have to use something like OpenTSDB, could you show an example of how to integrate Ambari AMS with OpenTSDB?
Have a nice day,
Dani
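Not a full answer to Dani's question, but one avenue: the AMS collector serves metrics over a REST API (default port 6188), so you can pull raw series and graph them with whatever tool you like. A heavily hedged sketch — the collector host and the exact metric name below are assumptions, not values from this thread; check your cluster for both:

```shell
# Build the AMS metrics query URL (host and metric name are assumptions;
# verify the real collector host and HBase metric names on your cluster).
COLLECTOR=http://ams-collector.example.com:6188
QUERY='metricNames=regionserver.Server.blockCacheEvictionCount&appId=hbase'
echo "${COLLECTOR}/ws/v1/timeline/metrics?${QUERY}"
# On the cluster you would then fetch the JSON series with, e.g.:
#   curl "${COLLECTOR}/ws/v1/timeline/metrics?${QUERY}&startTime=<epoch-ms>&endTime=<epoch-ms>"
```

The response is JSON (timestamp → value pairs), which feeds easily into Grafana, OpenTSDB importers, or a quick plotting script.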