Hortonworks » All Replies

Reply To: Unable to Start Hive Service


Hi Rob,
Can you share how to check that? When running java -version, here's the output I am getting:
java version "1.7.0_85"
OpenJDK Runtime Environment (rhel-2.6.1.3.el6_7-x86_64 u85-b01)
OpenJDK 64-Bit Server VM (build 24.85-b03, mixed mode)

-Mitch
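
For reference, on RHEL/CentOS (where the Sandbox images run) the alternatives tool shows which JDKs are installed and which one is currently active; a minimal sketch, with the caveat that the paths it prints vary per machine:

# Show the active java binary and the installed alternatives
alternatives --display java | head -n 5
# Interactively switch to a different JDK, if more than one is installed
sudo alternatives --config java
# Confirm the vendor (OpenJDK vs. Oracle) and version afterwards
java -version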


PW for SSH to Hive Shell Rejected


I am following the main HDP Sandbox tutorial and am on Lab 2 (http://hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/#section_8). There is a note after Step 2.4: Define an ORC Table in Hive, saying you should be able to SSH in via the following command sequence:

ssh root@127.0.0.1 -p 2222
** When prompted, the root password is hadoop

I replaced 127.0.0.1 with my Azure virtual IP address and entered hadoop as the password, but I continue to receive "Permission denied, please try again." Help?

Reply To: PW for SSH to Hive Shell Rejected


Hi Kuan,

Since you are on Azure, you provided your own password when setting up the cluster, and that is the password you will need to use here.

Regards,

Robert
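
In other words, the tutorial's note assumes the local VirtualBox/VMware image with its port-2222 forward. For an Azure deployment, a sketch of the login, where the username, key path, and IP are placeholders for the values you chose at deployment time:

# Password authentication: use the admin user and password you set in the Azure portal
ssh <azure-admin-user>@<vm-public-ip>
# Key authentication: point ssh at the private key you uploaded instead
ssh -i ~/.ssh/<your-private-key> <azure-admin-user>@<vm-public-ip>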

Error in Running Hive Server in Ambari


Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 185, in <module>
HiveServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 83, in start
self.configure(env) # FOR SECURITY
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 54, in configure
hive(name='hiveserver2')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 127, in hive
mode=params.webhcat_hdfs_user_mode
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 391, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 388, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 244, in action_delayed
self._assert_valid()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 228, in _assert_valid
self.target_status = self._get_file_status(target)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 284, in _get_file_status
list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 189, in run_command
_, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 49, in get_user_call_output
func_result = func(shell.as_user(command_string, user), **call_kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://namenode1.internal.dezyre.com:50070/webhdfs/v1/user/hcat?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpLqxvZK 2>/tmp/tmpDnkSA1'' returned 7.

stdout: /var/lib/ambari-agent/data/output-1107.txt
<pre class=”stdout”>2015-10-14 12:45:25,375 – Directory[‘/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/’] {‘recursive’: True}
2015-10-14 12:45:25,378 – File[‘/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jce_policy-8.zip’] {‘content’: DownloadSource(‘http://gateway1.internal.dezyre.com:8080/resources//jce_policy-8.zip’)}
2015-10-14 12:45:25,378 – Not downloading the file from http://gateway1.internal.dezyre.com:8080/resources//jce_policy-8.zip, because /var/lib/ambari-agent/data/tmp/jce_policy-8.zip already exists
2015-10-14 12:45:25,378 – Group[‘hadoop’] {‘ignore_failures’: False}
2015-10-14 12:45:25,380 – Group[‘users’] {‘ignore_failures’: False}
2015-10-14 12:45:25,380 – User[‘hive’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,381 – User[‘zookeeper’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,381 – User[‘ams’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,382 – User[‘ambari-qa’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’users’]}
2015-10-14 12:45:25,383 – User[‘flume’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,383 – User[‘tez’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’users’]}
2015-10-14 12:45:25,384 – User[‘hdfs’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,385 – User[‘sqoop’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,385 – User[‘yarn’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,386 – User[‘hcat’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,387 – User[‘mapred’] {‘gid’: ‘hadoop’, ‘ignore_failures’: False, ‘groups’: [u’hadoop’]}
2015-10-14 12:45:25,389 – File[‘/var/lib/ambari-agent/data/tmp/changeUid.sh’] {‘content’: StaticFile(‘changeToSecureUid.sh’), ‘mode’: 0555}
2015-10-14 12:45:25,390 – Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa’] {‘not_if’: ‘(test $(id -u ambari-qa) -gt 1000) || (false)’}
2015-10-14 12:45:25,415 – Skipping Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa’] due to not_if
2015-10-14 12:45:25,416 – Group[‘hdfs’] {‘ignore_failures’: False}
2015-10-14 12:45:25,417 – User[‘hdfs’] {‘ignore_failures’: False, ‘groups’: [u’hadoop’, u’hdfs’]}
2015-10-14 12:45:25,417 – Directory[‘/etc/hadoop’] {‘mode’: 0755}
2015-10-14 12:45:25,438 – File[‘/usr/hdp/current/hadoop-client/conf/hadoop-env.sh’] {‘content’: InlineTemplate(…), ‘owner’: ‘hdfs’, ‘group’: ‘hadoop’}
2015-10-14 12:45:25,454 – Execute[(‘setenforce’, ‘0’)] {‘not_if’: ‘(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)’, ‘sudo’: True, ‘only_if’: ‘test -f /selinux/enforce’}
2015-10-14 12:45:25,484 – Skipping Execute[(‘setenforce’, ‘0’)] due to not_if
2015-10-14 12:45:25,485 – Directory[‘/var/log/hadoop’] {‘owner’: ‘root’, ‘mode’: 0775, ‘group’: ‘hadoop’, ‘recursive’: True, ‘cd_access’: ‘a’}
2015-10-14 12:45:25,489 – Directory[‘/var/run/hadoop’] {‘owner’: ‘root’, ‘group’: ‘root’, ‘recursive’: True, ‘cd_access’: ‘a’}
2015-10-14 12:45:25,490 – Directory[‘/tmp/hadoop-hdfs’] {‘owner’: ‘hdfs’, ‘recursive’: True, ‘cd_access’: ‘a’}
2015-10-14 12:45:25,499 – File[‘/usr/hdp/current/hadoop-client/conf/commons-logging.properties’] {‘content’: Template(‘commons-logging.properties.j2’), ‘owner’: ‘hdfs’}
2015-10-14 12:45:25,502 – File[‘/usr/hdp/current/hadoop-client/conf/health_check’] {‘content’: Template(‘health_check.j2’), ‘owner’: ‘hdfs’}
2015-10-14 12:45:25,503 – File[‘/usr/hdp/current/hadoop-client/conf/log4j.properties’] {‘content’: …, ‘owner’: ‘hdfs’, ‘group’: ‘hadoop’, ‘mode’: 0644}
2015-10-14 12:45:25,514 – File[‘/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties’] {‘content’: Template(‘hadoop-metrics2.properties.j2’), ‘owner’: ‘hdfs’}
2015-10-14 12:45:25,515 – File[‘/usr/hdp/current/hadoop-client/conf/task-log4j.properties’] {‘content’: StaticFile(‘task-log4j.properties’), ‘mode’: 0755}
2015-10-14 12:45:25,516 – File[‘/usr/hdp/current/hadoop-client/conf/configuration.xsl’] {‘owner’: ‘hdfs’, ‘group’: ‘hadoop’}
2015-10-14 12:45:25,523 – File[‘/etc/hadoop/conf/topology_mappings.data’] {‘owner’: ‘hdfs’, ‘content’: Template(‘topology_mappings.data.j2’), ‘only_if’: ‘test -d /etc/hadoop/conf’, ‘group’: ‘hadoop’}
2015-10-14 12:45:25,545 – File[‘/etc/hadoop/conf/topology_script.py’] {‘content’: StaticFile(‘topology_script.py’), ‘only_if’: ‘test -d /etc/hadoop/conf’, ‘mode’: 0755}
2015-10-14 12:45:25,910 – HdfsResource[‘/user/hcat’] {‘security_enabled’: False, ‘hadoop_bin_dir’: ‘/usr/hdp/current/hadoop-client/bin’, ‘keytab’: [EMPTY], ‘default_fs’: ‘hdfs://namenode1.internal.dezyre.com:8020’, ‘hdfs_site’: …, ‘kinit_path_local’: ‘kinit’, ‘principal_name’: ‘missing_principal’, ‘user’: ‘hdfs’, ‘owner’: ‘hcat’, ‘hadoop_conf_dir’: ‘/usr/hdp/current/hadoop-client/conf’, ‘type’: ‘directory’, ‘action’: [‘create_on_execute’], ‘mode’: 0755}
2015-10-14 12:45:25,913 – checked_call[‘ambari-sudo.sh su hdfs -l -s /bin/bash -c ‘curl -sS -L -w ‘”‘”‘%{http_code}'”‘”‘ -X GET ‘”‘”‘http://namenode1.internal.dezyre.com:50070/webhdfs/v1/user/hcat?op=GETFILESTATUS&user.name=hdfs'”‘”‘ 1>/tmp/tmpLqxvZK 2>/tmp/tmpDnkSA1”] {‘logoutput’: None, ‘quiet’: False}
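
curl returned 7 is "failed to connect to host", so the NameNode's WebHDFS endpoint on port 50070 is most likely down or unreachable from this Ambari agent host. A minimal check, assuming the NameNode host taken from the log above:

# Should return a JSON FileStatuses document if WebHDFS is up
curl -sS 'http://namenode1.internal.dezyre.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs'
# If it refuses the connection, confirm from the cluster side that HDFS is actually serving
sudo su - hdfs -c 'hdfs dfsadmin -report | head -n 10'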

Reply To: Connect Hadoop HDP2 to ETL tool Datastage


Hello,

I have to connect Hortonworks Hadoop to the DataStage 11.3.1 server engine in order to read files. Could you let me know where I can find the appropriate documentation for configuration?

History Server and HiveServer2 will not start


I am running HDP 2.3 with Ambari 2.1 on 6 servers running CentOS 7.1. Everything is installed, but when I try to start the History Server and HiveServer2 I get an almost identical error message for each:

resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT -T /usr/hdp/2.3.0.0-2557/hadoop/mapreduce.tar.gz '"'"'http://cshadoop.boisestate.edu:50070/webhdfs/v1/hdp/apps/2.3.0.0-2557/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'"'"' 1>/tmp/tmpgSzsFx 2>/tmp/tmpwZ_9y9'' returned 55.

The only difference between the two is the names of the temporary files in /tmp. I am fairly new to this, so please let me know what additional information would be useful.

Below is the full dump from the History Server error:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 168, in <module>
HistoryServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 95, in start
resource_created = copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py", line 193, in copy_to_hdfs
mode=0444
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 391, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 388, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 247, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 261, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 311, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 189, in run_command
_, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 49, in get_user_call_output
func_result = func(shell.as_user(command_string, user), **call_kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT -T /usr/hdp/2.3.0.0-2557/hadoop/mapreduce.tar.gz '"'"'http://cshadoop.boisestate.edu:50070/webhdfs/v1/hdp/apps/2.3.0.0-2557/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'"'"' 1>/tmp/tmpironHk 2>/tmp/tmpscaJPq'' returned 55.
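
curl exit code 55 is "failure sending network data". For a WebHDFS op=CREATE that usually means the upload stalled after the NameNode redirected curl to a DataNode, e.g. because the DataNode hostname does not resolve from this machine or its HTTP port (50075 by default) is blocked. A sketch of the two-step write done by hand, using the host from the log above and a throwaway target path:

# Step 1: the NameNode should answer 307 with a Location header naming a DataNode
curl -i -X PUT 'http://cshadoop.boisestate.edu:50070/webhdfs/v1/tmp/webhdfs_probe?op=CREATE&user.name=hdfs'
# Step 2: verify the host:port in that Location header is reachable from this machine;
# if the PUT to that redirect URL is what fails, the DataNode address is the problem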


Reply To: error running hive query "S020 Data storage error"


Another common error I receive is "VERTEX_FAILURE".

I'm just trying to follow the first DAG instruction in section 2.6 of the Labs walking through the HDP Sandbox, using an A4 instance on Azure:

Vertex failed, vertexName=Map 1, vertexId=vertex_1444753022295_0009_1_00, diagnostics=
Task failed, taskId=task_1444753022295_0009_1_00_000000, diagnostics=
TaskAttempt 0 failed, info=
Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:157)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:161)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:117)
at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:142)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:141)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:147)
… 14 more
TaskAttempt 1 failed, info=
Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:157)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivil
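
(The trace is cut off above in the original post.) The repeated java.lang.OutOfMemoryError in the Tez map task means the task's JVM heap is too small for the sort buffers the query needs. A common mitigation on a small Sandbox VM is to raise the Tez container size and keep its Java heap at roughly 80% of it; these can be set per session in the Hive view before running the query, and the exact numbers below are only illustrative for a 14 GB VM:

SET hive.tez.container.size=1024;
SET hive.tez.java.opts=-Xmx820m;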

Reply To: Spark manual upgrade to 1.4.1 (HDP – 2.3.2)


Ram,

There are two ways to upgrade. You can upgrade HDP to 2.3.2 with Ambari 2.1.2, and that will automatically upgrade Spark to 1.4.1.

The second way is to manually upgrade only Spark to 1.4.1.

What OS are you on? Can you ensure the repo URL you use in step 1.3.2 (step 3.b) is from section 1.3.3 and matches your OS?

Reply To: Spark builds differences between HDP and community


Which version of Spark/HDP do you see this on? On Spark 1.4.1/HDP 2.3.2 the file in question doesn't have this issue.

Reply To: Log Rotation for .out files of hadoop processes


Hello Arpan,

There is nothing built in to Hadoop for configuring log rotation of the .out files. These files are only written at daemon startup, and there is very little data written into these files, so there isn’t a strong reason to rotate them. If you still want to rotate them, then a possible solution would be to use external tools, such as logrotate.

http://linuxcommand.org/man_pages/logrotate8.html
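
For example, a minimal logrotate rule along these lines could work, dropped into /etc/logrotate.d/ (the glob assumes the default /var/log/hadoop layout; adjust it to your installation):

/var/log/hadoop/*/*.out {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate matters here because the daemons keep the .out file handle open, so the file has to be truncated in place rather than renamed.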

Thank you,
–Chris Nauroth

Reply To: Hue: Misconfigurations


Hi Dave,

Thanks for the response. I did change localhost to the FQDN and stopped and restarted Hue, but there are still misconfiguration issues, as follows:

hadoop.hdfs_clusters.default.webhdfs_url Current value: http://ec2-54-173-68-129.compute-1.amazonaws.com:50070/webhdfs/v1/

Failed to access filesystem root

hcatalog.templeton_url Current value: http://ec2-54-173-68-129.compute-1.amazonaws.com:50111/templeton/v1/

HTTPConnectionPool(host='ec2-54-173-68-129.compute-1.amazonaws.com', port=50111): Max retries exceeded with url: /templeton/v1/status?user.name=hue&doAs=hue (Caused by : [Errno 111] Connection refused)

Oozie Editor/Dashboard: The app won't work without a running Oozie server

Also, Oozie is running according to Ambari. What else do I need to look at?

Thank you!
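
The "Connection refused" on port 50111 says the WebHCat (Templeton) server itself is not answering, even if Oozie is fine. A quick check from the Hue host, using the hostname from the error above:

# A healthy WebHCat answers with a small JSON status document
curl -sS 'http://ec2-54-173-68-129.compute-1.amazonaws.com:50111/templeton/v1/status'
# If this is refused, start or restart the WebHCat Server component from Ambari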

Reply To: Configuration errors of java heap/tez container/yarn resource manager


Running the Sandbox on an Azure Basic A4 (8 cores, 14 GB) pay-as-you-go subscription. While running the following commands in the Hive user view:

SELECT truckid, avg(mpg) avgmpg FROM truck_mileage GROUP BY truckid;
CREATE TABLE DriverMileage STORED AS ORC AS SELECT driverid, sum(miles) totmiles FROM truck_mileage GROUP BY driverid;
they do not execute, and they throw the S020 error shown below.

I tweaked the configuration properties a little (after looking at the errors), but the errors are the same before and after these changes:

NameNode heap size in HDFS: from 256 MB to 1 GB
DataNode heap size in HDFS: from 256 MB to 1 GB
Tez container size in Hive: from 250 MB to 512 MB
YARN minimum container size in YARN: from 250 MB to 512 MB

Since this Sandbox VM is running as Hortonworks recommends, why do the errors persist? Can somebody please assist?

The S020 Data Storage Error comes with the following log info:

org.apache.ambari.view.PersistenceException: Caught exception trying to store view entity org.apache.ambari.view.hive.resources.jobs.viewJobs.JobImpl@69d

org.apache.ambari.view.PersistenceException: Caught exception trying to store view entity org.apache.ambari.view.hive.resources.jobs.viewJobs.JobImpl@69d
at org.apache.ambari.server.view.persistence.DataStoreImpl.throwPersistenceException(DataStoreImpl.java:648)
at org.apache.ambari.server.view.persistence.DataStoreImpl.store(DataStoreImpl.java:148)
at org.apache.ambari.view.hive.persistence.DataStoreStorage.store(DataStoreStorage.java:76)
at org.apache.ambari.view.hive.resources.CRUDResourceManager.save(CRUDResourceManager.java:117)
at org.apache.ambari.view.hive.resources.PersonalCRUDResourceManager.save(PersonalCRUDResourceManager.java:68)
at org.apache.ambari.view.hive.resources.jobs.viewJobs.JobResourceManager.saveIfModified(JobResourceManager.java:69)
at org.apache.ambari.view.hive.resources.jobs.viewJobs.JobResourceManager.read(JobResourceManager.java:80)
at org.apache.ambari.view.hive.resources.jobs.viewJobs.JobResourceManager.readController(JobResourceManager.java:95)
at org.apache.ambari.view.hive.resources.jobs.JobService.getOne(JobService.java:104)
at sun.reflect.GeneratedMethodAccessor320.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:540)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:715)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:770)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:182)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:209)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:198)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:132)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: The value for the statusMessage field of the JobImpl entity can not exceed 3200 characters. Given value = Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1444675374870_0001_1_00, diagnostics=[Task failed, taskId=task_1444675374870_0001_1_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:157)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:161)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:117)
at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:142)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:141)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:147)
… 14 more
], TaskAttempt 1 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:157)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:161)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:117)
at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:142)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:141)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:147)
… 14 more
], TaskAttempt 2 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:157)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:161)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:117)
at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:142)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:141)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:147)
… 14 more
], TaskAttempt 3 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:157)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:161)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:117)
at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:142)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:141)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:147)
… 14 more
]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:0, Vertex vertex_1444675374870_0001_1_00 [Map 1] killed/failed due to:null]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1444675374870_0001_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1, Vertex vertex_1444675374870_0001_1_01 [Reducer 2] killed/failed due to:null]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
at org.apache.ambari.server.view.persistence.DataStoreImpl.checkStringValue(DataStoreImpl.java:634)
at org.apache.ambari.server.view.persistence.DataStoreImpl.persistEntity(DataStoreImpl.java:448)
at org.apache.ambari.server.view.persistence.DataStoreImpl.store(DataStoreImpl.java:144)
… 91 more



Reply To: Welcome to the new Hortonworks Sandbox!


Mr. Kuan Butts,

No, I am still trying to solve the issue I posted in the forum. I posted it on the main page, posted it here, and created a separate thread too.

I tried both deployment versions suggested by Hortonworks on Azure (i.e. Classic and Resource Manager); both have the same issue (the S020 data storage error). One moderator replied asking for a stack trace, and I appended that info, but I am still waiting. If you know the solution, please let me know.

Reply To: Hue: Misconfigurations


I am also getting the following exception when I click the File Browser icon in Hue. Note that I am signed in to Hue with the username hdfs:

WebHdfsException at /filebrowser/

SecurityException: Failed to obtain user group information: org.apache.hadoop.security.authorize.AuthorizationException: User: hue is not allowed to impersonate hdfs (error 403)
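
That 403 means HDFS is refusing to let the hue service user act on behalf of the hdfs user. The usual fix is to allow the hue proxy user in core-site.xml (in Ambari: HDFS > Configs > Custom core-site) and restart HDFS; the wildcard values below are the permissive form commonly used on a sandbox, not a production recommendation:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>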

Reply To: Welcome to the new Hortonworks Sandbox!


Thanks, siri t, for writing back. No luck on my end either; I've been at it for a few days, but the problem is I haven't made it through the tutorial yet, so I am unable to debug the errors, as I still don't have a solid grasp of what is going on...

I wish I knew of a place where people can get help with this stuff. I found, and am going to, a Meetup for Hadoop users; maybe I will have luck there.

Connection Refused Error


I have installed Sandbox 2.3 on Windows 10. While booting the VM I get the following error. Any suggestions on how to resolve it?

Call From sandbox.hortonworks.com/10.0.2.15 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException:
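
Port 8020 is the NameNode's RPC port, so this generally means HDFS has not finished starting (or failed to start) inside the Sandbox. Two quick checks from the Sandbox shell; the log filename shown is the usual Sandbox default and may differ on your VM:

# Is anything listening on the NameNode port yet?
netstat -tlnp | grep 8020
# If not, the end of the NameNode log usually names the reason
tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.log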

Reply To: Hive query logging from all client


Neil,

Did you get an answer to your questions? I have a similar issue now.
