Channel: Hortonworks » All Replies

Smoke test fails – HDP 2.2.6.0 and Windows


Hello,

I installed HDP 2.2.6.0 on Windows Server 2012 R2 and the services are running OK.
It is a single-node installation with only the basic options, without Knox, using the Derby DB.
When I try to run the smoke tests, I get the errors below.
Can you comment, please?

Thank you,
Matjaz

Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Users\SAP>%HADOOP_NODE%\status_local_hdp_services
'C:\hdp\\status_local_hdp_services' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\SAP>%HADOOP_NODE%\start_local_hdp_services
starting datanode
starting derbyserver
starting falcon
starting hbrest
starting hiveserver2
starting jobhistoryserver
starting logviewer
starting master
starting metastore
starting namenode
starting nimbus
starting nodemanager
starting oozieservice
starting regionserver
starting resourcemanager
starting secondarynamenode
starting supervisor
starting templeton
starting thrift
starting thrift2
starting timelineserver
starting ui
starting zkServer
Sent all start commands.
total services
23
running services
23
not yet running services
0
Failed_Start
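
In case it helps diagnose: the "not recognized" error above looks like a path problem, not a service problem, since all 23 services did start. Two quick checks, sketched against a default HDP-for-Windows layout (the findstr pattern is my assumption):

```
:: Show what the variable actually expands to (the doubled backslash in the
:: error message hints at a trailing backslash in %HADOOP_NODE%)
echo %HADOOP_NODE%

:: List which service-control scripts are actually installed there
dir /b "%HADOOP_NODE%" | findstr hdp_services
```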

C:\Users\SAP>runas /user:hadoop "cmd /K %HADOOP_HOME%\Run-SmokeTests.cmd"
Enter the password for hadoop:
Attempting to start cmd /K c:\hdp\hadoop-2.6.0.2.2.6.0-2800\Run-SmokeTests.cmd as user "HADOOP1\hadoop" ...

-----------

Hadoop smoke test – wordcount using hadoop.cmd file
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/08/10 16:25:43 INFO impl.TimelineClientImpl: Timeline service address: http://hadoop1.durs.si:8188/ws/v1/timeline/
15/08/10 16:25:43 INFO client.RMProxy: Connecting to ResourceManager at HADOOP1.durs.si/10.4.6.95:8032
java.io.IOException: The ownership on the staging directory /user/hadoop/.staging is not as expected. It is owned by smoketestuser. The directory must be owned by the submitter hadoop or by hadoop
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:120)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:437)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Run-HadoopSmokeTest : Hadoop Smoke Test: FAILED
At line:1 char:1
+ Run-HadoopSmokeTest
+ ~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Run-HadoopSmokeTest
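
Both failing tests report the same root cause: the HDFS directory /user/hadoop/.staging is owned by smoketestuser, so jobs submitted as hadoop are rejected. If I read it right, a likely fix (an untested sketch; it needs to be run as a user with HDFS superuser rights) is to hand the directory back to hadoop, or simply delete it so it is recreated on the next submission:

```
:: Option 1: fix the ownership of the stale staging directory
hdfs dfs -chown -R hadoop:hadoop /user/hadoop/.staging

:: Option 2: remove it; MapReduce recreates it with the right owner on the next job
hdfs dfs -rm -r /user/hadoop/.staging
```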

Pig smoke test – wordcount using hadoop.cmd file
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/08/10 16:25:54 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/08/10 16:25:54 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/08/10 16:25:54 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-08-10 16:25:54,928 [main] INFO org.apache.pig.Main - Apache Pig version 0.14.0.2.2.6.0-2800 (r: unknown) compiled May 18 2015, 20:58:10
2015-08-10 16:25:54,939 [main] INFO org.apache.pig.Main - Logging error messages to: C:\Users\hadoop.HADOOP1\AppData\Local\Temp\pig-143922394506991.log
2015-08-10 16:25:55,885 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file C:\Users\hadoop.HADOOP1/.pigbootup not found
2015-08-10 16:25:56,108 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-08-10 16:25:56,130 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-08-10 16:25:56,130 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://HADOOP1:8020
2015-08-10 16:25:57,809 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-08-10 16:25:58,050 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-08-10 16:25:58,164 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.textoutputformat.separator is deprecated. Instead, use mapreduce.output.textoutputformat.separator
2015-08-10 16:25:58,211 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2015-08-10 16:25:58,244 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-08-10 16:25:58,319 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2015-08-10 16:25:58,438 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2015-08-10 16:25:58,711 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-08-10 16:25:58,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-08-10 16:25:58,798 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2015-08-10 16:25:58,871 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-08-10 16:26:00,155 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://hadoop1.durs.si:8188/ws/v1/timeline/
2015-08-10 16:26:00,527 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at HADOOP1.durs.si/10.4.6.95:8032
2015-08-10 16:26:00,950 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2015-08-10 16:26:00,962 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
2015-08-10 16:26:00,963 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-08-10 16:26:00,965 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
2015-08-10 16:26:00,970 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - This job cannot be converted run in-process
2015-08-10 16:26:01,522 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/C:/hdp/pig-0.14.0.2.2.6.0-2800/pig-0.14.0.2.2.6.0-2800-core-h2.jar to DistributedCache through /tmp/temp1171630488/tmp-1373881126/pig-0.14.0.2.2.6.0-2800-core-h2.jar
2015-08-10 16:26:01,564 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/C:/hdp/pig-0.14.0.2.2.6.0-2800/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp1171630488/tmp-2012711590/automaton-1.11-8.jar
2015-08-10 16:26:01,605 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/C:/hdp/pig-0.14.0.2.2.6.0-2800/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp1171630488/tmp69833/antlr-runtime-3.4.jar
2015-08-10 16:26:01,689 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/C:/hdp/hadoop-2.6.0.2.2.6.0-2800/share/hadoop/common/lib/guava-11.0.2.jar to DistributedCache through /tmp/temp1171630488/tmp-2095352665/guava-11.0.2.jar
2015-08-10 16:26:01,746 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/C:/hdp/pig-0.14.0.2.2.6.0-2800/lib/joda-time-2.1.jar to DistributedCache through /tmp/temp1171630488/tmp1750961188/joda-time-2.1.jar
2015-08-10 16:26:01,857 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2015-08-10 16:26:01,889 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2015-08-10 16:26:01,890 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2015-08-10 16:26:01,891 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2015-08-10 16:26:01,971 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2015-08-10 16:26:01,974 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
2015-08-10 16:26:02,312 [JobControl] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://hadoop1.durs.si:8188/ws/v1/timeline/
2015-08-10 16:26:02,321 [JobControl] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at HADOOP1.durs.si/10.4.6.95:8032
2015-08-10 16:26:02,393 [JobControl] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-08-10 16:26:02,411 [JobControl] INFO org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:script-143922394506991.pig got an error while submitting
java.io.IOException: The ownership on the staging directory /user/hadoop/.staging is not as expected. It is owned by smoketestuser. The directory must be owned by the submitter hadoop or by hadoop
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:120)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:437)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
2015-08-10 16:26:02,483 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2015-08-10 16:26:07,508 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015-08-10 16:26:07,512 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job null has failed! Stop running all dependent jobs
2015-08-10 16:26:07,514 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-08-10 16:26:07,529 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2015-08-10 16:26:07,532 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:

HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.6.0.2.2.6.0-2800 0.14.0.2.2.6.0-2800 hadoop 2015-08-10 16:26:00 2015-08-10 16:26:07 UNKNOWN

Failed!

Failed Jobs:
JobId Alias Feature Message Outputs
N/A A,B MAP_ONLY Message: java.io.IOException: The ownership on the staging directory /user/hadoop/.staging is not as expected. It is owned by smoketestuser. The directory must be owned by the submitter hadoop or by hadoop
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:120)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:437)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
hdfs://HADOOP1:8020/user/hadoop/out-143922394506991.log,

Input(s):
Failed to read data from "hdfs://HADOOP1:8020/user/hadoop/hadoop-143922394506991"

Output(s):
Failed to produce result in "hdfs://HADOOP1:8020/user/hadoop/out-143922394506991.log"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
null

2015-08-10 16:26:07,558 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2015-08-10 16:26:07,594 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2244: Job failed, hadoop does not return any error message
Details at logfile: C:\Users\hadoop.HADOOP1\AppData\Local\Temp\pig-143922394506991.log
2015-08-10 16:26:07,656 [main] INFO org.apache.pig.Main - Pig script completed in 13 seconds and 673 milliseconds (13673 ms)
Run-PigSmokeTest : Pig Smoke Test: FAILED
At line:1 char:1
+ Run-PigSmokeTest
+ ~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Run-PigSmokeTest

Hcatalog smoke test – show tables, create table, and drop table
Running hcat command: show tables
15/08/10 16:26:12 WARN conf.HiveConf: HiveConf of name hive.log.dir does not exist
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/hdp/hadoop-2.6.0.2.2.6.0-2800/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/hdp/hive-0.14.0.2.2.6.0-2800/lib/hive-jdbc-0.14.0.2.2.6.0-2800-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/hdp/hbase-0.98.4.2.2.6.0-2800-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Terminate batch job (Y/N)? y
Traceback (most recent call last):
  File "c:\hdp\hive-0.14.0.2.2.6.0-2800\hcatalog\bin\hcat.py", line 157, in <module>
    retval = subprocess.call(cmd, stdin=None, stdout=None, stderr=None, shell=True)
  File "C:\Python\Python27\lib\subprocess.py", line 522, in call
    return Popen(*popenargs, **kwargs).wait()
  File "C:\Python\Python27\lib\subprocess.py", line 1007, in wait
    _subprocess.INFINITE)
KeyboardInterrupt
Terminate batch job (Y/N)? y

