
Bad substitution error with Spark on HDP 2.2


Hi,

I’m facing an issue running Spark on YARN.

YARN is installed through Ambari (HDP v2.2.0.0-2041), with Spark 1.2.
After submitting a Spark job through YARN, I get this error message:
Stack trace: ExitCodeException exitCode=1: /hdp/hadoop/yarn/local/usercache/leads_user/appcache/application_1420759015115_0012/container_1420759015115_0012_02_000001/launch_container.sh: line 27: $PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:$PWD/__app__.jar:$PWD/*: bad substitution

at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
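
If I’m reading line 27 of launch_container.sh correctly, the failure comes from the literal ${hdp.version} token that was never substituted in the classpath: a dot isn’t valid in a shell parameter name, so bash rejects the expansion. A quick local check seems to reproduce the same message:

$ bash -c 'echo ${hdp.version}'
bash: ${hdp.version}: bad substitution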

I’ve followed the instructions given in the technical preview.

I’ve set the configurations mentioned there in the spark-defaults.conf file inside the conf folder, and verbose logging confirms that those parameters are being picked up, but I’m still getting the same error.
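
For reference, the relevant entries in conf/spark-defaults.conf look like this (a sketch reconstructed from the verbose output below; the heap-dump path is just my local setting):

spark.driver.extraJavaOptions -Dhdp.version=2.2.0.0-2041
spark.yarn.am.extraJavaOptions -Dhdp.version=2.2.0.0-2041
spark.executor.extraJavaOptions -Dhdp.version=2.2.0.0-2041 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/hdfs/heapDump/ -XX:+UseCompressedOops
spark.executor.memory 3G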

In verbose mode it prints the following:

System properties:
spark.executor.memory -> 3G
SPARK_SUBMIT -> true
spark.executor.extraJavaOptions -> -Dhdp.version=2.2.0.0-2041 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/hdfs/heapDump/ -XX:+UseCompressedOops
spark.app.name -> com.xxx.xxx.xxxxxx
spark.driver.extraJavaOptions -> -Dhdp.version=2.2.0.0-2041
spark.yarn.am.extraJavaOptions -> -Dhdp.version=2.2.0.0-2041
spark.master -> yarn-cluster
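
For completeness, the job is submitted roughly like this (the jar path is a placeholder, and the class name is redacted as above):

./bin/spark-submit --verbose \
  --master yarn-cluster \
  --class com.xxx.xxx.xxxxxx \
  /path/to/app.jar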

Any idea what the problem could be here?

