Hello,
OK, I found the answer after digging through some JIRA tasks about this problem. It is curious how it works: the client must know, locally, all of the cluster's configuration files, so that at container execution it can send back to the cluster a configuration the cluster already knows, for my Pig task. I also had to make some changes to declare the classpath of all the Hadoop jars on the cluster. Hadoop is not really clear, in some cases, about what it does when a remote task client is involved.

These are the files I had to copy locally from the cluster:
core-site.xml
hdfs-site.xml
log4j.properties
mapred-site.xml
yarn-site.xml
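For what it's worth, here is a minimal sketch of how a remote client can load those local copies before submitting a job, using Hadoop's standard Configuration.addResource() API. The directory /home/user/hadoop-conf is an assumption; point it at wherever you copied the cluster's files. Note that log4j.properties is not a Hadoop Configuration resource; it just needs to be on the client's classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class RemoteClientConf {
    // Builds a client-side Configuration from local copies of the
    // cluster's configuration files (the directory path is hypothetical).
    public static Configuration load() {
        Configuration conf = new Configuration();
        String confDir = "/home/user/hadoop-conf/";
        conf.addResource(new Path(confDir + "core-site.xml"));
        conf.addResource(new Path(confDir + "hdfs-site.xml"));
        conf.addResource(new Path(confDir + "mapred-site.xml"));
        conf.addResource(new Path(confDir + "yarn-site.xml"));
        return conf;
    }
}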
mapred-site.xml

I had to add the following, both locally and on the cluster:
<property>
  <name>mapreduce.application.classpath</name>
  <value>
    /usr/local/hadoop/etc/hadoop/*,
    /usr/local/hadoop/share/hadoop/common/*,
    /usr/local/hadoop/share/hadoop/common/lib/*,
    /usr/local/hadoop/share/hadoop/hdfs/*,
    /usr/local/hadoop/share/hadoop/hdfs/lib/*,
    /usr/local/hadoop/share/hadoop/mapreduce/*,
    /usr/local/hadoop/share/hadoop/mapreduce/lib/*,
    /usr/local/hadoop/share/hadoop/yarn/*,
    /usr/local/hadoop/share/hadoop/yarn/lib/*
  </value>
</property>
<property>
  <name>mapred.remote.os</name>
  <value>Linux</value>
  <description>Remote MapReduce framework's OS, can be either Linux or Windows</description>
</property>
<property>
  <name>mapreduce.app-submission.cross-platform</name>
  <value>true</value>
</property>
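As a sketch, the same settings can also be forced from client code before the job is submitted, if you prefer not to rely on the local XML copy alone. The property names and values are exactly the ones from the snippet above; the class and method are illustrative:

import org.apache.hadoop.conf.Configuration;

public class CrossPlatformSettings {
    // Applies the mapred-site.xml settings above programmatically.
    public static void apply(Configuration conf) {
        conf.setBoolean("mapreduce.app-submission.cross-platform", true);
        conf.set("mapred.remote.os", "Linux");
        conf.set("mapreduce.application.classpath",
                "/usr/local/hadoop/etc/hadoop/*,"
                + "/usr/local/hadoop/share/hadoop/common/*,"
                + "/usr/local/hadoop/share/hadoop/common/lib/*,"
                + "/usr/local/hadoop/share/hadoop/hdfs/*,"
                + "/usr/local/hadoop/share/hadoop/hdfs/lib/*,"
                + "/usr/local/hadoop/share/hadoop/mapreduce/*,"
                + "/usr/local/hadoop/share/hadoop/mapreduce/lib/*,"
                + "/usr/local/hadoop/share/hadoop/yarn/*,"
                + "/usr/local/hadoop/share/hadoop/yarn/lib/*");
    }
}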
yarn-site.xml

I had to add the classpath here as well, both locally and on the cluster:
<property>
  <name>yarn.application.classpath</name>
  <value>
    /usr/local/hadoop/etc/hadoop/*,
    /usr/local/hadoop/share/hadoop/common/*,
    /usr/local/hadoop/share/hadoop/common/lib/*,
    /usr/local/hadoop/share/hadoop/hdfs/*,
    /usr/local/hadoop/share/hadoop/hdfs/lib/*,
    /usr/local/hadoop/share/hadoop/mapreduce/*,
    /usr/local/hadoop/share/hadoop/mapreduce/lib/*,
    /usr/local/hadoop/share/hadoop/yarn/*,
    /usr/local/hadoop/share/hadoop/yarn/lib/*
  </value>
</property>
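To double-check that the local copies and the added properties are actually picked up, a quick sanity check like this (reusing the hypothetical RemoteClientConf helper sketched earlier) prints the effective classpath values; if either prints null, the client is not reading your local files:

import org.apache.hadoop.conf.Configuration;

public class CheckClasspath {
    public static void main(String[] args) {
        Configuration conf = RemoteClientConf.load(); // hypothetical helper from above
        System.out.println(conf.get("yarn.application.classpath"));
        System.out.println(conf.get("mapreduce.application.classpath"));
    }
}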