Channel: Hortonworks » All Replies

Can we use VMware Storage (virtual disks) in Hadoop Cluster Installation?


Reply To: Ambari 2.1 on Centos 7 Fails to create a cluster


Hi,

I am using a 2-node cluster and facing a similar problem. I am able to establish an SSH connection to both nodes successfully (and vice versa), but the setup agent script still fails.

Any leads would be appreciated.

HDP 2.1 to 2.2 upgrade RHEL6 problem and question


Hi everyone! I have a cluster with 1 NameNode and 4 DataNodes on Red Hat Enterprise Linux 6. My HDP version is 2.1. Ambari was version 1.7, but I upgraded it to 2.1. I want to upgrade HDP to version 2.2. I have read that if I want to upgrade HDP from 2.1 to 2.2, I have to do it before upgrading Ambari to 2.1. When I upgrade HDP to 2.2, Ambari does not see any changes and nothing works. I am using this tutorial:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HDP_Man_Upgrade_v22/index.html#Item1.1

How can I do this? I tried to downgrade Ambari to 1.7 but got many errors. What if I upgrade HDP to 2.2 now and then upgrade Ambari from 2.1 to 2.1.1? Will that work? The problem is that I have very little time.
Thank you in advance

enhanced consumer release dates


Hi All

There is a lot of buzz about Kafka coming up with a much-improved version of the high-level consumer:
https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ConsumerAPI

I have written a couple of times over the last two months to try to understand their plans for releasing this API. I sent mail to users@kafka.apache.org, but that does not seem to be the right place to ask.

Can anyone please help me with this? If Hortonworks is coming up with its own release, I can migrate to a Hortonworks installation.

Thanks in advance
shashank

JobHistory: No SysLog and log4j warn for stderr


I’m using HDP 2.2.4.2-2 and am having trouble seeing log output in the JobHistory UI. I am not sure what configuration is missing, server-side or client-side.

I have a simple WordCount program:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static final Log LOG = LogFactory.getLog(WordCount.class);

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            LOG.info("LOG - map function invoked");
            System.out.println("stdout - map function invoked");
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.job.jar", "/space/tmp/jar/wordCount.jar");
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("hdfs://localhost:9000/user/jsun/input"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:9000/user/jsun/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

I expect to see output in ‘stdout’ and ‘stderr’ in the MapReduce job history, because I have these statements in the map function of the Mapper class:

LOG.info("LOG - map function invoked");
System.out.println("stdout - map function invoked");

However, I see nothing in ‘stdout’, and in ‘stderr’ I am only told that log4j is not properly configured (no appenders):

Log Type: stderr
Log Upload Time: Thu Sep 10 11:30:43 -0700 2015
Log Length: 222
log4j:WARN No appenders could be found for logger (org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Log Type: stdout
Log Upload Time: Thu Sep 10 11:30:43 -0700 2015
Log Length: 0

Log Type: syslog
Log Upload Time: Thu Sep 10 11:30:43 -0700 2015
Log Length: 41556
Showing 4096 bytes of 41556 total. Click here for the full log.
ing JobHistoryEventHandler. Size of the outstanding queue size is 0 ...

Seeking help here.
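
One thing I am considering as a debugging step (my own idea, not something from the HDP docs, so treat it as a sketch only) is forcing a basic log4j configuration from inside the task, so the “no appenders” warning cannot swallow the output. DebugTokenizerMapper below is a hypothetical variant of the mapper above:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.log4j.BasicConfigurator;

public class DebugTokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void setup(Context context) {
        // Force a console appender in the task JVM; a debugging aid only,
        // not a substitute for a proper log4j.properties on the task classpath.
        BasicConfigurator.configure();
    }

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        System.out.println("stdout - map function invoked");
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}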

How do I develop MapReduce programs?


I am looking to develop Hadoop MapReduce programs. So far I’ve done the following:

    Downloaded and installed the Hortonworks sandbox (2.2, but I can upgrade to 2.3).
    Installed Eclipse on my local Windows 8 machine.
    Downloaded and unzipped Hadoop 2.7.1 onto my local box.
    Created *Driver.java, *Mapper.java, and *Reducer.java classes in my sample Java project, then added every .jar I could find in the Hadoop 2.7.1 distribution via the ‘Add External JARs’ dialog.

But that’s as far as I got. I have no idea how to actually build it, run it, or submit it, or whether I even referenced the right libraries.

I am looking for a step-by-step tutorial on how to get started (preferably with Eclipse): e.g. which libraries to reference, which Eclipse plugins to install, how to create and submit the job to the VM sandbox, etc.

What I find on the internet right now are tutorials that are about two years old, and everything in them is outdated.

Is there anything that will walk me through creating my first program?
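
Just to make the question concrete, this is the kind of driver skeleton I have pieced together from the old tutorials (MyDriver is a placeholder name, and I am not sure the Tool/ToolRunner pattern is still the recommended one):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "my first job");
        job.setJarByClass(MyDriver.class);
        // job.setMapperClass(...) and job.setReducerClass(...) would point at my
        // *Mapper.java / *Reducer.java classes; without them the job falls back to
        // the identity map/reduce, which is enough to prove the plumbing works.
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // Once packaged into a jar: hadoop jar myjob.jar MyDriver <input> <output>
        System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
    }
}

Is this roughly the right shape, and what are the actual build and submit steps around it?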

Reply To: Unable to run pig script with python UDF in mapreduce mode : ERROR 2997


I moved the file to /tmp/test.py and ran the following Pig script:
REGISTER jython-standalone-2.7.0.jar;

-- REGISTER '/user/hue/test.py' USING org.apache.pig.scripting.jython.JythonScriptEngine AS testudf;
REGISTER '/tmp/test.py' USING jython AS testudf;
A = LOAD '/user/hue/Data/myfile.txt' USING PigStorage() AS t:int;
dump A;
B = FOREACH A GENERATE t, testudf.helloworld() AS Hello:chararray;
dump B;

It runs with the following log:
15/09/10 21:04:50 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/09/10 21:04:50 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/09/10 21:04:50 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-09-10 21:04:50,405 [main] INFO org.apache.pig.Main – Apache Pig version 0.14.0.2.2.4.2-2 (rexported) compiled Mar 31 2015, 16:33:52
2015-09-10 21:04:50,406 [main] INFO org.apache.pig.Main – Logging error messages to: /hadoop/yarn/local/usercache/hue/appcache/application_1440784855941_0129/container_e04_1440784855941_0129_01_000002/pig_1441919090396.log
2015-09-10 21:04:53,242 [main] INFO org.apache.pig.impl.util.Utils – Default bootup file /home/yarn/.pigbootup not found
2015-09-10 21:04:53,614 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine – Connecting to hadoop file system at: hdfs://sandbox.hortonworks.com:8020
2015-09-10 21:04:58,274 [main] INFO org.apache.pig.scripting.jython.JythonScriptEngine – created tmp python.cachedir=/tmp/pig_jython_5558843429905665754
2015-09-10 21:05:03,599 [main] WARN org.apache.pig.scripting.jython.JythonScriptEngine – pig.cmd.args.remainders is empty. This is not expected unless on testing.
2015-09-10 21:05:04,970 [main] INFO org.apache.pig.scripting.jython.JythonScriptEngine – Register scripting UDF: testudf.helloworld
2015-09-10 21:05:05,716 [main] WARN org.apache.pig.tools.grunt.GruntParser – ‘dump’ statement is ignored while processing ‘explain -script’ or ‘-check’
2015-09-10 21:05:06,114 [main] INFO org.apache.pig.scripting.jython.JythonFunction – Schema ‘word:chararray’ defined for func helloworld
2015-09-10 21:05:06,554 [main] WARN org.apache.pig.tools.grunt.GruntParser – ‘dump’ statement is ignored while processing ‘explain -script’ or ‘-check’
2015-09-10 21:05:06,835 [main] INFO org.apache.pig.data.SchemaTupleBackend – Key [pig.schematuple] was not set… will not generate code.
2015-09-10 21:05:06,946 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer – {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
script.pig syntax OK
2015-09-10 21:05:07,101 [main] INFO org.apache.pig.Main – Pig script completed in 17 seconds and 455 milliseconds (17455 ms)

However, I do not see any output in the results:

Results

No results found

Anybody seen something like this?

Reply To: which JAR files are required for remote JDBC clients


Hi Vinay, that does not answer my question. Let me phrase it this way: I am aware of, and already connect to, the server using the Hive 0.1x JDBC driver. My question was whether there are other JAR files that are also supposed to connect to the Thrift server. I am not looking to use Beeline; this would be remote machines using the JDBC drivers that Apache Spark is supposed to support. For example, if you use the “equivalent” JAR files from the HDP distro, they do not connect: http://www.atware.co.jp/blog/2015/5/28/y5qi4p6x175x1pgbdh32jnj5uss07r

memcached and hive terminal issue


After configuring memcached with HDP 2.3, I am not able to connect to the Hive terminal. The Hive terminal does not open and hangs after some time.
I will be very grateful for your help and advice.

-kamal maurya

Hadoop Cluster Disk Storage


Hello,

 

We are planning to set up a new Hadoop cluster in a virtual environment. We have checked that this is possible as long as the OS is supported. However, for disk storage we have no physical disks (RAID, JBOD) available. We have the option of using normal virtual storage (VMware storage), but we are not sure whether this is possible, since it is not mentioned on the Hortonworks site as far as we can tell.

 

Can you confirm whether this is possible? If yes, would there be an impact on the whole cluster? Thanks!

 

Regards,

Mitch

 

Reply To: IP 127.0.0.1:8888 in place of 192.168.56.101


Please look into the eth0 config. While correcting it, you probably removed:

HOSTNAME="sandbox.hortonworks.com"

Add it back, then reboot the machine or restart the network.

 

Reply To: Hadoop Cluster Disk Storage


I have built Hadoop clusters on VMware; that is not a problem. I think I was using VMware storage connecting to a FreeNAS box.

Hadoop does not do anything special to access the disks; it uses the normal operating system facilities. As long as the disk space is mounted as if it were a local ext3 filesystem, it should work fine. (I guess there may be some file systems that don’t work with Hadoop.)

 

A quick Google search turns up:

https://www.vmware.com/files/pdf/products/vsphere/VMware-vSphere-BDE-Storage-Provisioning-Whitepaper.pdf

and

https://www.vmware.com/files/pdf/techpaper/VMW-Hadoop-Performance-vSphere5.pdf

 

Good luck!

Upgrading Hive to allow Update/Delete Transactions


I have created a Hadoop Cluster with Ambari 2.1 including Hive. I would like to be able to do Update and Delete queries within Hive, but it looks like I currently have version 0.12.0.2.0 of Hive. I would like to upgrade to 0.13 or 0.14 to enable these transactions, but I am not sure how to do that with an existing installation of Ambari. Any help would be appreciated.

Cloudbreak Networking in GCP


Hi,

I’ve got a fairly vanilla install of HDP 2.3 on GCP using Cloudbreak. How does the networking setup work?

I am attempting to reach the HDFS service from another Hadoop [client] instance. (I actually want to do a distcp.)

I can SSH into the Cloudbreak-provisioned hosts from the machine I want to copy from, but I can’t seem to get a remote Hadoop command such as hadoop fs -ls hdfs://nn:8020/ to work.
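
For reference, what I am ultimately trying to get working boils down to something like this Java sketch (with “nn” standing in for the actual NameNode host of the Cloudbreak-provisioned cluster):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteHdfsCheck {
    public static void main(String[] args) throws Exception {
        // Connect to the remote NameNode RPC port and list the root directory,
        // the same thing "hadoop fs -ls hdfs://nn:8020/" does from the shell.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://nn:8020/"), conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}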

Any hints or documentation would be really useful.

Thanks again,
Niall


Reply To: list of Components in Blueprints


Yes, adding this list to the official Ambari wiki and maintaining it there would be very helpful.

This page provides this information for HDP 2.1, as well as a link to a comprehensive list of cluster options that can be set, illustrated with examples: https://blog.codecentric.de/en/2014/05/lambda-cluster-provisioning/

I am pasting the list of components for HDP 2.1:
HDFS DATANODE, HDFS_CLIENT, JOURNALNODE, NAMENODE, SECONDARY_NAMENODE, ZKFC
YARN APP_TIMELINE_SERVER, NODEMANAGER, RESOURCEMANAGER, YARN_CLIENT
MAPREDUCE2 HISTORYSERVER, MAPREDUCE2_CLIENT
GANGLIA GANGLIA_MONITOR, GANGLIA_SERVER
HBASE HBASE_CLIENT, HBASE_MASTER, HBASE_REGIONSERVER
HIVE HIVE_CLIENT, HIVE_METASTORE, HIVE_SERVER, MYSQL_SERVER
HCATALOG HCAT
WEBHCAT WEBHCAT_SERVER
NAGIOS NAGIOS_SERVER
OOZIE OOZIE_CLIENT, OOZIE_SERVER
PIG PIG
SQOOP SQOOP
STORM DRPC_SERVER, NIMBUS, STORM_REST_API, STORM_UI_SERVER, SUPERVISOR
TEZ TEZ_CLIENT
FALCON FALCON_CLIENT, FALCON_SERVER
ZOOKEEPER ZOOKEEPER_CLIENT, ZOOKEEPER_SERVER

HBASE Java API cannot connect to HDP 2.3


I installed HDP 2.3 (HDP 2.3.0.0-2557) and am trying to connect my HBase Java API program to HBase on HDP. I was able to verify the following versions:

Hadoop 2.7.1.2.3.0.0-2557
HBase 1.1.1.2.3.0.0-2557

My Java program is a Maven project with hbase-site.xml in its resources folder (from which HBaseAdmin is able to extract all the required properties). The following dependencies are included in the project:

<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.9</version>
</dependency>
<dependency>
<groupId>jdk.tools</groupId>
<artifactId>jdk.tools</artifactId>
<version>1.7.0_05</version>
<scope>system</scope>
<systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>
<dependency>
<groupId>org.ow2.util.bundles</groupId>
<artifactId>commons-collections-3.2.1</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>com.sun</groupId>
<artifactId>tools</artifactId>
<version>1.7.0</version>
<scope>system</scope>
<systemPath>${env.JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>
<dependency>
<groupId>commons-configuration</groupId>
<artifactId>commons-configuration</artifactId>
</dependency>
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.6</version>
</dependency>
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
<version>1.2</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-auth</artifactId>
<version>2.7.1.2.3.0.0-2557</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>2.7.1.2.3.0.0-2557</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-mapreduce-client-core</artifactId>
<version>2.7.1.2.3.0.0-2557</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>1.1.1.2.3.0.0-2557</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-common</artifactId>
<version>1.1.1.2.3.0.0-2557</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-protocol</artifactId>
<version>1.1.1.2.3.0.0-2557</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.2.3</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-core-asl</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-jaxrs</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-xc</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>javax.xml.parsers</groupId>
<artifactId>jaxp-api</artifactId>
<version>1.4.5</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>2.5.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.7</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.6.4</version>
</dependency>
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.4.6.2.3.0.0-2557</version>
</dependency>
<dependency>
<groupId>org.mariadb.jdbc</groupId>
<artifactId>mariadb-java-client</artifactId>
</dependency>
<dependency>
<groupId>com.intradiem.enterprise</groupId>
<artifactId>build-tools</artifactId>
</dependency>
<dependency>
<groupId>org.apache.htrace</groupId>
<artifactId>htrace-core</artifactId>
<version>3.1.0-incubating</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.0.1.3.7.0-2</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-all</artifactId>
<version>4.0.0.Final</version>
</dependency>
<dependency>
<groupId>org.mortbay.jetty</groupId>
<artifactId>jetty-util</artifactId>
<version>6.1.26</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.7.1.2.3.0.0-2557</version>
</dependency>

Also, jetty-6.1.26.hwx.jar is included on the program’s classpath.
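
The failing call boils down to roughly this (a trimmed-down sketch only; the real code runs inside a JUnit test and picks up hbase-site.xml from the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseAvailabilityCheck {
    public static void main(String[] args) throws Exception {
        // hbase-site.xml on the classpath supplies the ZooKeeper quorum and the
        // other cluster properties; checkHBaseAvailable throws if the master
        // cannot be reached.
        Configuration configuration = HBaseConfiguration.create();
        HBaseAdmin.checkHBaseAvailable(configuration);
        System.out.println("HBase is available");
    }
}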

On calling HBaseAdmin.checkHBaseAvailable(configuration), I get the following error:

15/09/11 14:03:32 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7cde5f11 connecting to ZooKeeper ensemble=xxxxxxxxxxxxxx:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Users/adewan/.m2/repository/org/apache/avro/avro-tools/1.7.7/avro-tools-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Users/adewan/.m2/repository/org/slf4j/slf4j-log4j12/1.7.10/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Users/adewan/.m2/repository/org/slf4j/slf4j-simple/1.6.4/slf4j-simple-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation….


84 [main-SendThread(xxxxxxxx:2181)] INFO org.apache.zookeeper.ClientCnxn – Opening socket connection to server xxxxxxxx:2181. Will not attempt to authenticate using SASL (unknown error)
109 [main-SendThread(xxxxxxxx:2181)] INFO org.apache.zookeeper.ClientCnxn – Socket connection established to xxxxxxxx:2181, initiating session
165 [main-SendThread(xxxxxxxx:2181)] INFO org.apache.zookeeper.ClientCnxn – Session establishment complete on server xxxxxxxx:2181, sessionid = 0x14f9a026336007b, negotiated timeout = 40000
15/09/11 14:03:32 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://xxxxxxxx:8020/apps/hbase/data/lib, ignored
org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1066)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy11.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:238)
at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2797)
at com.intradiem.framework.tests.TestClass.validateConnectivity(TestClass.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
15/09/11 14:03:33 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14f9a026336007b
1261 [main] INFO org.apache.zookeeper.ZooKeeper – Session: 0x14f9a026336007b closed
1262 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn – EventThread shut down
org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoSuchMethodError: org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1533)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1553)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1704)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:922)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2819)
at com.intradiem.framework.tests.TestClass.validateConnectivity(TestClass.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: com.google.protobuf.ServiceException: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoSuchMethodError: org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:50918)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1564)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1502)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1524)
… 20 more
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoSuchMethodError: org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:773)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:885)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:854)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1180)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
… 25 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:715)
… 29 more

Can anyone help in figuring out the problem?

Oozie User Privileges in Database


Hi,

We gave all privileges for oozie.* to the oozie user. The Oozie database is called ‘oozie’.

However, in the HDP manual the oozie user is granted all privileges on all databases in the existing MySQL instance.

Is it really necessary to grant Oozie all privileges on all databases in the MySQL server, which may also be used by other components such as Hive?

Reply To: JobHistory: No SysLog and log4j warn for stderr


The thing is, I have log4j.properties in the jar file. The structure of the jar file is:

META-INF/MANIFEST.MF
.project
.classpath
WordCount$IntSumReducer.class
WordCount$TokenizerMapper.class
WordCount.class
container-log4j.properties
log4j.properties  

And I’m not sure what you mean by ‘classpath’. Is it the client-side classpath or the server-side classpath? The hadoop classpath, the yarn classpath, or just the Java program’s classpath?
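
One check I thought of (my own idea, just a sketch) is to print, from inside the task itself, where log4j.properties is actually resolved from; that should show whether the copy in my job jar is visible to the task JVM at all. Something like this, dropped into the mapper’s setup():

@Override
protected void setup(Context context) {
    // Prints to the task's stdout which log4j.properties (if any) the task
    // JVM's classloader can see.
    java.net.URL res = Thread.currentThread().getContextClassLoader()
            .getResource("log4j.properties");
    System.out.println("log4j.properties resolved to: " + res);
}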

Ambari failed in cluster


Hi Team,

I am facing a problem when creating my cluster with Ambari: host bootstrap is failing. Please find the logs:

STDOUT:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

 

ERROR:root:ERROR: Bootstrap of host dn1.ambari.com fails because previous action finished with non-zero exit code (1)

ERROR MESSAGE: Connection to dn1.ambari.com closed.

 

STDOUT: Host registration aborted. Ambari Agent host cannot reach Ambari Server ‘184.171.255.47:8080′. Please check the network connectivity between the Ambari Agent host and the Ambari Server

 

Connection to dn1.ambari.com closed.

 

INFO:root:Finished parallel bootstrap

 

13 Sep 2015 01:20:02,773  INFO [pool-9-thread-1] BSHostStatusCollector:55 – Request directory /var/run/ambari-server/bootstrap/3

13 Sep 2015 01:20:02,773  INFO [pool-9-thread-1] BSHostStatusCollector:62 – HostList for polling on [dn1.ambari.com, mn1.ambari.slave]

Please provide the solution.
