Channel: Hortonworks » All Replies

Optimization in Hive: Does Hive update dynamically the graph structure of tasks?


When I look at the Hive source code (latest version 1.0.0):

In the ‘compile’ phase, each input SQL query generates a list of dependent tasks (a graph of tasks).

My question is: in the ‘execution’ phase, does Hive dynamically change or update the graph structure of tasks? And what about the configuration files?

So does Hive use static optimization or dynamic optimization?


Reply To: Memory configuration for 1NN (16GB RAM ) and 4 DN (8GB RAM)


Hi Dave,

I have 5 systems: 1 system for the NN with 250 GB of disk space, 4 cores, and 16 GB of memory,
and the other 4 systems with 250 GB of disk space, 4 cores, and 8 GB of memory each.

Configuration correction:
namenode java heap space = 15000
yarn.nodemanager.resource.memory-mb = 6144
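For reference, a minimal sketch of applying the YARN value above from the command line, assuming the configs.sh helper script bundled with Ambari at the path below (host, cluster name, and admin credentials are placeholders):

# Sketch only: push yarn.nodemanager.resource.memory-mb = 6144 through Ambari,
# assuming the bundled configs.sh helper; adjust host, cluster name, and credentials.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost MyCluster yarn-site "yarn.nodemanager.resource.memory-mb" "6144"

The NodeManagers still need a restart afterwards for the change to take effect.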

Thanks and regards,

vikash


Reply To: Ambari-Server sync-ldap not working


Hello there,

I am having a similar issue to this. In answer to Jeff’s question: our Ambari server is set up for HTTPS, listening on port 8443. This seems relevant in some way; otherwise I guess you wouldn’t be asking.

I have tried setting up Ambari to authenticate via LDAP (389) and LDAPS (636) and receive the same error message in both cases.

I would also like to know whether you have to run the ldap-sync command in order to authenticate users via LDAP. I had an Ambari 1.6 install working which would authenticate users from LDAP without ever having run the sync-ldap command. This does not seem to be the case with 1.7, where you seem to have to run the sync-ldap command specifying both a user list and a group list.

It seems like you need to know in advance which users will access Ambari, and that you are not able to simply specify a group membership, i.e. just use --groups and tell Ambari to allow all members of a specific group access.
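For reference, the syntax that seems to be expected is sketched below; users.txt and groups.txt are hypothetical comma-separated lists of names, and the command is run on the Ambari server host:

# Ambari 1.7 LDAP sync with an explicit user list and group list;
# users.txt and groups.txt are placeholder comma-separated name lists.
ambari-server sync-ldap --users users.txt --groups groups.txt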

Many thanks.

Reply To: Install failures due to efi permissions


All I was able to do was manually edit the 10 or so settings containing the /boot/efi path. This is during the “Customize Services” step in the install wizard. Unfortunately, any time I had to back up to a previous step (like when I unselected Nagios due to a different install failure), I had to re-edit all of these settings.

It seemed to me that these configuration recommendations were being written to /var/run/ambari-server/stack-recommendations, but I couldn’t figure out how or when those files were being generated in order to attempt to influence those “recommendations.”

java.lang.UnsatisfiedLinkError when running hbase shell


Hi Folks,

I get a java.lang.UnsatisfiedLinkError when running the HBase shell. Any suggestions? I saw someone had this problem some time ago and fixed it by upgrading JRuby. I am using HDP 2.2, a recent Red Hat Linux, and Kerberos.
One thing that instantly worries me is that I have /tmp mounted as “noexec”, and “native lib” suggests there are .so files to be loaded.
Can I tell it to use a different temp directory?

UAT [mclinta@bruathdp004 ~]$ hbase shell
2015-03-09 13:57:12,632 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: /tmp/jffi821701048168284535.tmp: /tmp/jffi821701048168284535.tmp: failed to map segment from shared object: Operation not permitted
at com.kenai.jffi.Foreign$InValidInstanceHolder.getForeign(Foreign.java:90)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:95)
at com.kenai.jffi.Library.openLibrary(Library.java:151)
at com.kenai.jffi.Library.getCachedInstance(Library.java:125)
at com.kenai.jaffl.provider.jffi.Library.loadNativeLibraries(Library.java:66)
at com.kenai.jaffl.provider.jffi.Library.getNativeLibraries(Library.java:56)
at com.kenai.jaffl.provider.jffi.Library.getSymbolAddress(Library.java:35)
at com.kenai.jaffl.provider.jffi.Library.findSymbolAddress(Library.java:45)
at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:188)
at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.loadLibrary(AsmLibraryLoader.java:110)
at com.kenai.jaffl.provider.jffi.Provider.loadLibrary(Provider.java:31)
at com.kenai.jaffl.provider.jffi.Provider.loadLibrary(Provider.java:25)
at com.kenai.jaffl.Library.loadLibrary(Library.java:76)
at org.jruby.ext.posix.POSIXFactory$LinuxLibCProvider$SingletonHolder.<clinit>(POSIXFactory.java:108)
at org.jruby.ext.posix.POSIXFactory$LinuxLibCProvider.getLibC(POSIXFactory.java:112)
at org.jruby.ext.posix.BaseNativePOSIX.<init>(BaseNativePOSIX.java:30)
at org.jruby.ext.posix.LinuxPOSIX.<init>(LinuxPOSIX.java:17)
at org.jruby.ext.posix.POSIXFactory.loadLinuxPOSIX(POSIXFactory.java:70)
at org.jruby.ext.posix.POSIXFactory.loadPOSIX(POSIXFactory.java:31)
at org.jruby.ext.posix.LazyPOSIX.loadPOSIX(LazyPOSIX.java:29)
at org.jruby.ext.posix.LazyPOSIX.posix(LazyPOSIX.java:25)
at org.jruby.ext.posix.LazyPOSIX.isatty(LazyPOSIX.java:159)
at org.jruby.RubyIO.tty_p(RubyIO.java:1897)
at org.jruby.RubyIO$i$0$0$tty_p.call(RubyIO$i$0$0$tty_p.gen:65535)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:63)
at org.jruby.ast.IfNode.interpret(IfNode.java:111)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:147)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:183)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
at org.jruby.ast.VCallNode.interpret(VCallNode.java:86)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148)
at org.jruby.RubyClass.newInstance(RubyClass.java:822)
at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535)
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
at usr.hdp.$2_dot_2_dot_0_dot_0_minus_2041.hbase.bin.hirb.__file__(/usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:110)
at usr.hdp.$2_dot_2_dot_0_dot_0_minus_2041.hbase.bin.hirb.load(/usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb)
at org.jruby.Ruby.runScript(Ruby.java:697)
at org.jruby.Ruby.runScript(Ruby.java:690)
at org.jruby.Ruby.runNormally(Ruby.java:597)
at org.jruby.Ruby.runFromMain(Ruby.java:446)
at org.jruby.Main.doRunFromMain(Main.java:369)
at org.jruby.Main.internalRun(Main.java:258)
at org.jruby.Main.run(Main.java:224)
at org.jruby.Main.run(Main.java:208)
at org.jruby.Main.main(Main.java:188)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/jffi821701048168284535.tmp: /tmp/jffi821701048168284535.tmp: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
at java.lang.Runtime.load0(Runtime.java:795)
at java.lang.System.load(System.java:1062)
at com.kenai.jffi.Init.loadFromJar(Init.java:164)
at com.kenai.jffi.Init.load(Init.java:78)
at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
at com.kenai.jffi.Foreign$InstanceHolder.<clinit>(Foreign.java:45)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:95)
at com.kenai.jffi.Internals.getErrnoSaveFunction(Internals.java:44)
at com.kenai.jaffl.provider.jffi.StubCompiler.getErrnoSaveFunction(StubCompiler.java:68)
at com.kenai.jaffl.provider.jffi.StubCompiler.<clinit>(StubCompiler.java:18)
at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:146)
… 50 more
Foreign.java:90:in `getForeign': java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: /tmp/jffi821701048168284535.tmp: /tmp/jffi821701048168284535.tmp: failed to map segment from shared object: Operation not permitted
from Foreign.java:95:in `getInstance’
from Library.java:151:in `openLibrary’
from Library.java:125:in `getCachedInstance’
from Library.java:66:in `loadNativeLibraries’
from Library.java:56:in `getNativeLibraries’
from Library.java:35:in `getSymbolAddress’
from Library.java:45:in `findSymbolAddress’
from DefaultInvokerFactory.java:51:in `createInvoker’
from Library.java:27:in `getInvoker’
from NativeInvocationHandler.java:90:in `createInvoker’
from NativeInvocationHandler.java:74:in `getInvoker’
from NativeInvocationHandler.java:110:in `invoke’
from null:-1:in `isatty’
from BaseNativePOSIX.java:300:in `isatty’
from LazyPOSIX.java:159:in `isatty’
from RubyIO.java:1897:in `tty_p’
from RubyIO$i$0$0$tty_p.gen:65535:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from CallNoArgNode.java:63:in `interpret’
from IfNode.java:111:in `interpret’
from NewlineNode.java:104:in `interpret’
from ASTInterpreter.java:74:in `INTERPRET_METHOD’
from InterpretedMethod.java:147:in `call’
from DefaultMethod.java:183:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from VCallNode.java:86:in `interpret’
from NewlineNode.java:104:in `interpret’
from BlockNode.java:71:in `interpret’
from ASTInterpreter.java:74:in `INTERPRET_METHOD’
from InterpretedMethod.java:169:in `call’
from DefaultMethod.java:191:in `call’
from CachingCallSite.java:302:in `cacheAndCall’
from CachingCallSite.java:144:in `callBlock’
from CachingCallSite.java:148:in `call’
from RubyClass.java:822:in `newInstance’
from RubyClass$i$newInstance.gen:65535:in `call’
from JavaMethod.java:249:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from /usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:110:in `__file__’
from /usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:-1:in `load’
from Ruby.java:697:in `runScript’
from Ruby.java:690:in `runScript’
from Ruby.java:597:in `runNormally’
from Ruby.java:446:in `runFromMain’
from Main.java:369:in `doRunFromMain’
from Main.java:258:in `internalRun’
from Main.java:224:in `run’
from Main.java:208:in `run’
from Main.java:188:in `main’
Caused by:
ClassLoader.java:-2:in `load': java.lang.UnsatisfiedLinkError: /tmp/jffi821701048168284535.tmp: /tmp/jffi821701048168284535.tmp: failed to map segment from shared object: Operation not permitted
from ClassLoader.java:1965:in `loadLibrary1′
from ClassLoader.java:1890:in `loadLibrary0′
from ClassLoader.java:1851:in `loadLibrary’
from Runtime.java:795:in `load0′
from System.java:1062:in `load’
from Init.java:164:in `loadFromJar’
from Init.java:78:in `load’
from Foreign.java:49:in `getInstanceHolder’
from Foreign.java:45:in `<clinit>’
from Foreign.java:95:in `getInstance’
from Internals.java:44:in `getErrnoSaveFunction’
from StubCompiler.java:68:in `getErrnoSaveFunction’
from StubCompiler.java:18:in `<clinit>’
from AsmLibraryLoader.java:146:in `generateInterfaceImpl’
from AsmLibraryLoader.java:110:in `loadLibrary’
from Provider.java:31:in `loadLibrary’
from Provider.java:25:in `loadLibrary’
from Library.java:76:in `loadLibrary’
from POSIXFactory.java:108:in `<clinit>’
from POSIXFactory.java:112:in `getLibC’
from BaseNativePOSIX.java:30:in `<init>’
from LinuxPOSIX.java:17:in `<init>’
from POSIXFactory.java:70:in `loadLinuxPOSIX’
from POSIXFactory.java:31:in `loadPOSIX’
from LazyPOSIX.java:29:in `loadPOSIX’
from LazyPOSIX.java:25:in `posix’
from LazyPOSIX.java:159:in `isatty’
from RubyIO.java:1897:in `tty_p’
from RubyIO$i$0$0$tty_p.gen:65535:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from CallNoArgNode.java:63:in `interpret’
from IfNode.java:111:in `interpret’
from NewlineNode.java:104:in `interpret’
from ASTInterpreter.java:74:in `INTERPRET_METHOD’
from InterpretedMethod.java:147:in `call’
from DefaultMethod.java:183:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from VCallNode.java:86:in `interpret’
from NewlineNode.java:104:in `interpret’
from BlockNode.java:71:in `interpret’
from ASTInterpreter.java:74:in `INTERPRET_METHOD’
from InterpretedMethod.java:169:in `call’
from DefaultMethod.java:191:in `call’
from CachingCallSite.java:302:in `cacheAndCall’
from CachingCallSite.java:144:in `callBlock’
from CachingCallSite.java:148:in `call’
from RubyClass.java:822:in `newInstance’
from RubyClass$i$newInstance.gen:65535:in `call’
from JavaMethod.java:249:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from /usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:110:in `__file__’
from /usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:-1:in `load’
from Ruby.java:697:in `runScript’
from Ruby.java:690:in `runScript’
from Ruby.java:597:in `runNormally’
from Ruby.java:446:in `runFromMain’
from Main.java:369:in `doRunFromMain’
from Main.java:258:in `internalRun’
from Main.java:224:in `run’
from Main.java:208:in `run’
from Main.java:188:in `main’

Reply To: java.lang.UnsatisfiedLinkError when running hbase shell


OK, I see that several other people have had similar problems with JRuby, including https://jira.codehaus.org/browse/JRUBY-6597
A workaround seems to be to tell the program to use a different tmp directory, e.g.:

$ sudo su - hbase
$ mkdir /home/hbase/tmp
$ export HBASE_OPTS="-Djava.io.tmpdir=/home/hbase/tmp"
$ hbase shell
2015-03-09 14:45:12,742 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter ‘help<RETURN>’ for list of supported commands.
Type “exit<RETURN>” to leave the HBase Shell
Version 0.98.4.2.2.0.0-2041-hadoop2, r18e3e58ae6ca5ef5e9c60e3129a1089a8656f91d, Wed Nov 19 15:10:28 EST 2014

hbase(main):001:0>

Success!
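To avoid exporting HBASE_OPTS by hand every session, one option is to make the setting permanent in hbase-env.sh. A sketch, assuming the standard HDP config location and the same /home/hbase/tmp path as above (run as root):

# Persist the workaround for the hbase user; assumes /etc/hbase/conf/hbase-env.sh
# is the active config file and that /home/hbase/tmp already exists.
echo 'export HBASE_OPTS="$HBASE_OPTS -Djava.io.tmpdir=/home/hbase/tmp"' >> /etc/hbase/conf/hbase-env.sh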

Graphite.sink metrics


How do I change the . (dot) separator to another character when sending metrics to Graphite from Hadoop? Graphite creates a new directory for each dot. Is it possible to change the dot to another character?
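For context, this is the kind of sink wiring in question; a sketch of hadoop-metrics2.properties where the host, port, and prefix are made-up examples. The metric names Hadoop emits are dot-separated, which is what makes Graphite create the nested directories:

# Example GraphiteSink configuration appended to hadoop-metrics2.properties;
# graphite.example.com, the port, and the prefix are placeholders.
cat >> /etc/hadoop/conf/hadoop-metrics2.properties <<'EOF'
*.sink.graphite.class=org.apache.hadoop.metrics2.sink.graphite.GraphiteSink
namenode.sink.graphite.server_host=graphite.example.com
namenode.sink.graphite.server_port=2003
namenode.sink.graphite.metrics_prefix=hadoop
EOF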

Regards ///Ulf Fernholm


Installed 4 node cluster – services not starting Ambari 1.7 HDP 2.1


CentOS 6.5
Ambari 1.7, HDP 2.1
4-node physical cluster
Followed the documentation exactly, from

http://docs.hortonworks.com/HDPDocuments/Ambari-1.7.0.0/AMBARI_DOC_SUITE/index.html#Item3.14

All products installed, but very few if any services have started.
How does HDP start services? I'm on CentOS, so I would expect a service; I do see the ambari-server and ambari-agent services, but that's it.

Also, I don't see any $HADOOP_HOME or similar variables set.

Please tell me how these services are supposed to get started.
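For reference, with an Ambari-managed cluster the HDP services are normally started from Ambari itself (UI or REST API) rather than via /etc/init.d scripts. A minimal sketch of the REST call; the host, cluster name, and admin credentials are placeholders:

# Ask Ambari to start every installed service; replace ambari-host, MyCluster,
# and the admin:admin credentials with your own values.
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  "http://ambari-host:8080/api/v1/clusters/MyCluster/services"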

SSL Enablement


I’m following the directions here:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HDP_Security_Guide_v22/index.html#Item1.3.4.4

and I’m seeing this error when I enable HTTPS and restart HDFS:
STARTUP_MSG: java = 1.7.0_67
************************************************************/
2015-03-09 19:10:50,250 INFO datanode.DataNode (SignalLogger.java:register(91)) – registered UNIX signal handlers for [TERM, HUP, INT]
2015-03-09 19:10:50,328 WARN common.Util (Util.java:stringAsURI(56)) – Path /hadoop/hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2015-03-09 19:10:50,703 INFO security.UserGroupInformation (UserGroupInformation.java:loginUserFromKeytab(938)) – Login successful for user dn/ldevawshdp0002.cedargatepartners.pvc@CEDARGATEPARTNERS.PVC using keytab file /etc/security/keytabs/dn.service.keytab
2015-03-09 19:10:50,850 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(111)) – loaded properties from hadoop-metrics2.properties
2015-03-09 19:10:50,877 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(195)) – Sink ganglia started
2015-03-09 19:10:50,934 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) – Scheduled snapshot period at 10 second(s).
2015-03-09 19:10:50,934 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) – DataNode metrics system started
2015-03-09 19:10:50,938 INFO datanode.DataNode (DataNode.java:<init>(403)) – File descriptor passing is enabled.
2015-03-09 19:10:50,938 INFO datanode.DataNode (DataNode.java:<init>(414)) – Configured hostname is ldevawshdp0002.cedargatepartners.pvc
2015-03-09 19:10:50,947 INFO datanode.DataNode (DataNode.java:startDataNode(1049)) – Starting DataNode with maxLockedMemory = 0
2015-03-09 19:10:50,965 INFO datanode.DataNode (DataNode.java:initDataXceiver(848)) – Opened streaming server at /0.0.0.0:1019
2015-03-09 19:10:50,968 INFO datanode.DataNode (DataXceiverServer.java:<init>(76)) – Balancing bandwith is 6250000 bytes/s
2015-03-09 19:10:50,968 INFO datanode.DataNode (DataXceiverServer.java:<init>(77)) – Number threads for balancing is 5
2015-03-09 19:10:50,972 INFO datanode.DataNode (DataXceiverServer.java:<init>(76)) – Balancing bandwith is 6250000 bytes/s
2015-03-09 19:10:50,972 INFO datanode.DataNode (DataXceiverServer.java:<init>(77)) – Number threads for balancing is 5
2015-03-09 19:10:50,974 INFO datanode.DataNode (DataNode.java:initDataXceiver(863)) – Listening on UNIX domain socket: /var/lib/hadoop-hdfs/dn_socket
2015-03-09 19:10:51,053 INFO mortbay.log (Slf4jLog.java:info(67)) – Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-03-09 19:10:51,057 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) – Http request log for http.requests.datanode is not defined
2015-03-09 19:10:51,069 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(699)) – Added global filter ‘safety’ (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-03-09 19:10:51,072 INFO http.HttpServer2 (HttpServer2.java:addFilter(677)) – Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2015-03-09 19:10:51,072 INFO http.HttpServer2 (HttpServer2.java:addFilter(684)) – Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-03-09 19:10:51,072 INFO http.HttpServer2 (HttpServer2.java:addFilter(684)) – Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-03-09 19:10:51,089 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(603)) – addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-03-09 19:10:51,095 WARN mortbay.log (Slf4jLog.java:warn(76)) – java.lang.NullPointerException
2015-03-09 19:10:51,095 INFO http.HttpServer2 (HttpServer2.java:start(830)) – HttpServer.start() threw a non Bind IOException
java.io.IOException: !JsseListener: java.lang.NullPointerException
at org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:531)
at org.apache.hadoop.security.ssl.SslSocketConnectorSecure.newServerSocket(SslSocketConnectorSecure.java:46)
at org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:663)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1057)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:415)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2268)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2155)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)

Anybody have any ideas?
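Not an answer, but one sanity check worth doing is confirming that the keystore referenced by ssl-server.xml exists and can be opened, since a missing or unreadable keystore is a common way to end up with this NullPointerException in the Jetty SSL listener. A sketch; the keystore path below is only an example and may differ on your nodes:

# Check what ssl-server.xml points at and whether the keystore can be opened;
# /etc/security/serverKeys/keystore.jks is an example path, not necessarily yours.
grep -A1 'ssl.server.keystore' /etc/hadoop/conf/ssl-server.xml
keytool -list -keystore /etc/security/serverKeys/keystore.jks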

Reply To: Hive ORC format


Create one Hive table as text and another as ORC, e.g. table_stg and table_base.

Load up the text table, then do an INSERT OVERWRITE into table_base with a SELECT * FROM table_stg, as in the sketch below.
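A minimal sketch of that pattern; the table names are from above, while the columns and sample.csv are made up for illustration:

# Stage as text, then rewrite into ORC; columns and sample.csv are placeholders.
hive -e "
CREATE TABLE table_stg (id INT, name STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
CREATE TABLE table_base (id INT, name STRING) STORED AS ORC;
LOAD DATA LOCAL INPATH '/tmp/sample.csv' INTO TABLE table_stg;
INSERT OVERWRITE TABLE table_base SELECT * FROM table_stg;
"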

Nagios Dependency


File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 90, in _call
raise Fail(err_msg)
Fail: Execution of ‘/usr/bin/yum -d 0 -e 0 -y install hdp_mon_nagios_addons’ returned 1. Error: Package: nagios-plugins-1.4.9-1.x86_64 (HDP-UTILS-1.1.0.17)
Requires: libssl.so.10(libssl.so.10)(64bit)
Error: Package: nagios-plugins-1.4.9-1.x86_64 (HDP-UTILS-1.1.0.17)
Requires: libcrypto.so.10(libcrypto.so.10)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Any updates on libssl.so.10 and libcrypto.so.10, and how to install them manually on RHEL 6?

Thanks in advance
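Regarding the missing libssl.so.10 / libcrypto.so.10 above, a hedged sketch of the usual way to track these down on RHEL 6; whether yum can satisfy them depends on the configured repos:

# Find out which package provides the missing shared objects, then install it.
# On RHEL/CentOS 6 they normally come from the openssl package.
yum provides '*/libssl.so.10' '*/libcrypto.so.10'
yum install -y openssl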

Reply To: oozie workflow fails for hive query of ORC table


I have the same issue running an Oozie workflow… have you found a solution?

https://github.com/gbif/occurrence/blob/master/occurrence-index-builder-workflow/src/main/resources/workflowsingleshard.xml#L36

Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://c1n1.gbif.org:8020/user/root/.staging/job_1425649167399_0558/job.splitmetainfo
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1566)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1430)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1388)

HDP2.2 Hue 2.6.1, issues with log tab in Hive Editor (beeswax)


I installed HDP 2.2 using Ambari 1.7 and manually installed and configured Hue as instructed on this page:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HDP_Man_Install_v22/index.html#Item1.14

Hue is up and running, but I noticed several degradations from the previous version (I was using HDP 2.1 and Hue 2.3).

The Hive editor (Beeswax) log is not working well. In a lot of cases, command execution doesn't show logs at all, e.g. CREATE TABLE (and other DDL). For example, when I create a table from a query, the Log tab is blank while the query is running.

I did some preliminary searching on Google; it seems this version of Hue connects to HiveServer2, which doesn't provide a getLog API.

I also see some posts saying one can manually compile and build the latest Hue 3.7 and install it against HDP 2.2, but I am not sure whether that will solve my problem.

Below is the component version displayed on my hue home page.
Component Version
Hue 2.6.1-2041
HDP 2.2.0
Hadoop 2.6.0
Pig 0.14.0
Hive-Hcatalog 0.14.0
Oozie 4.1.0
Ambari 1.7-169
HBase 0.98.4

Reply To: HDP2.2 Hue 2.6.1, issues with log tab in Hive Editor (beeswax)


Hello,

I am also having this issue with the same version. I have noticed that especially when I insert into a table I don't see any log via Beeswax in Hue. If I run the same statement through Oozie, it's all good.

Regards

Grant


Reply To: Execute on Tez


I also don't see the checkbox on my install. I just add the following setting in Beeswax, which does the same thing. You can also set this in the Ambari web UI under Hive as a default if you wish:
key = hive.execution.engine
value = tez
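For a one-off session the same thing can be done from the Hive CLI as well, e.g. (sketch; my_table is a placeholder):

# Run a single query on Tez without changing the cluster-wide default.
hive -e "set hive.execution.engine=tez; SELECT COUNT(*) FROM my_table;"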

JA017: Unknown hadoop job


Hi
I am getting this error when I try to run a Pig job. I am trying to schedule it on a cluster that I set up using HDP 2.2 and Ambari 1.7:

JA017: Unknown hadoop job [job_1425653747034_0055] associated with action [0000015-150306202423135-oozie-oozi-W@pig-node]. Failing this action!

Thanks,
Vikash

Unable to run the custom hook script error


I added a custom service using the steps described here: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133. That was successful, but then I was not able to start the cluster. I then removed the custom services I had added earlier, but I am still not able to start the cluster.
My configuration is Ambari 1.7 with HDP 2.2 on CentOS 6.4, on an Amazon AWS multi-node cluster.

It ends with the following exception:
Fail: Execution of ‘groupadd ”’ returned 3. groupadd: ” is not a valid group name
Error: Error: Unable to run the custom hook script [‘/usr/bin/python2.6′, ‘/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py’, ‘ANY’, ‘/var/lib/ambari-agent/data/command-1486.json’, ‘/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY’, ‘/var/lib/ambari-agent/data/structured-out-1486.json’, ‘INFO’, ‘/var/lib/ambari-agent/data/tmp’]

Reply To: NameNode not running, port conflict


Hi there
I have the same problem.
If anybody has a solution, please let me know. Timothee, for example, can you please tell us how exactly you solved this issue?

How to configure Hadoop to use LDAP group mappings?
