Hortonworks » All Replies

Nothing when I navigate to Ranger Admin console


Hi,

When I navigate to the Ranger Admin console in Chrome, I get an empty page: no login form, nothing at all. The same happens in Firefox, and in IE I get an HTTP 404 Page Not Found.

I started all services from the Ambari console and restarted the Ranger service from the command line as root, but I still can’t log in to Ranger.

Do you know how to fix this issue?

Details:
Ranger Admin console URL: http://192.168.23.128:6080/
HDP 2.3 Sandbox for VMWare
Chrome 45.0.2454.101
IE 9
Firefox 40.0.3

Thanks,

Celia


Spark 1.4.1 issue with thrift server java.lang.NoSuchFieldError: SASL_PROPS


I am trying to run the Spark Thrift server 1.4.1 as user "hive". Environment: HDP 2.3 Kerberized cluster. The user has a Kerberos ticket.

Command to start the Spark Thrift server: ./sbin/start-thriftserver.sh --master yarn --hiveconf hive.server2.thrift.bind.host <hostname> --hiveconf hive.server2.thrift.port 10002

15/09/29 10:22:08 INFO yarn.YarnSparkHadoopUtil: getting token for namenode: hdfs://<hostname>:8020/user/hive/.sparkStaging/application_1442865532157_0589
15/09/29 10:22:08 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 1804 for hive on 10.65.128.101:8020
15/09/29 10:22:09 INFO hive.metastore: Trying to connect to metastore with URI thrift://<hostname>:9083
15/09/29 10:22:09 ERROR metadata.Hive: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
    at org.apache.hadoop.hive.ql.metadata.Hive.getDelegationToken(Hive.java:2572)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.Client$.org$apache$spark$deploy$yarn$Client$$obtainTokenForHiveMetastore(Client.scala:1142)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:263)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:561)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:115)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:51)
    at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:73)
    at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
    ... 28 more
Caused by: java.lang.NoSuchFieldError: SASL_PROPS
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S.getHadoopSaslProperties(HadoopThriftAuthBridge20S.java:126)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.getMetaStoreSaslProperties(MetaStoreUtils.java:1475)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:322)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:214)
    ... 33 more

15/09/29 10:22:09 ERROR yarn.Client: Unexpected Exception java.lang.reflect.InvocationTargetException
15/09/29 10:22:09 ERROR spark.SparkContext: Error initializing SparkContext.

Reply To: Spark 1.4.1 issue with thrift server java.lang.NoSuchFieldError: SASL_PROPS


The same command with the same user works with Spark 1.3.1, which is installed by default in HDP 2.3.

Reply To: hadoop dfs -cp : command not working


Hello Himanshu,

In the example you gave, the hadoop fs -cp command interprets both source and destination paths as belonging to the “default file system”.  The default file system is configured in core-site.xml as property fs.defaultFS, and in most installations, this will be a URL pointing to an HDFS cluster.  I assume there is no /path/file.txt in the HDFS cluster, so it results in a file not found error.

If you want to execute the command for a local source path, then there are two options. One is to use copyFromLocal instead of cp. This will treat the source path as belonging to the local file system.

hadoop fs -copyFromLocal /path/file.txt /tmp

The other option is to continue using cp, but use an explicit file: URL to indicate that the source is coming from the local file system.

hadoop fs -cp file:/path/file.txt /tmp
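
For reference, the same distinction shows up in the Hadoop FileSystem Java API. Here is a minimal sketch using the paths from your example; it assumes the Hadoop client jars are on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyLocalToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // FileSystem.get(conf) resolves fs.defaultFS from core-site.xml,
        // which on most clusters is an hdfs:// URL. An unqualified path
        // like /path/file.txt is interpreted against this file system,
        // which is why -cp reported the source as missing.
        FileSystem fs = FileSystem.get(conf);

        // Qualifying the source with the file: scheme forces the local
        // file system, just like 'hadoop fs -cp file:/path/file.txt /tmp'.
        // copyFromLocalFile is the API counterpart of -copyFromLocal.
        fs.copyFromLocalFile(new Path("file:/path/file.txt"), new Path("/tmp"));
    }
}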

I hope this helps.

What is the difference between HDP Sandbox and Manual installation?


Hi,

I am posting this question after a failed attempt at understanding the differences between the HDP Sandbox and a manual HDP installation. I understand that the HDP Sandbox is a VM that has all the components/tools to set up a Hadoop cluster, but I get confused when I read the documentation about manual HDP installation. My questions are:

1) What is the difference between the HDP Sandbox VM and a manual HDP installation, in terms of performance and component updates?
2) Is the HDP Sandbox ever used in production environments, or is a non-VM manual installation preferred over the Sandbox due to performance concerns?

Reply To: Kafka Producer error.


Try this:

bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic test1

Reply To: Kafka Producer error.


Sorry for the formatting in my previous message, I meant just this:

bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic test1

Reply To: Unable to get past step 9 – Hive Metastore start – fails


Hi,
I am also facing the same problem restarting the Hive Metastore and MySQL Server components of Hive. Here are the relevant log records:

2015-09-30 08:08:33,614 - Error while executing command 'start':
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 53, in start
self.configure(env) # FOR SECURITY
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 46, in configure
hive(name = 'metastore')
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 102, in hive
user = params.hive_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 274, in action_run
raise ex
Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. Metastore connection URL: jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***

org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***

I also have an issue with the MySQL Server component: it was not started, and starting it fails with:

Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_server.py", line 49, in start
mysql_service(daemon_name=params.daemon_name, action='start')
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_service.py", line 42, in mysql_service
sudo = True,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/res
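
In case it helps with debugging: schematool reports "Failed to get schema version" when it cannot reach MySQL or read the metastore's VERSION table. A minimal JDBC connectivity check, assuming the MySQL JDBC driver is on the classpath; the password below is a placeholder, so substitute the values from your javax.jdo.option.Connection* settings:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MetastoreDbCheck {
    public static void main(String[] args) throws Exception {
        // Connection URL taken from the error output above; the
        // password "hive" is a placeholder for the real one.
        String url = "jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true";
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(url, "hive", "hive");
             Statement st = conn.createStatement();
             // The metastore stores its schema version in the VERSION table.
             ResultSet rs = st.executeQuery("SELECT t.SCHEMA_VERSION FROM VERSION t")) {
            while (rs.next()) {
                System.out.println("Metastore schema version: " + rs.getString(1));
            }
        }
    }
}

If this check cannot connect, the problem is MySQL itself (not running, wrong credentials, or not reachable at localhost), which would match the MySQL Server start failure above.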


Reply To: Querying HBase Table


Hi, I have the same question as the previous poster. I want to run this query:

scan 't_community', { COLUMNS => ['community'], FILTER => "SingleColumnValueFilter('community', 'NUM_TOTAL', >=, 'binary:10')" }

but the results are not correct when I compare values of more than one digit, i.e. 10 and above.

Can anybody help me?
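
For reference, 'binary:10' in the shell maps to a BinaryComparator, which compares raw bytes lexicographically, so the string "9" sorts after "10"; that would explain wrong results once values reach two digits. Below is a minimal Java sketch of the same filter, assuming the HBase 0.98 client API that ships with HDP and that NUM_TOTAL is stored as an 8-byte long (Bytes.toBytes(long)); if the values are stored as ASCII strings instead, see the note after the code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanCommunity {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "t_community");

        // BinaryComparator compares bytes lexicographically; for a
        // numeric comparison the values must be stored as fixed-width
        // binary numbers (e.g. Bytes.toBytes(10L)), not as strings.
        SingleColumnValueFilter filter = new SingleColumnValueFilter(
                Bytes.toBytes("community"),
                Bytes.toBytes("NUM_TOTAL"),
                CompareOp.GREATER_OR_EQUAL,
                Bytes.toBytes(10L));
        filter.setFilterIfMissing(true); // skip rows without the column

        Scan scan = new Scan();
        scan.setFilter(filter);
        ResultScanner scanner = table.getScanner(scan);
        for (Result r : scanner) {
            System.out.println(r);
        }
        scanner.close();
        table.close();
    }
}

If the values are stored as ASCII strings, one option is to zero-pad them to a fixed width so that byte order matches numeric order.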


JSON SerDe issue


I have a JSON file that I loaded into HDFS and created a table over, as shown below, but I get an error when doing a SELECT. I have tried various JSON SerDes, with no luck so far. The issue is with how the JSON file is formatted: each row in the file is a JSON object. When I do a SELECT on the table, it says it expects a "," to separate a name and a value.

JSON file sample:

{"user":{"userlocation":"BANDUNG-INDONESIA","id":236827129,"name":"Powerpuff Girls","screenname":"TasyootReyoot","geoenabled":true},"tweetmessage":"RT @viddyVR: Kok gitu sih RT @TasyootReyoot: #HelloKitty itu aneh pala gede bekumis pula badan kecil kayak org kena polio\"","createddate":"2013-06-20T12:08:45","geolocation":null}

{"user":{"userlocation":"Bolton, UK","id":14141159,"name":"Chris Beckett","screenname":"ChrisBeckett","geoenabled":true},"tweetmessage":"vCOps people – Does Advanced Edition == 5 VMs? I know Std has UI and analytics VM, but what does rest use? Hyperic etc? #vmware #vcops","createddate":"2013-06-20T12:08:46","geolocation":null}
CREATE EXTERNAL TABLE sample_twitter_data (
user STRUCT<
userlocation:STRING,
id:STRING,
name:STRING,
screenname:STRING,
geoenabled:STRING>,
tweetmessage STRING,
createddate STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/user/hdfs/sample_twitter_data';

LOAD DATA INPATH '/user/hdfs/sample_twitter_data.txt' OVERWRITE INTO TABLE sample_twitter_data;

select * from sample_twitter_data where user.screenname='ChrisBeckett';

Wondering if anybody else has had this working. Any help is highly appreciated.
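
One thing worth ruling out before blaming the SerDe: this JsonSerDe expects exactly one well-formed JSON object per line, and copy-pasted samples often pick up typographic quotes that break parsing. A small Jackson-based line checker, assuming jackson-databind is on the classpath; the file to check is whatever is passed as the first argument:

import java.io.BufferedReader;
import java.io.FileReader;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ValidateJsonLines {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            int lineNo = 0;
            while ((line = in.readLine()) != null) {
                lineNo++;
                if (line.trim().isEmpty()) continue; // skip blank separator lines
                try {
                    mapper.readTree(line); // throws on malformed JSON
                } catch (Exception e) {
                    System.out.println("Line " + lineNo + " is not valid JSON: " + e.getMessage());
                }
            }
        }
    }
}

Any line it flags would produce exactly the kind of "expects a ',' to separate name and value" error described above.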

Reply To: Nothing when I navigate to Ranger Admin console


Having the same issue – 404 Page when loading the Ranger page on port 6080.


Reply To: Nothing when I navigate to Ranger Admin console


Michael, I didn’t follow any instructions to set up Ranger; I just took for granted that it was there.

Where can I find the instructions you are referring to?

Thanks,

Celia

Error occurred: listing HBase tables remotely using Java!


I’m facing a problem while running a Java program that lists the tables in HBase remotely:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public static void main(String[] args) throws MasterNotRunningException, IOException {
    // Client configuration; picks up hbase-site.xml if it is on the classpath.
    Configuration hc = HBaseConfiguration.create();
    hc.set("hbase.zookeeper.property.clientPort", "2181");
    System.out.println("connecting.......");

    HBaseAdmin admin = new HBaseAdmin(hc);
    System.out.println("entering hbase admin");
    // Ask the master for the descriptors of all tables.
    HTableDescriptor[] tableDescriptors = admin.listTables();
    System.out.println("entering list");
    for (HTableDescriptor td : tableDescriptors) {
        System.out.println(td.getNameAsString());
    }
    admin.close();
}

The same program works when I run it on the local system, but I can’t get it to work remotely.
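
For what it's worth: when an HBase client works locally but not from a remote machine, a likely cause is that it cannot find the cluster's ZooKeeper quorum and falls back to localhost. A minimal sketch of setting it explicitly; the hostnames below are hypothetical:

Configuration hc = HBaseConfiguration.create();
// Hypothetical hosts: use the cluster's actual ZooKeeper quorum, and make
// sure the names resolve from the client machine (DNS or /etc/hosts).
hc.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
hc.set("hbase.zookeeper.property.clientPort", "2181");

The region server hostnames that ZooKeeper hands back must also be resolvable from the client.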

 

Kindly help me regarding this issue please

SparkR on HDP 2.2.6.0-2800


Hi,
I am looking to install SparkR on an HDP 2.2.6.0 system. Is there any support for this available from Hortonworks? I have tried to use the SparkR package by AMPLab on GitHub, but I am facing problems building it. This may be due to my lack of experience with Scala.

HDP 2.3 support for Ubuntu 12


Hi,
Is there any information on when Hortonworks will support HDP 2.3 on Ubuntu 12?

Regards

HBase thrift server dies with "java.lang.OutOfMemoryError: Java heap space"


We have HDP-2.2.4.1 with Hue 3.8.1. When one clicks on "Data Browsers" -> "HBase", the HBase Thrift server dies with the following message:

/usr/bin/hbase thrift start
2015-10-01 15:15:22,900 INFO  [main] util.VersionInfo: HBase 0.98.4.2.2.4.2-2-hadoop2
2015-10-01 15:15:22,901 INFO  [main] util.VersionInfo: Subversion git://ip-10-0-0-215.ec2.internal/grid/0/jenkins/workspace/HDP-2.2.4.1-ubuntu12/bigtop/output/hbase/hbase-0.98.4.2.2.4.2 -r dd8a499345afc1ac49dc5ef212ba64b23abfe110
2015-10-01 15:15:22,901 INFO  [main] util.VersionInfo: Compiled by jenkins on Tue Mar 31 19:54:11 UTC 2015
2015-10-01 15:15:23,389 INFO  [main] thrift.ThriftServerRunner: Using default thrift server type
2015-10-01 15:15:23,389 INFO  [main] thrift.ThriftServerRunner: Using thrift server type threadpool
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2015-10-01 15:15:24,275 INFO  [main] security.UserGroupInformation: Login successful for user hbase/server.fqdn@REALM.COM using keytab file /etc/security/keytabs/hbase.service.keytab
2015-10-01 15:15:24,419 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2015-10-01 15:15:24,658 INFO  [main] timeline.HadoopTimelineMetricsSink: Initializing Timeline metrics sink.
2015-10-01 15:15:24,659 INFO  [main] timeline.HadoopTimelineMetricsSink: Identified hostname = server.fqdn, serviceName = hbase
2015-10-01 15:15:24,664 INFO  [main] timeline.HadoopTimelineMetricsSink: Collector Uri: http://server.fqdn:6188/ws/v1/timeline/metrics
2015-10-01 15:15:24,675 INFO  [main] impl.MetricsSinkAdapter: Sink timeline started
2015-10-01 15:15:24,742 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-10-01 15:15:24,742 INFO  [main] impl.MetricsSystemImpl: HBase metrics system started
2015-10-01 15:15:24,862 INFO  [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-10-01 15:15:24,923 INFO  [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2015-10-01 15:15:24,927 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context thrift
2015-10-01 15:15:24,928 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-10-01 15:15:24,942 INFO  [main] http.HttpServer: Jetty bound to port 9095
2015-10-01 15:15:24,943 INFO  [main] mortbay.log: jetty-6.1.26.hwx
2015-10-01 15:15:24,985 WARN  [main] mortbay.log: Can't reuse /tmp/Jetty_0_0_0_0_9095_thrift____.vqpz9l, using /tmp/Jetty_0_0_0_0_9095_thrift____.vqpz9l_8304913930048288760
2015-10-01 15:15:25,369 INFO  [main] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:9095
2015-10-01 15:15:25,370 DEBUG [main] thrift.ThriftServerRunner: Using binary protocol
2015-10-01 15:15:25,396 INFO  [main] thrift.ThriftServerRunner: starting TBoundedThreadPoolServer on /0.0.0.0:9090; min worker threads=16, max worker threads=1000, max queued requests=1000
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 28807"...
Killed

 

I tried to change HBASE_HEAPSIZE and HBASE_THRIFT_OPTS, raising the heap from the default 1 GB to 8 GB, with no success and the same error message. Is this a known issue? Is there a workaround?

Best regards,

Sergey
