Channel: Hortonworks » All Replies


Reply To: Unable to Start Namenode

Happy to report I found the issue. I checked the NameNode logs and saw it was trying to bind to the wrong IP address (we changed our addressing scheme recently). I went into /etc/hosts and corrected the IP… bam, the NameNode came right up.
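
For anyone hitting the same thing, a minimal sketch of the kind of /etc/hosts correction involved (the hostname and addresses below are hypothetical placeholders, not from the original post):

# /etc/hosts on the NameNode host
# Before (stale entry from the old addressing scheme):
#   10.0.1.15    namenode01.example.com namenode01
# After (corrected to the host's current address):
10.20.1.15   namenode01.example.com namenode01

Once the entry matches the host's actual address, restarting the NameNode lets it bind correctly.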

Reply To: Ubuntu 12.04 how to use Ambari to set up single-node on localhost – permission d

Reply To: Ambari Timesout Registering

Anything else in the logs? What OS? Just to rule out an issue with SSL, could you upgrade your SSL client, for example with “yum upgrade openssl”?

Reply To: Ambari Timesout Registering

Sorry, I should have posted that I figured out what the issue was. Long story short, my systems were stuck on the yum update due to lack of an internet connection (I have local repos set up, but they were causing issues). I granted them access to the internet and rebooted them (to get them out of the state of not being able to find mirrors), and they registered/installed instantly. Thanks for the reply!

Upgrade HDP from 2.1 to 2.2

Getting an error while upgrading the Hive metastore database schema from v13 to v14, following Step 8 of the document http://docs.hortonworks.com/HDPDocuments/Ambari-1.7.0.0/Ambari_Upgrade_v170/index.html#Item2.1.4

The error I’m getting is:

/usr/hdp/2.2.0.0-2041/hive/bin/schematool -upgradeSchema -dbType postgres -verbose
15/01/22 12:58:57 WARN conf.HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
15/01/22 12:58:57 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
15/01/22 12:58:57 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
15/01/22 12:58:57 WARN conf.HiveConf: HiveConf of name hive.semantic.analyzer.factory.impl does not exist
15/01/22 12:58:57 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to load driver
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to load driver
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:186)
at org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:212)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:532)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.ClassNotFoundException: org.postgresql.Driver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:190)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:177)
… 8 more
*** schemaTool failed ***

Reply To: Ubuntu 12.04 how to use Ambari to set up single-node on localhost – permission d

Perfect, that fixed everything. Thanks for your patience!

Reply To: Hive Show Database Error

This error can happen when “hive.server2.logging.operation.log.location” is pointing to a location that does not exist. I was able to correct the issue with the following steps:

1. In Ambari, go to the Hive configuration and check the value for “hive.server2.logging.operation.log.location”. By default, this points to “/tmp/hive/operations_logging” (Note – it actually uses variables, but this is the location it resolves to)
2. On the HiveServer2 Master node, if this location does not exist, create it using the “hive” user account.
3. Restart the Hive Services in Ambari.

If the folder did not exist, creating it and restarting the Hive service should resolve the issue.
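
For reference, a minimal shell sketch of steps 2 and 3 above (the path is the resolved default quoted in step 1; run it on the HiveServer2 master node, and do the restart itself from Ambari):

# Step 2: create the missing operation-log directory as the hive user
sudo -u hive mkdir -p /tmp/hive/operations_logging
# Step 3: restart the Hive services from the Ambari UI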

Greg
Hortonworks


Reply To: Upgrade HDP from 2.1 to 2.2

The issue got resolved after creating a symlink for the Postgres JDBC driver under /usr/hdp/<$version>/hive/lib:
ln -s /usr/lib/hive/lib/postgresql-jdbc.jar postgresql-jdbc.jar
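
Spelled out as a full sequence, the fix amounts to something like this (a sketch; the versioned directory name 2.2.0.0-2041 is taken from the schematool path earlier in this thread and will differ per install):

# Link the PostgreSQL JDBC driver into the HDP Hive lib directory
cd /usr/hdp/2.2.0.0-2041/hive/lib
ln -s /usr/lib/hive/lib/postgresql-jdbc.jar postgresql-jdbc.jar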

Ganglia Not Reporting

I have a very weird issue. On the node with the Ganglia server, I am able to see all the stats, but for every other node it merely says “No Data: There was no data available. Possible reasons include inaccessible Ganglia service.” I’ve checked and they are connecting to the Ganglia server, but it appears that no data is being generated. Also, in the messages log I see “Jan 22 15:20:55 host /usr/sbin/gmond[3218]: Error 1 sending the modular data for cpu_user#012”. That message appears for all the servers. Any ideas?

${hdp.version} won't resolve

I have a custom YARN application that doesn’t seem to work with Hortonworks HDP 2.2 because the ${hdp.version} variable doesn’t resolve. I get an error something like this:

Illegal character in path at index 11: /hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework

I’m just wondering if there is a workaround for this until it gets addressed.

Thanks for any help/ideas you guys have.
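
For context, one commonly suggested workaround for this class of error is to substitute the concrete build string for the unresolved placeholder in the cluster configuration; a rough sketch, assuming the property lives in mapred-site.xml and reusing the 2.2.0.0-2041 build string seen elsewhere in this thread:

# Back up first, then replace the unresolved placeholder with the concrete build string
cp /etc/hadoop/conf/mapred-site.xml /etc/hadoop/conf/mapred-site.xml.bak
sed -i 's|\${hdp.version}|2.2.0.0-2041|g' /etc/hadoop/conf/mapred-site.xml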

Reply To: Add service disable

The “Add Services” link in the Actions menu is also disabled in Ambari 1.7 on HDP 2.2. I went through the steps at https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133, but there was no change. I also looked at the ambari-server jar. Since this is a test environment, I unzipped it, added the SAMPLESERV directory from the link above to it, rezipped everything, and put it in place of the original jar. After restarting the server, there was no change in behavior.

Ambari fails installation with SSL 8440

Hello all,

After reviewing this forum for all the tips I could get to resolve this issue, I am still stuck trying to make the installation proceed.
My env is CentOS 7 on VirtualBox. Please review the log and advise.
==========================
Running setup agent script…
==========================

Command start time 2015-01-23 07:59:19
Verifying Python version compatibility…
Using python /usr/bin/python2.7
Found ambari-agent PID: 19019
Stopping ambari-agent
Removing PID file at /var/run/ambari-agent/ambari-agent.pid
ambari-agent successfully stopped
Restarting ambari-agent
Verifying Python version compatibility…
Using python /usr/bin/python2.7
ambari-agent is not running. No PID found at /var/run/ambari-agent/ambari-agent.pid
Verifying Python version compatibility…
Using python /usr/bin/python2.7
Checking for previously running Ambari Agent…
Starting ambari-agent
Verifying ambari-agent process status…
Ambari Agent successfully started
Agent PID at: /var/run/ambari-agent/ambari-agent.pid
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log
WARNING 2015-01-23 07:58:58,794 NetUtil.py:92 – Server at https://localhost.localdomain.localdomain:8440 is not reachable, sleeping for 10 seconds…
INFO 2015-01-23 07:59:08,804 NetUtil.py:48 – Connecting to https://localhost.localdomain.localdomain:8440/ca
WARNING 2015-01-23 07:59:08,907 NetUtil.py:71 – Failed to connect to https://localhost.localdomain.localdomain:8440/ca due to [Errno -2] Name or service not known
WARNING 2015-01-23 07:59:08,907 NetUtil.py:92 – Server at https://localhost.localdomain.localdomain:8440 is not reachable, sleeping for 10 seconds…
INFO 2015-01-23 07:59:18,926 NetUtil.py:48 – Connecting to https://localhost.localdomain.localdomain:8440/ca
WARNING 2015-01-23 07:59:19,025 NetUtil.py:71 – Failed to connect to https://localhost.localdomain.localdomain:8440/ca due to [Errno -2] Name or service not known
WARNING 2015-01-23 07:59:19,026 NetUtil.py:92 – Server at https://localhost.localdomain.localdomain:8440 is not reachable, sleeping for 10 seconds…
INFO 2015-01-23 07:59:22,016 main.py:83 – loglevel=logging.INFO
INFO 2015-01-23 07:59:22,016 main.py:55 – signal received, exiting.
INFO 2015-01-23 07:59:22,016 ProcessHelper.py:39 – Removing pid file
INFO 2015-01-23 07:59:22,016 ProcessHelper.py:46 – Removing temp files
INFO 2015-01-23 07:59:29,662 main.py:83 – loglevel=logging.INFO
INFO 2015-01-23 07:59:29,663 DataCleaner.py:36 – Data cleanup thread started
INFO 2015-01-23 07:59:29,685 DataCleaner.py:117 – Data cleanup started
INFO 2015-01-23 07:59:29,685 DataCleaner.py:119 – Data cleanup finished
INFO 2015-01-23 07:59:29,935 PingPortListener.py:51 – Ping port listener started on port: 8670
WARNING 2015-01-23 07:59:30,022 main.py:235 – Unable to determine the IP address of the Ambari server ‘localhost.localdomain.localdomain.localdomain’
INFO 2015-01-23 07:59:30,022 NetUtil.py:48 – Connecting to http
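
The repeated “localhost.localdomain.localdomain” in the log, together with the “Name or service not known” errors, points at a malformed agent/server hostname rather than a port problem. A minimal sketch of the usual checks (paths may vary by Ambari version):

# What does the host think its FQDN is?
hostname -f
# Does /etc/hosts map that name to a reachable address?
cat /etc/hosts
# Which server name is the agent configured to contact?
grep -i hostname /etc/ambari-agent/conf/ambari-agent.ini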

Hive and SAS proc sql

Hi,
I have a problem using Hadoop with SAS proc sql (and/or ODBC). I want to create a table in Hive and load data into it from SAS. In SAS we have three ways to do this:

1.
libname sasflt 'SAS-library';
libname hdp_air hadoop user=louis pwd=louispwd server='hdpcluster' schema=statsdiv;

proc sql;
create table hdp_air.flights98
as select * from sasflt.flt98;
quit;

2.

data hdp_air.allflights;
set sasflt.allflights;
run;

3.

Append to an existing table:
proc append base=hdp_air.allflights
data=sasflt.flt98;
run;

When I run example 1 or 2, the table is created in Hadoop, but later I always get this error:

“ERROR: CLI prepare error: [Hortonworks][HiveODBC] (55) Insert operation is not support for table:
…”

Do you know a way to solve this problem? Any tips?
I’m using HDP 2.1 with Hive 0.13.

Tutorial:Real time Data Ingestion in HBase & Hive using Storm Bolt

Hi there,
Is anybody experiencing problems with this tutorial?
I submitted the topology,
all the services for Storm and Kafka are started,
and then I issued the command to start the ‘TruckEventsProducer’ Kafka producer. I can see that events are being produced and logs sent to the screen.
But the data is not being persisted. The Kafka spout is not producing anything. (When I view it in the Storm UI, the KafkaSpout emitted counter is not updating… it stays at 0.) When I check the log files for the worker task for the TruckEventProcessor in /var/log/storm…
I see the following:

13:01:45 b.s.d.worker [INFO] Launching worker for truck-event-processor-1-1422017673 on 8c75249c-e8e9-4d31-9908-579f25c4fb88:6701 with id 48ed2969-334c-479c-9b03-2a31053fa65c
13:01:45 b.s.d.worker [ERROR] Error on initialization of server mk-worker
java.io.IOException: No such file or directory

I’ve tried resubmitting this topology several times and I always get the error.
I also made sure I cleaned out storm.local.dir (/hadoop/storm) before each run.
I was able to get everything working in “Ingesting and processing Realtime events with Apache Storm”.
The topology submitted for tutorial 2 was fine, but the one submitted for this exercise, tutorial 3 (storm jar target/Tutorial-1.0-SNAPSHOT.jar com.hortonworks.tutorials.tutorial3.TruckEventProcessingTopology), doesn’t seem to process.

Does anybody have any ideas, please?
Thank you


Oozie retention period for metastore

Does Oozie have a configuration parameter for a retention period on data in the metastore? I see my database size growing and hitting the maximum database size. Does Oozie have any configuration parameters to automatically remove data, or do I need to set something up at the database level?
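
For what it's worth, Oozie ships a PurgeService that ages out completed job records from the metastore. A sketch of the relevant oozie-site.xml properties (names from the Oozie configuration reference; defaults can differ by version):

<!-- purge completed workflow jobs older than this many days (default 30) -->
<property>
  <name>oozie.service.PurgeService.older.than</name>
  <value>30</value>
</property>
<!-- how often the purge runs, in seconds (default 3600) -->
<property>
  <name>oozie.service.PurgeService.purge.interval</name>
  <value>3600</value>
</property>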

Thanks,
Bill

How to get Accumulo running on Slider using Slider View

Reply To: Tutorial:Real time Data Ingestion in HBase & Hive using Storm Bolt

I completed the two previous Kafka/Storm tutorials.
However, I’m experiencing the same problems that you mentioned in this tutorial.

I couldn’t start a shell in the browser.
Therefore I created the HBase tables by logging in as the hbase user (su hbase) and using the normal command line.

I also had to change the incorrect port number when starting the producer:
from
java -cp target/Tutorial-1.0-SNAPSHOT.jar com.hortonworks.tutorials.tutorial1.TruckEventsProducer localhost:9092 localhost:2181 &
to
java -cp target/Tutorial-1.0-SNAPSHOT.jar com.hortonworks.tutorials.tutorial1.TruckEventsProducer sandbox:6667 sandbox:2181 &
Is this correct?

The producer is running.
The topology has been submitted.
All services are in the green in Ambari.
However the data doesn’t get persisted to HBase.

I would like to suggest to the Hortonworks tutorial team:
1. Please review the tutorial and fix any errors.
2. Please add more guidelines/content so that the tutorial can be completed.
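
(Regarding the port substitution above: on the HDP sandbox the Kafka broker typically listens on 6667 rather than the Apache default 9092, and 2181 is the ZooKeeper port, so that change looks right.)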

Reply To: Tutorial:Real time Data Ingestion in HBase & Hive using Storm Bolt

I looked closer and found the following error message.
How do I fix the issue?

java.lang.RuntimeException: Error retrievinging connection and access to HBase Tables
at com.hortonworks.tutorials.tutorial3.TruckHBaseBolt.prepare(TruckHBaseBolt.java:76)
at backtype.storm.daemon.executor$fn__5697$fn__5710.invoke(executor.clj:732)
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:463)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.InterruptedIOException
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1073)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.tracedWriteRequest(RpcClient.java:1032)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1474)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:29240)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1524)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1294)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1128)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1111)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1070)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:347)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:331)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:749)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:731)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:721)
at com.hortonworks.tutorials.tutorial3.TruckHBaseBolt.prepare(TruckHBaseBolt.java:69)
… 4 more

Reply To: Spark 1.2 Technical Preview with HDP 2.2

Spark 1.2 Technical Preview will work with HDP 2.2.

If you want to evaluate Spark 1.1, try it with HDP 2.1.
