Hortonworks » All Replies

Reply To: Centos7 repository tarballs are outdated?


Hi,
Can anyone help me with installing Hortonworks on a CentOS 7 machine?


Reply To: Nothing when I navigate to Ranger Admin console


I finally managed to set up Ranger. I'm posting how I did it here, in case it's useful for someone else.

I went to the Ambari web UI and followed these steps:

  1. Go to the Dashboard tab.
  2. Click the "Actions" button on the left, below the list of services.
  3. Click "Add Service".
  4. Select "Ranger" and "Ranger KMS".
  5. Click "Next"; this brought up the instructions Michael mentions.

Then I opened an SSH connection and checked whether I had a JDBC driver for MySQL.

[root@sandbox java]# rpm -qa | grep mysql
mysql-libs-5.1.73-5.el6_6.x86_64
mysql-connector-java-5.1.17-6.el6.noarch
mysql-server-5.1.73-5.el6_6.x86_64
mysql-5.1.73-5.el6_6.x86_64

 

I set up the JDBC connection in the Ambari server.

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

 

Still in the SSH session, I logged in to MySQL as root:

mysql

or mysql -u root -p (press Enter when prompted for a password).

 

I ran the following query:

select * from mysql.user;

which showed that user root didn't have a password in MySQL. Ambari requires this password in order to add the Ranger service.

So I set a password for root:

update mysql.user set password = PASSWORD('hadoop') where password = '';

flush privileges;

 

I went back to Ambari, confirmed that I met the prerequisites listed in the instructions, and proceeded with the wizard. I got some warnings about services that weren't configured correctly, but I just continued. As I said, the setup was successful.
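
In case it helps others, a quick way to verify the Ranger Admin console is reachable afterwards (a sketch, assuming the default Ranger admin port 6080; adjust if you changed it):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:6080/

A 200 response means the console is up.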

Reply To: Ranger Install Service fails because of mysql driver


Hi Michael,

root is not only a Linux user in the VM, but also a user in MySQL. During the Ranger setup in Ambari you are required to provide root's password, and you are not allowed to leave it empty.

However, the MySQL root user is created with no password, so you need to set one before running the Ranger setup in Ambari.

Open an SSH session as root, and log in to MySQL as root:

mysql

or mysql -u root -p (press Enter when prompted for a password).

If you run the following query, you'll see that user root doesn't have a password:

select * from mysql.user;

 

Set a password for root:

update mysql.user set password = PASSWORD('hadoop') where password = '';

flush privileges;
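
To double-check the change took effect (using the example password 'hadoop' from above):

mysql -u root -phadoop -e "select user, host, password from mysql.user;"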

Then you can continue with the Ranger setup in Ambari.

I posted all the steps I followed to set up Ranger here.

Reply To: Nothing when I navigate to Ranger Admin console


It seems I didn't paste the command to set up the JDBC connection for Ambari correctly. Let me try again:

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

Reply To: Nothing when I navigate to Ranger Admin console


For some reason the page turns two dashes into one when I submit my comment. The correct command should have two dashes before "jdbc-db", and two more dashes right before "jdbc-driver".

How to use Spark with HDP 1.3 (Hortonworks Sandbox HDP 1.3)


I'm using the Hortonworks Sandbox HDP 1.3 and I want to use Spark with it. I found that Spark is only bundled with HDP 2.0 and later. Since my hardware does not support those versions, I would like to use Spark with HDP 1.3. Please help me with possible solutions.
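
One approach I'm considering (a sketch, assuming a standalone Apache Spark build prebuilt against Hadoop 1.x works alongside HDP 1.3; the version and URL below are just an example, check the Apache archive for available builds):

# download a Spark release prebuilt for Hadoop 1.x
wget http://archive.apache.org/dist/spark/spark-1.1.0/spark-1.1.0-bin-hadoop1.tgz
tar xzf spark-1.1.0-bin-hadoop1.tgz
cd spark-1.1.0-bin-hadoop1

# point Spark at the existing HDP 1.3 Hadoop configuration and start a shell
HADOOP_CONF_DIR=/etc/hadoop/conf ./bin/spark-shell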

Newbie: deploy a multi node cluster


Hello,

 

This is my first post; I hope you can help me. I'm deploying a multi-node cluster. I know approximately what server roles I have to deploy and where, but my question is about "clients": where should I install, for example, the "HDFS client", "YARN client", "HBase client", and so on? I don't know where to install them.

I am using Ambari to do everything.

Thank you

Reply To: What is the difference between HDP Sandbox and Manual installation?


Hi Pradeep,

The HDP Sandbox is built by a team at Hortonworks to provide an incredibly quick way for anyone to gain access to the Hortonworks distribution and start working immediately. However, it is a single-stack VM, with all master and slave services running within a single virtual Linux instance. Typically, Hortonworks launches the sandbox as close to a production release as possible. The HDP Sandbox should not be used for performance testing; it is designed as a learning tool, not a performance environment.

I have had many occasions to stand up my own HDP single-node environment, leveraging Apache Ambari to install the components and patches I would like to test. Ambari makes this a very simple task, even for multi-node configurations. If you have very specific goals in mind, with a subset of the product, I recommend investing in reviewing and leveraging the Ambari installation per this link: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.1.0/index.html
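
For reference, the typical flow looks roughly like this (a sketch; take the exact repo URL for your OS from the docs linked above):

# register the Ambari repo for your OS, then install and start the server
wget -nv <ambari-repo-url> -O /etc/yum.repos.d/ambari.repo
yum install ambari-server
ambari-server setup    # interactive; defaults are fine for a test box
ambari-server start    # then browse to http://<host>:8080 and log in as admin/admin

From there the cluster install wizard walks you through choosing hosts and services.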

Regards,

Mark


Reply To: Newbie: deploy a multi node cluster


Hi Silvio,

Hadoop client libraries can be installed on any edge, master, or slave node, and they act as an entry point for clients accessing services in your cluster. If this is a dev/test environment, I typically install the clients across several nodes, both inside the cluster and on the edge. It can never hurt to have multiple dev/test Hadoop client paths. I typically steer clear of slave nodes, and pick master nodes that have less workload or are not part of HA configuration requirements. If it is production, you want to designate nodes as Hadoop client entry points to your cluster, and they may serve only this purpose. See the sketch below for adding a client through the Ambari REST API.
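
A sketch of adding, say, the HDFS client to a host via the Ambari REST API (assuming default admin credentials; substitute your own cluster and host names):

# register the client component on the host
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  http://<ambari-host>:8080/api/v1/clusters/<cluster>/hosts/<host>/host_components/HDFS_CLIENT

# then ask Ambari to install it
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"HostRoles": {"state": "INSTALLED"}}' \
  http://<ambari-host>:8080/api/v1/clusters/<cluster>/hosts/<host>/host_components/HDFS_CLIENT

The same works from the Ambari UI: Hosts, select the host, then Add and pick the client.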

Good luck,

Mark

Reply To: HUE 3.8 installation on HDP 2.3 & SLES 11 OS


Did you try creating dag.repo?

[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=http://apt.sw.be/redhat/el6/en/$basearch/dag
gpgcheck=1
enabled=1
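
If it helps: on a yum-based system the file above would go in /etc/yum.repos.d/dag.repo, followed by a metadata refresh. Note the repo targets RHEL/CentOS, so since your question mentions SLES 11 (which uses zypper), it may not apply directly there.

# assuming a RHEL/CentOS-compatible host
cp dag.repo /etc/yum.repos.d/dag.repo
yum clean all
yum repolist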

 

Cannot start sandbox


Hi,
I am a newbie to Hadoop and Hortonworks technologies. I downloaded the Sandbox HDP 2.3.1 and tried to run it on VMware. However, during initialization of the sandbox I get the error below:
call from sandbox.hortonworks.com/192.168.200.128 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/Connectionrefused

I downloaded the sandbox and ran it on VMware several times, but this did not resolve the issue. When the sandbox starts and after I log in, the address http://192.168.200.128/ does not open, and displays this error:
Service Temporarily Unavailable

The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

Apache/2.2.15 (CentOS) Server at 192.168.200.128 Port 80
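
In case it helps with diagnosis, here is a check that can be run inside the VM (an assumption on my part that the NameNode should be listening on port 8020 here):

netstat -tlnp | grep 8020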

Any help would be appreciated
Thank you

Reply To: Hdp2.3 sandbox issues with virtualbox


I am having the same problem. I was previously able to use VirtualBox and HDP 1.3 and get it to work. However, with HDP 2.3 and VirtualBox 5, I see these barriers: a) connection refused for Hadoop, and b) it hangs when trying to load Oozie. It previously did this when trying to load Ambari, but I uninstalled and reinstalled. Is there something special that I need to do? Thanks in advance.

Corrupt file recovery


Hypothetical situation: let us suppose I have a four-node Hadoop cluster, one name node and three data nodes.

I load a 3 GB data file, expecting it to be split amongst the three data nodes, say 1 GB each.

Then, sometime later, let us suppose that I get corruption errors and am able to narrow the corrupt data down to node 3, within the same file as loaded in the first step.

Let us suppose that I don’t have another copy of that file anywhere.

So how do I recover only that part of the data, that 1 GB on node 3?

Second hypothetical situation: instead of the 3 GB data file, let us suppose that the data file is 300 GB.

And the same corruption happens, again only on node 3, which hosts one third of the data, i.e. 100 GB.

And I HAVE the original data file.

So how do I load only that 100 GB of data into node 3 on HDFS?

So, in short, I am looking for solutions to address partial data node corruption issues.
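
(I assume the first step in either scenario is locating the affected file and blocks; a sketch, using standard HDFS tooling:

hdfs fsck / -list-corruptfileblocks
hdfs fsck /path/to/file -files -blocks -locations

With a replication factor above 1, HDFS would normally re-replicate a block lost on one node from the surviving replicas automatically; my question is what to do when no healthy replica remains.)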

Appreciate the insights.

Reply To: Correct steps to setup Ambari on a CentOS VM


Hi,

I am also having the same problem. The install step never completes 100%. Kindly suggest.

Unable to Start Hive Service


Hello,

We have just installed HDP 2.3.2 in our environment, but we are not able to start the Hive service because the metastore schema initialization fails:

Metastore connection URL: jdbc:mysql://hadoop-slave01.solutionworks.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting metastore schema initialization to 1.2.0
Initialization script hive-schema-1.2.0.mysql.sql
Error: Specified key was too long; max key length is 1000 bytes (state=42000,code=1071)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
*** schemaTool failed ***
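
One thing I plan to try, based on MySQL error code 1071 usually pointing at index key length limits with multi-byte character sets (an assumption on my part, not a confirmed fix):

# switch the hive database to a single-byte charset before re-running schemaTool
mysql -u root -p -e "ALTER DATABASE hive CHARACTER SET latin1;"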

Let me know if you have encountered this issue and how you resolved it.


Job History /app-logs permissions


When trying to view application logs via the Job History UI and Hue, the following error is received: Error getting logs at shldvatlbxxx.tvlport.net:45454. In the NodeManager log I see the following error message:
2015-09-24 15:58:38,270 ERROR logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:cleanOldLogs(373)) - Failed to clean old logs
org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=EXECUTE, inode="/app-logs/jalaj.pahuja/logs":jalaj.pahuja:hdfs:drwxrwx---
I changed the permissions with chmod -R 1777 /app-logs; yarn is the owner and could then access the logs. But after the next job runs, the permissions for /app-logs/{user}/logs are reset to drwxrwx--- and the log files can not be accessed.

Name  User          Group  Permissions  Date
.     jalaj.pahuja  hdfs   drwxrwxrwxt  August 10, 2015 12:35 PM
..    yarn          hdfs   drwxrwxrwxt  September 30, 2015 09:49 AM
logs  jalaj.pahuja  hdfs   drwxrwx---   October 13, 2015 09:12 AM

Why are the directory permissions being reset?

Reply To: Hue 3.8 execute Hive Query error for HDP 2.2


Hi Robin,
You can try setting the following under the [beeswax] header of your hue.ini configuration:

[beeswax]
use_get_log_api=true

This will still give you the "Server does not support GetLog()" error in Hue's Hive log, but you can view your results through Hue.
You can check out the patch at https://issues.apache.org/jira/browse/HIVE-4629, which deals with this log issue.

Thanks
Parthiv

Reply To: Welcome to the new Hortonworks Sandbox!


Loaded the Sandbox on an A4 instance on Azure, wanting to work through the available tutorials. When I navigate to port 8080 (Ambari), I see 3 alerts. They are:

1. MapReduce2: History Server Web UI: Connection failed to http://sandbox.hortonworks.com:19888 [Errno 111]

2. MapReduce2: History Server Process: Connection failed: [Errno 111] Connection refused to sandbox.hortonworks.com:19888

3. Hive: Hive Metastore Process: Metastore on sandbox.hortonworks.com failed (Execution of 'ambari-sudo.sh su ambari-qa -l -s /bin/bash -c 'export PATH='"'"'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/sbin/:/usr/hdp/current/hive-metastore/bin'"'"' ; export HIVE_CONF_DIR='"'"'/usr/hdp/current/hive-metastore/conf/conf.server'"'"' ; hive --hiveconf hive.metastore.uris=thrift://sandbox.hortonworks.com:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e '"'"'show databases;'"'""' was killed due timeout after 30 seconds)

 

I am unable to find resources online directly pertaining to these issues on Azure, and I am wondering why they occur immediately on startup of a prepackaged tutorial tool.

ODBC Driver for HDP 2.3 on OS X 10.11 El Capitan


Hello everybody,
Did anybody have success configuring an ODBC connection from Excel to Hive (HDP 2.3) on OS X 10.11 El Capitan?
Even the new Hive ODBC Driver for HDP 2.3 (v2.0.5), found at http://hortonworks.com/hdp/addons/ , cannot be installed on the newest OS X. The installation fails with the message: "This package is incompatible with this version of OS X and may fail to install."
Replies would be really appreciated!!
Cheers

Reply To: Unable to Start Hive Service


Hi AICS,

Can you confirm that you have the Unlimited Strength JCE policy files in place? I had a similar issue recently and found that, despite copying the policy JARs during installation, I had to re-copy them later to get a few services up and running. Something, at some point during installation, must be overwriting the manually distributed files. A sketch of the re-copy step is below.
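
A sketch of the re-copy (assuming an Oracle JDK and that JAVA_HOME points at the JDK the services actually use; where you obtain the policy JARs depends on your JDK version):

# drop the Unlimited Strength JCE policy JARs into the active JRE
cp local_policy.jar US_export_policy.jar $JAVA_HOME/jre/lib/security/

Restart the affected services afterwards.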

Rob
