Channel: Hortonworks » All Replies

HDP 2.1 Sandbox for Hyper-V installs OK but the application has many errors!


Installing the HDP sandbox on a Hyper-V VM goes fine, but as soon as you start using the browser to test the various tools, you see problems immediately! I know it is not just me, as I have seen others report similar errors. Here are the details of my problems. I hope someone looks into this and helps us all out :)

These tool links work fine:
ABOUT, PIG, JOB DESIGNER, HUE SHELL, USER ADMIN, HELP

The following tool links do not work:

BEESWAX (HIVE UI) and HCATALOG take 2-3 minutes before throwing the error "Could not connect to sandbox.hortonworks.com:9083"

FILE BROWSER and JOB BROWSER take a good 2-3 minutes before throwing the error "urlopen error [Errno 110] Connection timed out"

OOZIE EDITOR/DASHBOARD takes about a minute and errors with "An error with Oozie occurred. The Oozie server is not running: <urlopen error [Errno 110] Connection timed out>"
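
In case it helps whoever looks at this: a first check that narrows it down is whether the backing services are listening inside the VM at all (a sketch; 9083 is from the Beeswax error above, while 11000 for Oozie and 50070 for the NameNode are the usual HDP defaults and are assumptions here):

# from the Hyper-V console of the sandbox VM, or over SSH to the VM's IP
netstat -tlnp | grep -E '9083|11000|50070'
# if nothing is listening, look at the service logs (usual HDP locations, assumed)
ls /var/log/hive /var/log/oozie

If the metastore and Oozie ports are not listening, the Hue errors above are just a symptom of the services never having started.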



Spatial Analytics with Hive and Hadoop


I tried running the example presented by Carter Shanklin using spatial-sdk-hive-1.0-MODIFIED.jar (from his GitHub account), esri-geometry-api.jar, and HiveUDFs.jar. I uploaded the uber logs into HCat.

I get the error message below when running this Hive SQL script.
Thanks.

SELECT id, ST_GeodesicLengthWGS84(
         ST_SetSRID(
           ST_LineString(collect_array(point)), 4326)) AS length
FROM (
  SELECT id, ST_Point(longitude, latitude) AS point
  FROM uber
) sub
GROUP BY id;
==========================================================
Error Message:
OK
converting to local hdfs://sandbox.hortonworks.com:8020/user/hue/jars/esri-geometry-api.jar
Added /tmp/85d1cec7-641f-4e07-bc06-eeee1de2cfa0_resources/esri-geometry-api.jar to class path
Added resource: /tmp/85d1cec7-641f-4e07-bc06-eeee1de2cfa0_resources/esri-geometry-api.jar
converting to local hdfs://sandbox.hortonworks.com:8020/user/hue/jars/HiveUDFs.jar
Added /tmp/85d1cec7-641f-4e07-bc06-eeee1de2cfa0_resources/HiveUDFs.jar to class path
Added resource: /tmp/85d1cec7-641f-4e07-bc06-eeee1de2cfa0_resources/HiveUDFs.jar
converting to local hdfs://sandbox.hortonworks.com:8020/user/hue/jars/spatial-sdk-hive-1.0-MODIFIED.jar
Added /tmp/85d1cec7-641f-4e07-bc06-eeee1de2cfa0_resources/spatial-sdk-hive-1.0-MODIFIED.jar to class path
Added resource: /tmp/85d1cec7-641f-4e07-bc06-eeee1de2cfa0_resources/spatial-sdk-hive-1.0-MODIFIED.jar
FAILED: SemanticException [Error 10014]: Line 9:8 Wrong arguments 'latitude': The UDF implementation class 'com.esri.hadoop.hive.ST_Point' is not present in the class path

Reply To: Spatial Analytics with Hive and Hadoop


How old is spatial-sdk-hive-1.0-MODIFIED.jar? The ST_Point class was not declared public in older builds, which started causing problems in Hive 0.13.0. ESRI has since fixed their Hive UDFs, so grab an up-to-date version if you can. A UDF class that is not public is one common cause of the "not present in the class path" errors from Hive.
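
For reference, once the jars contain public UDF classes, the usual registration in Hive looks like this (a sketch; com.esri.hadoop.hive.ST_Point is taken from the error above, the other class names are the conventional ESRI ones and worth double-checking against the jar contents):

ADD JAR /path/to/esri-geometry-api.jar;
ADD JAR /path/to/spatial-sdk-hive.jar;
CREATE TEMPORARY FUNCTION ST_Point AS 'com.esri.hadoop.hive.ST_Point';
CREATE TEMPORARY FUNCTION ST_LineString AS 'com.esri.hadoop.hive.ST_LineString';
CREATE TEMPORARY FUNCTION ST_SetSRID AS 'com.esri.hadoop.hive.ST_SetSRID';
CREATE TEMPORARY FUNCTION ST_GeodesicLengthWGS84 AS 'com.esri.hadoop.hive.ST_GeodesicLengthWGS84';

If a registration still fails, running "jar tf" on the spatial-sdk jar will confirm whether the class is actually packaged.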


Reply To: Spatial Analytics with Hive and Hadoop


I just re-downloaded the code from the Esri GitHub repository and packaged it up. I am now running esri-geometry-api-1.1.2-SNAPSHOT.jar and spatial-sdk-hive-1.0.3-SNAPSHOT.jar. Unfortunately, the error is still there.

Thanks.


Stinger installation


After reading that Stinger has been delivered, I would like to evaluate it. I have Hadoop 2.4, ZooKeeper 3.4, and Accumulo 1.5 all running on CentOS 6. What must I do next? The road ahead is unclear.
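
For what it is worth, the Stinger initiative is essentially Hive 0.13 running on Tez (plus ORC and vectorization), so the next step is to add Hive and Tez to the existing stack and point Hive at Tez. A minimal smoke test, assuming Hive 0.13+ and Tez are already installed and configured, and using a hypothetical table name:

-- in the Hive CLI, switch the execution engine to Tez for this session
SET hive.execution.engine=tez;
-- any aggregate query will exercise a Tez DAG (sample_table is a placeholder)
SELECT count(*) FROM sample_table;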


Upload fails for *.gz file – could not interpret archive type


Hello,

I was trying to upload a log file that was zipped and has a "gz" extension. However, when using the File Browser from the Sandbox, I get the error "Upload failed: Could not interpret archive type."

When I try to locate the /user/hue directory so that I can SSH in and copy the file into HDFS, I can't seem to find it anywhere in the directory structure.

How can I get these types of files uploaded?

Thank you,
Kim
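
One thing worth knowing here: /user/hue is a path in HDFS, not on the VM's local filesystem, which is why it does not show up when you browse the directories over SSH. A sketch of copying the file in from the command line instead (the file name access.log.gz is a placeholder):

# copy the file onto the sandbox VM
scp access.log.gz root@sandbox.hortonworks.com:/tmp/
# then, from a shell on the sandbox, put it into HDFS under /user/hue
ssh root@sandbox.hortonworks.com
su - hdfs -c "hdfs dfs -put /tmp/access.log.gz /user/hue/"
su - hdfs -c "hdfs dfs -chown hue /user/hue/access.log.gz"

The File Browser message suggests Hue is treating the .gz as an archive to unpack; putting the file into HDFS directly sidesteps that.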

Intermittent Issue with Oozie installation


I am setting up a 3-node virtual cluster using Vagrant on my Mac (16 GB RAM). For some reason the manual Ambari installation fails at the Oozie step with a Python script timeout. When I click the Retry button in the Ambari console on the failed tasks, it works fine.

I also tried the same thing with a Blueprint installation, and it failed at the Oozie stage because it could not download one of the bigtop jars from the site. Any thoughts?

Error from the ambari-agent.log

Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install oozie' returned 1. Error Downloading Packages:
bigtop-tomcat-6.0.37-1.el6.noarch: failure: bigtop-tomcat/bigtop-tomcat-6.0.37-1.el6.noarch.rpm from HDP-2.1: [Errno 256] No more mirrors to try.

Not able to install HDP on Windows 7


Hi,

I am not able to install HDP on my Windows 7 machine.
The Java path is set to C:\glassfish3\jdk7
The Python path is set to C:\Windows\system32;c:\Python27

When I try to install, it says the installation failed and tells me to check the log file under HadoopInstallFiles, but I can't find that file on my system :(

The message shown is:
Installation Failed. Please see installation log for details:
C:\HadoopInstallFiles\HadoopSetupTools\hdp-2.1.3.0.winpkg.install.log

HDFS file append failing in single node configuration


The following issue happens in both a fully distributed and a single-node setup.
I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) about a similar issue in a multi-node cluster and made some changes to my configuration, but it did not change anything. The configuration files and application sources are attached.
Steps to reproduce:
Source file:

#include <string.h>
#include <fcntl.h>   /* O_WRONLY, O_APPEND */
#include "hdfs.h"    /* libhdfs */

int main(void) {
    hdfsFS fs = hdfsConnect("127.0.0.1", 9000);
    if (!fs) return 0;
    const char* writePath = "/tmp/testfile.txt";
    /* open for append: O_WRONLY|O_APPEND (file created earlier with O_CREAT) */
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_APPEND, 0, 0, 0);
    if (!writeFile) return 0;
    const char* buffer = "Hello, World!\n";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
    if (hdfsFlush(fs, writeFile)) return 0;
    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}


$ ./test_hdfs
2014-08-27 14:23:08,472 WARN [Thread-5] hdfs.DFSClient (DFSOutputStream.java:run(628)) - DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
FSDataOutputStream#close error:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)

I have also tried running a simple Java example that uses the append function inside the IntelliJ IDE; it failed in the same way.
When I print the Hadoop configuration from that Java application, it shows only the default values, so I guess there is some environment variable that should point to the site configuration files.
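
On a single-node setup there is no other datanode available to swap into the append pipeline, which is exactly what the exception is complaining about. A sketch of the client-side hdfs-site.xml entry the exception itself points at; setting the policy to NEVER tells the client not to attempt the replacement on such a small cluster. For the IntelliJ case, the environment variable you are guessing at is usually HADOOP_CONF_DIR (or the conf directory has to be on the classpath), otherwise the client never picks this setting up.

<!-- client-side hdfs-site.xml: do not try to replace a failed datanode
     when there is only one datanode in the cluster -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>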

java.lang.AbstractMethodError: org.netezza.sql.NzPreparedStatament.isClosed()Z


I am using HDP 2.0.6.0 for Windows. When attempting to import data from a Netezza appliance using Sqoop, the MapReduce job fails with the Netezza JDBC error shown below:


2014-08-29 14:24:00,467 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@3d989dea
2014-08-29 14:24:00,807 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: "SITE_ID" >= 1 AND "SITE_ID" < 100413
2014-08-29 14:24:00,860 INFO [main] org.apache.sqoop.mapreduce.db.DBRecordReader: Working on split: "SITE_ID" >= 1 AND "SITE_ID" < 100413
2014-08-29 14:24:00,923 INFO [main] org.apache.sqoop.mapreduce.db.DBRecordReader: Executing query: SELECT "VP_ID", "BR_ID", "ACCOUNT_ID", "SITE_ID", "X", "Y" FROM "BR_SITE_LOCATION_V" AS "BR_SITE_LOCATION_V" WHERE ( "SITE_ID" >= 1 ) AND ( "SITE_ID" < 100413 )
2014-08-29 14:24:19,104 INFO [Thread-11] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
2014-08-29 14:24:19,155 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.AbstractMethodError: org.netezza.sql.NzPreparedStatament.isClosed()Z
at org.apache.sqoop.mapreduce.db.DBRecordReader.close(DBRecordReader.java:163)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.close(MapTask.java:499)
at org.apache.hadoop.mapred.MapTask.closeQuietly(MapTask.java:1982)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:772)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)

I use the following import command:


sqoop import --username map -P --connect jdbc:netezza://nzhost:5480/nzdb --table BR_SITE_LOCATION_V --split-by SITE_ID --target-dir /user/coz323/br-site --verbose

I've tried the Netezza JDBC driver versions 5.0 and 7.0, but the error is the same for both. Using the --direct option also does not make a difference. I am confident Sqoop is using the correct connection manager, as I see the following lines logged to the console:


manager.DefaultManagerFactory: Trying with scheme: jdbc:netezza:
sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.NetezzaManager@3595b750

Any ideas or suggestions would be highly appreciated. Thanks.


Reply To: java.lang.AbstractMethodError: org.netezza.sql.NzPreparedStatament.isClosed()Z


The root cause is described in SQOOP-1279. I think the 2.0.6.0 product does not ship the right Sqoop version. Can you get a later version of HDP and try?

Thanks
Venkat
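
If you want to confirm which Sqoop the 2.0.6.0 install actually shipped before upgrading, a quick check from the command line (a sketch) is:

sqoop version

and then compare the reported version against the fix version listed on SQOOP-1279.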


Reply To: Not able to install HDP on Windows 7


Hi!

Only Windows Server 2008 R2/2012 are supported.

Ivan

How to compare two text files line by line in Pig (Hadoop)


File 1:
123456 raj kall dno 23 23-02-1984 xyz
123457 Tal dall dno 23 23-02-1985 xyz
123458 aaa fff dno 23 23-02-1986 xyz
123459 gg hhhh dno 23 23-02-1987 xyz
123460 aa hhhh dno 23 23-02-1987 xyz
123461 bb hhhh dno 23 23-02-1987 xyz
File 2:
123456 raj kall dno 23 23-02-1984 xyz
123457 Tal dall dno 23 23-02-1985 xyz
123458 aaa uuu dno 23 23-02-1986 xyz
123459 gg hhhh dno 23 23-02-1987 xyz
123461 bb hhhh dno 23 23-02-1987 xyz

Output:
123458 aaa fff dno 23 23-02-1986 xyz
123460 aa hhhh dno 23 23-02-1987 xyz
123461 bb hhhh dno 23 23-02-1987 xyz
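
A sketch of one common way to do this in Pig, assuming the goal is the rows of file 1 that have no identical row in file 2 (the file names are placeholders and the whole line is treated as the join key):

f1 = LOAD 'file1.txt' USING TextLoader() AS (line:chararray);
f2 = LOAD 'file2.txt' USING TextLoader() AS (line:chararray);
-- keep rows of file 1 that did not find an exact match in file 2
joined = JOIN f1 BY line LEFT OUTER, f2 BY line;
diff   = FILTER joined BY f2::line IS NULL;
result = FOREACH diff GENERATE f1::line;
DUMP result;

Note that on the sample data this produces the 123458 and 123460 rows; 123461 is identical in both files, so it would only appear in the output if the comparison is meant to be on a subset of the fields rather than the whole line.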

Running a BigQuery command via Oozie


Hi,
I am trying to run the BigQuery Python script bq.py via Oozie. If I run the simple command 'python bq.py version' via a shell script, it runs fine in Oozie. But when I try to run 'python bq.py ls -p', all I get in STDOUT is the message 'Heart Beat'. There is no error message, and it keeps running for hours with the same message in STDOUT. I am submitting the Oozie job as the root user. The files are already copied to Hadoop's current working directory, and if I run the Python script manually from there I get the correct result, but not when running it via Oozie.
Can anyone please help me find the reason?
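
Without any error in STDOUT, the next place to look is the launcher job itself and its logs (a sketch of the usual checks; the Oozie URL and job id are placeholders, 11000 being the default Oozie port):

# workflow and action status for the stuck job
oozie job -oozie http://localhost:11000/oozie -info <job-id>
# what the shell action's launcher is actually doing on the cluster
yarn application -list

One thing to keep in mind with bq.py specifically is that it may be waiting for interactive OAuth input the first time it runs as a given user on a given node, which would look exactly like an endless 'Heart Beat' from Oozie's side.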

Reply To: Intermittent Issue with Oozie installation


Hi,

It seems you are performing your cluster install using the public repositories for HDP. Typically, the error you are seeing indicates the public repositories are unavailable or unreachable. How is the internet connection from the cluster machines to the public repositories?

If your internet connectivity issues persist, you can always build local repositories for the HDP packages instead:

http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.1.0/bk_using_Ambari_book/content/ambari-chap1-6.html

Cheers.
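
If the repositories are generally reachable and the failure is intermittent, one common culprit is stale or partially downloaded yum metadata on the node that failed. A sketch of what to run on that node before hitting Retry (the repo id HDP-2.1 is taken from the error above):

# clear cached metadata and partially downloaded packages
yum clean all
# confirm the HDP-2.1 repository is listed and reachable
yum repolist enabled | grep -i HDP
# retry the package that failed
yum -d 0 -e 0 -y install bigtop-tomcat oozie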

Not able to open the IP address


Hi all,
My question is that I don't know how to open the IP address in my browser. The final screen after the configuration suggested using 127.0.0.1:8888. I searched online and someone suggested using the ifconfig command in bash; I tried that and saw something different.

I tried both 192.160.0.12 and 10.0.3.15, but the page still could not be found. Any ideas on how to establish the connection in the browser? Thanks!
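
A couple of things worth checking (a sketch; 8888 comes from the message you quoted, the rest are assumptions about a typical sandbox VM setup):

# from inside the VM console: which addresses does the guest actually have,
# and is anything listening on 8888?
ifconfig | grep 'inet addr'
netstat -tln | grep 8888

127.0.0.1:8888 only works on a machine where the sandbox's port is actually exposed, so if the VM got a NAT address like 10.0.3.15 you either need port forwarding from the host to guest port 8888, or you need to browse to the guest's bridged/host-only address from the host.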
