An update:
It started working:
It looks like after I logged in on the virtual machine it started working correctly (maybe something runs after login).
But now I get scrambled values; this seems to be due to the binary encoding.
I will write some Python to decode it correctly.
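For reference, the HBase REST server returns row keys, column names, and cell values base64-encoded in its JSON responses. A minimal Python sketch to decode them (the table name, row key, and port are illustrative; 8080 is the default REST port):

import base64
import requests  # third-party HTTP client, assumed installed

# Hypothetical table "mytable" and row "row1"; adjust host/port to your REST server.
url = "http://sandbox.hortonworks.com:8080/mytable/row1"
resp = requests.get(url, headers={"Accept": "application/json"})
for row in resp.json()["Row"]:
    key = base64.b64decode(row["key"]).decode("utf-8")
    for cell in row["Cell"]:
        column = base64.b64decode(cell["column"]).decode("utf-8")
        value = base64.b64decode(cell["$"])  # raw bytes; decode according to your schema
        print(key, column, value)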
Reply To: HBase REST
Reply To: Issues installing Sandbox on VMWare player
I have the same problem with the V2.2 Preview Sandbox on Windows 8 using VMware Player (Free) V7.0.0. I can log into the VM no problem, and I can ssh into the VM and run Hive no problem, but I cannot get into the browser-based UI at http://192.168.18:8000, which is the link provided.
I also tried connecting to 192.168.200.128:8000; that did not work either.
The exact error I received on starting the sandbox was a warning around zookeeper starting:
“Using config: /etc/zookeeper/conf/zoo.cfg
safemode: Call from sandbox.hortonworks.com/192.168.183.128 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectionException: Connection refused: For more details see: http://wiki.apache.org/hadoop/Connectionrefused”
I’ve looked at all the posts, and the only thing I haven’t tried is changing networking to bridged mode, as I am no expert at networking and don’t see below that it specifically solved the problem.
I look forward to anyone else’s thoughts on what might work.
In the meantime, I’m enjoying the command line; it’s why I wanted to install my own instance anyway, and I didn’t want to use the browser UI. So it is not holding up my work too much. It’s just annoying.
Reply To: Unable to import incremental data from Oracle using SQOOP with timestamp
Narashima:
From the error message, especially the ORA-01841 code, it appears the source data may have an unexpected data value or format. The typical cause of an ORA-01841 error is a string-to-date conversion that results in an out of range date. The date is expected to be in the form YYYYMMDD. Characters before the date string may cause this error code, too. I recommend checking your column values for outlying or unusual values; I suspect you may find one or more unexpected timestamps, and those are likely the ones causing the import to fail.
You also don’t give the value you are using for {last_import_date}. That also needs to be in an expected format.
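For reference, a sketch of an incremental import along these lines (the connection string, table, and column names are illustrative); the key point is that --last-value must be given in the format Oracle expects for the check column:

sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott \
  --password-file /user/scott/.oracle.pw \
  --table ORDERS \
  --incremental lastmodified \
  --check-column LAST_UPDATED \
  --last-value "2014-12-01 00:00:00"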
Happy hadooping,
Bryce Ryan
Reply To: Where to download HDP tar ?
Jeff:
There is no single tarball with the entire Hortonworks Data Platform (HDP) software. Instead, our software consists of a number of packages prepared for OS-specific package managers. These packages can be installed automatically via Ambari or manually, and the resulting cluster will be customized to your requirements. We do maintain package repositories that can be referenced for the HDP software.
I recommend a review of the manual or automated installation guides, which can be found at the Hortonworks documentation site, http://docs.hortonworks.com/. These provide guidelines on how to set up systems for installing HDP software, as well as how to install and customize an installation.
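As a rough sketch of the manual route on a yum-based OS (package names vary by HDP version, and the repository file placement is described in the install guide):

# after registering the HDP repository in /etc/yum.repos.d per the install guide:
yum install hadoop hadoop-hdfs hadoop-yarn hadoop-mapreduce hive hbase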
Happy hadooping!
Bryce Ryan
Update tutorial Error
Hi,
When I click the update button on the homepage of the sandbox to update the tutorials, I get the error below. I have installed the Hadoop Sandbox 2.1 with VMware.
Update tutorials failed: RAN: ‘/bin/bash /usr/lib/tutorials/tutorials_app/run/run.sh’ STDOUT: Pull… STDERR: ssh: Could not resolve hostname github.com: Name or service not known fatal: The remote end hung up unexpectedly
Reply To: HBase REST
Update 2:
After restarting the virtual machine, I got the same problem again.
I think something needs to be configured here, but I don’t know what. If you know, please let me know.
Thanks
Ambari Setup Guide regarding user permissions
Hello,
Is there a guide regarding users/permissions for setting up HDP via Ambari on Amazon EC2 instances?
My application will combine Hue, Hive, Oozie, HBase, and Kafka.
Do I use the default ‘ec2-user’ user and chmod 777 for all the other users (hive/oozie), since there always seem to be permission issues?
What’s best practice?
Furthermore, for HBase/Hive to work in Oozie, which libraries do I need?
Will the below suffice?
hbase-client-0.98.4.2.2.0.0-2041-hadoop2.jar
hbase-common-0.98.4.2.2.0.0-2041-hadoop2.jar
hbase-server-0.98.4.2.2.0.0-2041-hadoop2.jar
Thanks in advance!
give HDFS permissions to a user
I’m trying to do an HDFS task as another user and get permission denied. How do I give another user HDFS rights besides the hdfs user?
Configure SolrCloud on HDFS in HDP 2.1 Sandbox
Hi,
I am trying to set up SolrCloud on HDFS on the HDP 2.1 Sandbox. I have set up a separate instance of ZooKeeper that can be accessed on port 2191. I am having a problem configuring the cloud-scripts to upload the config files for the index. These config files are in a location on HDFS, and I have to upload them to ZooKeeper. How should the HDFS path to these be passed to ZooKeeper? The HDFS path (format: hdfs://HOST:PORT/path) is giving an illegal directory error.
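Roughly, the upload I am attempting looks like this (the host, config name, and paths are placeholders):

scripts/cloud-scripts/zkcli.sh -zkhost sandbox.hortonworks.com:2191 \
  -cmd upconfig -confname myindex \
  -confdir hdfs://sandbox.hortonworks.com:8020/user/solr/myindex/conf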
Thanks,
Ar
Reply To: Decimal data type not supported in SPARK->HiveQL?
This is fixed in Spark upstream and we plan to release another tech preview based on Spark 1.2 in the coming weeks.
HCatInput Format Exception on HDP 2.1
I’m trying to read a Hive table as input to an MR job and I get the following exception…
[rt2357@104-04-01 ~]$ hadoop jar ./platform-persistence-mapreduce-0.0.1-SNAPSHOT.jar com.att.bdcoe.platform.persistence.mapreduce.jobs.CLFHiveBulkLoader -q “104-03-02.c.datamaster.bigtdata.io,104-03-03.c.datamaster.bigtdata.io,104-04-03.c.datamaster.bigtdata.io” -t clf_csv
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/users/rt2357/platform-common-0.0.1-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/users/rt2357/platform-persistence-api-0.0.1-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
12-15-14 15:10:10,405 INFO metastore:297 – Trying to connect to metastore with URI thrift://104-03-02.c.datamaster.bigtdata.io:9083
12-15-14 15:10:10,446 INFO metastore:385 – Connected to metastore.
12-15-14 15:10:11,659 INFO TimelineClientImpl:123 – Timeline service address: http://104-03-03.c.datamaster.bigtdata.io:8188/ws/v1/timeline/
12-15-14 15:10:11,668 INFO RMProxy:92 – Connecting to ResourceManager at 104-04-02.c.datamaster.bigtdata.io/10.0.28.117:8050
12-15-14 15:10:12,965 INFO deprecation:1009 – mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
12-15-14 15:10:12,986 INFO FileInputFormat:247 – Total input paths to process : 1
12-15-14 15:10:12,995 INFO FileInputFormat:247 – Total input paths to process : 1
12-15-14 15:10:13,062 INFO JobSubmitter:396 – number of splits:2
12-15-14 15:10:13,206 INFO JobSubmitter:479 – Submitting tokens for job: job_1418440661430_28689
12-15-14 15:10:13,381 INFO YarnClientImpl:236 – Submitted application application_1418440661430_28689
12-15-14 15:10:13,409 INFO Job:1289 – The url to track the job: http://104-04-02.c.datamaster.bigtdata.io:8088/proxy/application_1418440661430_28689/
12-15-14 15:10:13,410 INFO Job:1334 – Running job: job_1418440661430_28689
12-15-14 15:15:33,371 INFO Job:1355 – Job job_1418440661430_28689 running in uber mode : false
12-15-14 15:15:33,372 INFO Job:1362 – map 0% reduce 0%
12-15-14 15:15:58,528 INFO Job:1441 – Task Id : attempt_1418440661430_28689_m_000000_0, Status : FAILED
Error: org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(Lorg/apache/hadoop/hive/serde2/Deserializer;Lorg/apache/hadoop/conf/Configuration;Ljava/util/Properties;Ljava/util/Properties;)V
12-15-14 15:15:59,546 INFO Job:1441 – Task Id : attempt_1418440661430_28689_m_000001_0, Status : FAILED
Error: org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(Lorg/apache/hadoop/hive/serde2/Deserializer;Lorg/apache/hadoop/conf/Configuration;Ljava/util/Properties;Ljava/util/Properties;)V
12-15-14 15:16:21,648 INFO Job:1441 – Tas
Reply To: give HDFS permissions to an user
Hi John,
I believe you can try adding the user to the property:
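Separately, a common way to give a user their own working space in HDFS (run as the hdfs superuser; ‘alice’ is a placeholder name) is:

sudo -u hdfs hdfs dfs -mkdir -p /user/alice
sudo -u hdfs hdfs dfs -chown alice:alice /user/alice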
Hope that helps.
Regards,
Robert
Reply To: Host Tab displays wrong IP addresses
See the issue that I am facing – http://hortonworks.com/community/forums/topic/ambari-host-show-ip-as-127-0-0-1/
Check the replies to that; maybe they can help you out. I am still stuck with my issue.
Regards,
Vishal
Reply To: ambari-server upgrade error (1.6 > 1.7 doc)
Thanks to Ramesh Babu; ambari-server can now be started normally after disabling SELinux.
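For anyone else hitting this, disabling SELinux on CentOS/RHEL usually looks like this (run as root; the second command makes the change persist across reboots):

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config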
Kerberos and Ambari (1.7.0)
Dear all,
I’m currently building a Docker container (CentOS 6) that contains a fully working Ambari setup.
I wanted to secure it with Kerberos, and that is where the problems start…
I’m facing the issue that it’s impossible to start the DataNode through the Ambari UI; what confuses me more is that the NameNode can be started without a problem.
Error message from /var/lib/ambari-agent/data/errors-144.txt:
Fail: Execution of 'ulimit -c unlimited; su -s /bin/bash - root -c 'export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-95a1a98c04e2.out
setup steps:
/usr/sbin/kadmin.local -q "addprinc root/admin"
/sbin/service krb5kdc start
/sbin/service kadmin start
mkdir /etc/security/keytabs
cd /etc/security/keytabs
#run kadmin.local
addprinc -randkey ambari-qa@HDUSER
addprinc -randkey hdfs@HDUSER
addprinc -randkey HTTP/95a1a98c04e2@HDUSER
addprinc -randkey yarn/95a1a98c04e2@HDUSER
addprinc -randkey dn/95a1a98c04e2@HDUSER
addprinc -randkey falcon/95a1a98c04e2@HDUSER
addprinc -randkey jhs/95a1a98c04e2@HDUSER
addprinc -randkey hive/95a1a98c04e2@HDUSER
addprinc -randkey knox/95a1a98c04e2@HDUSER
addprinc -randkey nagios/95a1a98c04e2@HDUSER
addprinc -randkey nn/95a1a98c04e2@HDUSER
addprinc -randkey nm/95a1a98c04e2@HDUSER
addprinc -randkey oozie/95a1a98c04e2@HDUSER
addprinc -randkey rm/95a1a98c04e2@HDUSER
addprinc -randkey zookeeper/95a1a98c04e2@HDUSER
xst -norandkey -k smokeuser.headless.keytab ambari-qa@HDUSER
xst -norandkey -k hdfs.headless.keytab hdfs@HDUSER
xst -norandkey -k spnego.service.keytab HTTP/95a1a98c04e2@HDUSER
xst -norandkey -k yarn.service.keytab yarn/95a1a98c04e2@HDUSER
xst -norandkey -k dn.service.keytab dn/95a1a98c04e2@HDUSER
xst -norandkey -k falcon.service.keytab falcon/95a1a98c04e2@HDUSER
xst -norandkey -k jhs.service.keytab jhs/95a1a98c04e2@HDUSER
xst -norandkey -k hive.service.keytab hive/95a1a98c04e2@HDUSER
xst -norandkey -k knox.service.keytab knox/95a1a98c04e2@HDUSER
xst -norandkey -k nagios.service.keytab nagios/95a1a98c04e2@HDUSER
xst -norandkey -k nn.service.keytab nn/95a1a98c04e2@HDUSER
xst -norandkey -k nm.service.keytab nm/95a1a98c04e2@HDUSER
xst -norandkey -k oozie.service.keytab oozie/95a1a98c04e2@HDUSER
xst -norandkey -k rm.service.keytab rm/95a1a98c04e2@HDUSER
xst -norandkey -k zk.service.keytab zookeeper/95a1a98c04e2@HDUSER
#exit kadmin.local
Then I give the right permissions to the files in /etc/security/keytabs/.
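Roughly along these lines (owners and modes are illustrative; the Ambari security guide lists the exact ones):

chown hdfs:hadoop /etc/security/keytabs/hdfs.headless.keytab
chown ambari-qa:hadoop /etc/security/keytabs/smokeuser.headless.keytab
chown root:hadoop /etc/security/keytabs/spnego.service.keytab
chmod 440 /etc/security/keytabs/*.keytab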
ambari-server stop
ambari-agent stop
/sbin/service krb5kdc restart
/sbin/service kadmin restart
ambari-server start
ambari-agent start
Then I start the security procedure through Ambari (Admin -> Security -> Enable Kerberos).
I don’t see where my issue is here, but the fact is that Ambari can’t start Kerberos.
My container also has plenty of ports open.
Thank you in advance for your help.
Reply To: How to connect to Internet inside the sandbox?
Hi, I think this is not a problem with the Sandbox but with the VirtualBox settings.
I am able to connect to the internet in VirtualBox with the Sandbox installed. Just check whether your network settings are proper.
You can check by selecting the Hortonworks image -> Settings -> Network.
On my VirtualBox, I have the Sandbox’s network set to NAT.
Reply To: HCatInput Format Exception on HDP 2.1
I’m using the following code right out of the Hortonworks class materials…
String principalID = System.getProperty(HCatConstants.HCAT_METASTORE_PRINCIPAL);
if (principalID != null)
conf.set(HCatConstants.HCAT_METASTORE_PRINCIPAL, principalID);
HCatInputFormat.setInput(job, database, table);
job.setInputFormatClass(HCatInputFormat.class);
Yet, when I get to the mapper, the SerDe exception happens. I’ve tried changing from sequence file to text file format, and the same exception happens.
Any help is appreciated.
Reply To: unable to start hue on HDP-2.1.2
I resolved this issue. The problem was with one of the configuration parameters in the hue.ini file.
[desktop]
# If set to false, runcpserver will not actually start the web server.
# This was set as "#enable_server=no"; it should be:
enable_server=yes
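After changing it, Hue needs a restart to pick up the new setting (assuming the stock init script):

/etc/init.d/hue restart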
!!Thanks!!
Reply To: HCatInput Format Exception on HDP 2.1
Looks like HDP 2.1 is expecting hive serde2 version 0.14 while 0.13 is on the platform…
2014-12-16 11:24:56,777 INFO [main] org.apache.hive.hcatalog.mapreduce.InternalUtil: Initializing org.apache.hadoop.hive.ql.io.orc.OrcSerde with properties {field.delim=|, transient_lastDdlTime=1418143490, name=default.common_location_format, serialization.null.format=\N, columns=original_subscriber_id,original_subscriber_type,normalized_subscriber_id,source_network_element_id,source_network_element_type,create_timestamp,create_timestamp_string,event_timestamp_string,event_timestamp_utc,event_timestamp_local,duration,time_zone,geohash,latitude,longitude,altitude,mgrs_bin,locate_method,accuracy,geofence_ids,job_id, serialization.lib=org.apache.hadoop.hive.ql.io.orc.OrcSerde, serialization.format=|, columns.types=string,string,string,string,string,bigint,string,string,bigint,bigint,int,string,string,float,float,float,string,string,float,array<string>,string}
2014-12-16 11:24:56,778 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoSuchMethodError: org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(Lorg/apache/hadoop/hive/serde2/Deserializer;Lorg/apache/hadoop/conf/Configuration;Ljava/util/Properties;Ljava/util/Properties;)V
at org.apache.hive.hcatalog.mapreduce.InternalUtil.initializeDeserializer(InternalUtil.java:156)
at org.apache.hive.hcatalog.mapreduce.HCatRecordReader.createDeserializer(HCatRecordReader.java:127)
at org.apache.hive.hcatalog.mapreduce.HCatRecordReader.initialize(HCatRecordReader.java:92)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:525)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
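A quick way to compare the hive-serde version on the cluster with whatever is bundled in the application jar (the jar name is taken from the log above; /usr/lib/hive/lib is the usual HDP 2.1 location):

ls /usr/lib/hive/lib/hive-serde-*.jar
unzip -l platform-persistence-mapreduce-0.0.1-SNAPSHOT.jar | grep -i serde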
Reply To: Confirming and Registering Hosts fails, where is the setup script getting domain
OK, I fixed getting to the NameNode by editing /etc/ambari-agent/conf/ambari-agent.ini manually. I changed the first part of the file to look like this:
hostname=namenode.localdomain.com
url_port=8440
secured_url_port=8441
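After the edit, restart the agent so it re-registers with the corrected hostname:

ambari-agent restart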