Reply To: Error when starting Sandbox After Install
I am getting the same issue … can you please let me know the resolution… My machine is a Sony Vaio, 64-bit.
Reply To: Unable to connect to server
I am facing a similar issue after kerberizing the cluster. I see the following errors while starting the sandbox: ipc.Client: Retrying connect to server: sandbox.hortonworks.com/192.168.1.1:8020. Already...
Sandbox fails to start after Kerberizing the cluster
Hello, we are using the HDP 2.3 Sandbox on VMware Player and it was working as expected before. Two days back I kerberized the cluster with a new MIT KDC (installed and started in the same sandbox). I...
Hive Storage Handler to replace HDFS with custom store
Hi all, Hive by default queries data on HDFS. If I implement a Hive Storage Handler for my store with 1. an input format, 2. an output format, and 3. an AbstractSerDe, Hive queries will then store and retrieve from my...
Reply To: Kafka LeaderNotAvailableException
Running into the same problem – can you share your solution?
Reply To: Kafka Producer error
Did you find a solution? I seem to be running into the same problem.
Recommended Backup & Disaster Recovery strategy
Can anyone point to a good high-level overview of a Backup & Disaster Recovery strategy for either the Hortonworks or Cloudera distribution? The equation might include tape backups, NFS, or other...
Reply To: Welcome to the new Hortonworks Sandbox!
I received some help and was able to get past the issue. Here is what I was told: ...
Reply To: JDBC connection to Hive Permission issue
I know it's really late, but this issue has been bugging me as well. You may also have to set the following property to true in hive-site.xml: hive.server2.enable.impersonation.
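As a sketch, the property from the reply would be set in hive-site.xml roughly like this (note: on more recent Hive releases the equivalent switch is hive.server2.enable.doAs):

```xml
<!-- hive-site.xml: run HiveServer2 queries as the connecting user -->
<property>
  <name>hive.server2.enable.impersonation</name>
  <value>true</value>
</property>
```

HiveServer2 must be restarted for the change to take effect.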
Reply To: Failed with exception Unable to move source hdfs
Found the solution. The query failing based on the size of the output was a major hint. The following SET command, issued before an "insert overwrite directory ..." command, will resolve the problem for files up to...
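The excerpt truncates before naming the actual setting. As a hedged sketch only: one size-related Hive property involved in moving large INSERT OVERWRITE DIRECTORY output is hive.exec.copyfile.maxsize (files above this byte threshold are copied with a distributed copy). The property choice, value, and table name below are assumptions, not taken from the post:

```sql
-- Assumption: the truncated reply refers to a size-related Hive setting;
-- hive.exec.copyfile.maxsize (bytes) is one such property.
SET hive.exec.copyfile.maxsize=3355443200;

INSERT OVERWRITE DIRECTORY '/tmp/export_dir'   -- hypothetical output path
SELECT * FROM some_table;                      -- hypothetical table name
```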
Map and Reduce tasks succeed but container killed by the ApplicationMaster
I see this error on my job; the error code is non-zero: error code 143, "Container killed by the ApplicationMaster. Container killed on request. Exit code is 143." But all of the mappers and reducers are...
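For context on the number itself: exit code 143 means the container process received SIGTERM (i.e. YARN killed it on request, often during normal teardown or for exceeding limits), following the POSIX shell convention that a process killed by signal N exits with code 128 + N. A quick check of that arithmetic in Python:

```python
import signal

# Shell/OS convention: a process killed by signal N reports exit code 128 + N.
# YARN terminates containers with SIGTERM, whose number is 15 on POSIX systems.
SIGTERM_EXIT = 128 + signal.SIGTERM
print(SIGTERM_EXIT)  # prints 143 on POSIX systems
```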
Hive Query
Hi, I have an Employee table which contains id, name, st, and Age columns:

Employee
id  name   st  Age
1   Ram    NY
2   Raj    NJ
3   Ravi   CT
4   Anand  CA  25

Another "variable" table has columns var1, value1, var2, value2. Variable...
Reply To: Ambari metrics : No data available
The logs look clean: "Saving 6380 metric aggregates." This indicates the system is receiving writes, and the log looks current. Do you not see any graphs at all? Server / Dashboard / Host page? Can you check...
Reply To: Spark manual upgrade to 1.4.1 (HDP – 2.3.2)
Sorry Mike and Ram, there is a typo in the doc; we are working to get it fixed ASAP. Meanwhile, to install 1.4.1 on HDP 2.3.2, try the steps below. You can omit the Python package if you don't plan to use PySpark....
How to use Analytical/Window functions (last_value) in Spark Java?
I’m trying to use analytical/window function last_value in Spark Java. Netezza Query: select sno, name, addr1, addr2, run_dt, last_value(addr1 ignore nulls) over (partition by sno, name, addr1, addr2,...
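For readers unfamiliar with the semantics being ported: last_value(col IGNORE NULLS) returns, for each row, the most recent non-null value of the column within the window partition (with a frame ending at the current row). A minimal sketch of that logic in plain Python; the function name and the sample rows are made up for illustration, not from the post:

```python
def last_value_ignore_nulls(values):
    """For each position, carry forward the last non-null value seen so far,
    i.e. the semantics of last_value(col IGNORE NULLS) with a frame that
    ends at the current row."""
    result, last = [], None
    for v in values:
        if v is not None:
            last = v
        result.append(last)
    return result

# Hypothetical addr1 values within one window partition:
print(last_value_ignore_nulls(["12 Main St", None, None, "9 Oak Ave", None]))
# ['12 Main St', '12 Main St', '12 Main St', '9 Oak Ave', '9 Oak Ave']
```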
Reply To: Hbase failed to start
Looks like you are using Cloudera Manager. That looks like a Cloudera Manager deployment issue, as it cannot find the parcels that were supposed to be there. Since this is the HDP forum, I think you'll...
Reply To: How to put huge data in Hdfs in Production
In my experience, if you can get the file onto the local filesystem to start with, you can use put to load it into HDFS. If you need integration with legacy systems, there are many options. Since NFS...
How many map/reduce tasks for a job on a single node?
For one job, how many map/reduce tasks will a single node handle? Yes, I am aware that for a job there can be multiple map/reduce tasks spread across multiple nodes, and a node can have multiple...
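On YARN (which HDP uses), the practical answer is determined by resources rather than a fixed slot count: a node runs as many task containers concurrently as fit within the NodeManager's configured memory and vcores. A back-of-the-envelope sketch; the numeric values are assumptions for illustration, not from the post:

```python
# Assumed example values (not from the post):
node_mem_mb = 16384   # yarn.nodemanager.resource.memory-mb
node_vcores = 8       # yarn.nodemanager.resource.cpu-vcores
map_mem_mb = 2048     # mapreduce.map.memory.mb
map_vcores = 1        # mapreduce.map.cpu.vcores

# Concurrent map containers on one node = whichever resource runs out first.
concurrent_maps = min(node_mem_mb // map_mem_mb, node_vcores // map_vcores)
print(concurrent_maps)  # 8
```

With these numbers the node is memory- and vcore-balanced at 8 concurrent map tasks; raising mapreduce.map.memory.mb to 4096 would halve that to 4.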
How to recover namenode metadata using backed up copies of fsimage/edits files
Backing up namenode metadata involves taking a copy of the fsimage and edits file(s). So in a DR scenario where the namenode information is lost, how does one recover the namenode metadata using the...
Reply To: How to recover namenode metadata using backed up copies of...
Hi n c, as long as you have the same configs, fsimage, and edits, you should be able to just put these files on another machine running a namenode (same version, of course). Make sure the new machine...
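The restore the reply describes is essentially a copy of the backed-up metadata files into the new machine's dfs.namenode.name.dir while the NameNode is stopped; on startup the NameNode loads the fsimage and replays the edits against it. The sketch below simulates that copy with temp directories as stand-ins for the real paths; the directory layout, file names, and transaction ids are hypothetical, not from the post:

```shell
# Simulated restore: temp dirs stand in for the backup location and
# dfs.namenode.name.dir/current (in production, stop the NameNode first).
backup=$(mktemp -d)/current
namedir=$(mktemp -d)/current
mkdir -p "$backup" "$namedir"

# Stand-ins for the backed-up metadata files (txids are hypothetical):
touch "$backup/fsimage_0000000000000000042"
touch "$backup/edits_inprogress_0000000000000000043"

# The restore itself is a straight copy into dfs.namenode.name.dir/current;
# the NameNode (same HDFS version) then replays edits against the fsimage.
cp "$backup"/* "$namedir"/
ls "$namedir"
```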