
Hue cannot create hcat tables


I have a strange Hue issue. Here it is in a nutshell:

On the initial Hue installation and configuration, I first logged into the Hue application with an account other than “hue” (I will call this account “bob”; I can’t use the real account name because it is a customer name).

I had been receiving errors from Hue when trying to create an HCat table from a file in HDFS, like:

HCatClient error on create table: {“errorDetail”:”org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=admin, access=WRITE, inode=\”/user/hive\”:hive:hdfs:drwxr-xr-x\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6449)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4251)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4221)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4194)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n)\n\tat org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:677)\n\tat org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3959)\n\tat org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:295)\n\tat org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)\n\tat org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)\n\tat org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604)\n\tat org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364)\n\tat org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177)\n\tat org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)\n\tat org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)\n\tat org.apache.hive.hcatalog.cli.HCatDriver.run(HCatDriver.java:43)\n\tat org.apache.hive.hcatalog.cli.HCatCli.processCmd(HCatCli.java:291)\n\tat org.apache.hive.hcatalog.cli.HCatCli.processLine(HCatCli.java:245)\n\tat org.apache.hive.hcatalog.cli.HCatCli.main(HCatCli.java:183)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:606)\n\tat 
org.apache.hadoop.util.RunJar.run(RunJar.java:221)\n\tat org.apache.hadoop.util.RunJar.main(RunJar.java:136)\nCaused by: MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=admin, access=WRITE, inode=\”/user/hive\”:hive:hdfs:drwxr-xr-x\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6449)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4251)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4221)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4194)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n)\n\tat org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1139)\n\tat org.apache.hadoop.hive.metastore.Warehouse.mkdirs(Warehouse.java:207)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1351)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1407)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:606)\n\tat org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)\n\tat com.sun.proxy.$Proxy10.create_table_with_environment_context(Unknown Source)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStoreClient.create_table_with_environment_context(HiveMetaStoreClient.java:1884)\n\tat org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:96)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:607)\n\tat 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:595)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:606)\n\tat org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)\n\tat com.sun.proxy.$Proxy11.createTable(Unknown Source)\n\tat org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:671)\n\t… 19 more\n”,”error”:”FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=admin, access=WRITE, inode=\”/user/hive\”:hive:hdfs:drwxr-xr-x\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6449)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4251)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4221)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4194)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n)”,”sqlState”:”08S01″,”errorCode”:40000,”database”:”default”,”table”:”PONTO”} (error 500)
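
For reference, the user, path, and permissions in that stack trace (user=admin, inode “/user/hive”, hive:hdfs:drwxr-xr-x) suggest the NameNode is simply refusing a mkdir for the logged-in Hue user. The kind of HDFS-level check/workaround I would expect for that first error looks roughly like this (the “admin” user name comes straight from the trace; whether this is the right fix depends on how the cluster is laid out):

# run as the hdfs superuser; names/paths taken from the stack trace above
sudo -u hdfs hdfs dfs -ls -d /user/hive
sudo -u hdfs hdfs dfs -mkdir -p /user/admin
sudo -u hdfs hdfs dfs -chown admin:hdfs /user/admin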

I went back and, within the Hue application, created the “hue” user, gave it admin rights, and ensured it was in the proper groups. I no longer get the permission errors, but when I try to create an HCat table from a file, I see a status message of “Creating table…”, then “Importing Data…”, and then Hue finishes with “No tables available”?
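
To rule out this being just a display problem in Hue, I suppose the metastore could be checked directly from the command line, something like the following (the table name would be whatever was entered in the create dialog; “PONTO” is the one from the earlier error):

# sanity check outside of Hue: does the table ever reach the metastore?
hive -e "USE default; SHOW TABLES;"
hive -e "DESCRIBE FORMATTED PONTO;"

If the table shows up there, the problem would be on the Hue/WebHCat listing side rather than in the create itself.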

This is what shows up in my hue logs during the attempt …

==> runcpserver.log <==
[13/May/2015 10:22:54 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:22:57 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:22:59 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:02 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:05 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:07 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:10 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:11 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/ HTTP/1.1″
[13/May/2015 10:23:11 +0000] access INFO 192.62.130.37 hue – “POST /hcatalog/tables/ HTTP/1.1″
[13/May/2015 10:23:11 +0000] access INFO 192.62.130.37 hue – “GET /hcatalog/table//drop HTTP/1.1″

==> access.log <==
[13/May/2015 10:22:54 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:22:57 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:22:59 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:02 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:05 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:07 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:10 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/ping_hive_job/job_1431384606043_0003/ HTTP/1.1″
[13/May/2015 10:23:11 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/ HTTP/1.1″
[13/May/2015 10:23:11 +0000] INFO 192.62.130.37 hue – “POST /hcatalog/tables/ HTTP/1.1″
[13/May/2015 10:23:11 +0000] INFO 192.62.130.37 hue – “GET /hcatalog/table//drop HTTP/1.1″
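
One thing I notice in the last lines is a POST to /hcatalog/tables/ immediately followed by a GET to /hcatalog/table//drop with an empty table name; I don’t know whether that is related. Since the Hue HCatalog app goes through WebHCat, the table list can presumably also be queried straight from Templeton on port 50111 (bigdata05 is my WebHCat host, and user.name=hue is an assumption about the doas user Hue would send):

# ask WebHCat (Templeton) directly, bypassing Hue
curl -s 'http://bigdata05:50111/templeton/v1/status'
curl -s 'http://bigdata05:50111/templeton/v1/ddl/database/default/table?user.name=hue'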

I have tried the following to troubleshoot:

– Restarted the Hue application
– Tweaked the hue.ini settings (going to HUE_URL:8000/dump_config still shows all “Syntax OK”)
– Restarted the cluster
– Rebooted the NameNode
– Stopped the cluster and removed any files or directories under the Linux /tmp directory owned by hue, hcat, or hive
– Stopped the cluster and removed any files or directories under the HDFS /tmp directory owned by hue, hcat, or hive
– Made other users admins within the Hue application and tried to create HCat tables as them, without success
– Checked all HDFS permissions (the specific directories I looked at are sketched below)
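
For the last item, these are roughly the directories I looked at (assuming the HDP 2.2 default warehouse location of /apps/hive/warehouse; adjust if yours differs):

# ownership/permission check on the directories involved
hdfs dfs -ls -d /user/hive /user/hue /tmp /apps/hive/warehouse
hdfs dfs -ls /apps/hive/warehouse
hdfs dfs -ls /user/hue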

The hcat logs don’t seem to reveal much …

[root@bigdata05 webhcat]# tail webhcat.log webhcat-console-error.log webhcat-console.log
==> webhcat.log <==
INFO | 19 May 2015 12:14:45,679 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000013 due to: Node already deleted 0000000013
INFO | 19 May 2015 12:14:45,679 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000014 due to: Node already deleted 0000000014
INFO | 19 May 2015 12:14:45,679 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000015 due to: Node already deleted 0000000015
INFO | 19 May 2015 12:14:45,679 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000016 due to: Node already deleted 0000000016
INFO | 19 May 2015 12:14:45,679 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000017 due to: Node already deleted 0000000017
INFO | 19 May 2015 12:14:45,679 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000018 due to: Node already deleted 0000000018
INFO | 19 May 2015 12:14:45,679 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000019 due to: Node already deleted 0000000019
INFO | 19 May 2015 12:14:45,680 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000020 due to: Node already deleted 0000000020
INFO | 19 May 2015 12:14:45,680 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | checkAndDelete failed for 0000000021 due to: Node already deleted 0000000021
INFO | 19 May 2015 12:14:45,682 | org.apache.hive.hcatalog.templeton.tool.ZooKeeperCleanup | Next execution: Tue May 19 23:29:42 BRT 2015

==> webhcat-console-error.log <==
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hive/lib/hive-jdbc-0.14.0.2.2.0.0-2041-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hive/lib/hive-jdbc-0.14.0.2.2.0.0-2041-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

==> webhcat-console.log <==
templeton: listening on port 50111
templeton: listening on port 50111
templeton: listening on port 50111
templeton: listening on port 50111
templeton: listening on port 50111
templeton: listening on port 50111
templeton: listening on port 50111
templeton: listening on port 50111
templeton: listening on port 50111
templeton: listening on port 50111

To reiterate: within Hue I no longer get any errors; it just seems my hue admin user (or any other user) cannot “see” HCat tables that were seemingly created successfully.

I need some help on this … I cannot reproduce the problem in the HDP 2.2 Sandbox in our lab yet.

