org.apache.hadoop.security.AccessControlException

  • When I configure HDFS as Tachyon's under filesystem, I'm unable to successfully run Shark queries that write to Tachyon: the writes to the under-FS always fail with permission errors. My Tachyon master and worker daemons (running on the Hadoop name node and data node machines) are configured to run as user "tachyon". On the client machine, users launch Shark either under their own user ID or under a team/group user ID (e.g., user "sense"). However, when I run a Shark query that writes its output to Tachyon (e.g., "create table imported_tachyon as select * from imported where ..."), I see the following errors:
    {code}
    java.lang.RuntimeException (java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=sense, access=WRITE, inode="/tmp/tachyon/workers":tachyon:supergroup:drwxr-xr-x
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5584)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5566)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5540)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3685)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3655)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3629)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:741)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
    )
    tachyon.util.CommonUtils.runtimeException(CommonUtils.java:246)
    tachyon.UnderFileSystemHdfs.mkdirs(UnderFileSystemHdfs.java:290)
    tachyon.client.TachyonFS.createAndGetUserUnderfsTempFolder(TachyonFS.java:319)
    tachyon.client.FileOutStream.<init>(FileOutStream.java:65)
    tachyon.client.TachyonFile.getOutStream(TachyonFile.java:77)
    shark.tachyon.TachyonOffHeapTableWriter.writeColumnPartition(TachyonOffHeapTableWriter.scala:54)
    shark.execution.MemoryStoreSinkOperator$$anonfun$2$$anonfun$apply$2.apply(MemoryStoreSinkOperator.scala:135)
    shark.execution.MemoryStoreSinkOperator$$anonfun$2$$anonfun$apply$2.apply(MemoryStoreSinkOperator.scala:134)
    scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    shark.execution.MemoryStoreSinkOperator$$anonfun$2.apply(MemoryStoreSinkOperator.scala:134)
    shark.execution.MemoryStoreSinkOperator$$anonfun$2.apply(MemoryStoreSinkOperator.scala:132)
    org.apache.spark.rdd.RDD$$anonfun$2.apply(RDD.scala:460)
    org.apache.spark.rdd.RDD$$anonfun$2.apply(RDD.scala:460)
    org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
    org.apache.spark.scheduler.Task.run(Task.scala:53)
    org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
    org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
    org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
    java.security.AccessController.doPrivileged(Native Method)
    javax.security.auth.Subject.doAs(Subject.java:415)
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
    org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:744)
    {code}
    This should not be the case, however. When I build the Hadoop cluster, I execute the following:
    {code}
    sudo -u hdfs hadoop fs -mkdir -p /tmp/tachyon/workers
    sudo -u hdfs hadoop fs -mkdir -p /tmp/tachyon/data
    sudo -u hdfs hadoop fs -chown -R tachyon /tmp/tachyon
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/tachyon/workers
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/tachyon/data
    {code}
    So those directories should have the correct ownership and permissions. However, it looks like Tachyon is somehow overriding that and/or later changing the permissions:
    {code}
    root@ip-10-153-181-108:~# hadoop fs -ls /tmp
    Found 1 items
    drwxr-xr-x - tachyon supergroup 0 2014-08-15 06:49 /tmp/tachyon
    root@ip-10-153-181-108:~# hadoop fs -ls /tmp/tachyon
    Found 2 items
    drwxr-xr-x - tachyon supergroup 0 2014-08-15 06:49 /tmp/tachyon/data
    drwxr-xr-x - tachyon supergroup 0 2014-08-15 06:49 /tmp/tachyon/workers
    {code}
    In short, Tachyon should either: a) perform all writes to the /tmp directories as the daemon user (in my case "tachyon"), or b) not change the permissions on the /tmp directories back to 755, since that prevents other users from writing to them. (Hedged sketches of what each option could look like follow after this list.) Note: I'm running Tachyon 0.4.1.
    by David Rosenstrauch
  • Hadoop Cluster Deployment Using Pivotal
    via Stack Overflow by user3017176
  • Hadoop | Learning in the Open
    by Unknown author
  • Pyspark on 4 node CDH Cluster
    via Stack Overflow by Steve
  • permission
    by bull fx
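For option (a), here is a minimal sketch of what "perform the under-FS writes as the daemon user" could look like from a Java client. It assumes the cluster uses simple (non-Kerberos) authentication, where HDFS trusts the client-supplied user name; a secured cluster would instead need a real login or proxy-user configuration. The class name WriteAsDaemonUser and the sub-path under /tmp/tachyon/workers are hypothetical; the user names and the owning directory come from the report above.
{code}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class WriteAsDaemonUser {
  public static void main(String[] args) throws Exception {
    // Run the HDFS calls as the "tachyon" daemon user instead of the Shark
    // user ("sense"), so the permission check is made against the directory
    // owner rather than against a user with only r-x access.
    UserGroupInformation tachyonUgi = UserGroupInformation.createRemoteUser("tachyon");

    tachyonUgi.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Hypothetical per-user temp folder, analogous to the one the stack
        // trace shows Tachyon creating for each client.
        Path userTmp = new Path("/tmp/tachyon/workers/sense");
        fs.mkdirs(userTmp);  // checked against owner "tachyon", so WRITE is allowed
        return null;
      }
    });
  }
}
{code}
On simple-auth clusters, setting the HADOOP_USER_NAME environment variable has a similar effect for command-line clients.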

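For option (b), here is a sketch of putting the mode back to 1777 after the directories have been (re)created as 755. It is just a programmatic counterpart of the "hadoop fs -chmod -R 1777" commands in the report, and it would need to run as the directory owner ("tachyon") or as an HDFS superuser; the class name RestoreTachyonTmpPermissions is illustrative.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class RestoreTachyonTmpPermissions {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // rwxrwxrwt, i.e. mode 1777: world-writable with the sticky bit set, so
    // every user can create entries but only an entry's owner can remove it.
    FsPermission worldWritableSticky =
        new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL, true);

    for (String dir : new String[] {"/tmp/tachyon/workers", "/tmp/tachyon/data"}) {
      Path path = new Path(dir);
      if (!fs.exists(path)) {
        fs.mkdirs(path);  // created with the client's umask applied, hence the chmod below
      }
      // setPermission applies the mode exactly, unlike mkdirs, which is
      // subject to fs.permissions.umask-mode.
      fs.setPermission(path, worldWritableSticky);
    }
    fs.close();
  }
}
{code}
This restores what the reporter's manual setup intended, but it does not stop Tachyon from recreating the folders with a narrower mode later; that part would have to change in Tachyon itself.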