org.apache.hadoop.security.AccessControlException: Permission denied: user=sense, access=WRITE, inode="/tmp/tachyon/workers":tachyon:supergroup:drwxr-xr-x

JIRA | David Rosenstrauch | 2 years ago
  1.

    When I configure HDFS as Tachyon's underfs, I'm unable to successfully run Shark queries that write to Tachyon. The writes to the underfs always fail with permission issues. I have my Tachyon master and worker daemons (running on the Hadoop name node and data node machines) configured to run as user "tachyon". Then, on the client machine, users launch Shark under their own user ID, or under a team/group user ID (e.g., user "sense"). However, when I run a Shark query that writes its output to Tachyon (e.g., "create table imported_tachyon as select * from imported where ..."), I see the following errors:

{code}
java.lang.RuntimeException (java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=sense, access=WRITE, inode="/tmp/tachyon/workers":tachyon:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5584)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5566)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5540)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3685)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3655)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3629)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:741)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
)
tachyon.util.CommonUtils.runtimeException(CommonUtils.java:246)
tachyon.UnderFileSystemHdfs.mkdirs(UnderFileSystemHdfs.java:290)
tachyon.client.TachyonFS.createAndGetUserUnderfsTempFolder(TachyonFS.java:319)
tachyon.client.FileOutStream.<init>(FileOutStream.java:65)
tachyon.client.TachyonFile.getOutStream(TachyonFile.java:77)
shark.tachyon.TachyonOffHeapTableWriter.writeColumnPartition(TachyonOffHeapTableWriter.scala:54)
shark.execution.MemoryStoreSinkOperator$$anonfun$2$$anonfun$apply$2.apply(MemoryStoreSinkOperator.scala:135)
shark.execution.MemoryStoreSinkOperator$$anonfun$2$$anonfun$apply$2.apply(MemoryStoreSinkOperator.scala:134)
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
shark.execution.MemoryStoreSinkOperator$$anonfun$2.apply(MemoryStoreSinkOperator.scala:134)
shark.execution.MemoryStoreSinkOperator$$anonfun$2.apply(MemoryStoreSinkOperator.scala:132)
org.apache.spark.rdd.RDD$$anonfun$2.apply(RDD.scala:460)
org.apache.spark.rdd.RDD$$anonfun$2.apply(RDD.scala:460)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
org.apache.spark.scheduler.Task.run(Task.scala:53)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
java.security.AccessController.doPrivileged(Native Method)
javax.security.auth.Subject.doAs(Subject.java:415)
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)
{code}

    This should not be the case, however. When I build the Hadoop cluster, I execute the following:

{code}
sudo -u hdfs hadoop fs -mkdir -p /tmp/tachyon/workers
sudo -u hdfs hadoop fs -mkdir -p /tmp/tachyon/data
sudo -u hdfs hadoop fs -chown -R tachyon /tmp/tachyon
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/tachyon/workers
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/tachyon/data
{code}

    So those directories should have the correct ownership and permissions. However, it looks like Tachyon is somehow overriding that and/or later changing the permissions:

{code}
root@ip-10-153-181-108:~# hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x   - tachyon supergroup          0 2014-08-15 06:49 /tmp/tachyon
root@ip-10-153-181-108:~# hadoop fs -ls /tmp/tachyon
Found 2 items
drwxr-xr-x   - tachyon supergroup          0 2014-08-15 06:49 /tmp/tachyon/data
drwxr-xr-x   - tachyon supergroup          0 2014-08-15 06:49 /tmp/tachyon/workers
{code}

    In short, Tachyon should either: a) perform all writes to the /tmp directories as the daemon user (in my case "tachyon"), or b) not change the permissions on those directories back to 755, since that prevents other users from writing to them. Note: I'm running Tachyon 0.4.1. (For a sketch of how an HDFS mkdirs call can silently drop a requested 1777 mode, see the example after this list of similar reports.)

    JIRA | 2 years ago | David Rosenstrauch
    org.apache.hadoop.security.AccessControlException: Permission denied: user=sense, access=WRITE, inode="/tmp/tachyon/workers":tachyon:supergroup:drwxr-xr-x
  2.

    Hadoop Cluster Deployment Using Pivotal

    Stack Overflow | 2 years ago | user3017176
    org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
  3.

    Hive can't import the flume tweet data to the Warehouse (HDFS)

    Stack Overflow | 2 years ago | user3590417
    org.apache.hadoop.hive.ql.metadata.HiveException: copyFiles: error while checking/creating destination directory!!!
  4.

    hive server 2- Exception hive.service.ServiceException: Error setting stage directories

    Stack Overflow | 2 years ago | Aaditya Raj
    org.apache.hive.service.ServiceException: Error setting stage directories
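
    Across all of these reports, the failing operation is an HDFS write or mkdirs performed by a user who is not the directory owner, against a directory whose mode has ended up 755. One plausible mechanism, relevant to the first report in particular: when directories are created through the Hadoop FileSystem Java API, mkdirs() filters the requested mode through the client-side umask (fs.permissions.umask-mode, default 022), so a daemon that recreates /tmp/tachyon/workers can silently turn an administrator's 1777 back into 755. Below is a minimal sketch of the usual guard, assuming a stock Hadoop 2.x client; the class name and hard-coded paths are illustrative, not Tachyon's actual code.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical helper: (re)creates the shared temp dirs from the first report
// and pins them to mode 1777 regardless of the client umask.
public class TachyonUnderfsDirs {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    for (String dir : new String[] {"/tmp/tachyon/workers", "/tmp/tachyon/data"}) {
      Path p = new Path(dir);
      // mkdirs() masks the requested mode with the client umask, so asking
      // for 1777 here can still yield 755 on the NameNode.
      fs.mkdirs(p, new FsPermission((short) 01777));
      // setPermission() is applied verbatim, like `hadoop fs -chmod 1777`.
      fs.setPermission(p, new FsPermission((short) 01777));
    }
    fs.close();
  }
}
{code}

    If a daemon follows mkdirs() with an explicit setPermission() like this, a pre-existing 1777 directory keeps its sticky bit; if it deletes the directory and calls mkdirs() alone, the 755 mode seen in the listings above is exactly what comes back.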

    Root Cause Analysis

    1. org.apache.hadoop.security.AccessControlException

      Permission denied: user=sense, access=WRITE, inode="/tmp/tachyon/workers":tachyon:supergroup:drwxr-xr-x

      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission()
    2. Apache Hadoop HDFS
      ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod
      1. org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
      2. org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
      3. org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
      4. org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
      5. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5584)
      6. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5566)
      7. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5540)
      8. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3685)
      9. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3655)
      10. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3629)
      11. org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:741)
      12. org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
      13. org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      13 frames
    3. Hadoop
      Server$Handler$1.run
      1. org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
      2. org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
      3. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
      4. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
      4 frames
    4. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:415)
      2 frames
    5. Hadoop
      Server$Handler.run
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
      2. org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
      2 frames