org.apache.hadoop.security.AccessControlException: Permission denied: user=ads, access=WRITE, inode="/tmp":hadoop:supergroup:drwxr-xr-x

GitHub | zhangshiyu01 | 6 months ago
Here are the best solutions we found on the Internet:
  1. How to run distributed training using yarn?

    GitHub | 6 months ago | zhangshiyu01
    org.apache.hadoop.security.AccessControlException: Permission denied: user=ads, access=WRITE, inode="/tmp":hadoop:supergroup:drwxr-xr-x
  2. Hive can't import the flume tweet data to the Warehouse (HDFS)

    Stack Overflow | 3 years ago | user3590417
    org.apache.hadoop.hive.ql.metadata.HiveException: copyFiles: error while checking/creating destination directory!!!
  3. […] with no indication what went wrong

     Details: The application creates a PFS with a base location of /temp/pfs2:

     {code}
     createDataset("pfs1", PartitionedFileSet.class, PartitionedFileSetProperties.builder()
         .setPartitioning(Partitioning.builder().addStringField("x").build())
         .setBasePath("/temp/pfs2").build());
     {code}

     When deploying this app, I get this error:

     {noformat}
     Upload failed co.cask.cdap.data2.dataset2.DatasetManagementException: Failed to add instance pfs1, details: Response code: 500, message: 'Internal Server Error', body: ''
     {noformat}

     I have to read the master logs to find out what the problem was:

     {noformat}
     2015-07-16T19:32:21,911Z INFO c.c.c.d.d.d.s.e.DatasetAdminOpHTTPHandler [unew2015-1000.dev.continuuity.net] [executor-39] DatasetAdminOpHTTPHandler:create(DatasetAdminOpHTTPHandler.java:109) - Creating dataset instance xyz.pfs1, type meta: DatasetTypeMeta{name=co.cask.cdap.api.dataset.lib.PartitionedFileSet, modules=DatasetModuleMeta{name=fileSet, className=co.cask.cdap.data2.dataset2.lib.file.FileSetModule, jarLocation=null, usesModules=, usedByModules=timePartitionedFileSet,partitionedFileSet},DatasetModuleMeta{name=orderedTable-hbase, className=co.cask.cdap.data2.dataset2.module.lib.hbase.HBaseTableModule, jarLocation=null, usesModules=, usedByModules=core,objectMappedTable,cube,usage,queueDataset},DatasetModuleMeta{name=core, className=co.cask.cdap.data2.dataset2.lib.table.CoreDatasetsModule, jarLocation=null, usesModules=orderedTable-hbase, usedByModules=timePartitionedFileSet,partitionedFileSet},DatasetModuleMeta{name=partitionedFileSet, className=co.cask.cdap.data2.dataset2.lib.partitioned.PartitionedFileSetModule, jarLocation=null, usesModules=fileSet,orderedTable-hbase,core, usedByModules=}}, props: DatasetProperties{properties=base.path=/temp/pfs2,partitioning.field.x=STRING,partitioning.fields.=x}
     2015-07-16T19:32:21,933Z ERROR c.c.c.c.HttpExceptionHandler [unew2015-1000.dev.continuuity.net] [executor-39] HttpExceptionHandler:handle(HttpExceptionHandler.java:45) - Unexpected error: request=POST /v3/namespaces/xyz/data/datasets/pfs1/admin/create user=<null>:
     org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:176)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5519)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5501)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5475)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3618)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3588)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3562)
         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:760)
         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.Subject.doAs(Subject.java:415)
         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2555)
         at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2524)
         at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:827)
         at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:823)
         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:823)
         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:816)
         at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
         at org.apache.twill.filesystem.HDFSLocation.mkdirs(HDFSLocation.java:184)
         at co.cask.cdap.data2.dataset2.lib.file.FileSetAdmin.create(FileSetAdmin.java:58)
         at co.cask.cdap.api.dataset.lib.CompositeDatasetAdmin.create(CompositeDatasetAdmin.java:64)
         at co.cask.cdap.data2.datafabric.dataset.service.executor.DatasetAdminOpHTTPHandler.create(DatasetAdminOpHTTPHandler.java:123)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at co.cask.http.HttpMethodInfo.invoke(HttpMethodInfo.java:85)
         at co.cask.http.HttpDispatcher.messageReceived(HttpDispatcher.java:41)
         at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
         at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
         at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
         at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
         at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
         at org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor$ChildExecutor.run(OrderedMemoryAwareThreadPoolExecutor.java:314)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:745)
     {noformat}

     This is a terrible user experience.

    Cask Community Issue Tracker | 2 years ago | Andreas Neumann
    org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
  4. hive server 2 - Exception hive.service.ServiceException: Error setting stage directories

    Stack Overflow | 2 years ago | Aaditya Raj
    org.apache.hive.service.ServiceException: Error setting stage directories
  5. AccessControlException when writing to HDFS from eclipse on windows

    Stack Overflow | 2 years ago | Arbi
    org.apache.hadoop.security.AccessControlException: Permission denied: user=Arbi, access=WRITE, inode="/tmp":hadoop:supergroup:drwxr-xr-x
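
    Every entry above fails the same way: a non-superuser (ads, yarn, Arbi) asks the NameNode to create a directory under an HDFS path whose mode drwxr-xr-x lets only the owner (hadoop or hdfs) write. The usual remedy is to have the superuser open up the shared scratch directory with the sticky bit set, like a local /tmp. Below is a minimal Java sketch of that fix, assuming the process runs as the HDFS superuser and that core-site.xml points fs.defaultFS at the cluster; the class name and the choice of mode 1777 are illustrative, not taken from any of the reports above.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.fs.permission.FsPermission;

        public class OpenHdfsTmp {
            public static void main(String[] args) throws Exception {
                // Assumed to run as the HDFS superuser (the "hadoop"/"hdfs" owner in the
                // traces); any other user hits the same AccessControlException here.
                Configuration conf = new Configuration(); // reads fs.defaultFS from core-site.xml
                FileSystem fs = FileSystem.get(conf);

                Path tmp = new Path("/tmp");
                if (!fs.exists(tmp)) {
                    fs.mkdirs(tmp);
                }
                // 01777 = world-writable plus sticky bit, the conventional mode for a
                // shared scratch directory (shell equivalent: hdfs dfs -chmod 1777 /tmp).
                fs.setPermission(tmp, new FsPermission((short) 01777));
            }
        }

    The one-line shell equivalent, run as the superuser, is hdfs dfs -chmod 1777 /tmp; giving each user a private staging area instead (for example hdfs dfs -mkdir -p /user/ads followed by hdfs dfs -chown ads /user/ads) avoids touching /tmp at all.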

    Root Cause Analysis

    1. org.apache.hadoop.security.AccessControlException

      Permission denied: user=ads, access=WRITE, inode="/tmp":hadoop:supergroup:drwxr-xr-x

      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission()
    2. Apache Hadoop HDFS
      ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod
      1. org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:274)
      2. org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:260)
      3. org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:241)
      4. org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:185)
      5. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5546)
      6. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5528)
      7. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5493)
      8. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3632)
      9. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3602)
      10. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3576)
      11. org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:760)
      12. org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:560)
      13. org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      13 frames
    3. Hadoop
      Server$Handler$1.run
      1. org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
      2. org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
      3. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
      4. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
      4 frames
    4. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:396)
      2 frames
    5. Hadoop
      Server$Handler.run
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1550)
      2. org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
      2 frames
    6. Java RT
      Constructor.newInstance
      1. sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      2. sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      3. sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      4. java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      4 frames
    7. Hadoop
      RemoteException.unwrapRemoteException
      1. org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
      2. org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
      2 frames
    8. Apache Hadoop HDFS
      DistributedFileSystem$16.doCall
      1. org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2567)
      2. org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2536)
      3. org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:835)
      4. org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:831)
      4 frames
    9. Hadoop
      FileSystemLinkResolver.resolve
      1. org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
      1 frame
    10. Apache Hadoop HDFS
      DistributedFileSystem.mkdirs
      1. org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:831)
      2. org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:824)
      2 frames
    11. Hadoop
      FileSystem.mkdirs
      1. org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
      2. org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:595)
      2 frames
    12. org.apache.hadoop
      Client.main
      1. org.apache.hadoop.yarn.dmlc.Client.setupCacheFiles(Client.java:134)
      2. org.apache.hadoop.yarn.dmlc.Client.run(Client.java:282)
      3. org.apache.hadoop.yarn.dmlc.Client.main(Client.java:348)
      3 frames
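
    The last three frames show where the failing call originates: dmlc's YARN client calls FileSystem.mkdirs under /tmp while staging its cache files, and the NameNode's FSPermissionChecker rejects the WRITE in checkAncestorAccess because the client runs as ads rather than hadoop. When you cannot change the directory's mode, the other common workaround on a simple-auth (non-Kerberos) cluster is to perform the write as a user the NameNode will accept. A hedged sketch, assuming simple authentication; the path /tmp/dmlc-cache is made up for illustration:

        import java.security.PrivilegedExceptionAction;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.security.UserGroupInformation;

        public class MkdirsAsOwner {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // With simple auth the NameNode trusts the client-supplied user name,
                // so act as the inode owner from the trace ("hadoop").
                UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
                ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
                    FileSystem fs = FileSystem.get(conf);
                    // The same mkdirs that Client.setupCacheFiles issues as "ads".
                    fs.mkdirs(new Path("/tmp/dmlc-cache"));
                    return null;
                });
            }
        }

    Exporting HADOOP_USER_NAME=hadoop before launching the client has the same effect without code changes; neither trick works on a Kerberized cluster, where the authenticated principal determines the user.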