java.lang.RuntimeException: Unable to retrieve Hadoop configuration for key fs.s3n.awsAccessKeyId

Stack Overflow | V. Samma | 3 months ago
  1. Use S3DistCp to copy file from S3 to EMR

    Stack Overflow | 3 months ago | V. Samma
    java.lang.RuntimeException: Unable to retrieve Hadoop configuration for key fs.s3n.awsAccessKeyId
  2. s3distcp fails on CDH 4.2

    Stack Overflow | 4 years ago | bocse
    java.lang.RuntimeException: Error running job
  3. Errors when attempting to write to S3 underfs

    Google Groups | 6 months ago | Jack Kosaian
    java.lang.RuntimeException: Invalid configuration key fs.s3n.awsAccessKeyId
  4.

    I tried to set S3 as the underfs by setting the following vars under "environment" in deploy/vagrant/provision/roles/tachyon/tasks/start_mesos_framework.yml:

      TACHYON_UNDERFS_ADDRESS: "s3n://nu-spark/double-entry"
      S3_KEY: "my_aws_secret_key"
      S3_ID: "my_aws_access_key"

    And I got the following errors:

    a) For the TachyonMaster task from Mesos:

      2016-01-29 19:43:31,920 ERROR MASTER_LOGGER (MetricsConfig.java:loadConfigFile) - Error loading metrics configuration file.
      2016-01-29 19:43:31,923 ERROR MASTER_LOGGER (TachyonMaster.java:main) - Uncaught exception terminating Master
      java.lang.IllegalArgumentException: All eligible Under File Systems were unable to create an instance for the given path: s3n://nu-spark/double-entry
      java.lang.RuntimeException: Invalid configuration key fs.s3n.awsAccessKeyId.
        at tachyon.underfs.UnderFileSystemRegistry.create(UnderFileSystemRegistry.java:132)
        at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:100)
        at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:83)
        at tachyon.master.TachyonMaster.connectToUFS(TachyonMaster.java:412)
        at tachyon.master.TachyonMaster.startMasters(TachyonMaster.java:280)
        at tachyon.master.TachyonMaster.start(TachyonMaster.java:261)
        at tachyon.master.TachyonMaster.main(TachyonMaster.java:64)
        at tachyon.mesos.TachyonMasterExecutor$1.run(TachyonMasterExecutor.java:71)

    b) For the TachyonWorker task on Mesos:

      2016-01-29 19:33:32,310 ERROR WORKER_LOGGER (ClientBase.java:connect) - Failed to connect (29) to BlockMaster master @ TachyonMaster/10.187.83.93:19998 : java.net.ConnectException: Connection refused
      2016-01-29 19:33:32,311 ERROR WORKER_LOGGER (TachyonWorker.java:main) - Failed to initialize the block worker, exiting.
      java.io.IOException: Failed to connect to BlockMaster master @ TachyonMaster/10.187.83.93:19998 after 29 attempts
        at tachyon.ClientBase.connect(ClientBase.java:134)
        at tachyon.client.WorkerBlockMasterClient.getId(WorkerBlockMasterClient.java:101)
        at tachyon.worker.WorkerIdRegistry.registerWithBlockMaster(WorkerIdRegistry.java:59)
        at tachyon.worker.block.BlockWorker.<init>(BlockWorker.java:200)
        at tachyon.worker.TachyonWorker.main(TachyonWorker.java:42)
        at tachyon.mesos.TachyonWorkerExecutor$1.run(TachyonWorkerExecutor.java:71)

    The Tachyon Mesos framework does not pass the configured access key and secret key from JAVA_OPTS to the running Tachyon tasks. As a workaround, I put the access key and secret key into my own compiled and uploaded tachyon.tar.gz (in tachyon-env.sh) and then it worked (see the sketch after this list).

    JIRA | 10 months ago | Renan Capaverde
    java.lang.RuntimeException: Invalid configuration key fs.s3n.awsAccessKeyId.
  5. Configuring Mesos with Tachyon Framework

    Google Groups | 10 months ago | Renan Capaverde
    java.lang.RuntimeException: Invalid configuration key fs.s3n.awsAccessKeyId.
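    The Tachyon failures in items 4 and 5 come down to the master and worker JVMs not seeing the s3n credential properties. Below is a minimal, hedged sketch of what the tachyon-env.sh workaround achieves (the class name and values are placeholders, not from the report): the credentials must be present as JVM system properties on every Tachyon process, which is normally done by adding -Dfs.s3n.awsAccessKeyId=... and -Dfs.s3n.awsSecretAccessKey=... to TACHYON_JAVA_OPTS.

      // Minimal sketch (hypothetical class, placeholder values): the effect of the
      // tachyon-env.sh workaround is that the Tachyon master/worker JVMs start with
      // these system properties set, so the s3n under filesystem can be created
      // instead of failing with "Invalid configuration key fs.s3n.awsAccessKeyId".
      public final class TachyonS3nCredentials {
          public static void main(String[] args) {
              System.setProperty("fs.s3n.awsAccessKeyId", "my_aws_access_key");      // placeholder
              System.setProperty("fs.s3n.awsSecretAccessKey", "my_aws_secret_key");  // placeholder
              System.out.println("s3n credential properties set for this JVM");
          }
      }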


    Root Cause Analysis

    1. java.lang.RuntimeException

      Unable to retrieve Hadoop configuration for key fs.s3n.awsAccessKeyId

      at com.amazon.external.elasticmapreduce.s3distcp.ConfigurationCredentials.getConfigOrThrow()
    2. com.amazon.external
      S3DistCp.run
      1. com.amazon.external.elasticmapreduce.s3distcp.ConfigurationCredentials.getConfigOrThrow(ConfigurationCredentials.java:29)
      2. com.amazon.external.elasticmapreduce.s3distcp.ConfigurationCredentials.<init>(ConfigurationCredentials.java:35)
      3. com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.createInputFileListS3(S3DistCp.java:85)
      4. com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.createInputFileList(S3DistCp.java:60)
      5. com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.run(S3DistCp.java:529)
      6. com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.run(S3DistCp.java:216)
      6 frames
    3. Hadoop
      ToolRunner.run
      1. org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
      2. org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
      2 frames
    4. com.amazon.external
      Main.main
      1. com.amazon.external.elasticmapreduce.s3distcp.Main.main(Main.java:12)
      1 frame
    5. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:606)
      4 frames
    6. Hadoop
      RunJar.main
      1. org.apache.hadoop.util.RunJar.run(RunJar.java:221)
      2. org.apache.hadoop.util.RunJar.main(RunJar.java:136)
      2 frames