org.apache.hadoop.fs.s3.S3Exception

  • Steps to reproduce:
    1. Launch 16 nodes (m2.2xlarge).
    2. Load via HDFS: s3n://h2o-airlines-unpacked/allyears.csv
    3. Run GLM on allyears with the corresponding columns ignored.
    4. Load s3n://h2o-airlines-unpacked/year2012.csv

    The UI reports:

    Got Exception DistributedException, with msg from /10.159.2.113:54321;
    java.lang.RuntimeException: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Encountered too many S3 Internal Server errors (6), aborting request.
    while mapping key $00000000401700000000$s3n://h2o-airlines-unpacked/year2012.csv

    The log reports:

    04:38:36.223 FJ-9-18 ERRR WATER:
    org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Encountered too many S3 Internal Server errors (6), aborting request.
        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieve(Jets3tNativeFileSystemStore.java:154)
        at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.fs.s3native.$Proxy5.retrieve(Unknown Source)
        at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:111)
        at org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:76)
        at org.apache.hadoop.fs.BufferedFSInputStream.skip(BufferedFSInputStream.java:56)
        at java.io.FilterInputStream.skip(FilterInputStream.java:125)
        at com.google.common.io.ByteStreams.skipFully(ByteStreams.java:683)
        at water.persist.PersistHdfs$2.call(PersistHdfs.java:153)
        at water.persist.PersistHdfs.run(PersistHdfs.java:239)
        at water.persist.PersistHdfs.load(PersistHdfs.java:143)
        at water.Value.loadPersist(Value.java:190)
        at water.Value.memOrLoad(Value.java:88)
        at water.parser.DParseTask$VAChunkDataIn.getChunkData(DParseTask.java:138)
        at water.parser.CsvParser.parallelParse(CsvParser.java:416)
        at water.parser.DParseTask.map(DParseTask.java:545)
        at water.MRTask.lcompute(MRTask.java:66)
        at water.DRemoteTask.compute2(DRemoteTask.java:91)
        at water.H2O$H2OCountedCompleter.compute(H2O.java:683)
        at jsr166y.CountedCompleter.exec(CountedCompleter.java:429)
        at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
        at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
        at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
        at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
    Caused by: org.jets3t.service.S3ServiceException: Encountered too many S3 Internal Server errors (6), aborting request.
        at org.jets3t.service.S3Service.sleepOnInternalError(S3Service.java:344)
        at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:374)
        at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRestGet(RestS3Service.java:686)
        at org.jets3t.service.impl.rest.httpclient.RestS3Service.getObjectImpl(RestS3Service.java:1558)
        at org.jets3t.service.impl.rest.httpclient.RestS3Service.getObjectImpl(RestS3Service.java:1501)
        at org.jets3t.service.S3Service.getObject(S3Service.java:1876)
        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieve(Jets3tNativeFileSystemStore.java:144)
        ... 27 more
    via Michal Malohlava
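    The caused-by frames (S3Service.sleepOnInternalError) show the jets3t client giving up after hitting its internal-error retry ceiling while S3 kept returning 500s. If the failures are transient, one workaround is to raise jets3t's retry limits via a jets3t.properties file on the Hadoop classpath. The property names below come from the jets3t configuration reference, but the values are illustrative, so verify both against the jets3t version bundled with your Hadoop:

    ```properties
    # jets3t.properties — place on the Hadoop/H2O classpath so the
    # s3n:// connector's embedded jets3t client picks it up.

    # How many S3 "Internal Server" (HTTP 500) responses to tolerate
    # before aborting — this is the "(6)" in the error message.
    s3service.internal-error-retry-max=10

    # Low-level HTTP retry count for transient connection failures.
    httpclient.retry-max=10
    ```

    This does not fix the underlying S3-side errors; it only gives the client more headroom to ride them out.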
  • Unable to use hadoop distcp with Riak
    via psterk
  • S3 GET failed for Pentaho Kettle
    via Stack Overflow by Sarang Manjrekar
  • Artifactory leaks connections during certain error scenarios, which can lead to connection pool exhaustion. To reproduce:
    1. Set up a new Artifactory 5.1.4 instance.
    2. Configure it to use S3; for example, binarystore.xml should look like this:
    {noformat}
    <config version="v1">
      <chain template="s3"/>
      <provider id="s3" type="s3">
        <identity>XXXXXXXXX</identity>
        <credential>XXXXXXXX</credential>
        <endpoint>s3.amazonaws.com</endpoint>
        <bucketName>some-bucket-name</bucketName>
      </provider>
    </config>
    {noformat}
    3. Upload a file to a repository and make sure it gets to the bucket.
    4. If the file exists in the cache, delete it.
    5. Make the _pre folder in the cache directory unreadable (chmod 000 _pre).
    6. Attempt to download the file from Artifactory.
    7. Wait for the Java garbage collector to run (or force it with: jcmd <art-pid> GC.run).
    8. Observe the leaked connections that are cleaned up by the finalizer method:
    {noformat}
    2017-03-23 20:53:13,056 [Finalizer] [WARN ] (o.j.s.i.r.h.HttpMethodReleaseInputStream:221) - Successfully released HttpMethod in finalize(). You were lucky this time... Please ensure response data streams are always fully consumed or closed.
    {noformat}
    If enough of these incidents occur before the GC has a chance to clean up, Artifactory will run into this error:
    {noformat}
    2017-03-23 20:27:08,083 [http-nio-8081-exec-6] [ERROR] (o.a.a.f.t.j.s.S3JetS3tBinaryProvider:181) - Failed to download blob '8ab108f085be266e3f6d720253471eb9e50c259c' from s3
    org.jets3t.service.S3ServiceException: Request Error: Timeout waiting for connection from pool
        at org.jets3t.service.S3Service.getObject(S3Service.java:1470) ~[jets3t-0.9.4.jar:0.9.4]
        at org.artifactory.addon.filestore.type.jets3t.S3IamBaseService.get(S3IamBaseService.java:47)
        .....
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-util.jar:8.0.39]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
    Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
        at org.apache.http.impl.conn.tsccm.ConnPoolByRoute.getEntryBlocking(ConnPoolByRoute.java:412) ~[httpclient-4.5.1.jar:4.5.1]
        at org.apache.http.impl.conn.tsccm.ConnPoolByRoute$1.getPoolEntry(ConnPoolByRoute.java:298) ~[httpclient-4.5.1.jar:4.5.1]
        at org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager$1.getConnection(ThreadSafeClientConnManager.java:238) ~[httpclient-4.5.1.jar:4.5.1]
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:423) ~[httpclient-4.5.1.jar:4.5.1]
        at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882) ~[httpclient-4.5.1.jar:4.5.1]
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[httpclient-4.5.1.jar:4.5.1]
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[httpclient-4.5.1.jar:4.5.1]
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:328) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:279) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestGet(RestStorageService.java:1104) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestGet(RestStorageService.java:1076) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2267) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2204) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.StorageService.getObject(StorageService.java:1167) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.S3Service.getObject(S3Service.java:2674) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.S3Service.getObject(S3Service.java:89) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.StorageService.getObject(StorageService.java:552) ~[jets3t-0.9.4.jar:0.9.4]
        at org.jets3t.service.S3Service.getObject(S3Service.java:1468) ~[jets3t-0.9.4.jar:0.9.4]
        ... 98 common frames omitted
    {noformat}
    via Arturo Aparicio
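    The finalizer warning points at the root cause: a response stream that is never fully consumed or closed keeps its HTTP connection checked out of the pool until the garbage collector runs. The sketch below is a minimal, self-contained illustration of that failure mode and the try-with-resources fix; LeasedStream and the leased counter are hypothetical stand-ins for httpclient's connection manager and jets3t's HttpMethodReleaseInputStream, not Artifactory's actual code:

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.concurrent.atomic.AtomicInteger;

    public class PoolLeakDemo {
        // Counts connections currently checked out of the (simulated) pool.
        static final AtomicInteger leased = new AtomicInteger();

        // Wraps a response stream; closing it returns the connection to the
        // pool, mirroring jets3t's HttpMethodReleaseInputStream behavior.
        static class LeasedStream extends InputStream {
            private final InputStream in;
            LeasedStream(InputStream in) { this.in = in; leased.incrementAndGet(); }
            @Override public int read() throws IOException { return in.read(); }
            @Override public void close() throws IOException { in.close(); leased.decrementAndGet(); }
        }

        // Simulates fetching a blob: each call leases one connection.
        static LeasedStream get() {
            return new LeasedStream(new ByteArrayInputStream("blob".getBytes()));
        }

        public static void main(String[] args) throws IOException {
            // Leaky pattern: an error path abandons the stream without close(),
            // so the connection stays leased until finalize() rescues it.
            InputStream leaky = get();
            System.out.println("leased after leak: " + leased.get());

            // Fixed pattern: try-with-resources always releases the connection,
            // even on exceptions, after the body fully consumes the stream.
            try (LeasedStream ok = get()) {
                while (ok.read() != -1) { /* fully consume response data */ }
            }
            System.out.println("leased after try-with-resources: " + leased.get());
        }
    }
    ```

    Under load, each leaked lease permanently shrinks the effective pool until the GC happens to run, which is exactly how the "Timeout waiting for connection from pool" error above builds up.
    
    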
