alluxio.org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out

Google Groups | Chanh Le | 5 months ago
  1. Do I need to switch to FT mode?

     Google Groups | 5 months ago | Chanh Le
     alluxio.org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
  2. set TTL from spark (see the TTL sketch after this list)

     Google Groups | 6 months ago | Antonio Si
     alluxio.org.apache.thrift.transport.TTransportException
  3. Spark on Tachyon (Alluxio): Frame size (273247862) larger than max length (16777216) (see the frame-size sketch after this list)

     Stack Overflow | 4 months ago | Carl H
     alluxio.org.apache.thrift.transport.TTransportException: Frame size (273247862) larger than max length (16777216)!
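The TTL thread above asks how to expire Alluxio files written from Spark. Below is a minimal sketch, assuming an Alluxio 1.x client on the driver's classpath: the native client API can create a file with a TTL directly. The path, payload, and TTL value are illustrative, and the exact options class and setter should be verified against the Alluxio version in use.

    import alluxio.AlluxioURI
    import alluxio.client.file.FileSystem
    import alluxio.client.file.options.CreateFileOptions

    object TtlSketch {
      def main(args: Array[String]): Unit = {
        // Alluxio 1.x native client; verify CreateFileOptions.setTtl
        // exists in your client version.
        val fs = FileSystem.Factory.get()
        val options = CreateFileOptions.defaults()
          .setTtl(24 * 60 * 60 * 1000L) // expire the file after 24 hours (ms)
        val out = fs.createFile(new AlluxioURI("/tmp/example-with-ttl"), options)
        try out.write("payload".getBytes("UTF-8")) finally out.close()
      }
    }

Note this sets the TTL when a file is created through the native client; files written by Spark through the Hadoop-compatible alluxio:// filesystem do not pass through these options.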

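The frame-size error in thread 3 above is Thrift's default 16 MB (16777216-byte) frame cap being exceeded by a roughly 273 MB message. A sketch of one way to raise the cap from the Spark side follows; the property name alluxio.network.thrift.frame.size.bytes.max is taken from the Alluxio 1.x configuration reference and should be verified against your version, and the same value typically has to be raised in alluxio-site.properties on the master and workers as well, since both ends enforce the limit.

    import org.apache.spark.{SparkConf, SparkContext}

    object FrameSizeSketch {
      def main(args: Array[String]): Unit = {
        // Verify this property name against your Alluxio version's docs.
        val alluxioOpts = "-Dalluxio.network.thrift.frame.size.bytes.max=268435456" // 256 MB
        val conf = new SparkConf()
          .setAppName("alluxio-frame-size")
          // Driver JVM options normally must be passed at submit time
          // (spark-submit --conf spark.driver.extraJavaOptions=...);
          // set here only for illustration.
          .set("spark.driver.extraJavaOptions", alluxioOpts)
          .set("spark.executor.extraJavaOptions", alluxioOpts)
        val sc = new SparkContext(conf)
        // ... job body ...
        sc.stop()
      }
    }
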
    Root Cause Analysis

    1. alluxio.org.apache.thrift.transport.TTransportException

      java.net.SocketTimeoutException: Read timed out

      at alluxio.org.apache.thrift.transport.TIOStreamTransport.read()
    2. alluxio.org.apache.thrift
      TServiceClient.receiveBase
      1. alluxio.org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
      2. alluxio.org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
      3. alluxio.org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
      4. alluxio.org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
      5. alluxio.org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
      6. alluxio.org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
      7. alluxio.org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
      8. alluxio.org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
      9. alluxio.org.apache.thrift.protocol.TProtocolDecorator.readMessageBegin(TProtocolDecorator.java:135)
      10. alluxio.org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
      10 frames
    3. alluxio.thrift
      BlockWorkerClientService$Client.cancelBlock
      1. alluxio.thrift.BlockWorkerClientService$Client.recv_cancelBlock(BlockWorkerClientService.java:282)
      2. alluxio.thrift.BlockWorkerClientService$Client.cancelBlock(BlockWorkerClientService.java:268)
      2 frames
    4. alluxio.client.block
      BlockWorkerClient$4.call
      1. alluxio.client.block.BlockWorkerClient$4.call(BlockWorkerClient.java:167)
      2. alluxio.client.block.BlockWorkerClient$4.call(BlockWorkerClient.java:164)
      2 frames
    5. alluxio
      AbstractClient.retryRPC
      1. alluxio.AbstractClient.retryRPC(AbstractClient.java:327)
      1 frame
    6. alluxio.client.block
      RemoteBlockOutStream.cancel
      1. alluxio.client.block.BlockWorkerClient.cancelBlock(BlockWorkerClient.java:164)
      2. alluxio.client.block.RemoteBlockOutStream.cancel(RemoteBlockOutStream.java:65)
      2 frames
    7. alluxio.client.file
      FileInStream.seek
      1. alluxio.client.file.FileInStream.closeOrCancelCacheStream(FileInStream.java:339)
      2. alluxio.client.file.FileInStream.handleCacheStreamIOException(FileInStream.java:397)
      3. alluxio.client.file.FileInStream.read(FileInStream.java:214)
      4. alluxio.client.file.FileInStream.readCurrentBlockToPos(FileInStream.java:617)
      5. alluxio.client.file.FileInStream.seekInternalWithCachingPartiallyReadBlock(FileInStream.java:562)
      6. alluxio.client.file.FileInStream.seek(FileInStream.java:247)
      6 frames
    8. alluxio.hadoop
      HdfsFileInputStream.seek
      1. alluxio.hadoop.HdfsFileInputStream.seek(HdfsFileInputStream.java:324)
      1 frame
    9. Hadoop
      FSDataInputStream.seek
      1. org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:62)
      1 frame
    10. org.apache.parquet
      ParquetFileReader.readFooter
      1. org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:417)
      2. org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
      2 frames
    11. org.apache.spark
      UnsafeRowParquetRecordReader.tryInitialize
      1. org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:98)
      2. org.apache.spark.sql.execution.datasources.parquet.UnsafeRowParquetRecordReader.initialize(UnsafeRowParquetRecordReader.java:130)
      3. org.apache.spark.sql.execution.datasources.parquet.UnsafeRowParquetRecordReader.tryInitialize(UnsafeRowParquetRecordReader.java:117)
      3 frames
    12. Spark
      CoalescedRDD$$anonfun$compute$1.apply
      1. org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:169)
      2. org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:126)
      3. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      4. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      5. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      6. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      7. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      8. org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
      9. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      10. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      11. org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:96)
      12. org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:95)
      12 frames
    13. Scala
      Iterator$$anon$13.hasNext
      1. scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      1 frame
    14. org.apache.spark
      InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply
      1. org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:376)
      2. org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
      3. org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
      3 frames
    15. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      2. org.apache.spark.scheduler.Task.run(Task.scala:89)
      3. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      3 frames
    16. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
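
Reading the trace bottom-up: a Spark 1.6 task reading a Parquet footer through alluxio.hadoop.HdfsFileInputStream hit an IOException on a cache stream, and the cleanup RPC (cancelBlock against a block worker) then timed out. Since the timeout is against a block worker rather than the master, switching to FT mode (ZooKeeper-based master failover) would most likely not help here; FT mode only covers master failures. For reference, a minimal sketch of the kind of Spark 1.6 job that exercises this exact path (read Parquet from alluxio://, write partitioned output); the master host/port, paths, and partition column are illustrative.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object AlluxioParquetReadPath {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("alluxio-parquet-read"))
        val sqlContext = new SQLContext(sc)

        // Reading Parquet through alluxio:// drives FileInStream.seek/read,
        // the frames at the top of the trace above.
        val df = sqlContext.read.parquet("alluxio://master:19998/input")

        // partitionBy triggers DynamicPartitionWriterContainer.writeRows,
        // the frame where the failing read surfaced.
        df.write.partitionBy("date").parquet("alluxio://master:19998/output")

        sc.stop()
      }
    }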