Solutions on the web

via Google Groups by Emran Talukder, 1 year ago
via Stack Overflow by jackar, 2 years ago
via Apache's JIRA Issue Tracker by Yin Huai, 1 year ago
via Apache's JIRA Issue Tracker by Erik Selin, 1 year ago
alluxio.exception.InvalidPathException: Path /demo.parquet/_temporary/0/_temporary/attempt_201608280418_0002_m_000002_0/division=CENTRAL DIVISION/region=BIG SOUTH REGION/part-r-00002-72473d11-052c-48b2-aef0-c2a8e9c98a7b.snappy.parquet is invalid
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at alluxio.exception.AlluxioException.fromThrift(AlluxioException.java:99)
	at alluxio.AbstractClient.retryRPC(AbstractClient.java:326)
	at alluxio.client.file.FileSystemMasterClient.createFile(FileSystemMasterClient.java:109)
	at alluxio.client.file.BaseFileSystem.createFile(BaseFileSystem.java:97)
	at alluxio.hadoop.AbstractFileSystem.create(AbstractFileSystem.java:153)
	at alluxio.hadoop.FileSystem.create(FileSystem.java:25)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:176)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:160)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:289)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:262)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetFileFormat.scala:548)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:138)
	at org.apache.spark.sql.execution.datasources.BaseWriterContainer.newOutputWriter(WriterContainer.scala:131)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.org$apache$spark$sql$execution$datasources$DynamicPartitionWriterContainer$$newOutputWriter(WriterContainer.scala:361)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer$$anonfun$writeRows$4.apply$mcV$sp(WriterContainer.scala:428)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer$$anonfun$writeRows$4.apply(WriterContainer.scala:416)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer$$anonfun$writeRows$4.apply(WriterContainer.scala:416)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1325)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:438)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
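One plausible reading of the failing path above is that the partition directory names Spark derives from the column values (`division=CENTRAL DIVISION`, `region=BIG SOUTH REGION`) contain spaces, which a strict path validator such as Alluxio's may reject. If that is the cause, a workaround is to normalize the partition column values before calling `partitionBy`. The helper below is a hypothetical sketch (`sanitize_partition_value` is not part of Alluxio or Spark), shown as plain Python for illustration:

```python
import re

def sanitize_partition_value(value: str, replacement: str = "_") -> str:
    """Hypothetical helper: replace runs of whitespace and other
    characters outside [A-Za-z0-9._-] so the resulting partition
    directory name avoids characters a strict path validator
    might reject."""
    return re.sub(r"[^A-Za-z0-9._-]+", replacement, value.strip())

# The partition values visible in the failing path above:
print(sanitize_partition_value("CENTRAL DIVISION"))   # CENTRAL_DIVISION
print(sanitize_partition_value("BIG SOUTH REGION"))   # BIG_SOUTH_REGION
```

In a Spark job one would apply the equivalent transformation to the `division` and `region` columns (e.g. via a UDF or `regexp_replace`) before `df.write.partitionBy("division", "region")`, assuming the invalid-path error indeed stems from those characters rather than from Alluxio mount or configuration issues.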