com.univocity.parsers.common.TextParsingException

Error processing input: org.apache.spark.TaskKilledException - null
Parser Configuration: CsvParserSettings:
	Column reordering enabled=true
	Empty value=null
	Header extraction enabled=false
	Headers=[C0, C1, C2, C3, C4, C5, C6, C7, C8, C9, C10]
	Ignore leading whitespaces=false
	Ignore trailing whitespaces=false
	Input buffer size=128
	Input reading on separate thread=false
	Line separator detection enabled=false
	Maximum number of characters per column=1000
	Maximum number of columns=20
	Null value=
	Number of records to read=all
	Parse unescaped quotes=true
	Row processor=none
	Selected fields=none
	Skip empty lines=true
Format configuration: CsvFormat:
	Comment character=\0
	Field delimiter=\t
	Line separator (normalized)=\n
	Line separator sequence=\n
	Quote character="
	Quote escape character=quote escape
	Quote escape escape character=\0
line=706, char=197760. Content parsed: [mexic]
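Two details in the dump above matter for diagnosis. The wrapped cause is TaskKilledException, which means Spark cancelled this task mid-read (often because some other task in the job already failed), so the root error may sit elsewhere in the driver log. Independently of that, the configuration shows "Maximum number of characters per column=1000", and hitting such a per-field cap is the classic trigger for TextParsingException in general. The snippet below is only a minimal analogy of that second failure mode using Python's standard csv module (not the Spark/univocity code path): it imposes a 1000-character field limit and feeds the reader a tab-separated row with an oversized field.

```python
import csv
import io

# Analogy only: Python's csv module enforces a per-field cap via
# csv.field_size_limit, similar in spirit to univocity's
# "Maximum number of characters per column" setting from the dump.
csv.field_size_limit(1000)

# Tab-delimited input, matching "Field delimiter=\t"; the second row
# contains a 2000-character field that exceeds the limit.
data = "short\tfield\n" + ("x" * 2000) + "\tother\n"
reader = csv.reader(io.StringIO(data), delimiter="\t")

try:
    rows = list(reader)
except csv.Error as e:
    # e.g. "field larger than field limit (1000)"
    print("parse aborted:", e)
```

Raising the limit (or, in Spark, the equivalent reader option) makes the same input parse cleanly.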


Solutions on the web (7)

  • via Apache's JIRA Issue Tracker by Shubhanshu Mishra, 11 months ago
  • via GitHub by mumrah, 10 months ago
  • Stack trace

    com.univocity.parsers.common.TextParsingException: Error processing input: org.apache.spark.TaskKilledException - null [parser configuration omitted; identical to the dump above], line=706, char=197760. Content parsed: [mexic]
    	at com.univocity.parsers.common.AbstractParser.handleException(AbstractParser.java:241)
    	at com.univocity.parsers.common.AbstractParser.parseNext(AbstractParser.java:356)
    	at org.apache.spark.sql.execution.datasources.csv.BulkCsvReader.next(CSVParser.scala:137)
    	at org.apache.spark.sql.execution.datasources.csv.BulkCsvReader.next(CSVParser.scala:120)
    	at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    	at org.apache.spark.sql.execution.datasources.csv.BulkCsvReader.foreach(CSVParser.scala:120)
    	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:155)
    	at org.apache.spark.sql.execution.datasources.csv.BulkCsvReader.foldLeft(CSVParser.scala:120)
    	at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:212)
    	at org.apache.spark.sql.execution.datasources.csv.BulkCsvReader.aggregate(CSVParser.scala:120)
    	at org.apache.spark.rdd.RDD$$anonfun$aggregate$1$$anonfun$22.apply(RDD.scala:1058)
    	at org.apache.spark.rdd.RDD$$anonfun$aggregate$1$$anonfun$22.apply(RDD.scala:1058)
    	at org.apache.spark.SparkContext$$anonfun$35.apply(SparkContext.scala:1827)
    	at org.apache.spark.SparkContext$$anonfun$35.apply(SparkContext.scala:1827)
    	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
    	at org.apache.spark.scheduler.Task.run(Task.scala:82)
    	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:231)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    	at java.lang.Thread.run(Thread.java:745)
    Caused by: org.apache.spark.TaskKilledException
    	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
    	at org.apache.spark.sql.execution.datasources.csv.StringIteratorReader.refill(CSVParser.scala:167)
    	at org.apache.spark.sql.execution.datasources.csv.StringIteratorReader.read(CSVParser.scala:195)
    	at org.apache.spark.sql.execution.datasources.csv.StringIteratorReader.read(CSVParser.scala:215)
    	at com.univocity.parsers.common.input.DefaultCharInputReader.reloadBuffer(DefaultCharInputReader.java:81)
    	at com.univocity.parsers.common.input.AbstractCharInputReader.updateBuffer(AbstractCharInputReader.java:118)
    	at com.univocity.parsers.common.input.AbstractCharInputReader.nextChar(AbstractCharInputReader.java:180)
    	at com.univocity.parsers.csv.CsvParser.parseValue(CsvParser.java:94)
    	at com.univocity.parsers.csv.CsvParser.parseField(CsvParser.java:179)
    	at com.univocity.parsers.csv.CsvParser.parseRecord(CsvParser.java:75)
    	at com.univocity.parsers.common.AbstractParser.parseNext(AbstractParser.java:328)
    	... 18 more
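If the job keeps failing on fields longer than the 1000-character cap shown in the configuration dump, the usual remedy is to raise that cap on the reader. Below is a hedged configuration sketch, not the original job: it assumes Spark 2.x's built-in CSV data source (the one in the stack trace, org.apache.spark.sql.execution.datasources.csv) accessed through PySpark, and "data.tsv" is a placeholder path.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tsv-read").getOrCreate()

df = (spark.read
      .option("delimiter", "\t")               # matches Field delimiter=\t in the dump
      .option("header", "false")               # Header extraction enabled=false
      .option("maxColumns", "20")              # Maximum number of columns=20
      .option("maxCharsPerColumn", "1000000")  # raise the 1000-char per-column cap
      .csv("data.tsv"))                        # placeholder input path
```

Note that because the immediate cause here is TaskKilledException, it is worth checking the full driver log first: this task may have been killed only because a sibling task hit the real error.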
