
Solutions on the web

via GitHub by tarzanek, 1 year ago
Document contains at least one immense term in field="full" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[101

via GitHub by boliza, 3 months ago
startOffset must be non-negative, and endOffset must be >= startOffset, and offsets must not go backwards startOffset=1,endOffset=5,lastStartOffset=4 for field 'content'

via GitHub by ywjno, 1 year ago
first position increment must be > 0 (got 0) for field 'text'

via Stack Overflow by rolando, 1 year ago
Document contains at least one immense term in field="text" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[99

via grokbase.com by Unknown author, 2 years ago
Document contains at least one immense term in field="content__s_i_suggest" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first

via apache.org by Unknown author, 2 years ago
Document contains at least one immense term in field="content__s_i_suggest" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first
org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 89226
	at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:263)
	at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:150)
	at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:762)
	at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:417)
	at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:373)
	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:232)
	at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:449)
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1492)
	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1271)
	at org.opensolaris.opengrok.index.IndexDatabase.addFile(IndexDatabase.java:635)
	at org.opensolaris.opengrok.index.IndexDatabase.indexDown(IndexDatabase.java:883)
	at org.opensolaris.opengrok.index.IndexDatabase.indexDown(IndexDatabase.java:848)
	at org.opensolaris.opengrok.index.IndexDatabase.indexDown(IndexDatabase.java:848)
	at org.opensolaris.opengrok.index.IndexDatabase.indexDown(IndexDatabase.java:848)
	at org.opensolaris.opengrok.index.IndexDatabase.indexDown(IndexDatabase.java:848)
	at org.opensolaris.opengrok.index.IndexDatabase.indexDown(IndexDatabase.java:848)
	at org.opensolaris.opengrok.index.IndexDatabase.indexDown(IndexDatabase.java:848)
	at org.opensolaris.opengrok.index.IndexDatabase.update(IndexDatabase.java:397)
	at org.opensolaris.opengrok.index.IndexDatabase$2.run(IndexDatabase.java:184)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
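The root cause behind this trace is a single token whose UTF-8 encoding exceeds Lucene's hard per-term limit of 32,766 bytes. The proper fix is in the analyzer chain (for example, Lucene's `LengthFilter` can drop over-long tokens before they reach the index). As a plain-JDK sketch of the underlying byte-boundary issue, the hypothetical helper below (not part of Lucene) truncates a string's UTF-8 encoding to fit the limit without splitting a multi-byte character:

```java
import java.nio.charset.StandardCharsets;

public class TermTruncator {
    // Lucene rejects terms whose UTF-8 encoding exceeds 32766 bytes
    // (the "bytes can be at most 32766 in length" in the trace above).
    static final int MAX_TERM_BYTES = 32766;

    /**
     * Truncates s so its UTF-8 encoding is at most maxBytes long,
     * backing up so the cut never lands inside a multi-byte character.
     */
    static String truncateUtf8(String s, int maxBytes) {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        if (bytes.length <= maxBytes) {
            return s;
        }
        int end = maxBytes;
        // UTF-8 continuation bytes match 10xxxxxx; step back past them
        // so the cut falls on a character boundary.
        while (end > 0 && (bytes[end] & 0xC0) == 0x80) {
            end--;
        }
        return new String(bytes, 0, end, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // An 89,226-byte term, like the one rejected in the trace:
        // "é" is 2 bytes in UTF-8, so 44,613 repeats give 89,226 bytes.
        String immense = "é".repeat(44613);
        String safe = truncateUtf8(immense, MAX_TERM_BYTES);
        int len = safe.getBytes(StandardCharsets.UTF_8).length;
        System.out.println(len);  // prints 32766
    }
}
```

Truncating is lossy, so it suits cases like OpenGrok's here, where the oversized term is almost certainly minified or generated text with no search value past the cut; otherwise prefer filtering or re-tokenizing in the analyzer.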