
Solutions on the web

via sugarcrm.com by Unknown author, 1 year ago
Document contains at least one immense term in field="description" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is
via GitHub by josh-lane, 10 months ago
Document contains at least one immense term in field="rawus" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[-removed-]...', original message: bytes can be at most 32766 in length; got 41548
via GitHub by boliza, 2 months ago
startOffset must be non-negative, and endOffset must be >= startOffset, and offsets must not go backwards startOffset=1,endOffset=5,lastStartOffset=4 for field 'content'
via GitHub by ywjno, 11 months ago
first position increment must be > 0 (got 0) for field 'text'
via Google Groups by Angel Cross, 1 year ago
Document contains at least one immense term in field="content.raw" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is
via Stack Overflow by rolando, 1 year ago
Document contains at least one immense term in field="text" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[99
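
Nearly all of the matches above point at the same root cause: a single analyzed term exceeded Lucene's hard per-term limit of 32766 UTF-8 bytes, and the exception message asks you to "correct the analyzer to not produce such terms." Below is a minimal sketch of one such correction, assuming a recent Lucene (5.x or later) with the analysis-common module on the classpath; the whitespace tokenizer and the SafeTermAnalyzer name are illustrative choices, not taken from any of the threads above. Because a UTF-8 character can occupy up to 3 bytes for BMP code points, capping tokens at 32766 / 3 = 10922 characters keeps every emitted term under the byte limit.

    // Illustrative sketch: an analyzer that silently drops over-long tokens
    // instead of letting IndexWriter reject the whole document.
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.miscellaneous.LengthFilter;

    public class SafeTermAnalyzer extends Analyzer {
        // 32766 bytes / 3 bytes per BMP character = 10922 characters.
        private static final int MAX_TERM_CHARS = 32766 / 3;

        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new WhitespaceTokenizer();
            // LengthFilter discards tokens whose character length falls
            // outside [1, MAX_TERM_CHARS].
            TokenStream filtered = new LengthFilter(source, 1, MAX_TERM_CHARS);
            return new TokenStreamComponents(source, filtered);
        }
    }

For Elasticsearch fields indexed as not_analyzed (or keyword in later versions), the mapping parameter ignore_above achieves the same effect declaratively, skipping values longer than the configured character count.
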
org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 40005
    at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
    at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:663)
    at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:359)
    at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:318)
    at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:465)
    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1526)
    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1252)
    at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:432)
    at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:364)
    at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:510)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:413)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:148)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:574)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$1.doRun(TransportShardReplicationOperationAction.java:440)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
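
The trace shows the term being rejected inside BytesRefHash.add during an Elasticsearch bulk indexing operation: a 40005-byte term arrived where at most 32766 bytes are allowed. If you cannot change the analyzer or the mapping, a client-side workaround is to truncate oversized values before sending them for indexing. The helper below is a hedged sketch in plain Java (the TermTruncator name is made up for illustration); it cuts at the 32766-byte limit while stepping back to a UTF-8 character boundary so the result remains a valid string.

    import java.nio.charset.StandardCharsets;

    public final class TermTruncator {
        // Lucene's hard limit for a single indexed term, in UTF-8 bytes.
        private static final int MAX_TERM_BYTES = 32766;

        public static String truncateUtf8(String value) {
            byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
            if (bytes.length <= MAX_TERM_BYTES) {
                return value; // already safe, return unchanged
            }
            // Step back from the limit while sitting on a UTF-8
            // continuation byte (bit pattern 10xxxxxx), so the cut
            // never splits a multi-byte character.
            int end = MAX_TERM_BYTES;
            while (end > 0 && (bytes[end] & 0xC0) == 0x80) {
                end--;
            }
            return new String(bytes, 0, end, StandardCharsets.UTF_8);
        }
    }

Applying truncateUtf8 to the offending field before building the bulk request keeps the document indexable, at the cost of losing the tail of the value; whether that trade-off is acceptable depends on how the field is queried.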