java.lang.IllegalArgumentException


  • I have a Phrase entity with multiple Tags.
{code:title=Phrase.java|borderStyle=solid}
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(name = "phrase_tag",
        joinColumns = @JoinColumn(name = "phrase_id", referencedColumnName = "id"),
        inverseJoinColumns = @JoinColumn(name = "tag_id", referencedColumnName = "id"))
@IndexedEmbedded(includeEmbeddedObjectId = false)
public Set<Tag> getTags() {
    return tags;
}
{code}
The only indexed field in Tag is {{iid}}:
{code:title=Tag.java|borderStyle=solid}
@Transient
@Facet(forField = "iid")
@Field(name = "iid", index = Index.YES, analyze = Analyze.NO)
public Long getIndexingId() {
    return getId();
}
{code}
Adding the {{@Facet}} annotation to the mapping results in the following exception while indexing:
{code}
org.hibernate.search.exception.impl.LogErrorHandler - HSEARCH000058: HSEARCH000183: Unable to index instance of type com.phrask.model.phrase.Phrase while batch indexing: id=33, hashCode=33
java.lang.IllegalArgumentException: DocValuesField "tags.iid" appears more than once in this document (only one value is allowed per field)
	at org.apache.lucene.index.NumericDocValuesWriter.addValue(NumericDocValuesWriter.java:54)
	at org.apache.lucene.index.DefaultIndexingChain.indexDocValue(DefaultIndexingChain.java:438)
	at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:392)
	at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:318)
	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
	at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:465)
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1526)
	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1252)
	at org.hibernate.search.backend.impl.lucene.works.AddWorkExecutor.performWork(AddWorkExecutor.java:54)
	at org.hibernate.search.backend.impl.lucene.LuceneBackendTaskStreamer.doWork(LuceneBackendTaskStreamer.java:53)
	at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueProcessor.applyStreamWork(LuceneBackendQueueProcessor.java:76)
	at org.hibernate.search.indexes.spi.DirectoryBasedIndexManager.performStreamOperation(DirectoryBasedIndexManager.java:107)
	at org.hibernate.search.backend.impl.StreamingOperationExecutorSelector$AddSelectionExecutor.performStreamOperation(StreamingOperationExecutorSelector.java:106)
	at org.hibernate.search.backend.impl.batch.DefaultBatchBackend.sendWorkToShards(DefaultBatchBackend.java:62)
	at org.hibernate.search.backend.impl.batch.DefaultBatchBackend.enqueueAsyncWork(DefaultBatchBackend.java:48)
	at org.hibernate.search.batchindexing.impl.IdentifierConsumerDocumentProducer.index(IdentifierConsumerDocumentProducer.java:263)
	at org.hibernate.search.batchindexing.impl.IdentifierConsumerDocumentProducer.indexAllQueue(IdentifierConsumerDocumentProducer.java:192)
	at org.hibernate.search.batchindexing.impl.IdentifierConsumerDocumentProducer.loadList(IdentifierConsumerDocumentProducer.java:169)
	at org.hibernate.search.batchindexing.impl.IdentifierConsumerDocumentProducer.loadAllFromQueue(IdentifierConsumerDocumentProducer.java:135)
	at org.hibernate.search.batchindexing.impl.IdentifierConsumerDocumentProducer.run(IdentifierConsumerDocumentProducer.java:108)
	at org.hibernate.search.batchindexing.impl.OptionallyWrapInJTATransaction.runWithErrorHandler(OptionallyWrapInJTATransaction.java:104)
	at org.hibernate.search.batchindexing.impl.ErrorHandledRunnable.run(ErrorHandledRunnable.java:32)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{code}
Removing the {{@Facet}} annotation removes the exception, along with the ability to facet on tag ids.
I found no evidence that faceting over multi-valued fields is deprecated in {{5.3.0.Final}}, so this looks like a bug.
    via Ashot Golovenko
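The error originates in Lucene, not in Hibernate Search itself: a plain {{NumericDocValuesField}} may appear only once per field name per document, while {{SortedNumericDocValuesField}} is the variant that accepts multiple values. The sketch below (a hypothetical standalone reproducer, assuming a Lucene 5.x classpath; the class name and field values are made up) shows the same {{IllegalArgumentException}} outside of Hibernate Search:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.RAMDirectory;

// Hypothetical standalone reproducer of the "appears more than once"
// constraint; assumes Lucene 5.x on the classpath.
public class MultiValuedDocValuesDemo {
    public static void main(String[] args) throws Exception {
        try (IndexWriter writer = new IndexWriter(
                new RAMDirectory(), new IndexWriterConfig(new StandardAnalyzer()))) {

            // NumericDocValuesField: at most one value per name per document.
            Document bad = new Document();
            bad.add(new NumericDocValuesField("tags.iid", 1L));
            bad.add(new NumericDocValuesField("tags.iid", 2L));
            try {
                writer.addDocument(bad); // rejected with IllegalArgumentException
            } catch (IllegalArgumentException e) {
                System.out.println(e.getMessage());
            }

            // SortedNumericDocValuesField accepts multiple values per document.
            Document ok = new Document();
            ok.add(new SortedNumericDocValuesField("tags.iid", 1L));
            ok.add(new SortedNumericDocValuesField("tags.iid", 2L));
            writer.addDocument(ok); // accepted
        }
    }
}
```

This suggests that supporting faceting on a multi-valued {{@IndexedEmbedded}} field would require the engine to emit the sorted DocValues variant instead of the single-valued one.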
  • Related issue that probably will have to be solved first: HSEARCH-1927 (Range faceting on multiple numeric values does not work). The report is otherwise identical to the one above.
    via Ashot Golovenko
  • Strange exception in Elasticsearch 1.4.3
    via Angel Cross
  • freebsd dxr howto · GitHub
    via Unknown author
  • Hello,
When serializing / deserializing an AddLuceneWork with the Avro serializer on a document containing a field type with term vector = YES, the resulting field type is not equal to the original one: the {{storeTermVectorPayloads}} attribute is set to true, but it was originally false. Attached is the org.hibernate.search.test.serialization.SerializationTest unit test taken from GitHub, where I added a new field:
{code:java}
Field newField = new Field(
        "StringTermYes",
        "String field 3",
        Store.YES,
        Field.Index.ANALYZED,
        Field.TermVector.YES );
{code}
When used in a real-life scenario (like JMS replication), this leads to the following Lucene error:
{code}
java.lang.IllegalArgumentException: cannot index term vector payloads without term vector positions (field="x_id_lowercase_copy")
	at org.apache.lucene.index.TermVectorsConsumerPerField.start(TermVectorsConsumerPerField.java:147)
	at org.apache.lucene.index.TermsHashPerField.start(TermsHashPerField.java:297)
	at org.apache.lucene.index.FreqProxTermsWriterPerField.start(FreqProxTermsWriterPerField.java:72)
	at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:616)
	at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:359)
	at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:318)
	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:239)
	at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:457)
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1511)
	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1246)
	at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:54)
	at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.performWork(LuceneBackendQueueTask.java:111)
	at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:92)
	at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:47)
	at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:145)
	at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:135)
	at java.lang.Thread.run(Thread.java:745)
{code}
Thanks,
Benoit
    via Benoit Guillon
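The Lucene error here comes from an internal consistency check on {{FieldType}}: payloads can only be stored in term vectors if positions are also stored, which is exactly the combination the deserialized field type ends up with. A hypothetical standalone sketch (assuming a Lucene 5.x classpath; the class and field names are made up for illustration) reproducing the check:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.RAMDirectory;

// Hypothetical reproducer of "cannot index term vector payloads without
// term vector positions"; assumes Lucene 5.x on the classpath.
public class TermVectorPayloadDemo {
    public static void main(String[] args) throws Exception {
        FieldType type = new FieldType(TextField.TYPE_STORED);
        type.setStoreTermVectors(true);
        // Payloads enabled without positions: the inconsistent combination
        // the Avro round-trip produces.
        type.setStoreTermVectorPayloads(true);

        Document doc = new Document();
        doc.add(new Field("x_id_lowercase_copy", "some text", type));

        try (IndexWriter writer = new IndexWriter(
                new RAMDirectory(), new IndexWriterConfig(new StandardAnalyzer()))) {
            writer.addDocument(doc); // rejected with IllegalArgumentException
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Calling {{setStoreTermVectorPositions(true)}} as well would make the field type consistent again, which is why the bug only surfaces when the serializer flips the payloads flag on its own.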
    • java.lang.IllegalArgumentException: DocValuesField "tags.iid" appears more than once in this document (only one value is allowed per field)

    Users with the same issue

    tyson925, 3 times
    Unknown visitor, 1 time