Searched on Google with the first line of a Java stack trace?

Paste your entire stack trace, including the exception message, and we can recommend more relevant solutions and speed up your debugging. Try a sample exception.

Recommended solutions based on your search

Samebug tips

  1. via oracle.com by Unknown author

    An easy way to address an OutOfMemoryError in Java is to increase the maximum heap size with the JVM option -Xmx, for example -Xmx512M. If the application simply needs more memory than the default allows, this resolves the error right away.

  2. via Stack Overflow by Eugene Yokota

    In Eclipse: go to Run --> Run Configurations --> select the project under Maven Build --> open the "JRE" tab --> enter -Xmx1024m.

    This should increase the maximum heap size for all builds of that project; the value above sets it to 1 GB. You can confirm the setting took effect with the sketch after these tips.
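
Both tips rely on the same mechanism: the -Xmx flag raises the cap on how large the JVM heap may grow. Below is a minimal sketch for confirming the flag took effect; the class name MaxHeapCheck is illustrative, and note that Runtime.maxMemory() can report slightly less than the exact -Xmx value.

    public class MaxHeapCheck {
        public static void main(String[] args) {
            // maxMemory() returns the largest heap the JVM will attempt to use,
            // which reflects -Xmx; e.g. run: java -Xmx512m MaxHeapCheck
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
        }
    }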

Solutions on the web

via wordpress.com by Unknown author, 1 year ago
via wordpress.com by Unknown author, 2 years ago
GC overhead limit exceeded
via wordpress.com by Unknown author, 1 year ago
via pentaho.com by Unknown author, 1 year ago
Requested array size exceeds VM limit
java.util.Arrays.copyOf(Arrays.java:3332)
java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
java.lang.AbstractStringBuilder.ensureCapacityInternal
via Stack Overflow by Jinfeng, 2 years ago
via Stack Overflow by joshuar, 1 year ago
Requested array size exceeds VM limit
java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.Arrays.copyOf(Arrays.java:2367)
	at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
	at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
	at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
	at java.lang.StringBuffer.append(StringBuffer.java:237)
	at java.net.URI.appendAuthority(URI.java:1852)
	at java.net.URI.appendSchemeSpecificPart(URI.java:1890)
	at java.net.URI.toString(URI.java:1922)
	at java.net.URI.<init>(URI.java:749)
	at org.apache.hadoop.fs.Path.initialize(Path.java:203)
	at org.apache.hadoop.fs.Path.<init>(Path.java:116)
	at org.apache.hadoop.fs.Path.<init>(Path.java:94)
	at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
	at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:732)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
	at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
	at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
	at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
	at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
	at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
	at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
	at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
	at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
	at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
	at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
	at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
	at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
	at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
	at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
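
The trace above comes from the HDFS shell building path strings during a recursive directory listing. "GC overhead limit exceeded" means the JVM was spending nearly all of its time in garbage collection while reclaiming almost no memory. A minimal sketch that tends to reproduce the same error class under a small heap follows; the class name and the -Xmx32m value are illustrative, not taken from the report above.

    import java.util.HashMap;
    import java.util.Map;

    // Run with a deliberately small heap, e.g.: java -Xmx32m GcOverheadDemo
    public class GcOverheadDemo {
        public static void main(String[] args) {
            Map<Integer, String> retained = new HashMap<>();
            int i = 0;
            while (true) {
                // Every entry stays reachable, so each GC cycle reclaims almost
                // nothing; once the collector spends ~98% of its time recovering
                // under 2% of the heap, HotSpot typically throws
                // java.lang.OutOfMemoryError: GC overhead limit exceeded.
                retained.put(i, String.valueOf(i));
                i++;
            }
        }
    }

Depending on JVM version and heap size, the same program may instead fail with plain "Java heap space". The check can be disabled with -XX:-UseGCOverheadLimit, but that usually only delays the failure. The related "Requested array size exceeds VM limit" entries above arise at the same Arrays.copyOf call site when a single array (here the string builder's backing buffer) would exceed the VM's maximum array length.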