
https://stackify.com/memory-leaks-java/

Introduction to Memory Leaks In Java Apps


One of the core benefits of Java is the JVM, which provides out-of-the-box memory
management. Essentially, we can create objects, and the Java Garbage Collector will
take care of allocating and freeing up memory for us.

Nevertheless, memory leaks can still occur in Java applications.

In this article, we’re going to describe the most common memory leaks, understand
their causes, and look at a few techniques to detect/avoid them. We’re also going
to use the Java YourKit profiler throughout the article, to analyze the state of
our memory at runtime.

1. What is a Memory Leak in Java?


The standard definition of a memory leak is a scenario that occurs when objects are
no longer being used by the application, but the Garbage Collector is unable to
remove them from working memory – because they’re still being referenced. As a
result, the application consumes more and more resources – which eventually leads
to a fatal OutOfMemoryError.

For a better understanding of the concept, here’s a simple visual representation:

How memory leaks happen in Java

As we can see, we have two types of objects – referenced and unreferenced; the
Garbage Collector can remove objects that are unreferenced. Referenced objects
won’t be collected, even if they’re actually no longer used by the application.

Detecting memory leaks can be difficult. A number of tools perform static analysis
to determine potential leaks, but these techniques aren’t perfect because the most
important aspect is the actual runtime behavior of the running system.

So, let’s have a focused look at some of the standard practices of preventing
memory leaks, by analyzing some common scenarios.

2. Java Heap Leaks


In this initial section, we’re going to focus on the classic memory leak scenario –
where Java objects are continuously created without being released.

A useful technique for understanding these situations is to make the memory leak
easier to reproduce by setting a lower size for the Heap. That’s why, when starting
our application, we can adjust the JVM to suit our memory needs:

-Xms<size>
-Xmx<size>
These parameters specify the initial Java Heap size as well as the maximum Heap
size.
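For instance, to make a leak reproduce quickly, we might cap the heap at a few dozen megabytes when launching the application (the class name and sizes below are only illustrative):

```shell
# Start with a small initial heap and a low hard cap (example values)
java -Xms32m -Xmx64m com.example.MemoryLeakDemo
```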

2.1. Static Field Holding On to the Object Reference


The first scenario that might cause a Java memory leak is referencing a heavy
object with a static field.

Let’s have a look at a quick example:

private Random random = new Random();

public static final ArrayList<Double> list = new ArrayList<Double>(1000000);

@Test
public void givenStaticField_whenLotsOfOperations_thenMemoryLeak() throws InterruptedException {
    for (int i = 0; i < 1000000; i++) {
        list.add(random.nextDouble());
    }

    System.gc();
    Thread.sleep(10000); // to allow the GC to do its job
}
We created our ArrayList as a static field – which will never be collected by the
JVM Garbage Collector during the lifetime of the JVM process, even after the
calculations it was used for are done. We also invoked Thread.sleep(10000) to allow
the GC to perform a full collection and try to reclaim everything that can be
reclaimed.

Let’s run the test and analyze the JVM with our profiler:

Java static memory leak

Notice how, at the very beginning, all memory is, of course, free.

Then, in just 2 seconds, the iteration process runs and finishes – loading
everything into the list (naturally this will depend on the machine you’re running
the test on).

After that, a full garbage collection cycle is triggered, and the test continues to
execute, to allow this cycle time to run and finish. As you can see, the list is
not reclaimed and the memory consumption doesn’t go down.

Let’s now see the exact same example, only this time, the ArrayList isn’t
referenced by a static variable. Instead, it’s a local variable that gets created,
used and then discarded:

@Test
public void givenNormalField_whenLotsOfOperations_thenGCWorksFine() throws InterruptedException {
    addElementsToTheList();
    System.gc();
    Thread.sleep(10000); // to allow the GC to do its job
}

private void addElementsToTheList() {
    ArrayList<Double> list = new ArrayList<Double>(1000000);
    for (int i = 0; i < 1000000; i++) {
        list.add(random.nextDouble());
    }
}
Once the method finishes its job, we’ll observe the major GC collection, around
the 50th second on the image below:

Java static no memory leak

Notice how the GC is now able to reclaim some of the memory utilized by the JVM.

How to prevent it?


Now that you understand the scenario, there are of course ways to prevent it from
occurring.

First, we need to pay close attention to our usage of static; declaring any
collection or heavy object as static ties its lifecycle to the lifecycle of the JVM
itself, and makes the entire object graph impossible to collect.

We also need to be aware of collections in general – they’re a common way to
unintentionally hold on to references for longer than we need to.
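As a sketch of the alternative (the names here are our own), keeping the collection method-local ties its lifetime to the method call, so the GC can reclaim it as soon as the method returns:

```java
import java.util.ArrayList;
import java.util.List;

public class StaticScopeSketch {

    // Anti-pattern: a static collection lives as long as the JVM process
    static final List<Double> CACHE = new ArrayList<>();

    // Preferred: the list becomes eligible for GC when the method returns
    static double sumOfSquares(int n) {
        List<Double> local = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            local.add((double) i * i);
        }
        return local.stream().mapToDouble(Double::doubleValue).sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(4)); // 0 + 1 + 4 + 9 = 14.0
    }
}
```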

2.2. Calling String.intern() on a Long String


The second group of scenarios that frequently causes memory leaks involves String
operations – specifically the String.intern() API.

Let’s have a look at a quick example:

@Test
public void givenLengthString_whenIntern_thenOutOfMemory()
        throws IOException, InterruptedException {
    Thread.sleep(15000);

    String str
        = new Scanner(new File("src/test/resources/large.txt"), "UTF-8")
            .useDelimiter("\\A").next();
    str.intern();

    System.gc();
    Thread.sleep(15000);
}
Here, we simply load a large text file into memory and then obtain a canonical
form of its contents, using .intern().

The intern API will place the str String in the JVM memory pool – where it can’t be
collected – and again, this will cause the GC to be unable to free up enough
memory:

Java String intern memory leak

We can clearly see that for the first 15 seconds the JVM is stable; then we load
the file and the JVM performs a garbage collection (20th second).

Finally, the str.intern() is invoked, which leads to the memory leak – the stable
line indicating high heap memory usage, which will never be released.

How to prevent it?


Please remember that interned String objects are stored in PermGen space – if our
application is intended to perform a lot of operations on large strings, we might
need to increase the size of the permanent generation:

-XX:MaxPermSize=<size>
The second solution is to use Java 8 – where the PermGen space is replaced by the
Metaspace – so that using intern() on Strings won’t lead to this OutOfMemoryError.

Finally, there are also several options for avoiding the .intern() API on Strings
altogether.
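One such option, sketched below with our own names, is deduplicating strings through an ordinary map under the application’s control; unlike the intern pool, this map can be cleared, or garbage-collected along with its owner, when we are done:

```java
import java.util.HashMap;
import java.util.Map;

public class Deduplicator {

    private final Map<String, String> pool = new HashMap<>();

    // Returns a canonical instance, like String.intern(), but the pool
    // is a plain object that the GC can reclaim together with this instance
    public String canonical(String s) {
        String existing = pool.putIfAbsent(s, s);
        return existing != null ? existing : s;
    }

    public static void main(String[] args) {
        Deduplicator d = new Deduplicator();
        String a = d.canonical(new String("hello"));
        String b = d.canonical(new String("hello"));
        System.out.println(a == b); // true: both refer to the same instance
    }
}
```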

2.3. Unclosed Streams


Forgetting to close a stream is a very common scenario, and certainly one that
most developers can relate to. The problem was partially removed in Java 7, when
the ability to automatically close all types of streams was introduced with the
try-with-resources statement.
Why partially? Because the try-with-resources syntax is optional:

@Test(expected = OutOfMemoryError.class)
public void givenURL_whenUnclosedStream_thenOutOfMemory()
        throws IOException, URISyntaxException {
    String str = "";
    URLConnection conn
        = new URL("http://norvig.com/big.txt").openConnection();
    BufferedReader br = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8));

    String line;
    while ((line = br.readLine()) != null) {
        str += line;
    }

    //
}
Let’s see how the memory of the application looks when loading a large file from a
URL:

Java unclosed streams memory leak

As we can see, the heap usage is gradually increasing over time – which is the
direct impact of the memory leak caused by not closing the stream.

Let’s dig a bit deeper into this scenario because it’s not as clear-cut as the
rest. Technically, an unclosed stream will result in two types of leaks – a low-
level resource leak and a memory leak.

The low-level resource leak is simply the leak of an OS-level resource – such as
file descriptors, open connections, etc. These resources can also leak, just like
memory does.

Of course, the JVM uses memory to keep track of these underlying resources as well,
which is why this also results in a memory leak.

How to prevent it?


We always need to remember to close streams manually, or to make use of the auto-
close feature introduced in Java 7:

try (BufferedReader br = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
    // further implementation
} catch (IOException e) {
    e.printStackTrace();
}
In this case, the BufferedReader will be automatically closed at the end of the try
statement, without the need to close it in an explicit finally block.

2.4. Unclosed Connections


This scenario is quite similar to the previous one, with the primary difference of
dealing with unclosed connections (e.g. to a database, to an FTP server, etc.).
Again, improper implementation can do a lot of harm, leading to memory problems.

Let’s see a quick example:

@Test(expected = OutOfMemoryError.class)
public void givenConnection_whenUnclosed_thenOutOfMemory()
        throws IOException, URISyntaxException {

    URL url = new URL("ftp://speedtest.tele2.net");
    URLConnection urlc = url.openConnection();
    InputStream is = urlc.getInputStream();
    String str = "";

    //
}
The URLConnection remains open, and the result is, predictably, a memory leak:

Java unclosed connections memory leak

Notice how the Garbage Collector cannot do anything to release unused, but
referenced memory. The situation is immediately clear after the 1st minute – the
number of GC operations rapidly decreases, causing increased Heap memory use, which
leads to the OutOfMemoryError.

How to prevent it?


The answer here is simple – we need to always close connections in a disciplined
manner.

2.5. Adding Objects with no hashCode() and equals() into a HashSet


A simple but very common example that can lead to a memory leak is to use a HashSet
with objects that are missing their hashCode() or equals() implementations.

Specifically, when we start adding duplicate objects into a Set – this will only
ever grow, instead of ignoring duplicates as it should. We also won’t be able to
remove these objects, once added.

Let’s create a simple class without either equals or hashCode:

public class Key {

    public String key;

    public Key(String key) {
        this.key = key;
    }
}
Now, let’s see the scenario:

@Test(expected = OutOfMemoryError.class)
public void givenMap_whenNoEqualsNoHashCodeMethods_thenOutOfMemory()
        throws IOException, URISyntaxException {
    Map<Object, Object> map = System.getProperties();
    while (true) {
        map.put(new Key("key"), "value");
    }
}
This simple implementation will lead to the following scenario at runtime:

Java no hashcode equals memory leak

Notice how the garbage collector stopped being able to reclaim memory around 1:40,
and notice the memory leak; the number of GC collections dropped almost fourfold
immediately after.

How to prevent it?


In these situations, the solution is simple – it’s crucial to provide the
hashCode() and equals() implementations.

One tool worth mentioning here is Project Lombok – it provides a lot of the default
implementations via annotations, e.g. @EqualsAndHashCode.
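For reference, here is a minimal corrected version of the earlier Key class, using java.util.Objects for both implementations; with these in place, a HashSet ignores duplicates as expected:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class Key {

    public final String key;

    public Key(String key) {
        this.key = key;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Key)) return false;
        return Objects.equals(key, ((Key) o).key);
    }

    @Override
    public int hashCode() {
        return Objects.hash(key);
    }

    public static void main(String[] args) {
        Set<Key> set = new HashSet<>();
        for (int i = 0; i < 1000; i++) {
            set.add(new Key("key")); // duplicates are now ignored
        }
        System.out.println(set.size()); // 1
    }
}
```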

3. How to Find Leaking Sources in Your Application


Diagnosing memory leaks is a lengthy process that requires a lot of practical
experience, debugging skills and detailed knowledge of the application.

Let’s see which techniques can help you in addition to standard profiling.

3.1. Verbose Garbage Collection


One of the quickest ways to identify a memory leak is to enable verbose garbage
collection.

By adding the -verbose:gc parameter to the JVM configuration of our application,
we enable a very detailed trace of GC. Summary reports are shown in the default
error output, which should help you understand how your memory is being managed.
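For example (flag spellings vary between JVM versions, and app.jar is a placeholder):

```shell
# Pre-Java 9: print each collection to the standard error output
java -verbose:gc -XX:+PrintGCDetails -jar app.jar

# Java 9 and later: the unified GC logging equivalent
java -Xlog:gc* -jar app.jar
```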

3.2. Do Profiling


The second technique is the one we’ve been using throughout this article – and
that’s profiling. The most popular profiler is Visual VM – which is a good place to
start moving past command-line JDK tools and into lightweight profiling.

In this article, we used another profiler – YourKit – which has some additional,
more advanced features compared to Visual VM.

3.3. Review Your Code


Finally, this is more of a general good practice than a specific technique to deal
with memory leaks.

Simply put – review your code thoroughly, practice regular code reviews and make
good use of static analysis tools to help you understand your code and your system.

Conclusion


In this tutorial, we had a practical look at how memory leaks happen on the JVM.
Understanding how these scenarios happen is the first step in the process of
dealing with them.

Then, having the techniques and tools to really see what’s happening at runtime, as
the leak occurs, is critical as well. Static analysis and careful code-focused
reviews can only do so much, and – at the end of the day – it’s the runtime that
will show you the more complex leaks that aren’t immediately identifiable in the
code.

http://www.oracle.com/technetwork/articles/java/trywithresources-401775.html

Better Resource Management with Java SE 7: Beyond Syntactic Sugar


Published May 2011
By Julien Ponge

This article presents the Java 7 answer to the automatic resource management
problem in the form of a new language construct, proposed as part of Project Coin,
called the try-with-resources statement.


Introduction


The typical Java application manipulates several types of resources such as files,
streams, sockets, and database connections. Such resources must be handled with
great care, because they acquire system resources for their operations. Thus, you
need to ensure that they get freed even in case of errors. Indeed, incorrect
resource management is a common source of failures in production applications, with
the usual pitfalls being database connections and file descriptors remaining opened
after an exception has occurred somewhere else in the code. This leads to
application servers being frequently restarted when resource exhaustion occurs,
because operating systems and server applications generally have an upper-bound
limit for resources.

Correct practices for the management of resources and exceptions in Java have been
well documented. For any resource that was successfully initialized, a
corresponding invocation to its close() method is required. This requires
disciplined usage of try/catch/finally blocks to ensure that any execution path
from a resource opening eventually reaches a call to a method that closes it.
Static analysis tools, such as FindBugs, are of great help in identifying such
types of errors. Yet, both inexperienced and experienced developers often get
resource management code wrong, resulting at best in resource leaks.

However, it should be acknowledged that writing correct code for resources requires
lots of boilerplate code in the form of nested try/catch/finally blocks, as we will
see. Writing such code correctly quickly becomes a problem of its own. Meanwhile,
other programming languages, such as Python and Ruby, have been offering language-
level facilities known as automatic resource management to address this issue.

This article presents the Java Platform, Standard Edition (Java SE) 7 answer to the
automatic resource management problem in the form of a new language construct
proposed as part of Project Coin and called the try-with-resources statement. As we
will see, it goes well beyond being just more syntactic sugar, like the enhanced
for loops of Java SE 5. Indeed, exceptions can mask each other, making root
problem causes sometimes hard to identify.

The article starts with an overview of resource and exception management before
introducing the essentials of the try-with-resources statement from the Java
developer point of view. It then shows how a class can be made ready for supporting
such statements. Next, it discusses the issues of exception masking and how Java SE
7 evolved to fix them. Finally, it demystifies the syntactic sugar behind the
language extension and provides a discussion and a conclusion.

Note: The source code for the examples described in this article can be downloaded
here: sources.zip

Managing Resources and Exceptions


Let us get started with the following code excerpt:

private void incorrectWriting() throws IOException {
    DataOutputStream out = new DataOutputStream(new FileOutputStream("data"));
    out.writeInt(666);
    out.writeUTF("Hello");
    out.close();
}
At first sight, this method does not do much harm: It opens a file called data and
then writes an integer and a string. The design of the stream classes in the
java.io package makes it possible for them to be combined using the decorator
design pattern.

As an example, we could have added an output stream for compressing data between a
DataOutputStream and a FileOutputStream. When a stream is closed, it also closes
the stream that it is decorating. Going back again to the example, when close() is
called on the instance of DataOutputStream, so is the close() method from
FileOutputStream.

There is, however, a serious issue in this method regarding the call to the close()
method. Suppose an exception is thrown while writing the integer or the string
because the underlying file system is full. Then, the close() method has no chance
of being called.

This is not so much of an issue regarding DataOutputStream, because it operates
only on instances of OutputStream to encode and write primitive data types into
arrays of bytes. The real problem is with FileOutputStream, because it internally
holds an operating system resource, a file descriptor, that is freed only when
close() is called. Hence, this method leaks resources.

This issue is mostly harmless in the case of short-lived programs, but it could
lead to an entire server having to be restarted in the case of long-running
applications, as found on Java Platform, Enterprise Edition (Java EE) application
servers, because the maximum number of open file descriptors permitted by the
underlying operating system would be reached.

A correct way to rewrite the previous method would be as follows:

private void correctWriting() throws IOException {
    DataOutputStream out = null;
    try {
        out = new DataOutputStream(new FileOutputStream("data"));
        out.writeInt(666);
        out.writeUTF("Hello");
    } finally {
        if (out != null) {
            out.close();
        }
    }
}
In every case, a thrown exception is propagated to the invoker of the method, but
the finally block after the try block ensures that the close() method of the data
output stream is called. In turn, this ensures that the underlying file output
stream’s close() method is called too, resulting in the proper freeing of the
operating system resources associated with a file.

try-with-resources Statement for the Impatient


There is, admittedly, a lot of boilerplate code in the previous example to ensure
that resources are properly closed. With more streams, network sockets, or Java
Database Connectivity (JDBC) connections, such boilerplate code makes it harder to
read the business logic of a method. Even worse, it requires discipline from
developers because it is easy to write the error handling and resource closing
logic incorrectly.

In the meantime, other programming languages have introduced constructs for
simplifying the handling of such cases. As an example, the previous method would be
written as follows in Ruby:

def writing_in_ruby
  File.open('rdata', 'w') do |f|
    f.write(666)
    f.write("Hello")
  end
end
And it would be written like this in Python:

def writing_in_python():
    with open("pdata", "w") as f:
        f.write(str(666))
        f.write("Hello")
In Ruby, the File.open method takes a block of code to be executed, and ensures
that the opened file is closed even if the block’s execution emits an exception.

The Python example is similar in that the special with statement takes an object
that has a close method and a code block. Again, it ensures proper resource closing
no matter if an exception is thrown or not.

Java SE 7 introduced a similar language construct as part of Project Coin. The
example can be rewritten as follows:

private void writingWithARM() throws IOException {
    try (DataOutputStream out
            = new DataOutputStream(new FileOutputStream("data"))) {
        out.writeInt(666);
        out.writeUTF("Hello");
    }
}
The new construct extends try blocks to declare resources, much as for loops
declare loop variables. Any resource declared in the try block opening will be
closed. Hence, the new construct shields you from having to pair try blocks with
corresponding finally blocks dedicated to proper resource management. A semicolon
separates each resource, for example:

try (
    FileOutputStream out = new FileOutputStream("output");
    FileInputStream in1 = new FileInputStream("input1");
    FileInputStream in2 = new FileInputStream("input2")
) {
    // Do something useful with those 3 streams!
} // out, in1 and in2 will be closed in any case
Finally, such a try-with-resources statement may be followed by catch and finally
blocks, just like regular try statements prior to Java SE 7.

Making an Auto-Closeable Class


As you probably guessed already, a try-with-resources statement cannot manage every
class. A new interface called java.lang.AutoCloseable was introduced in Java SE 7.
All it does is provide a void method named close() that may throw a checked
exception (java.lang.Exception). Any class willing to participate in try-with-
resources statements should implement this interface. It is strongly recommended
that implementing classes and sub-interfaces declare a more precise exception type
than java.lang.Exception, or, even better, declare no exception type at all if
invoking close() should not fail.

Such close() methods have been retrofitted into many classes of the standard Java
SE run-time environment, including the java.io, java.nio, javax.crypto,
java.security, java.util.zip, java.util.jar, javax.net, and java.sql packages. The
major advantage of this approach is that existing code continues working just as
before, while new code can easily take advantage of the try-with-resources
statement.

Let us consider the following example:

public class AutoClose implements AutoCloseable {

    @Override
    public void close() {
        System.out.println(">>> close()");
        throw new RuntimeException("Exception in close()");
    }

    public void work() throws MyException {
        System.out.println(">>> work()");
        throw new MyException("Exception in work()");
    }

    public static void main(String[] args) {
        try (AutoClose autoClose = new AutoClose()) {
            autoClose.work();
        } catch (MyException e) {
            e.printStackTrace();
        }
    }
}

class MyException extends Exception {

    public MyException() {
        super();
    }

    public MyException(String message) {
        super(message);
    }
}
The AutoClose class implements AutoCloseable and can thus be used as part of a try-
with-resources statement, as illustrated in the main() method. Intentionally, we
added some console output, and we throw exceptions both in the work() and close()
methods of the class. Running the program yields the following output:

>>> work()
>>> close()
MyException: Exception in work()
    at AutoClose.work(AutoClose.java:11)
    at AutoClose.main(AutoClose.java:16)
    Suppressed: java.lang.RuntimeException: Exception in close()
        at AutoClose.close(AutoClose.java:6)
        at AutoClose.main(AutoClose.java:17)
The output clearly proves that close() was indeed called before entering the catch
block that should handle the exception. Yet, the Java developer discovering Java SE
7 might be surprised to see the exception stack trace line prefixed by “Suppressed:
(…)”. It matches the exception thrown by the close() method, but you could never
encounter such a form of stack trace prior to Java SE 7. What is going on here?

Exception Masking

To understand what happened in the previous example, let us get rid of the try-
with-resources statement for a moment, and manually rewrite a correct resource
management code. First, let us extract the following static method to be invoked by
the main method:

public static void runWithMasking() throws MyException {
    AutoClose autoClose = new AutoClose();
    try {
        autoClose.work();
    } finally {
        autoClose.close();
    }
}
Then, let us transform the main method accordingly:

public static void main(String[] args) {
    try {
        runWithMasking();
    } catch (Throwable t) {
        t.printStackTrace();
    }
}
Now, running the program gives the following output:

>>> work()
>>> close()
java.lang.RuntimeException: Exception in close()
    at AutoClose.close(AutoClose.java:6)
    at AutoClose.runWithMasking(AutoClose.java:19)
    at AutoClose.main(AutoClose.java:52)
This code, which is idiomatic for proper resource management prior to Java SE 7,
shows the problem of one exception being masked by another exception. Indeed, the
client code to the runWithMasking() method is notified of an exception being thrown
in the close() method, while in reality, a first exception had been thrown in the
work() method.

However, only one exception can be thrown at a time, meaning that even correct code
misses information while handling exceptions. Developers lose significant time
debugging when a main exception is masked by a further exception being thrown while
closing a resource. The astute reader could question such claims, because
exceptions can be nested, after all. However, nested exceptions should be used for
causality between one exception and another, typically to wrap a low-level
exception within one aimed at higher layers of an application architecture. A good
example is a JDBC driver wrapping a socket exception into a JDBC exception. Here,
there are really two exceptions: one in work() and one in close(), and there is
absolutely no causality relationship between them.

Supporting "Suppressed" Exceptions


Because exception masking is such an important problem in practice, Java SE 7
extends exceptions so that “suppressed” exceptions can be attached to primary
exceptions. What we called a “masked” exception previously is actually an exception
to be suppressed and attached to a primary exception.

The extensions to java.lang.Throwable are as follows:

public final void addSuppressed(Throwable exception) appends a suppressed exception
to another one, so as to avoid exception masking.

public final Throwable[] getSuppressed() gets the suppressed exceptions that were
added to an exception.
These extensions were introduced especially for supporting the try-with-resources
statement and fixing exception-masking problems.
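In isolation, the two methods can be exercised like this (a minimal sketch; the exception messages are made up):

```java
public class SuppressedDemo {

    // Builds a primary exception with one suppressed exception attached,
    // mimicking what try-with-resources does when both work() and close() fail
    public static Throwable primaryWithSuppressed() {
        Throwable primary = new IllegalStateException("failure in work()");
        primary.addSuppressed(new RuntimeException("failure in close()"));
        return primary;
    }

    public static void main(String[] args) {
        Throwable t = primaryWithSuppressed();
        // The primary exception is reported first; the suppressed one rides along
        System.out.println(t.getMessage());
        System.out.println(t.getSuppressed()[0].getMessage());
    }
}
```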

Going back to the previous runWithMasking() method, let us rewrite it with support
for suppressed exceptions in mind:

public static void runWithoutMasking() throws MyException {
    AutoClose autoClose = new AutoClose();
    MyException myException = null;
    try {
        autoClose.work();
    } catch (MyException e) {
        myException = e;
        throw e;
    } finally {
        if (myException != null) {
            try {
                autoClose.close();
            } catch (Throwable t) {
                myException.addSuppressed(t);
            }
        } else {
            autoClose.close();
        }
    }
}
Clearly, this represents a fair amount of code just for properly handling two
exception-throwing methods of a single auto-closeable class! A local variable is
used to capture the primary exception, that is, the one that the work() method may
throw. If such an exception is thrown, it is captured and then thrown again
immediately, so as to delegate the remaining work to the finally block.

Entering the finally block, the reference to the primary exception is checked. If
an exception was thrown, the exception that the close() method may throw would be
attached to it as a suppressed exception. Otherwise, the close() method is invoked,
and if it throws an exception, then it actually is the primary exception, thus not
masking another one.

Let us run the modified program with this new method:

>>> work()
>>> close()
MyException: Exception in work()
    at AutoClose.work(AutoClose.java:11)
    at AutoClose.runWithoutMasking(AutoClose.java:27)
    at AutoClose.main(AutoClose.java:58)
    Suppressed: java.lang.RuntimeException: Exception in close()
        at AutoClose.close(AutoClose.java:6)
        at AutoClose.runWithoutMasking(AutoClose.java:34)
        ... 1 more
As you can see, we manually reproduced the behavior of the try-with-resources
statement earlier.

Syntactic Sugar Demystified


The runWithoutMasking() method that we implemented reproduces the behavior of the
try-with-resources statement by properly closing resources and preventing exception
masking. In reality, the Java compiler expands the following method, which uses a
try-with-resources statement, into code that matches that of runWithoutMasking():

public static void runInARM() throws MyException {
    try (AutoClose autoClose = new AutoClose()) {
        autoClose.work();
    }
}
This can be checked through decompilation. While we could compare the byte code
using javap, which is part of the Java Development Kit (JDK) binary tools, let us
use a byte code-to-Java source code decompiler instead. The code of runInARM() is
extracted as follows by the JD-GUI tool (after reformatting):

public static void runInARM() throws MyException {
    AutoClose localAutoClose = new AutoClose();
    Object localObject1 = null;
    try {
        localAutoClose.work();
    } catch (Throwable localThrowable2) {
        localObject1 = localThrowable2;
        throw localThrowable2;
    } finally {
        if (localAutoClose != null) {
            if (localObject1 != null) {
                try {
                    localAutoClose.close();
                } catch (Throwable localThrowable3) {
                    localObject1.addSuppressed(localThrowable3);
                }
            } else {
                localAutoClose.close();
            }
        }
    }
}
As we can see, the code that we manually wrote follows the same resource management
pattern as the one inferred by the compiler for a try-with-resources statement. It
should also be noted that the compiler guards against possibly null resource
references, to avoid a null pointer exception when invoking close() on a null
reference, by adding extra if statements in the finally blocks that check whether a
given resource is null. We did not do that in our manual implementation, because
there was no chance of the resource being null. The compiler systematically
generates such code, however.
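That generated null check is directly observable: a try-with-resources statement whose resource reference is null completes without a NullPointerException, because close() is simply never invoked (a contrived sketch):

```java
public class NullResourceDemo {

    public static boolean runsWithNullResource() {
        // The resource is null, so the generated finally block skips close()
        try (AutoCloseable resource = null) {
            return true; // the body runs normally
        } catch (Exception e) {
            return false; // not reached: no NullPointerException is thrown
        }
    }

    public static void main(String[] args) {
        System.out.println(runsWithNullResource()); // true
    }
}
```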

Let us now consider another example, this time involving three resources:

private static void compress(String input, String output) throws IOException {
    try (
        FileInputStream fin = new FileInputStream(input);
        FileOutputStream fout = new FileOutputStream(output);
        GZIPOutputStream out = new GZIPOutputStream(fout)
    ) {
        byte[] buffer = new byte[4096];
        int nread = 0;
        while ((nread = fin.read(buffer)) != -1) {
            out.write(buffer, 0, nread);
        }
    }
}
This method manipulates three resources to compress a file: one stream for reading,
one stream for compressing, and one stream to an output file. This code is correct
from a resource management perspective. Prior to Java SE 7, you would have had to
write code similar to the code we obtained by decompiling the class containing this
method, again with JD-GUI:

private static void compress(String paramString1, String paramString2)
        throws IOException {
    FileInputStream localFileInputStream = new FileInputStream(paramString1);
    Object localObject1 = null;
    try {
        FileOutputStream localFileOutputStream = new FileOutputStream(paramString2);
        Object localObject2 = null;
        try {
            GZIPOutputStream localGZIPOutputStream = new GZIPOutputStream(localFileOutputStream);
            Object localObject3 = null;
            try {
                byte[] arrayOfByte = new byte[4096];
                int i = 0;
                while ((i = localFileInputStream.read(arrayOfByte)) != -1) {
                    localGZIPOutputStream.write(arrayOfByte, 0, i);
                }
            } catch (Throwable localThrowable6) {
                localObject3 = localThrowable6;
                throw localThrowable6;
            } finally {
                if (localGZIPOutputStream != null) {
                    if (localObject3 != null) {
                        try {
                            localGZIPOutputStream.close();
                        } catch (Throwable localThrowable7) {
                            localObject3.addSuppressed(localThrowable7);
                        }
                    } else {
                        localGZIPOutputStream.close();
                    }
                }
            }
        } catch (Throwable localThrowable4) {
            localObject2 = localThrowable4;
            throw localThrowable4;
        } finally {
            if (localFileOutputStream != null) {
                if (localObject2 != null) {
                    try {
                        localFileOutputStream.close();
                    } catch (Throwable localThrowable8) {
                        localObject2.addSuppressed(localThrowable8);
                    }
                } else {
                    localFileOutputStream.close();
                }
            }
        }
    } catch (Throwable localThrowable2) {
        localObject1 = localThrowable2;
        throw localThrowable2;
    } finally {
        if (localFileInputStream != null) {
            if (localObject1 != null) {
                try {
                    localFileInputStream.close();
                } catch (Throwable localThrowable9) {
                    localObject1.addSuppressed(localThrowable9);
                }
            } else {
                localFileInputStream.close();
            }
        }
    }
}
The benefits of the try-with-resources statement in Java SE 7 are self-explanatory
for such an example: You have less code to write, the code is easier to read, and,
last but not least, the code does not leak resources!
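The suppressed exceptions that this generated code attaches via addSuppressed() remain available to callers through Throwable.getSuppressed(). Here is a minimal sketch of inspecting them; the FailingResource class and the messages are hypothetical, chosen only for illustration:

```java
public class SuppressedDemo {

    // Hypothetical resource whose close() always fails.
    static class FailingResource implements AutoCloseable {
        @Override
        public void close() {
            throw new IllegalStateException("failure in close()");
        }
    }

    // Runs a try-with-resources block whose body and close() both fail;
    // returns the body's exception, with close()'s failure attached as suppressed.
    static RuntimeException run() {
        try (FailingResource r = new FailingResource()) {
            throw new RuntimeException("failure in body");
        } catch (RuntimeException e) {
            return e;
        }
    }

    public static void main(String[] args) {
        RuntimeException e = run();
        System.out.println(e.getMessage());                                // failure in body
        System.out.println("suppressed: " + e.getSuppressed()[0].getMessage());
    }
}
```

Note the asymmetry: the body's exception is the primary one that propagates, and the close() failure rides along as suppressed, which is exactly the opposite of the pre-Java SE 7 behavior where the close() exception would mask the original failure.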

Discussion

The definition of the close() method in the java.lang.AutoCloseable interface


mentions that a java.lang.Exception may be thrown. However, the previous AutoClose
example declared this method without mentioning any checked exception, which we did
on purpose, partly to illustrate exceptions masking.

The specification for auto-closeable classes suggests that throwing java.lang.Exception be avoided in favor of specific checked exceptions, and that no checked exception be mentioned if the close() method is not expected to fail. It also advises not to declare any exception that should not be suppressed, with java.lang.InterruptedException being the best example. Indeed, suppressing it and attaching it to another exception is likely to cause the thread interruption event to be ignored and put an application in an inconsistent state.

A legitimate question regarding the use of the try-with-resources statement is that of its performance impact compared to manually crafting proper resource-management code. There is actually no performance impact, because the compiler infers the minimal correct code for properly handling all exceptions, as we illustrated through decompilation in the previous examples.

At the end of the day, try-with-resources statements are syntactic sugar just like
the enhanced for loops introduced in Java SE 5 for expanding loops over iterators.
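The analogy holds at the bytecode level: an enhanced for loop over a collection is expanded by the compiler into explicit Iterator calls, roughly as in this sketch (the method and names here are illustrative, not from the article):

```java
import java.util.Iterator;
import java.util.List;

public class ForDesugar {

    // The enhanced for loop "for (String s : list) { ... }" is compiled
    // into the equivalent explicit-iterator form written out below.
    static int totalLength(List<String> list) {
        int total = 0;
        for (Iterator<String> it = list.iterator(); it.hasNext(); ) {
            String s = it.next();
            total += s.length();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalLength(List.of("ab", "cde"))); // prints 5
    }
}
```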

That being said, we can limit the complexity of a try-with-resources statement expansion. Generally, the more resources a try block declares, the more complex the generated code will be. The previous compress() method could be rewritten with just two resources instead of three, generating fewer exception-handling blocks:

private static void compress(String input, String output) throws IOException {
    try (
        FileInputStream fin = new FileInputStream(input);
        GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(output))
    ) {
        byte[] buffer = new byte[4096];
        int nread = 0;
        while ((nread = fin.read(buffer)) != -1) {
            out.write(buffer, 0, nread);
        }
    }
}
As was the case before the advent of the try-with-resources statement in Java, a
general rule of thumb is that developers should always understand the trade-offs
when chaining resources instantiations. To do that, the best approach is to read
the specifications of each resource’s close() method to understand the semantics
and implications.

Going back to the writingWithARM() example at the beginning of the article, chaining is safe because DataOutputStream is unlikely to throw an exception on close(). However, this is not the case with the last example, because GZIPOutputStream attempts to write the remaining compressed data as part of the close() method. If an exception was thrown earlier while writing the compressed file, the close() method in GZIPOutputStream is more than likely to throw a further exception as well, resulting in the close() method in FileOutputStream not being called and leaking a file descriptor resource.

A good practice is to have a separate declaration in a try-with-resources statement for each resource that holds critical system resources, such as a file descriptor, a socket, or a JDBC connection, where you must make sure that a close() method is eventually called. Otherwise, provided that the related resource APIs permit this, chaining allocations is not just a convenience: It also yields more compact code while preventing resource leaks.

Conclusion

This article introduced a new language construct in Java SE 7 for the safe
management of resources. This extension has more impact than being just yet more
syntactic sugar. Indeed, it generates correct code on behalf of the developer,
eliminating the need to write boilerplate code that is easy to get wrong. More
importantly, this change has been accompanied with evolutions to attach one
exception to another, thus providing an elegant solution to the well-known problem
of exceptions masking each other.

See Also
Here are some additional resources:

Java SE 7 Preview: http://jdk7.java.net/preview/
Java SE 7 API: http://download.java.net/jdk7/docs/api/
Original proposal for Automatic Resource Management by Joshua Bloch, February 27,
2009: http://mail.openjdk.java.net/pipermail/coin-dev/2009-February/000011.html and
https://docs.google.com/View?id=ddv8ts74_3fs7483dp
Project Coin: Updated ARM Spec, July 15, 2010:
http://blogs.sun.com/darcy/entry/project_coin_updated_arm_spec
Project Coin: JSR 334 EDR Now Available, January 11, 2010:
http://blogs.sun.com/darcy/entry/project_coin_edr
Project Coin: How to Terminate try-with-resources, January 31, 2011:
http://blogs.sun.com/darcy/entry/project_coin_how_to_terminate
Project Coin: try-with-resources on a Null Resource, February 16, 2011:
http://blogs.sun.com/darcy/entry/project_coin_null_try_with
Project Coin: JSR 334 in Public Review, March 24, 2011:
http://blogs.sun.com/darcy/entry/project_coin_jsr_334_pr
Project Coin: http://openjdk.java.net/projects/coin/
JSR334 early draft preview:
http://jcp.org/aboutJava/communityprocess/edr/jsr334/index.html
JSR334 public review:
http://jcp.org/aboutJava/communityprocess/pr/jsr334/index.html
Java Puzzlers: Traps, Pitfalls, and Corner Cases by Joshua Bloch and Neal Gafter
(Addison-Wesley Professional, 2005)
Effective Java Programming Language Guide by Joshua Bloch (Addison-Wesley
Professional, 2001)
The decorator design pattern: http://en.wikipedia.org/wiki/Decorator_pattern
FindBugs, a static code analysis tool: http://findbugs.sourceforge.net/
JD-GUI, a Java byte-code decompiler: http://java.decompiler.free.fr/?q=jdgui
Python: http://www.python.org/
Ruby: http://www.ruby-lang.org/
About the Author

Julien Ponge is a long-time open source craftsman. He created the IzPack installer
framework and has participated in several other projects, including the GlassFish
application server in cooperation with Sun Microsystems. Holding a Ph.D. in
computer science from UNSW Sydney and UBP Clermont-Ferrand, he is currently an
associate professor in Computer Science and Engineering at INSA de Lyon and a
researcher as part of the INRIA Amazones team. Speaking both industrial and
academic languages, he is highly motivated in further developing synergies between
those worlds.

https://dzone.com/articles/4-techniques-for-writing-better-java

4 Techniques for Writing Better Java


Touching on topics from inheritance and overriding to final classes and methods,
here is some advice on how to be a better Java coder.
by Justin Albano · Jul. 18, 17 · Java Zone · Tutorial
Day in and day out, most of the Java we write uses a small fraction of the language's full suite of capabilities. Each Stream we instantiate and each @Autowired annotation we prefix to our instance variables suffice to accomplish most of our goals. There are times, however, when we must resort to those sparingly used portions of the language: the hidden parts that serve a specific purpose.

This article explores four techniques that can be used when caught in a bind, and that can be introduced into a code base to improve both ease of development and readability. Not all of these techniques will be applicable in every situation, or even most. For example, there may be only a few methods that lend themselves to covariant return types, or only a few generic classes that fit the pattern for intersectional generic types, while others, such as final methods and classes and try-with-resources blocks, will improve the readability and clarity of intention of most code bases. In either case, it is important not only to know that these techniques exist, but to know when to apply them judiciously.

1. Covariant Return Types


Even the most introductory Java how-to book will include pages of material on
inheritance, interfaces, abstract classes, and method overriding, but rarely do
even advanced texts explore the more intricate possibilities when overriding a
method. For example, the following snippet will not come as a surprise to even the
most novice Java developer:
public interface Animal {
    public String makeNoise();
}

public class Dog implements Animal {
    @Override
    public String makeNoise() {
        return "Woof";
    }
}

public class Cat implements Animal {
    @Override
    public String makeNoise() {
        return "Meow";
    }
}

This is the fundamental concept of polymorphism: A method on an object can be called according to its interface (Animal::makeNoise), but the actual behavior of the method call depends on the implementation type (Dog::makeNoise). For example, the output of the following method will change depending on whether a Dog object or a Cat object is passed to the method:

public class Talker {
    public static void talk(Animal animal) {
        System.out.println(animal.makeNoise());
    }
}

Talker.talk(new Dog()); // Output: Woof
Talker.talk(new Cat()); // Output: Meow

While this is a technique commonly used in many Java applications, there is a less well-known option available when overriding a method: altering the return type. Although this may appear to be an open-ended way to override a method, there are some serious constraints on the return type of an overridden method. According to the Java SE 8 Language Specification (p. 248):

If a method declaration d1 with return type R1 overrides or hides the declaration of another method d2 with return type R2, then d1 must be return-type-substitutable for d2, or a compile-time error occurs.

where return-type-substitutable (ibid., p. 240) is defined as:

1. If R1 is void then R2 is void.
2. If R1 is a primitive type then R2 is identical to R1.
3. If R1 is a reference type then one of the following is true:
   a. R1, adapted to the type parameters of d2, is a subtype of R2.
   b. R1 can be converted to a subtype of R2 by unchecked conversion.
   c. d1 does not have the same signature as d2 and R1 = |R2|.
Arguably the most interesting case is that of Rules 3.a. and 3.b.: When overriding
a method, a subtype of the return type can be declared as the overridden return
type. For example:

public interface CustomCloneable {
    public Object customClone();
}

public class Vehicle implements CustomCloneable {
    private final String model;

    public Vehicle(String model) {
        this.model = model;
    }

    @Override
    public Vehicle customClone() {
        return new Vehicle(this.model);
    }

    public String getModel() {
        return this.model;
    }
}

Vehicle originalVehicle = new Vehicle("Corvette");
Vehicle clonedVehicle = originalVehicle.customClone();
System.out.println(clonedVehicle.getModel());

Although the original return type of customClone() is Object, we are able to call getModel() on our cloned Vehicle (without an explicit cast) because we have overridden the return type of Vehicle::customClone to be Vehicle. This removes the need for a messy cast, where we know that the return type we are looking for is a Vehicle, even though it is declared to be an Object (which amounts to a safe cast based on a priori information but is, strictly speaking, unsafe):

Vehicle clonedVehicle = (Vehicle) originalVehicle.customClone();

Note that we can still declare the type of the variable to be an Object, in which case the static type reverts to the original return type of Object:

Object clonedVehicle = originalVehicle.customClone();
System.out.println(clonedVehicle.getModel()); // ERROR: getModel is not a method of Object

Note that the return type cannot be made covariant with respect to a generic type parameter, but it can be with respect to the generic class itself. For example, if the base class or interface method returns a List<Animal>, the return type of a subclass may be overridden to ArrayList<Animal>, but it may not be overridden to List<Dog>.
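That last rule can be sketched concretely; the Shelter types below are hypothetical stand-ins, not from the article:

```java
import java.util.ArrayList;
import java.util.List;

public class CovariantDemo {

    // Hypothetical base type whose method returns the general List<String>.
    static class Shelter {
        public List<String> residents() {
            return new ArrayList<>(List.of("unknown"));
        }
    }

    // The override narrows the return type to ArrayList<String>: same type
    // argument, more specific generic class, so the compiler accepts it.
    static class DogShelter extends Shelter {
        @Override
        public ArrayList<String> residents() {
            return new ArrayList<>(List.of("Rex", "Fido"));
        }
    }

    public static void main(String[] args) {
        // Callers of the subclass can use ArrayList-specific API without a cast.
        ArrayList<String> dogs = new DogShelter().residents();
        dogs.trimToSize();
        System.out.println(dogs); // prints [Rex, Fido]
    }
}
```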

2. Intersectional Generic Types


Creating a generic class is an excellent way of creating a set of classes that
interact with composed objects in a similar manner. For example, a List<T> simply
stores and retrieves objects of type T without an understanding of the nature of
the elements it contains. In some cases, we want to constrain our generic type
parameter (T) to have specific characteristics. For example, given the following
interface

public interface Writer {
    public void write();
}

We may want to create a specific collection of Writers, following the Composite pattern:

public class WriterComposite<T extends Writer> implements Writer {
    private final List<T> writers;

    public WriterComposite(List<T> writers) {
        this.writers = writers;
    }

    @Override
    public void write() {
        for (Writer writer : this.writers) {
            writer.write();
        }
    }
}

We can now traverse a tree of Writers, not knowing whether the specific Writer we
encounter is a standalone Writer (a leaf) or a collection of Writers (a composite).
What if we also wanted our composite to act as a composite for readers as well as
writers? For example, if we had the following interface

public interface Reader {
    public void read();
}

How could we modify our WriterComposite to be a ReaderWriterComposite? One technique would be to create a new interface, ReaderWriter, that fuses the Reader and Writer interfaces together:

public interface ReaderWriter extends Reader, Writer {}

Then we can modify our existing WriterComposite to be the following:

public class ReaderWriterComposite<T extends ReaderWriter> implements ReaderWriter {
    private final List<T> readerWriters;

    public ReaderWriterComposite(List<T> readerWriters) {
        this.readerWriters = readerWriters;
    }

    @Override
    public void write() {
        for (Writer writer : this.readerWriters) {
            writer.write();
        }
    }

    @Override
    public void read() {
        for (Reader reader : this.readerWriters) {
            reader.read();
        }
    }
}

Although this does accomplish our goal, we have created bloat in our code: We created an interface with the sole purpose of merging two existing interfaces together. With more and more interfaces, we can start to see a combinatorial explosion of bloat. For example, if we create a new Modifier interface, we would now need to create ReaderModifier, WriterModifier, and ReaderWriterModifier interfaces. Notice that these interfaces do not add any functionality: They simply merge existing interfaces.

To remove this bloat, we would need to be able to specify that our ReaderWriterComposite accepts generic type parameters if and only if they are both Reader and Writer. Intersectional generic types allow us to do just that. In order to specify that the generic type parameter must implement both the Reader and Writer interfaces, we use the & operator between the generic type constraints:

public class ReaderWriterComposite<T extends Reader & Writer>
        implements Reader, Writer {
    private final List<T> readerWriters;

    public ReaderWriterComposite(List<T> readerWriters) {
        this.readerWriters = readerWriters;
    }

    @Override
    public void write() {
        for (Writer writer : this.readerWriters) {
            writer.write();
        }
    }

    @Override
    public void read() {
        for (Reader reader : this.readerWriters) {
            reader.read();
        }
    }
}

Without bloating our inheritance tree, we are now able to constrain our generic
type parameter to implement multiple interfaces. Note that the same constraint can
be specified if one of the interfaces is an abstract class or concrete class. For
example, if we changed our Writer interface into an abstract class resembling the
following

public abstract class Writer {
    public abstract void write();
}

We can still constrain our generic type parameter to be both a Reader and a Writer,
but the Writer (since it is an abstract class and not an interface) must be
specified first (also note that our ReaderWriterComposite now extends the Writer
abstract class and implements the Reader interface, rather than implementing both):

public class ReaderWriterComposite<T extends Writer & Reader>
        extends Writer implements Reader {
    // Same class body as before
}

It is also important to note that this intersectional generic type can be used for
more than two interfaces (or one abstract class and more than one interface). For
example, if we wanted our composite to also include the Modifier interface, we
could write our class definition as follows:

public class ReaderWriterComposite<T extends Reader & Writer & Modifier>
        implements Reader, Writer, Modifier {
    private final List<T> things;

    public ReaderWriterComposite(List<T> things) {
        this.things = things;
    }

    @Override
    public void write() {
        for (Writer writer : this.things) {
            writer.write();
        }
    }

    @Override
    public void read() {
        for (Reader reader : this.things) {
            reader.read();
        }
    }

    @Override
    public void modify() {
        for (Modifier modifier : this.things) {
            modifier.modify();
        }
    }
}

Although it is legal to perform the above, this may be a sign of a code smell (an
object that is a Reader, a Writer, and a Modifier is likely to be something much
more specific, such as a File).

For more information on intersectional generic types, see the Java 8 language
specification.
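Intersection bounds also apply to generic methods, not just generic classes. A small sketch under that assumption (the max method and class name here are illustrative): the bound accepts only types that are both Comparable to themselves and Serializable, which String and Integer happen to satisfy.

```java
import java.io.Serializable;

public class IntersectionDemo {

    // Accepts only arguments whose type is BOTH Comparable<T> and Serializable.
    static <T extends Comparable<T> & Serializable> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        // String implements Comparable<String> and Serializable.
        System.out.println(max("apple", "banana")); // prints banana

        // Integer satisfies the same intersection bound.
        System.out.println(max(3, 5)); // prints 5
    }
}
```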

3. Auto-Closeable Classes
Creating a resource class is a common practice, but maintaining the integrity of
that resource can be a challenging prospect, especially when exception handling is
involved. For example, suppose we create a resource class, Resource, and want to
perform an action on that resource that may throw an exception (the instantiation
process may also throw an exception):

public class Resource {
    public Resource() throws Exception {
        System.out.println("Created resource");
    }

    public void someAction() throws Exception {
        System.out.println("Performed some action");
    }

    public void close() {
        System.out.println("Closed resource");
    }
}

In either case (if the exception is thrown or not thrown), we want to close our
resource to ensure there are no resource leaks. The normal process is to enclose
our close() method in a finally block, ensuring that no matter what happens, our
resource is closed before the enclosed scope of execution is completed:

Resource resource = null;
try {
    resource = new Resource();
    resource.someAction();
}
catch (Exception e) {
    System.out.println("Exception caught");
}
finally {
    if (resource != null) { // guard against the constructor having thrown
        resource.close();
    }
}

By simple inspection, there is a lot of boilerplate code that detracts from the
readability of the execution of someAction() on our Resource object. To remedy this
situation, Java 7 introduced the try-with-resources statement, whereby a resource
can be created in the try statement and is automatically closed before the try
execution scope is left. For a class to be able to use the try-with-resources, it
must implement the AutoCloseable interface:

public class Resource implements AutoCloseable {
    public Resource() throws Exception {
        System.out.println("Created resource");
    }

    public void someAction() throws Exception {
        System.out.println("Performed some action");
    }

    @Override
    public void close() {
        System.out.println("Closed resource");
    }
}

With our Resource class now implementing the AutoCloseable interface, we can clean
up our code to ensure our resource is closed prior to leaving the try execution
scope:

try (Resource resource = new Resource()) {
    resource.someAction();
}
catch (Exception e) {
    System.out.println("Exception caught");
}

Compared to the non-try-with-resources technique, this process is much less cluttered and maintains the same safety (the resource is always closed upon completion of the try execution scope). If the above try-with-resources statement is executed, we obtain the following output:

Created resource
Performed some action
Closed resource

In order to demonstrate the safety of this try-with-resources technique, we can change our someAction() method to throw an Exception:

public class Resource implements AutoCloseable {
    public Resource() throws Exception {
        System.out.println("Created resource");
    }

    public void someAction() throws Exception {
        System.out.println("Performed some action");
        throw new Exception();
    }

    @Override
    public void close() {
        System.out.println("Closed resource");
    }
}

If we rerun the try-with-resources statement again, we obtain the following output:

Created resource
Performed some action
Closed resource
Exception caught

Notice that even though an Exception was thrown while executing the someAction()
method, our resource was closed and then the Exception was caught. This ensures
that prior to leaving the try execution scope, our resource is guaranteed to be
closed. It is also important to note that a resource can implement the Closeable
interface and still use a try-with-resources statement. The difference between
implementing the AutoCloseable interface and the Closeable interface is a matter of
the type of the exception thrown from the close() method signature: Exception and
IOException, respectively. In our case, we have simply changed the signature of the
close() method to not throw an exception.
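A sketch of the Closeable variant follows (the LoggedResource class is hypothetical): Closeable extends AutoCloseable, so it still works with try-with-resources, while narrowing the declared exception of close() to IOException.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical resource implementing Closeable rather than AutoCloseable.
// Closeable permits close() to declare IOException (or nothing at all),
// but not the broader java.lang.Exception.
public class LoggedResource implements Closeable {
    final List<String> log = new ArrayList<>();

    public void someAction() {
        log.add("action");
    }

    @Override
    public void close() throws IOException {
        log.add("closed");
    }

    public static void main(String[] args) throws IOException {
        LoggedResource tracked = new LoggedResource();
        try (LoggedResource res = tracked) {
            res.someAction();
        }
        // close() has already run by the time the try scope is left.
        System.out.println(tracked.log); // prints [action, closed]
    }
}
```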

4. Final Classes and Methods


In nearly all cases, the classes we create can be extended by another developer and customized to fit that developer's needs (we can even extend our own classes), even if it was not our intent for our classes to be extended. While this suffices for most cases, there may be times when we do not want a method to be overridden or, more generally, one of our classes to be extended. For example, if we create a File class that encapsulates the reading and writing of a file on the file system, we may not want any subclasses to override our read(int bytes) and write(String data) methods (if the logic in these methods is changed, it may cause the file system to become corrupted). In this case, we mark our non-overridable methods as final:

public class File {
    public final String read(int bytes) {
        // Execute the read on the file system
        return "Some read data";
    }

    public final void write(String data) {
        // Execute the write to the file system
    }
}

Now, if another class wishes to override either the read or the write methods, a
compilation error is thrown: Cannot override the final method from File. Not only
have we documented that our methods should not be overridden, but the compiler has
also ensured that this intention is enforced at compile time.

Expanding this idea to an entire class, there may be times when we do not want a
class we create to be extended. Not only does this make every method of our class
non-extendable, but it also ensures that no subtype of our class can ever be
created. For example, if we are creating a security framework that consumes a key
generator, we may not want any outside developer to extend our key generator and
override the generation algorithm (the custom functionality may be
cryptographically inferior and compromise the system):

public final class KeyGenerator {
    private final String seed;

    public KeyGenerator(String seed) {
        this.seed = seed;
    }

    public CryptographicKey generate() {
        // ...Do some cryptographic work to generate the key...
    }
}

By making our KeyGenerator class final, the compiler will ensure that no class can extend our class and pass itself to our framework as a valid cryptographic key generator. While it may appear sufficient to simply mark the generate() method as final, this does not stop a developer from creating a custom key generator and passing it off as a valid generator. Being that our system is security-oriented, it is a good idea to be as distrustful of the outside world as possible (a clever developer might be able to change the generation algorithm by overriding the other methods of the KeyGenerator class if those methods were left open).

Although this appears to be a blatant disregard for the Open/Closed Principle (and it is), there is a good reason for doing so. As can be seen in our security example above, there are many times when we do not have the luxury of allowing the outside world to do what it wants with our application, and we must be very deliberate in our decision making about inheritance. Writers such as Josh Bloch even go so far as to say that a class should either be deliberately designed to be extended or else explicitly closed for extension (Effective Java). Although he purposely overstated this idea (see Documenting for Inheritance or Disallowing It), he makes a great point: We should be very deliberate about which of our classes should be extended, and which of our methods are open for overriding.

Conclusion
While most of the code we write utilizes only a fraction of the capabilities of Java, that fraction suffices to solve most of the problems we encounter. There are times, though, when we need to dig a little deeper into the language and dust off its forgotten or unknown parts to solve a specific problem. Some of these techniques, such as covariant return types and intersectional generic types, may be used in one-off situations, while others, such as auto-closeable resources and final methods and classes, can and should be used more often to produce more readable and more precise code. Combining these techniques with daily programming practice yields not only a clearer expression of our intentions but also better, more well-written Java.

http://www.javapractices.com/topic/TopicAction.do?Id=43

Recovering resources

Expensive resources should be reclaimed as soon as possible, by an explicit call to a clean-up method defined for this purpose. If this is not done, then system performance can degrade. In the worst cases, the system can even fail entirely. Resources include:

input-output streams
database result sets, statements, and connections
threads
graphic resources
sockets
Resources which are created locally within a method must be cleaned up within the
same method, by calling a method appropriate to the resource itself, such as close
or dispose. (The exact name of the method is arbitrary, but it usually has those
conventional names.) This is usually done automatically, using the try-with-
resources feature, added in JDK 7.
If try-with-resources isn't available, then you need to clean up resources
explicitly, by calling a clean-up method in a finally clause.
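That finally-based pattern can be sketched as follows; TrackedResource is a hypothetical stand-in for a real stream, statement, or connection, and records events so the clean-up order is visible:

```java
import java.util.ArrayList;
import java.util.List;

public class FinallyCleanup {

    // Hypothetical resource that logs its lifecycle events.
    static class TrackedResource {
        final List<String> events = new ArrayList<>();
        void use() { events.add("used"); }       // may throw for a real resource
        void close() { events.add("closed"); }   // the clean-up method
    }

    // Pre-JDK 7 pattern: the finally clause guarantees that clean-up runs
    // whether or not use() throws.
    static TrackedResource process() {
        TrackedResource resource = new TrackedResource();
        try {
            resource.use();
        } finally {
            resource.close();
        }
        return resource;
    }

    public static void main(String[] args) {
        System.out.println(process().events); // prints [used, closed]
    }
}
```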

For the case of a resource which is a field, however, there's more work to do:

implement a clean-up method which the user must call when finished with the object,
with a name such as close or dispose
the caller should be able to query an object to see if its clean-up method has been
executed
non-private methods (other than the clean-up method itself) should throw an
IllegalStateException if the clean-up method has already been invoked
as a safety net, implement finalize to call the clean-up method as well; if the
user of the class neglects to call the clean-up method, then this may allow
recovery of the resource by the system
never rely solely on finalize
This example shows a class which retains a database connection during its lifetime.
(This example is artificial. Actually writing such a class would not seem necessary
in practice, since connection pools already perform such clean-up in the
background. It's used merely to demonstrate the ideas mentioned above.)

import java.sql.*;
import java.text.*;
import java.util.*;

/**
* This class has an enforced life cycle: after destroy is
* called, no useful method can be called on this object
* without throwing an IllegalStateException.
*/
public final class DbConnection {

public DbConnection () {
//build a connection and assign it to a field
//elided.. fConnection = ConnectionPool.getInstance().getConnection();
}

/**
* Ensure the resources of this object are cleaned up in an orderly manner.
*
* The user of this class must call destroy when finished with
* the object. Calling destroy a second time is permitted, but is
* a no-operation.
*/
public void destroy() throws SQLException {
if (fIsDestroyed) {
return;
}
else{
if (fConnection != null) fConnection.close();
fConnection = null;
//flag that destroy has been called, and that
//no further calls on this object are valid
fIsDestroyed = true;
}
}

/**
* Fetches something from the db.
*
* This is an example of a non-private method which must ensure that
* <code>destroy</code> has not yet been called
* before proceeding with execution.
*/
synchronized public Object fetchBlah(String aId) throws SQLException {
validatePlaceInLifeCycle();
//..elided
return null;
}

/**
* If the user fails to call <code>destroy</code>, then implementing
* finalize will act as a safety net, but this is not foolproof.
*/
protected void finalize() throws Throwable{
try{
destroy();
}
finally{
super.finalize();
}
}

/**
* Allow the user to determine if <code>destroy</code> has been called.
*/
public boolean isDestroyed() {
return fIsDestroyed;
}

// PRIVATE

/**
* Connection which is constructed and managed by this object.
* The user of this class must call destroy in order to release this
* Connection resource.
*/
private Connection fConnection;

/**
* This object has a specific "life cycle", such that methods must be called
* in the order: others + destroy. fIsDestroyed keeps track of the lifecycle,
* and non-private methods must check this value at the start of execution.
* If destroy is called more than once, a no-operation occurs.
*/
private boolean fIsDestroyed;

/**
* Once <code>destroy</code> has been called, the services of this class
* are no longer available.
*
* @throws IllegalStateException if <code>destroy</code> has
* already been called.
*/
private void validatePlaceInLifeCycle(){
if (fIsDestroyed) {
String message = "Method cannot be called after destroy has been called.";
throw new IllegalStateException(message);
}
}
}

https://www.ibm.com/developerworks/library/j-jtp03216/index.html

Good housekeeping practices


Are your resources overstaying their welcome?

Brian Goetz
Published on March 21, 2006

Our parents used to remind us to put our toys away when we were done with them. If
you look closely enough, the motivation for such nagging was probably not so much
an abstract desire to keep things clean as much as the practical limitation that
there is only so much floor space in the house, and if it is covered with toys, it
can't be used for other things -- like walking around.

Given enough space, the motivation to clean up one's mess is lessened. The more
space you have, the less motivation you have to always keep it clean. Arlo
Guthrie's famous ballad Alice's Restaurant Massacree illustrates this point:

Havin' all that room, seein' as how they took out all the pews, they decided that
they didn't have to take out their garbage ... for a long time.
For better or worse, garbage collection can make us a little sloppy about cleaning
up after ourselves.

Explicitly releasing resources


The vast majority of resources used in Java programs are objects, and garbage
collection does a fine job of cleaning them up. Go ahead, use as many Strings as
you want. The garbage collector eventually figures out when they've outlived their
usefulness, with no help from you, and reclaims the memory they used.

On the other hand, nonmemory resources like file handles and socket handles must be
explicitly released by the program, using methods with names like close(),
destroy(), shutdown(), or release(). Some classes, such as the file handle stream
implementations in the platform class library, provide finalizers as a "safety net"
so that if the program forgets to release the resource, the finalizer can still do
the job when the garbage collector determines that the program is finished with it.
But even though file handles provide finalizers to clean up after you if you
forget, it is still better to close them explicitly when you are done with them.
Doing so closes them much earlier than finalization would, reducing the chance
of resource exhaustion.

For some resources, waiting until finalization to release them is not an option.
For virtual resources like lock acquisitions and semaphore permits, a Lock or
Semaphore is not likely to get garbage collected until it is too late; for
resources like database connections, you would surely run out of resources if you
waited for finalization. Many database servers only accept a certain number of
connections, based on licensed capacity. If a server application were to open a new
database connection for each request and then just drop it on the floor when done,
the database would likely reach its capacity long before the no-longer-needed
connections were closed by the finalizer.

Resources confined to a method


Most resources are not held for the lifetime of the application; instead, they are
acquired for the lifetime of an activity. When an application opens a file handle
to read in so it can process a document, it typically reads from the file and then
has no further need for the file handle.

In the easiest case, the resource is acquired, used, and hopefully released in the
same method call, such as the loadPropertiesBadly() method in Listing 1:

Listing 1. Incorrectly acquiring, using, and releasing a resource in a single method -- don't do this

public static Properties loadPropertiesBadly(String fileName)
        throws IOException {
    FileInputStream stream = new FileInputStream(fileName);
    Properties props = new Properties();
    props.load(stream);
    stream.close();
    return props;
}
Unfortunately, this example has a potential resource leak. If all goes well, the
stream will be closed before the method returns. But if the props.load() method
throws an IOException, then the stream will not be closed (until the garbage
collector runs its finalizer). The solution is to use the try...finally mechanism
to ensure that the stream is closed no matter what goes wrong, as shown in Listing
2:

Listing 2. Correctly acquiring, using, and releasing a resource in a single method

public static Properties loadProperties(String fileName)
        throws IOException {
    FileInputStream stream = new FileInputStream(fileName);
    try {
        Properties props = new Properties();
        props.load(stream);
        return props;
    }
    finally {
        stream.close();
    }
}
Note that the resource acquisition (opening the file) is outside the try block; if
it were placed inside the try block, then the finally block would run even if
resource acquisition threw an exception. Not only would this approach be
inappropriate (you can't release a resource you haven't acquired), but the code in
the finally block is then likely to throw an exception of its own, such as
NullPointerException. An exception thrown from a finally block supersedes the
exception that caused the block to exit, which means the original exception is lost
and cannot be used to aid in the debugging effort.
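The masking behavior described above can be demonstrated with a short, self-contained sketch (the class and method names here are invented for illustration):

```java
public class FinallyMasking {
    // Returns the message of the exception that actually escapes:
    // the one thrown from finally, not the original one.
    static String escapingMessage() {
        try {
            try {
                throw new RuntimeException("original failure");
            } finally {
                // This exception supersedes the RuntimeException above,
                // so the original failure is lost.
                throw new IllegalStateException("thrown from finally");
            }
        } catch (Exception e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(escapingMessage());  // prints "thrown from finally"
    }
}
```

The original "original failure" exception is simply gone, which is exactly why debugging a failure masked by a finally block is so frustrating.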

Not always as easy as it looks


Using finally to release resources acquired in a method is reliable but can easily
get unwieldy when multiple resources are involved. Consider a method that uses a
JDBC Connection to execute a query and iterate the ResultSet. It acquires a
Connection, uses it to create a Statement, and executes the Statement to yield a
ResultSet. But the intermediate JDBC objects Statement and ResultSet have close()
methods of their own, and they should be released when you are done with them.
However, the "obvious" way to clean up, shown in Listing 3, doesn't work:

Listing 3. Unsuccessful attempt to release multiple resources -- don't do this

public void enumerateFoo() throws SQLException {
    Statement statement = null;
    ResultSet resultSet = null;
    Connection connection = getConnection();
    try {
        statement = connection.createStatement();
        resultSet = statement.executeQuery("SELECT * FROM Foo");
        // Use resultSet
    }
    finally {
        if (resultSet != null)
            resultSet.close();
        if (statement != null)
            statement.close();
        connection.close();
    }
}
The reason this "solution" doesn't work is that the close() methods of ResultSet
and Statement can themselves throw SQLException, which could cause the later
close() statements in the finally block not to execute. That leaves you with
several choices, all of which are annoying: wrap each close() with a try..catch
block, nest the try...finally blocks as shown in Listing 4, or write some sort of
mini-framework for managing the resource acquisition and release.

Listing 4. Reliable (if unwieldy) means of releasing multiple resources

public void enumerateBar() throws SQLException {
    Statement statement = null;
    ResultSet resultSet = null;
    Connection connection = getConnection();
    try {
        statement = connection.createStatement();
        resultSet = statement.executeQuery("SELECT * FROM Bar");
        // Use resultSet
    }
    finally {
        try {
            if (resultSet != null)
                resultSet.close();
        }
        finally {
            try {
                if (statement != null)
                    statement.close();
            }
            finally {
                connection.close();
            }
        }
    }
}

private Connection getConnection() {
    return null;
}
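This article predates Java 7, but the nested-finally pattern above is exactly what the later try-with-resources statement automates: resources are closed in reverse order of acquisition, and exceptions thrown during close are attached as suppressed exceptions rather than masking the original. A minimal sketch, using invented AutoCloseable stand-ins rather than real JDBC objects:

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesSketch {
    // A trivial AutoCloseable that records the order in which close() runs.
    static class Tracked implements AutoCloseable {
        final String name;
        final List<String> log;
        Tracked(String name, List<String> log) { this.name = name; this.log = log; }
        @Override public void close() { log.add("closed " + name); }
    }

    // Opens two resources; try-with-resources closes them in reverse
    // order of acquisition, even if the body throws.
    static List<String> run() {
        List<String> log = new ArrayList<>();
        try (Tracked connection = new Tracked("connection", log);
             Tracked statement = new Tracked("statement", log)) {
            log.add("using statement");
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run());
        // [using statement, closed statement, closed connection]
    }
}
```

With real JDBC objects the shape is the same: Connection, Statement, and ResultSet all implement AutoCloseable, so the whole of Listing 4 collapses into a single try header.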
Nearly everything can throw an exception
We all know that we should use finally to release heavyweight objects like database
connections, but we're not always so careful about using it to close streams (after
all, the finalizer will get that for us, right?). It's also easy to forget to use
finally when the code that uses the resource doesn't throw checked exceptions.
Listing 5 shows the implementation of the add() method for a bounded collection
that uses Semaphore to enforce the bound and efficiently allow clients to wait for
space to become available:

Listing 5. Vulnerable implementation of a bounded collection -- don't do this

public class LeakyBoundedSet<T> {
    private final Set<T> set = ...
    private final Semaphore sem;

    public LeakyBoundedSet(int bound) {
        sem = new Semaphore(bound);
    }

    public boolean add(T o) throws InterruptedException {
        sem.acquire();
        boolean wasAdded = set.add(o);
        if (!wasAdded)
            sem.release();
        return wasAdded;
    }
}
LeakyBoundedSet first waits for a permit to be available (indicating that there is
space in the collection), then tries to add the element to the collection. If the
add operation fails because the element was already in the collection, it releases
the permit (because it did not actually use the space it had reserved).

The problem with LeakyBoundedSet doesn't necessarily jump out immediately: What if
Set.add() throws an exception? This scenario could happen because of a flaw in the
Set implementation, or a flaw in the equals() or hashCode() implementation (or the
compareTo() implementation, in the case of a SortedSet) for the element being
added, or an element already in the Set. The solution, of course, is to use finally
to release the semaphore permit; an easy enough -- but all-too-often-forgotten --
approach. These types of mistakes are rarely disclosed during testing, making them
time bombs waiting to go off. Listing 6 shows a more reliable implementation of
BoundedSet:

Listing 6. Using a Semaphore to reliably bound a Set

public class BoundedSet<T> {
    private final Set<T> set = ...
    private final Semaphore sem;

    public BoundedSet(int bound) {
        sem = new Semaphore(bound);
    }

    public boolean add(T o) throws InterruptedException {
        sem.acquire();
        boolean wasAdded = false;
        try {
            wasAdded = set.add(o);
            return wasAdded;
        }
        finally {
            if (!wasAdded)
                sem.release();
        }
    }
}
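A runnable version of Listing 6 (with the elided Set filled in as a HashSet purely for this sketch) lets you check the permit accounting directly: a duplicate add must not consume a permit.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.Semaphore;

public class BoundedSetDemo<T> {
    private final Set<T> set = new HashSet<>();  // concrete choice for this sketch
    private final Semaphore sem;

    public BoundedSetDemo(int bound) {
        sem = new Semaphore(bound);
    }

    public boolean add(T o) throws InterruptedException {
        sem.acquire();                   // reserve a slot before touching the set
        boolean wasAdded = false;
        try {
            wasAdded = set.add(o);
            return wasAdded;
        } finally {
            if (!wasAdded)
                sem.release();           // duplicate: give the reserved slot back
        }
    }

    public int permitsLeft() {
        return sem.availablePermits();
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedSetDemo<String> s = new BoundedSetDemo<>(2);
        System.out.println(s.add("a"));       // true: one permit consumed
        System.out.println(s.add("a"));       // false: permit released in finally
        System.out.println(s.permitsLeft());  // 1
    }
}
```

Because the release sits in a finally block, the permit is returned even if set.add() throws, which is exactly the time bomb Listing 5 leaves armed.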
Code auditing tools like FindBugs (see Related topics) can detect some instances of
improper resource release, such as opening a stream in a method and not closing it.
Resources with arbitrary lifecycles
For resources with arbitrary lifecycles, we're back to where we were with C --
managing resource lifecycles manually. In a server application where clients make a
persistent network connection to the server for the duration of a session (like a
multiplayer game server), any resources acquired on a per-user basis (including the
socket connection) must be released when the user logs out. Good organization can
help; if the sole reference to per-user resources is held in an ActiveUser object,
they can be released when the ActiveUser is released (whether explicitly or through
garbage collection).

Resources with arbitrary lifecycles are almost certainly going to be stored in (or
reachable from) a global collection somewhere. To avoid resource leaks, it is
therefore critical to identify when the resource is no longer needed and remove it
from this global collection. (A previous article, "Plugging memory leaks with weak
references," offers some helpful techniques.) At this point, because you know the
resource is about to be released, any nonmemory resources associated with the
resource can also be released at this time.

Resource ownership
A key technique for ensuring timely resource release is to maintain a strict
hierarchy of ownership; with ownership comes the responsibility to release the
resource. If an application creates a thread pool and the thread pool creates
threads, the threads are resources that must be released (allowed to terminate)
before the program can exit. But the application doesn't own the threads; the
thread pool does, and therefore the thread pool must take responsibility for
releasing them. Of course, it can't release them until the thread pool itself is
released by the application.

Maintaining an ownership hierarchy, where each resource owns the resources it
acquires and is responsible for releasing them, helps keep the mess from getting
out of control. A consequence of this rule is that each resource that cannot be
released solely by garbage collection, which includes any resource that directly or
indirectly owns a resource that cannot be released solely by garbage collection,
must provide some sort of lifecycle support, such as a close() method.

Finalizers
If the platform libraries provide finalizers for cleaning up open file handles,
which greatly reduces the risk of forgetting to close them explicitly, why aren't
finalizers used more often? There are a number of reasons, foremost of which is
that finalizers are very tricky to write correctly (and very easy to write
incorrectly). Not only is it difficult to code them correctly, but the timing of
finalization is not deterministic, and there is no guarantee that finalizers will
ever even run. And finalization adds overhead to instantiation and garbage
collection of finalizable objects. Don't rely on finalizers as the primary means of
releasing resources.

Summary
Garbage collection does an awful lot of the cleanup for us, but some resources
still require explicit release, such as file handles, socket handles, threads,
database connections, and semaphore permits. We can often get away with using
finally blocks to release a resource if its lifetime is tied to that of a specific
call frame, but longer-lived resources require a strategy for ensuring their
eventual release. For any object that may directly or indirectly own an object that
requires explicit release, you must provide lifecycle methods -- close(),
release(), destroy(), and the like -- to ensure reliable cleanup.

https://stackoverflow.com/questions/1567979/how-to-free-memory-in-java
Is there a way to free memory in Java, similar to C's free() function? Or is
setting the object to null and relying on GC the only option?

asked Oct 14 '09 at 17:58 by Felix
Ok... let's get one thing straight. Just because you think something is bad
practice and not something to encourage doing does not make it worthy of a vote
down. This is a clear and valid question, asking if there is a way to release
memory in Java without relying on garbage collection. While it may be discouraged
and generally not useful or a good idea, you cannot know that there are no
scenarios where it may be required without knowing what Felix knows. Felix may not
even be planning on using it. He may just want to know if it's possible. It, in no
way, deserves a vote down. – Daniel Bingham Oct 14 '09 at 18:05
For clarification, that's aimed at whomever voted this down - not previous comments
necessarily. – Daniel Bingham Oct 14 '09 at 18:06
13 Answers


up vote 81 down vote accepted
Java uses managed memory, so the only way you can allocate memory is by using the
new operator, and the only way you can deallocate memory is by relying on the
garbage collector.

This memory management whitepaper (PDF) may help explain what's going on.

You can also call System.gc() to suggest that the garbage collector run
immediately. However, the Java Runtime makes the final decision, not your code.

According to the Java documentation,

Calling the gc method suggests that the Java Virtual Machine expend effort toward
recycling unused objects in order to make the memory they currently occupy
available for quick reuse. When control returns from the method call, the Java
Virtual Machine has made a best effort to reclaim space from all discarded objects.

edited Jul 15 '14; answered Oct 14 '09 at 18:01 by Daniel Pryden
It does force the Garbage Collector to run. It does not force it to free memory
though... – Pablo Santa Cruz Oct 14 '09 at 18:04
No Pablo, it does not force the GC to run. – Jesper Oct 14 '09 at 18:07
OK. I see. Thanks for pointing that out. – Pablo Santa Cruz Oct 14 '09 at 18:11
I've been told by a very reliable person that all HotSpot VM garbage collectors
ignore System.gc() entirely. – Esko Jul 12 '10 at 19:55
On WinXP, Java SE's GC runs on every System.gc() call, or almost every one, but
the API doc does not guarantee it. – teodozjan Jan 13 '12 at 10:02

up vote 57 down vote


No one seems to have mentioned explicitly setting object references to null, which
is a legitimate technique for "freeing" memory that you may want to consider.

For example, say you'd declared a List<String> at the beginning of a method which
grew in size to be very large, but was only required until half-way through the
method. You could at this point set the List reference to null to allow the garbage
collector to potentially reclaim this object before the method completes (and the
reference falls out of scope anyway).

Note that I rarely use this technique in reality but it's worth considering when
dealing with very large data structures.

answered Oct 14 '09 at 19:50 by Adamski
If you really are doing a lot of work on an object which is only used for part of
a method, I suggest either: your method is too complicated, so break the method
into before and after portions, or use a block for the first half of the code (the
latter is more useful for test scripts). – Peter Lawrey Jul 13 '10 at 19:54
The place where setting an object reference to null is important is when it's
referenced from another long-lived object (or possibly from a static var). Eg, if
you have a long-lived array of large objects, and you cease using one of those
objects, you should set the array reference to null to make the object available
for GC. – Hot Licks Jan 25 '14 at 21:02
up vote 21 down vote
System.gc();
Runs the garbage collector.

Calling the gc method suggests that the Java Virtual Machine expend effort toward
recycling unused objects in order to make the memory they currently occupy
available for quick reuse. When control returns from the method call, the Java
Virtual Machine has made a best effort to reclaim space from all discarded objects.

Not recommended.

Edit: I wrote the original response in 2009. It's now 2015.

Garbage collectors have gotten steadily better in the ~20 years Java's been around.
At this point, if you're manually calling the garbage collector, you may want to
consider other approaches:

If you're forcing GC on a limited number of machines, it may be worth having a load
balancer point away from the current machine, waiting for it to finish serving its
connected clients, timing out after some period for hanging connections, and then
just hard-restarting the JVM. This is a terrible solution, but if you're looking at
System.gc(), forced restarts may be a possible stopgap.
Consider using a different garbage collector. For example, the (new in the last six
years) G1 collector is a low-pause model; it uses more CPU overall, but does its
best to never force a hard stop on execution. Since server CPUs now almost all have
multiple cores, this is A Really Good Tradeoff to have available.
Look at your flags tuning memory use. Especially in newer versions of Java, if you
don't have that many long-term running objects, consider bumping up the size of
newgen in the heap. newgen (young) is where new objects are allocated. For a
webserver, everything created for a request is put here, and if this space is too
small, Java will spend extra time promoting the objects to longer-lived memory,
where they're more expensive to collect. (If newgen is slightly too small, you're
going to pay for it.) For example, in G1:
-XX:G1NewSizePercent (defaults to 5; probably doesn't matter.)
-XX:G1MaxNewSizePercent (defaults to 60; probably raise this.)
Consider telling the garbage collector you're not okay with a longer pause. This
will cause more-frequent GC runs, to allow the system to keep the rest of its
constraints. In G1:
-XX:MaxGCPauseMillis (defaults to 200.)
edited Jul 19 '15; answered Oct 14 '09 by Dean J
Commenting on my own post, this often doesn't do anything, and calling it
repeatedly can cause the JVM to become unstable and whatnot. It may also run over
your dog; approach with caution. – Dean J Oct 14 '09 at 18:02
I would put heavy emphasis on the "suggests" part of "Calling the gc method
suggests that the JVM expend effort". – matt b Oct 14 '09 at 18:09
@Jesper, Dean's answer states "suggests". In fact he posted the exact documentation
from the method's javadocs... – matt b Oct 14 '09 at 18:10
@Software Monkey: Yes, I could have just edited it. But since Dean J was obviously
active (posting only a few minutes ago), I figured it was a courtesy to ask him to
do it. If he hadn't, I would have come back here and made the edit and deleted my
comment. – Daniel Pryden Oct 14 '09 at 18:24
It would also be worth saying WHY it is not recommended. If the JVM pays attention
to the "suggestion" to run the GC, it will almost certainly make your app run
slower, possibly by many orders of magnitude! – Stephen C Oct 14 '09 at 22:57
up vote 10 down vote
"I personally rely on nulling variables as a placeholder for future proper
deletion. For example, I take the time to nullify all elements of an array before
actually deleting (making null) the array itself."

This is unnecessary. The way the Java GC works is it finds objects that have no
reference to them, so if I have an Object x with a reference (=variable) a that
points to it, the GC won't delete it, because there is a reference to that object:

a -> x
If you null a then this happens:

a -> null
x
So now x doesn't have a reference pointing to it and will be deleted. The same
thing happens when you set a to reference to a different object than x.
So if you have an array arr that references objects x, y and z, and a variable a
that references the array, it looks like this:

a -> arr -> x
         -> y
         -> z

If you null a then this happens:

a -> null
arr -> x
    -> y
    -> z
So the GC finds arr as having no reference set to it and deletes it, which gives
you this structure:

a -> null
x
y
z
Now the GC finds x, y and z and deletes them as well. Nulling each reference in the
array won't make anything better; it will just use up CPU time and space in the
code (that said, it won't hurt beyond that: the GC will still be able to
perform the way it should).

answered Jan 25 '14 by Dakkaron
up vote 6 down vote
A valid reason for wanting to free memory from any program (Java or not) is to
make more memory available to other programs at the operating system level. If my
Java application is using 250MB, I may want to force it down to 1MB and make the
249MB available to other apps.

answered Mar 27 '12 by Yios
If you need to explicitly free a chunk of 249MB in a Java program, memory
management wouldn't be the first thing I'd want to work on. – Marc DiMillo Feb 8 '13 at 12:07
But freeing storage inside your Java heap does not (in the general case) make the
storage available to other apps. – Hot Licks Jan 25 '14 at 21:03
up vote 6 down vote
I have done experimentation on this.

It's true that System.gc(); only suggests to run the Garbage Collector.

But calling System.gc(); after setting all references to null, will improve
performance and memory occupation.

edited Mar 27 by jontro; answered Mar 10 '14 by Hemant Yadav
up vote 4 down vote
To extend upon the answer and comment by Yiannis Xanthopoulos and Hot Licks (sorry,
I cannot comment yet!), you can set VM options like this example:

-XX:+UseG1GC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=30


In my jdk 7 this will then release unused VM memory if more than 30% of the heap
becomes free after GC when the VM is idle. You will probably need to tune these
parameters.

While I didn't see it emphasized in the link below, note that some garbage
collectors may not obey these parameters and by default java may pick one of these
for you, should you happen to have more than one core (hence the UseG1GC argument
above).

VM arguments

Update: For Java 1.8.0_73 I have seen the JVM occasionally release small amounts
of memory with the default settings. It appears to only do it if ~70% of the heap
is unused, though; I don't know if it would release memory more aggressively if
the OS was low on physical memory.

edited Jan 12 '17; answered Apr 15 '14 by nsandersen
up vote 3 down vote
If you really want to allocate and free a block of memory you can do this with
direct ByteBuffers. There is even a non-portable way to free the memory.

However, as has been suggested, just because you have to free memory in C doesn't
mean it's a good idea to have to do this.

If you feel you really have a good use case for free(), please include it in the
question so we can see what you are trying to do; it is quite likely there is a
better way.
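As a sketch of the direct-buffer approach this answer mentions: direct ByteBuffers are allocated outside the normal Java heap, and their native memory is reclaimed when the buffer object itself is garbage collected (the "non-portable way" to free them early relies on internal JDK APIs and is deliberately not shown here).

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Allocate 1 MB of native (off-heap) memory.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);
        System.out.println(buf.isDirect());   // true
        System.out.println(buf.capacity());   // 1048576

        buf.putInt(0, 42);                    // write at absolute index 0
        System.out.println(buf.getInt(0));    // 42

        // Dropping the reference makes the native memory reclaimable by the
        // GC eventually; there is no portable explicit free().
        buf = null;
    }
}
```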

edited Jul 13 '10 by Darron; answered Jul 13 '10 by Peter Lawrey
up vote 2 down vote
Entirely from javacoffeebreak.com/faq/faq0012.html
A low priority thread takes care of garbage collection automatically for the user.
During idle time, the thread may be called upon, and it can begin to free memory
previously allocated to an object in Java. But don't worry - it won't delete your
objects on you!

When there are no references to an object, it becomes fair game for the garbage
collector. Rather than calling some routine (like free in C++), you simply assign
all references to the object to null, or assign a new class to the reference.

Example :

public static void main(String args[])
{
    // Instantiate a large memory using class
    MyLargeMemoryUsingClass myClass = new MyLargeMemoryUsingClass(8192);

    // Do some work
    for ( .............. )
    {
        // Do some processing on myClass
    }

    // Clear reference to myClass
    myClass = null;

    // Continue processing, safe in the knowledge
    // that the garbage collector will reclaim myClass
}
If your code is about to request a large amount of memory, you may want to request
the garbage collector begin reclaiming space, rather than allowing it to do so as a
low-priority thread. To do this, add the following to your code

System.gc();
The garbage collector will attempt to reclaim free space, and your application can
continue executing, with as much memory reclaimed as possible (memory fragmentation
issues may apply on certain platforms).

edited Apr 20 '15; answered Apr 17 '15 by Stefan Falk
up vote 1 down vote
In my case, since my Java code is meant to be ported to other languages in the near
future (Mainly C++), I at least want to pay lip service to freeing memory properly
so it helps the porting process later on.

I personally rely on nulling variables as a placeholder for future proper deletion.
For example, I take the time to nullify all elements of an array before actually
deleting (making null) the array itself.

But my case is very particular, and I know I'm taking performance hits when doing
this.

answered Jul 27 '12 by Oskuro
up vote 1 down vote
"For example, say you'd declared a List at the beginning of a method which grew
in size to be very large, but was only required until half-way through the method.
You could at this point set the List reference to null to allow the garbage
collector to potentially reclaim this object before the method completes (and the
reference falls out of scope anyway)."

This is correct, but this solution may not be generalizable. While setting a List
object reference to null will make memory available for garbage collection, this
is only true for a List object of primitive types. If the List object instead
contains reference types, setting the List object = null will not dereference any
of the reference types contained in the list. In this case, setting the List
object = null will orphan the contained reference types, whose objects will not be
available for garbage collection unless the garbage collection algorithm is smart
enough to determine that the objects have been orphaned.

answered Aug 31 '12 by Gothri
This is actually not true. The Java garbage collector is smart enough to handle
that correctly. If you null the List (and the objects within the List don't have
other references to them) the GC can reclaim all the objects within the List. It
may choose to not do that at the present time, but it will reclaim them eventually.
Same goes for cyclic references. Basically, the way the GC works is to explicitly
look for orphaned objects and then reclaim them. This is the whole job of a GC.
The way you describe it would render a GC utterly useless. – Dakkaron Jun 12 '15 at 9:49
up vote 1 down vote
Although Java provides automatic garbage collection, sometimes you will want to
know how much memory is in use and how much is free. You can query this
programmatically: obtain the runtime with Runtime r = Runtime.getRuntime();, read
the free memory with r.freeMemory(), then call r.gc() to request a collection and
call freeMemory() again to see the difference.
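A minimal runnable sketch of that measurement (the exact numbers are JVM- and run-dependent, so only the invariants between them are meaningful):

```java
public class MemoryQuery {
    public static void main(String[] args) {
        Runtime r = Runtime.getRuntime();

        long total = r.totalMemory();  // heap currently claimed from the OS
        long free  = r.freeMemory();   // unused portion of that heap
        long max   = r.maxMemory();    // upper bound the heap may grow to

        System.out.println("used approx. " + (total - free) + " bytes");

        r.gc();                        // suggest a collection (not guaranteed)
        System.out.println("free after gc: " + r.freeMemory() + " bytes");
    }
}
```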

answered Sep 25 '14 by Benjamin
up vote 1 down vote
The recommendation from the Java documentation is to assign null:

From https://docs.oracle.com/cd/E19159-01/819-3681/abebi/index.html

Explicitly assigning a null value to variables that are no longer needed helps the
garbage collector to identify the parts of memory that can be safely reclaimed.
Although Java provides memory management, it does not prevent memory leaks or using
excessive amounts of memory.

An application may induce memory leaks by not releasing object references. Doing so
prevents the Java garbage collector from reclaiming those objects, and results in
increasing amounts of memory being used. Explicitly nullifying references to
variables after their use allows the garbage collector to reclaim memory.

One way to detect memory leaks is to employ profiling tools and take memory
snapshots after each transaction. A leak-free application in steady state will show
a steady active heap memory after garbage collections.

https://www.javaworld.com/article/2076697/core-java/object-finalization-and-cleanup.html

Object finalization and cleanup


How to design classes for proper object cleanup

Three months ago, I began a mini-series of articles about designing objects with a
discussion of design principles that focused on proper initialization at the
beginning of an object's life. In this Design Techniques article, I'll be focusing
on the design principles that help you ensure proper cleanup at the end of an
object's life.

Why clean up?


Every object in a Java program uses computing resources that are finite. Most
obviously, all objects use some memory to store their images on the heap. (This is
true even for objects that declare no instance variables. Each object image must
include some kind of pointer to class data, and can include other implementation-
dependent information as well.) But objects may also use other finite resources
besides memory. For example, some objects may use resources such as file handles,
graphics contexts, sockets, and so on. When you design an object, you must make
sure it eventually releases any finite resources it uses so the system won't run
out of those resources.

Because Java is a garbage-collected language, releasing the memory associated with
an object is easy. All you need to do is let go of all references to the object.
Because you don't have to worry about explicitly freeing an object, as you must in
languages such as C or C++, you needn't worry about corrupting memory by
accidentally freeing the same object twice. You do, however, need to make sure you
actually release all references to the object. If you don't, you can end up with a
memory leak, just like the memory leaks you get in a C++ program when you forget to
explicitly free objects. Nevertheless, so long as you release all references to an
object, you needn't worry about explicitly "freeing" that memory.

Similarly, you needn't worry about explicitly freeing any constituent objects
referenced by the instance variables of an object you no longer need. Releasing all
references to the unneeded object will in effect invalidate any constituent object
references contained in that object's instance variables. If the now-invalidated
references were the only remaining references to those constituent objects, the
constituent objects will also be available for garbage collection. Piece of cake,
right?

The rules of garbage collection


Although garbage collection does indeed make memory management in Java a lot easier
than it is in C or C++, you aren't able to completely forget about memory when you
program in Java. To know when you may need to think about memory management in
Java, you need to know a bit about the way garbage collection is treated in the
Java specifications.

Garbage collection is not mandated

The first thing to know is that no matter how diligently you search through the
Java Virtual Machine Specification (JVM Spec), you won't be able to find any
sentence that commands, "Every JVM must have a garbage collector." The Java Virtual
Machine Specification gives VM designers a great deal of leeway in deciding how
their implementations will manage memory, including deciding whether or not to even
use garbage collection at all. Thus, it is possible that some JVMs (such as a bare-
bones smart card JVM) may require that programs executed in each session "fit" in
the available memory.

Of course, you can always run out of memory, even on a virtual memory system. The
JVM Spec does not state how much memory will be available to a JVM. It just states
that whenever a JVM does run out of memory, it should throw an OutOfMemoryError.

Nevertheless, to give Java applications the best chance of executing without
running out of memory, most JVMs will use a garbage collector. The garbage
collector reclaims the memory occupied by unreferenced objects on the heap, so that
memory can be used again by new objects, and usually de-fragments the heap as the
program runs.

Garbage collection algorithm is not defined

Another command you won't find in the JVM specification is "All JVMs that use
garbage collection must use the XXX algorithm." The designers of each JVM get to
decide how garbage collection will work in their implementations. Garbage
collection algorithm is one area in which JVM vendors can strive to make their
implementation better than the competition's. This is significant for you as a Java
programmer for the following reason:

Because you don't generally know how garbage collection will be performed inside a
JVM, you don't know when any particular object will be garbage collected.

So what? you might ask. The reason you might care when an object is garbage
collected has to do with finalizers. (A finalizer is defined as a regular Java
instance method named finalize() that returns void and takes no arguments.) The
Java specifications make the following promise about finalizers:

Before reclaiming the memory occupied by an object that has a finalizer, the
garbage collector will invoke that object's finalizer.

Given that you don't know when objects will be garbage collected, but you do know
that finalizable objects will be finalized as they are garbage collected, you can
make the following grand deduction:

You don't know when objects will be finalized.

You should imprint this important fact on your brain and forever allow it to inform
your Java object designs.
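To make this concrete, here is a small hedged sketch (class and field names are hypothetical) showing why finalization timing cannot be relied upon: even an explicit System.gc() is only a hint, so after creating a batch of finalizable objects, anywhere from none to all of them may have been finalized.

```java
// Demonstrates that finalization timing is unpredictable. System.gc() is a
// hint only; the JVM may finalize all, some, or none of the objects.
public class FinalizeTimingDemo {

    static final int CREATED = 1000;
    static volatile int finalized = 0;   // demo counter; not strictly thread-safe

    static class Tracked {
        @Override
        protected void finalize() {
            finalized++;
        }
    }

    // Creates CREATED immediately-unreachable objects, hints at a GC,
    // and reports how many finalizers have run so far.
    static int countFinalized() {
        for (int i = 0; i < CREATED; i++) {
            new Tracked();               // no reference is kept
        }
        System.gc();                     // a request, not a command
        try {
            Thread.sleep(100);           // give a finalizer thread a chance
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return finalized;
    }

    public static void main(String[] args) {
        int n = countFinalized();
        // Any value from 0 to CREATED is a legal outcome -- that is the point.
        System.out.println(n + " of " + CREATED + " objects finalized");
    }
}
```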

Finalizers to avoid
The central rule of thumb concerning finalizers is this:

Don't design your Java programs such that correctness depends upon "timely"
finalization.
In other words, don't write programs that will break if certain objects aren't
finalized by certain points in the life of the program's execution. If you write
such a program, it may work on some implementations of the JVM but fail on others.

Don't rely on finalizers to release non-memory resources

An example of an object that breaks this rule is one that opens a file in its
constructor and closes the file in its finalize() method. Although this design
seems neat, tidy, and symmetrical, it potentially creates an insidious bug. A Java
program generally will have only a finite number of file handles at its disposal.
When all those handles are in use, the program won't be able to open any more
files.

A Java program that makes use of such an object (one that opens a file in its
constructor and closes it in its finalizer) may work fine on some JVM
implementations. On such implementations, finalization would occur often enough to
keep a sufficient number of file handles available at all times. But the same
program may fail on a different JVM whose garbage collector doesn't finalize often
enough to keep the program from running out of file handles. Or, what's even more
insidious, the program may work on all JVM implementations now but fail in a
mission-critical situation a few years (and release cycles) down the road.

Other finalizer rules of thumb

Two other decisions left to JVM designers are selecting the thread (or threads)
that will execute the finalizers and the order in which finalizers will be run.
Finalizers may be run in any order -- sequentially by a single thread or
concurrently by multiple threads. If your program somehow depends for correctness
on finalizers being run in a particular order, or by a particular thread, it may
work on some JVM implementations but fail on others.

You should also keep in mind that Java considers an object to be finalized whether
the finalize() method returns normally or completes abruptly by throwing an
exception. Garbage collectors ignore any exceptions thrown by finalizers and in no
way notify the rest of the application that an exception was thrown. If you need to
ensure that a particular finalizer fully accomplishes a certain mission, you must
write that finalizer so that it handles any exceptions that may arise before the
finalizer completes its mission.

One more rule of thumb about finalizers concerns objects left on the heap at the
end of the application's lifetime. By default, the garbage collector will not
execute the finalizers of any objects left on the heap when the application exits.
To change this default, you must invoke the runFinalizersOnExit() method of class
Runtime or System, passing true as the single parameter. If your program contains
objects whose finalizers must absolutely be invoked before the program exits, be
sure to invoke runFinalizersOnExit() somewhere in your program.

So what are finalizers good for?


By now you may be getting the feeling that you don't have much use for finalizers.
While it is likely that most of the classes you design won't include a finalizer,
there are some reasons to use finalizers.

One reasonable, though rare, application for a finalizer is to free memory
allocated by native methods. If an object invokes a native method that allocates
memory (perhaps a C function that calls malloc()), that object's finalizer could
invoke a native method that frees that memory (calls free()). In this situation,
you would be using the finalizer to free up memory allocated on behalf of an object
-- memory that will not be automatically reclaimed by the garbage collector.
Another, more common, use of finalizers is to provide a fallback mechanism for
releasing non-memory finite resources such as file handles or sockets. As mentioned
previously, you shouldn't rely on finalizers for releasing finite non-memory
resources. Instead, you should provide a method that will release the resource. But
you may also wish to include a finalizer that checks to make sure the resource has
already been released, and if it hasn't, that goes ahead and releases it. Such a
finalizer guards against (and hopefully will not encourage) sloppy use of your
class. If a client programmer forgets to invoke the method you provided to release
the resource, the finalizer will release the resource if the object is ever garbage
collected. The finalize() method of the LogFileManager class, shown later in this
article, is an example of this kind of finalizer.

Avoid finalizer abuse


The existence of finalization produces some interesting complications for JVMs and
some interesting possibilities for Java programmers. What finalization grants to
programmers is power over the life and death of objects. In short, it is possible
and completely legal in Java to resurrect objects in finalizers -- to bring them
back to life by making them referenced again. (One way a finalizer could accomplish
this is by adding a reference to the object being finalized to a static linked list
that is still "live.") Although such power may be tempting to exercise because it
makes you feel important, the rule of thumb is to resist the temptation to use this
power. In general, resurrecting objects in finalizers constitutes finalizer abuse.

The main justification for this rule is that any program that uses resurrection can
be redesigned into an easier-to-understand program that doesn't use resurrection. A
formal proof of this theorem is left as an exercise to the reader (I've always
wanted to say that), but in an informal spirit, consider that object resurrection
will be as random and unpredictable as object finalization. As such, a design that
uses resurrection will be difficult to figure out by the next maintenance
programmer who happens along -- who may not fully understand the idiosyncrasies of
garbage collection in Java.

If you feel you simply must bring an object back to life, consider cloning a new
copy of the object instead of resurrecting the same old object. The reasoning
behind this piece of advice is that garbage collectors in the JVM invoke the
finalize() method of an object only once. If that object is resurrected and becomes
available for garbage collection a second time, the object's finalize() method will
not be invoked again.
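For illustration only, here is a hedged sketch of the resurrection mechanism described above (the class name and the static "graveyard" list are hypothetical). It is shown to clarify what the rule of thumb warns against, not as a recommended design:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of finalizer "resurrection": the finalizer adds `this` to a live
// static list, making the dying object reachable again. If such an object
// later becomes unreachable a second time, finalize() is NOT run again.
public class Zombie {

    // A "live" static list: adding `this` here resurrects the object.
    static final List<Zombie> graveyard = new ArrayList<>();

    @Override
    protected void finalize() {
        graveyard.add(this);   // resurrection -- generally finalizer abuse
    }

    public static void main(String[] args) throws InterruptedException {
        new Zombie();          // immediately unreachable
        System.gc();           // a hint; finalization may or may not happen now
        Thread.sleep(100);
        // Prints 0 or 1 depending on whether the finalizer has run yet.
        System.out.println("Resurrected: " + graveyard.size());
    }
}
```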

Managing non-memory resources


Because heap memory is automatically reclaimed by the garbage collector, the main
thing you need to worry about when you design an object's end-of-lifetime behavior
is to ensure that finite non-memory resources, such as file handles or sockets, are
released. You can take any of three basic approaches when you design an object that
needs to use a finite non-memory resource:

Obtain and release the resource within each method that needs the resource
Provide a method that obtains the resource and another that releases it
Obtain the resource at creation time and provide a method that releases it

Approach 1: Obtain and release within each relevant method
As a general rule, the releasing of non-memory finite resources should be done as
soon as possible after their use because the resources are, by definition, finite.
If possible, you should try to obtain a resource, use it, then release it all
within the method that needs the resource.

A log file class: An example of Approach 1

An example of a class where Approach 1 might make sense is a log file class. Such a
class takes care of formatting and writing log messages to a file. The name of the
log file is passed to the object as it is instantiated. To write a message to the
log file, a client invokes a method in the log file class, passing the message as a
String. Here's an example:

import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.io.IOException;

class LogFile {

    private String fileName;

    LogFile(String fileName) {
        this.fileName = fileName;
    }

    // The writeToFile() method will catch any IOException
    // so that clients aren't forced to catch IOException
    // everywhere they write to the log file. For now,
    // just fail silently. In the future, could put
    // up an informative non-modal dialog box that indicates
    // a logging error occurred. - bv 4/15/98
    void writeToFile(String message) {
        FileOutputStream fos = null;
        PrintWriter pw = null;
        try {
            fos = new FileOutputStream(fileName, true);
            try {
                pw = new PrintWriter(fos, false);
                pw.println("------------------");
                pw.println(message);
                pw.println();
            }
            finally {
                if (pw != null) {
                    pw.close();
                }
            }
        }
        catch (IOException e) {
        }
        finally {
            if (fos != null) {
                try {
                    fos.close();
                }
                catch (IOException e) {
                }
            }
        }
    }
}
Class LogFile is a simple example of Approach 1. A more production-ready LogFile
class might do things such as:

Insert the date and time each log message was written
Allow messages to be assigned a level of importance (such as ERROR, INFO, or DEBUG)
and enable a level to be set that will prevent unwanted detail (such as DEBUG
messages) from making it into the log file
Manage in some way the size of the log file, i.e., by copying it to a different
filename and starting fresh each time the log file achieves a certain size
The main feature of this simple version of class LogFile is that it surrounds each
log message with a series of dashes and a blank line.
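For contrast, here is a hedged sketch of the same Approach 1 idea written with the try-with-resources form available in later Java versions, which subsumes the nested finally clauses above; the class name, method, and file name are hypothetical:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;

// Approach 1, modern form: each call opens and closes the file handle,
// so no resource is held between calls.
public class LogFileDemo {

    static void writeToFile(String fileName, String message) {
        try (FileOutputStream fos = new FileOutputStream(fileName, true);
             PrintWriter pw = new PrintWriter(fos, false)) {
            pw.println("------------------");
            pw.println(message);
            pw.println();
        } catch (IOException e) {
            // fail silently, as in the article's LogFile
        }
    }

    // Helper for checking the log's contents; returns false on any I/O error.
    static boolean containsMessage(String fileName, String message) {
        try {
            return new String(Files.readAllBytes(Paths.get(fileName))).contains(message);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        String name = "demo.log";
        writeToFile(name, "first message");
        writeToFile(name, "second message");   // re-opens the file; no handle was held
        System.out.println(containsMessage(name, "second message"));
        Files.deleteIfExists(Paths.get(name));
    }
}
```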
Using finally to ensure resource release

Note that in the writeToFile() method, the releasing of the resource is done in
finally clauses. This is to make sure the finite resource (file handle) is actually
released no matter how the code is exited. If an IOException is thrown, the file
will be closed.

Pros and cons of Approach 1

The approach to resource management taken by class LogFile (Approach 1 from the
above list) helps make your class easy to use, because client programmers don't
have to worry about explicitly obtaining or releasing the resource. In both
Approaches 2 and 3 from the list above, client programmers must remember to explicitly
invoke a method to release the resource. In addition -- and what can be far more
difficult -- client programmers must figure out when their programs no longer need
a resource.

A problem with Approach 1 is that obtaining and releasing the resource each time
you need it may be too inefficient. Another problem is that, in some situations,
you may need to hold onto the resource between invocations of methods that use the
resource (such as writeToFile()), so no other object can have access to it. In such
cases, one of the other two approaches is preferable.

Approach 2: Offer methods for obtaining and releasing resources


In Approach 2 from the list above, you provide one method for obtaining the
resource and another method for releasing it. This approach enables the same class
instance to obtain and release a resource multiple times. Here's an example:

import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.io.IOException;

class LogFileManager {

    private FileOutputStream fos;
    private PrintWriter pw;
    private boolean logFileOpen = false;

    LogFileManager() {
    }

    LogFileManager(String fileName) throws IOException {
        openLogFile(fileName);
    }

    void openLogFile(String fileName) throws IOException {
        if (!logFileOpen) {
            try {
                fos = new FileOutputStream(fileName, true);
                pw = new PrintWriter(fos, false);
                logFileOpen = true;
            }
            catch (IOException e) {
                if (pw != null) {
                    pw.close();
                    pw = null;
                }
                if (fos != null) {
                    fos.close();
                    fos = null;
                }
                throw e;
            }
        }
    }

    void closeLogFile() throws IOException {
        if (logFileOpen) {
            pw.close();
            pw = null;
            fos.close();
            fos = null;
            logFileOpen = false;
        }
    }

    boolean isOpen() {
        return logFileOpen;
    }

    void writeToFile(String message) throws IOException {
        pw.println("------------------");
        pw.println(message);
        pw.println();
    }

    // Invoke super.finalize() on every path, not just when the log file
    // is still open, so superclass finalizers are never skipped.
    protected void finalize() throws Throwable {
        try {
            if (logFileOpen) {
                closeLogFile();
            }
        }
        finally {
            super.finalize();
        }
    }
}
In this example, class LogFileManager declares methods openLogFile() and
closeLogFile(). Given this design, you could write to multiple log files with one
instance of this class. This design also allows a client to monopolize the resource
for as long as it wants. A client can write several consecutive messages to the log
file without fear that another thread or process will slip in any intervening
messages. Once a client successfully opens a log file with openLogFile(), that log
file belongs exclusively to that client until the client invokes closeLogFile().

Note that LogFileManager uses a finalizer as a fallback in case a client forgets to
invoke closeLogFile(). As mentioned earlier in this article, this is one of the
more common uses of finalizers.

Note also that after invoking closeLogFile(), LogFileManager's finalizer invokes
super.finalize(). Invoking superclass finalizers is good practice in any finalizer,
even in cases (such as this) where no superclass exists other than Object. The JVM
does not automatically invoke superclass finalizers, so you must do so explicitly.
If someone ever inserts a class that declares a finalizer between LogFileManager
and Object in the inheritance hierarchy, that new finalizer will automatically be
invoked by LogFileManager's existing call to super.finalize().

Making super.finalize() the last action of a finalizer ensures that subclasses will
be finalized before superclasses. Although in most cases the placement of
super.finalize() won't matter, in some rare cases, a subclass finalizer may require
that its superclass be as yet unfinalized. So, as a general rule of thumb, place
super.finalize() last.

Approach 3: Claim resource on creation, offer method for release


In the last approach, Approach 3 from the above list, the object obtains the
resource upon creation and declares a method that releases the resource. Here's an
example:

import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.io.IOException;

class LogFileTransaction {

    private FileOutputStream fos;
    private PrintWriter pw;
    private boolean logFileOpen = false;

    LogFileTransaction(String fileName) throws IOException {
        try {
            fos = new FileOutputStream(fileName, true);
            pw = new PrintWriter(fos, false);
            logFileOpen = true;
        }
        catch (IOException e) {
            if (pw != null) {
                pw.close();
                pw = null;
            }
            if (fos != null) {
                fos.close();
                fos = null;
            }
            throw e;
        }
    }

    void closeLogFile() throws IOException {
        if (logFileOpen) {
            pw.close();
            pw = null;
            fos.close();
            fos = null;
            logFileOpen = false;
        }
    }

    boolean isOpen() {
        return logFileOpen;
    }

    void writeToFile(String message) throws IOException {
        pw.println("------------------");
        pw.println(message);
        pw.println();
    }

    // Invoke super.finalize() on every path, not just when the log file
    // is still open, so superclass finalizers are never skipped.
    protected void finalize() throws Throwable {
        try {
            if (logFileOpen) {
                closeLogFile();
            }
        }
        finally {
            super.finalize();
        }
    }
}
This class is called LogFileTransaction because every time a client wants to write
a chunk of messages to the log file (and then let others use that log file), it
must create a new LogFileTransaction. Thus, this class models one transaction
between the client and the log file.
One interesting thing to note about Approach 3 is that this is the approach used by
the FileOutputStream and PrintWriter classes used by all three example log file
classes. In fact, if you look through the java.io package, you'll find that almost
all of the java.io classes that deal with file handles use Approach 3. (The two
exceptions are PipedReader and PipedWriter, which use Approach 2.)
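The classic usage idiom for such Approach 3 classes is to pair the constructor with a release call in a finally clause. Here is a hedged sketch (file name and method names are hypothetical) using FileOutputStream itself, whose constructor claims the file handle and whose close() releases it:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Approach 3 from the client's side: the resource is claimed at construction
// and released in a finally clause on every exit path.
public class Approach3Demo {

    static void writeOnce(String fileName, String message) throws IOException {
        FileOutputStream fos = new FileOutputStream(fileName);  // resource claimed here
        try {
            fos.write(message.getBytes());
        } finally {
            fos.close();   // released whether we exit normally or by exception
        }
    }

    // Helper for inspection; returns "" on any I/O error.
    static String contentsOf(String fileName) {
        try {
            return new String(Files.readAllBytes(Paths.get(fileName)));
        } catch (IOException e) {
            return "";
        }
    }

    public static void main(String[] args) throws IOException {
        writeOnce("transaction.log", "one transaction");
        System.out.println(contentsOf("transaction.log"));
        Files.deleteIfExists(Paths.get("transaction.log"));
    }
}
```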

Conclusion
The most important point to take away from this article is that if a Java object
needs to take some action at the end of its life, no automatic way exists in Java
that will guarantee that action is taken in a timely manner. You can't rely on
finalizers to take the action, at least not in a timely way. You will need to
provide a method that performs the action and encourage client programmers to
invoke the method when the object is no longer needed.

This article contained several guidelines that pertain to finalizers:

Don't design your Java programs such that correctness depends on "timely"
finalization
Don't assume that a finalizer will be run by any particular thread
Don't assume that finalizers will be run in any particular order
Avoid designs that require finalizers to resurrect objects; if you must use
resurrection, prefer cloning over straight resurrection
Remember that exceptions thrown by finalizers are ignored
If your program includes objects with finalizers that absolutely must be run before
the program exits, invoke runFinalizersOnExit(true) in class Runtime or System
Unless you are writing the finalizer for class Object, always invoke
super.finalize() at the end of your finalizers
Next month
In next month's Design Techniques I'll continue the mini-series of articles that
focus on designing classes and objects. Next month's article, the fifth of this
mini-series, will discuss when to use -- and when not to use -- exceptions.

A request for reader participation


Software design is subjective. Your idea of a well-designed program may be your
colleague's maintenance nightmare. In light of this fact, I am trying to make this
column as interactive as possible.

I encourage your comments, criticisms, suggestions, flames -- all kinds of feedback


-- about the material presented in this column. If you disagree with something, or
have something to add, please let me know.

Bill Venners has been writing software professionally for 12 years. Based in
Silicon Valley, he provides software consulting and training services under the
name Artima Software Company. Over the years he has developed software for the
consumer electronics, education, semiconductor, and life insurance industries. He
has programmed in many languages on many platforms: assembly language on various
microprocessors, C on Unix, C++ on Windows, Java on the Web. He is author of the
book: Inside the Java Virtual Machine, published by McGraw-Hill.

https://www.toptal.com/java/top-10-most-common-java-development-mistakes

Buggy Java Code: The Top 10 Most Common Mistakes That Java Developers Make
BY MIKHAIL SELIVANOV - FREELANCE SOFTWARE ENGINEER @ TOPTAL


Java is a programming language that was initially developed for interactive
television, but over time it has become widespread wherever software can be used.
Designed around object-oriented programming, garbage collection, and an
architecturally agnostic virtual machine, while shedding the complexities of
languages such as C and C++, Java created a new way of programming. Moreover, it
has a gentle learning curve and appears to successfully adhere to its own motto,
"Write once, run everywhere," which is almost always true; but Java problems are
still present. I'll be addressing ten Java problems that I think are the most
common mistakes.

Common Mistake #1: Neglecting Existing Libraries


It’s definitely a mistake for Java developers to ignore the innumerable libraries
written in Java. Before reinventing the wheel, try to search for available
libraries - many of them have been polished over years of existence and are free
to use. These could be logging libraries, like logback and Log4j, or networking
libraries, like Netty or Akka. Some libraries, such as Joda-Time, have become a de
facto standard.

The following is a personal experience from one of my previous projects. The part
of the code responsible for HTML escaping was written from scratch. It was working
well for years, but eventually it encountered a user input which caused it to spin
into an infinite loop. The user, finding the service to be unresponsive, attempted
to retry with the same input. Eventually, all the CPUs on the server allocated for
this application were being occupied by this infinite loop. If the author of this
naive HTML escape tool had decided to use one of the well known libraries available
for HTML escaping, such as HtmlEscapers from Google Guava, this probably wouldn’t
have happened. At the very least, as is true for most popular libraries with a
community behind them, the error would have been found and fixed earlier by that
community.

Common Mistake #2: Missing the ‘break’ Keyword in a Switch-Case Block


These Java issues can be very embarrassing, and sometimes remain undiscovered
until run in production. Fallthrough behavior in switch statements is often
useful; however, missing a “break” keyword when such behavior is not desired can
lead to disastrous results. Because the “break” is missing from “case 0” in the
code example below, the program will print “Zero” followed by “One”, since the
control flow falls through the “switch” statement until it reaches a “break”. For
example:

public static void switchCasePrimer() {
    int caseIndex = 0;
    switch (caseIndex) {
        case 0:
            System.out.println("Zero");
        case 1:
            System.out.println("One");
            break;
        case 2:
            System.out.println("Two");
            break;
        default:
            System.out.println("Default");
    }
}
In most cases, the cleaner solution would be to use polymorphism and move code with
specific behaviors into separate classes. Java mistakes such as this one can be
detected using static code analyzers, e.g. FindBugs and PMD.
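A hedged sketch of that polymorphic refactoring (the interface and class names are hypothetical): each case becomes its own class, so there is no "break" to forget.

```java
// Replacing a fall-through-prone switch with polymorphic dispatch:
// each case's behavior lives in its own class.
public class SwitchToPolymorphism {

    interface CaseHandler {
        String handle();
    }

    static class ZeroHandler implements CaseHandler {
        public String handle() { return "Zero"; }
    }

    static class OneHandler implements CaseHandler {
        public String handle() { return "One"; }
    }

    // The array index plays the role of the switch's case label.
    static final CaseHandler[] HANDLERS = { new ZeroHandler(), new OneHandler() };

    static String dispatch(int caseIndex) {
        return HANDLERS[caseIndex].handle();   // exactly one handler runs
    }

    public static void main(String[] args) {
        System.out.println(dispatch(0));   // prints "Zero" -- and only "Zero"
    }
}
```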

Common Mistake #3: Forgetting to Free Resources


Every time a program opens a file or network connection, it is important to free
the resource once you are done using it. Similar caution should be taken if any
exception were to be thrown during operations on such resources.
One could argue that the FileInputStream has a finalizer that invokes the close()
method on a garbage collection event; however, since we can’t be sure when a
garbage collection cycle will start, the input stream can consume computer
resources for an indefinite period of time. In fact, there is a really useful and
neat statement introduced in Java 7 particularly for this case, called try-with-
resources:

private static void printFileJava7() throws IOException {
    try (FileInputStream input = new FileInputStream("file.txt")) {
        int data = input.read();
        while (data != -1) {
            System.out.print((char) data);
            data = input.read();
        }
    }
}
This statement can be used with any object that implements the AutoCloseable
interface. It ensures that each resource is closed by the end of the statement.


Common Mistake #4: Memory Leaks
Java uses automatic memory management, and while it’s a relief to forget about
allocating and freeing memory manually, it doesn’t mean that a beginning Java
developer should not be aware of how memory is used in the application. Problems
with memory allocation are still possible. As long as a program keeps references
to objects that are no longer needed, those objects will not be freed. In a way,
we can still call this a memory leak. Memory leaks in Java can happen in various
ways, but the most common reason is everlasting object references, because the
garbage collector can’t remove objects from the heap while there are still
references to them. One can create such a reference by defining a class with a
static field containing some collection of objects, and forgetting to set that
static field to null after the collection is no longer needed. Static fields are
considered GC roots and are never collected.
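A hedged sketch of that static-field leak (the class, field, and sizes are hypothetical): because the static collection is reachable from a GC root, everything it references stays live until the field is cleared or nulled out.

```java
import java.util.ArrayList;
import java.util.List;

// A static collection is a GC root: its entries can never be collected
// until the reference is dropped.
public class StaticCacheLeak {

    static List<byte[]> cache = new ArrayList<>();

    static void fill(int entries) {
        if (cache == null) {
            cache = new ArrayList<>();
        }
        for (int i = 0; i < entries; i++) {
            cache.add(new byte[1024]);   // reachable from a GC root: stays live
        }
    }

    static void release() {
        cache = null;   // drop the root reference; entries become collectable
    }

    public static void main(String[] args) {
        fill(1000);
        System.out.println("cached entries: " + cache.size());
        release();   // without this, the arrays live for the JVM's lifetime
    }
}
```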

A common misconception is that a group of objects referencing each other in a
cycle can never be collected; in fact, Java’s tracing garbage collector handles
circular references fine, and will reclaim the whole group once it is no longer
reachable from a GC root. Another genuine issue is leaks in non-heap (native)
memory when JNI is used.

The primitive leak example could look like the following:

final ScheduledExecutorService scheduledExecutorService =
        Executors.newScheduledThreadPool(1);
final Deque<BigDecimal> numbers = new LinkedBlockingDeque<>();
final BigDecimal divisor = new BigDecimal(51);

scheduledExecutorService.scheduleAtFixedRate(() -> {
    BigDecimal number = numbers.peekLast();
    if (number != null && number.remainder(divisor).byteValue() == 0) {
        System.out.println("Number: " + number);
        System.out.println("Deque size: " + numbers.size());
    }
}, 10, 10, TimeUnit.MILLISECONDS);

scheduledExecutorService.scheduleAtFixedRate(() -> {
    numbers.add(new BigDecimal(System.currentTimeMillis()));
}, 10, 10, TimeUnit.MILLISECONDS);

try {
    scheduledExecutorService.awaitTermination(1, TimeUnit.DAYS);
} catch (InterruptedException e) {
    e.printStackTrace();
}
This example creates two scheduled tasks. The first task takes the last number from
a deque called “numbers” and prints the number and deque size in case the number is
divisible by 51. The second task puts numbers into the deque. Both tasks are
scheduled at a fixed rate, and run every 10 ms. If the code is executed, you’ll see
that the size of the deque is permanently increasing. This will eventually cause
the deque to be filled with objects consuming all available heap memory. To prevent
this while preserving the semantics of this program, we can use a different method
for taking numbers from the deque: “pollLast”. Contrary to the method “peekLast”,
“pollLast” returns the element and removes it from the deque while “peekLast” only
returns the last element.
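A hedged single-threaded sketch of that fix (class and method names are hypothetical): because pollLast() removes the element it returns, the deque drains instead of growing without bound.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// pollLast() both returns and removes the tail element, so processed
// numbers no longer accumulate in the deque.
public class DequeDrainDemo {

    static int drain(Deque<Integer> numbers) {
        int processed = 0;
        Integer number;
        while ((number = numbers.pollLast()) != null) {   // element is removed here
            if (number % 51 == 0) {
                System.out.println("Number: " + number);
            }
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        Deque<Integer> numbers = new ArrayDeque<>();
        for (int i = 0; i < 100; i++) {
            numbers.add(i);
        }
        int n = drain(numbers);
        System.out.println("processed " + n + ", left " + numbers.size());
    }
}
```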

To learn more about memory leaks in Java, please refer to our article that
demystified this problem.

Common Mistake #5: Excessive Garbage Allocation


Excessive garbage allocation may happen when the program creates a lot of short-
lived objects. The garbage collector works continuously, removing unneeded objects
from memory, which impacts applications’ performance in a negative way. One simple
example:

String oneMillionHello = "";
for (int i = 0; i < 1000000; i++) {
    oneMillionHello = oneMillionHello + "Hello!";
}
System.out.println(oneMillionHello.substring(0, 6));
In Java, strings are immutable. So, on each iteration a new string is created. To
address this we should use a mutable StringBuilder:

StringBuilder oneMillionHelloSB = new StringBuilder();
for (int i = 0; i < 1000000; i++) {
    oneMillionHelloSB.append("Hello!");
}
System.out.println(oneMillionHelloSB.toString().substring(0, 6));
While the first version requires quite a bit of time to execute, the version that
uses StringBuilder produces a result in significantly less time.

Common Mistake #6: Using Null References without Need


Avoiding excessive use of null is a good practice. For example, it’s preferable to
return empty arrays or collections from methods instead of nulls, since it can help
prevent NullPointerException.
Consider the following code, which traverses a collection obtained from another
method:

List<String> accountIds = person.getAccountIds();
for (String accountId : accountIds) {
    processAccount(accountId);
}
If getAccountIds() returns null when a person has no account, then
NullPointerException will be raised. To fix this, a null-check will be needed.
However, if instead of a null it returns an empty list, then NullPointerException
is no longer a problem. Moreover, the code is cleaner since we don’t need to null-
check the variable accountIds.
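A hedged sketch of that convention (the Person class and its fields are hypothetical): by normalizing to an empty list at the boundary, callers can iterate without a null check.

```java
import java.util.Collections;
import java.util.List;

// Returning an empty list instead of null lets callers loop safely.
public class EmptyOverNull {

    static class Person {
        private final List<String> accountIds;

        Person(List<String> accountIds) {
            // Normalize at the boundary: never store or return null.
            this.accountIds = (accountIds == null)
                    ? Collections.<String>emptyList()
                    : accountIds;
        }

        List<String> getAccountIds() {
            return accountIds;
        }
    }

    public static void main(String[] args) {
        Person noAccounts = new Person(null);
        for (String id : noAccounts.getAccountIds()) {   // safe: body never runs
            System.out.println(id);
        }
        System.out.println("done");
    }
}
```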

To deal with other cases when one wants to avoid nulls, different strategies may be
used. One of these strategies is to use the Optional type, which can either be an
empty object or a wrapper of some value:

Optional<String> optionalString = Optional.ofNullable(nullableString);
if (optionalString.isPresent()) {
    System.out.println(optionalString.get());
}
In fact, Java 8 provides a more concise solution:

Optional<String> optionalString = Optional.ofNullable(nullableString);
optionalString.ifPresent(System.out::println);
The Optional type has been a part of Java since version 8, but it has been well
known for a long time in the world of functional programming. Prior to this, it
was available in Google Guava for earlier versions of Java.

Common Mistake #7: Ignoring Exceptions


It is often tempting to leave exceptions unhandled. However, the best practice for
beginner and experienced Java developers alike is to handle them. Exceptions are
thrown on purpose, so in most cases we need to address the issues causing these
exceptions. Do not overlook these events. If necessary, you can either rethrow it,
show an error dialog to the user, or add a message to the log. At the very least,
it should be explained why the exception has been left unhandled in order to let
other developers know the reason.

selfie = person.shootASelfie();
try {
    selfie.show();
} catch (NullPointerException e) {
    // Maybe, invisible man. Who cares, anyway?
}
A clearer way of highlighting an exception’s insignificance is to encode this
message into the exception’s variable name, like this:

try { selfie.delete(); } catch (NullPointerException unimportant) { }


Common Mistake #8: Concurrent Modification Exception
This exception occurs when a collection is modified while iterating over it using
methods other than those provided by the iterator object. For example, we have a
list of hats and we want to remove all those that have ear flaps:

List<IHat> hats = new ArrayList<>();
hats.add(new Ushanka()); // that one has ear flaps
hats.add(new Fedora());
hats.add(new Sombrero());
for (IHat hat : hats) {
    if (hat.hasEarFlaps()) {
        hats.remove(hat);
    }
}

If we run this code, “ConcurrentModificationException” will be raised since the
code modifies the collection while iterating over it. The same exception may occur
if one of multiple threads working with the same list tries to modify the
collection while others iterate over it. Concurrent modification of collections in
multiple threads is a natural thing, but should be treated with the usual tools
from the concurrent programming toolbox, such as synchronization locks or special
collections adapted for concurrent modification. There are subtle differences in
how this Java issue can be resolved in the single-threaded and multithreaded
cases. Below is a brief discussion of some ways it can be handled in a single-
threaded scenario:

Collect objects and remove them in another loop
Collecting hats with ear flaps in a list to remove them later from within another
loop is an obvious solution, but requires an additional collection for storing the
hats to be removed:

List<IHat> hatsToRemove = new LinkedList<>();
for (IHat hat : hats) {
    if (hat.hasEarFlaps()) {
        hatsToRemove.add(hat);
    }
}
for (IHat hat : hatsToRemove) {
    hats.remove(hat);
}
Use Iterator.remove method
This approach is more concise, and it doesn’t need an additional collection to be
created:

Iterator<IHat> hatIterator = hats.iterator();
while (hatIterator.hasNext()) {
    IHat hat = hatIterator.next();
    if (hat.hasEarFlaps()) {
        hatIterator.remove();
    }
}
Use ListIterator’s methods
Using a list iterator is appropriate when the modified collection implements the
List interface. Iterators that implement the ListIterator interface support not
only removal operations, but also add and set operations. ListIterator extends the
Iterator interface, so the example looks almost the same as with the Iterator
remove method. The only difference is the type of the hat iterator, and the way we
obtain it, with the “listIterator()” method. The snippet below shows how to replace
each hat with ear flaps with a sombrero, using the “ListIterator.remove” and
“ListIterator.add” methods:

IHat sombrero = new Sombrero();
ListIterator<IHat> hatIterator = hats.listIterator();
while (hatIterator.hasNext()) {
    IHat hat = hatIterator.next();
    if (hat.hasEarFlaps()) {
        hatIterator.remove();
        hatIterator.add(sombrero);
    }
}
With ListIterator, the remove and add method calls can be replaced with a single
call to set:

IHat sombrero = new Sombrero();
ListIterator<IHat> hatIterator = hats.listIterator();
while (hatIterator.hasNext()) {
    IHat hat = hatIterator.next();
    if (hat.hasEarFlaps()) {
        hatIterator.set(sombrero); // set instead of remove and add
    }
}
Use stream methods introduced in Java 8
With Java 8, programmers have the ability to transform a collection into a stream
and filter that stream according to some criteria. Here is an example of how the
Stream API can help us filter hats and avoid a
“ConcurrentModificationException”:

hats = hats.stream().filter(hat -> !hat.hasEarFlaps())
    .collect(Collectors.toCollection(ArrayList::new));
The “Collectors.toCollection” method will create a new ArrayList with the filtered
hats. This can be a problem if the filtering condition is satisfied by a large
number of items, resulting in a large ArrayList; thus, it should be used with care.

Use List.removeIf method introduced in Java 8
Another solution available in Java 8, and clearly the most concise, is the use of
the “removeIf” method:

hats.removeIf(IHat::hasEarFlaps);
That’s it. Under the hood, it uses “Iterator.remove” to accomplish the behavior.

Use specialized collections


If at the very beginning we had decided to use “CopyOnWriteArrayList” instead of
“ArrayList”, there would have been no problem at all, since “CopyOnWriteArrayList”
provides modification methods (such as set, add, and remove) that don’t change the
backing array of the collection, but rather create a new modified version of it.
This allows iteration over the original version of the collection and modification
of it at the same time, without the risk of a “ConcurrentModificationException”.
The drawback of that collection is obvious: the generation of a new collection with
each modification.

There are other collections tuned for different cases, e.g. “CopyOnWriteArraySet”
and “ConcurrentHashMap”.
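The specialized-collection approach can be sketched as follows. This is a minimal,
self-contained example, using plain String names as a stand-in for the IHat
interface from the examples above:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteDemo {
    // Stand-in for IHat: a hat is just its name, and only the ushanka
    // has ear flaps.
    static boolean hasEarFlaps(String hat) {
        return hat.equals("ushanka");
    }

    public static void main(String[] args) {
        List<String> hats = new CopyOnWriteArrayList<>(
                List.of("ushanka", "fedora", "sombrero"));

        // Safe: the for-each iterator walks a snapshot of the backing
        // array, while remove() installs a fresh copy of that array.
        for (String hat : hats) {
            if (hasEarFlaps(hat)) {
                hats.remove(hat);
            }
        }
        System.out.println(hats); // [fedora, sombrero]
    }
}
```

The same loop over a plain ArrayList would throw a
ConcurrentModificationException; here each removal simply pays the cost of copying
the backing array.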

Another possible mistake with concurrent collection modification is to create a
stream from a collection and, during the stream iteration, modify the backing
collection. The general rule for streams is to avoid modifying the underlying
collection while querying the stream. The following example shows an incorrect way
of handling a stream:

List<IHat> filteredHats = hats.stream().peek(hat -> {
    if (hat.hasEarFlaps()) {
        hats.remove(hat);
    }
}).collect(Collectors.toCollection(ArrayList::new));
The peek method performs the provided action on each element as it passes through
the stream. Here, the action attempts to remove elements from the underlying list,
which is erroneous. To avoid this, use one of the methods described above.

Common Mistake #9: Breaking Contracts


Sometimes, code provided by the standard library or by a third-party vendor relies
on rules that must be obeyed in order to make things work. For example, there is
the hashCode and equals contract which, when followed, guarantees correct behavior
for the collections of the Java collections framework that rely on hashing and
comparing objects, and for other classes that use the hashCode and equals methods.
Disobeying contracts isn’t the kind of error that always leads to exceptions or
breaks compilation; it’s trickier, because sometimes it changes application
behavior without any sign of danger. Erroneous code can slip into a production
release and cause a whole bunch of undesired effects, including bad UI behavior,
wrong data reports, poor application performance, data loss, and more. Fortunately,
these disastrous bugs don’t happen very often. I already mentioned the hashCode and
equals contract. It is used in collections that rely on hashing and comparing
objects, like HashMap and HashSet. Simply put, the contract contains two rules:

If two objects are equal, then their hash codes should be equal.
If two objects have the same hash code, then they may or may not be equal.
Breaking the contract’s first rule leads to problems when attempting to retrieve
objects from a hash-based collection. The second rule signifies that objects with
the same hash code aren’t necessarily equal. Let us examine the effects of breaking
the first rule:

public static class Boat {
    private String name;

    Boat(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Boat boat = (Boat) o;
        return !(name != null ? !name.equals(boat.name) : boat.name != null);
    }

    @Override
    public int hashCode() {
        return (int) (Math.random() * 5000);
    }
}
As you can see, the Boat class overrides the equals and hashCode methods. However,
it breaks the contract, because hashCode returns a random value for the same object
every time it’s called. The following code will most likely not find a boat named
“Enterprise” in the HashSet, despite the fact that we added such a boat earlier:

public static void main(String[] args) {
    Set<Boat> boats = new HashSet<>();
    boats.add(new Boat("Enterprise"));

    System.out.printf("We have a boat named 'Enterprise' : %b\n",
            boats.contains(new Boat("Enterprise")));
}
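For contrast, here is a sketch of a contract-respecting version: hashCode is
derived from the same field that equals compares, so equal boats always share a
hash code, and the HashSet lookup succeeds.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class FixedBoatDemo {
    public static class Boat {
        private final String name;

        Boat(String name) {
            this.name = name;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            return Objects.equals(name, ((Boat) o).name);
        }

        @Override
        public int hashCode() {
            // Derived from the same field equals compares, so equal
            // names imply equal hash codes.
            return Objects.hashCode(name);
        }
    }

    public static void main(String[] args) {
        Set<Boat> boats = new HashSet<>();
        boats.add(new Boat("Enterprise"));
        System.out.printf("We have a boat named 'Enterprise' : %b\n",
                boats.contains(new Boat("Enterprise"))); // now true
    }
}
```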
Another example of a contract involves the finalize method. Here is a quote from
the official Java documentation describing its function:

”The general contract of finalize is that it is invoked if and when the Java™
virtual machine has determined that there is no longer any means by which this
object can be accessed by any thread (that has not yet died), except as a result of
an action taken by the finalization of some other object or class which is ready to
be finalized. The finalize method may take any action, including making this object
available again to other threads; the usual purpose of finalize, however, is to
perform cleanup actions before the object is irrevocably discarded. For example,
the finalize method for an object that represents an input/output connection might
perform explicit I/O transactions to break the connection before the object is
permanently discarded.“

One could decide to use the finalize method for freeing resources like file
handles, but that would be a bad idea. This is because there are no timing
guarantees on when finalize will be invoked: it runs during garbage collection, and
the time of garbage collection is indeterminate.
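A sketch of the usual alternative: implement AutoCloseable and release the
resource deterministically with try-with-resources, instead of relying on
finalize. The FileHandle class below is a hypothetical stand-in for a real file
handle:

```java
public class DeterministicCleanupDemo {
    // Hypothetical resource standing in for a real file handle.
    static class FileHandle implements AutoCloseable {
        static boolean closed = false;

        void use() {
            // work with the handle here
        }

        @Override
        public void close() {
            closed = true; // runs as soon as the try block exits
        }
    }

    public static void main(String[] args) {
        try (FileHandle handle = new FileHandle()) {
            handle.use();
        }
        // The handle is already closed here, with no dependence on
        // when (or whether) the garbage collector runs.
        System.out.println("closed: " + FileHandle.closed);
    }
}
```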

Common Mistake #10: Using Raw Type Instead of a Parameterized One


Raw types, according to the Java specification, are types that are either not
parameterized, or non-static members of a class R that are not inherited from the
superclass or superinterface of R. There were no alternatives to raw types until
generic types were introduced in Java. Java has supported generic programming since
version 1.5, and generics were undoubtedly a significant improvement. However, for
backward compatibility, a pitfall has been left that could potentially break the
type system. Let’s look at the following example:

List listOfNumbers = new ArrayList();
listOfNumbers.add(10);
listOfNumbers.add("Twenty");
listOfNumbers.forEach(n -> System.out.println((int) n * 2));
Here we have a list of numbers defined as a raw ArrayList. Since its type isn’t
specified with a type parameter, we can add any object to it. But in the last line
we cast each element to int, double it, and print the doubled number to standard
output. This code compiles without errors, but once run it raises a runtime
exception, because we attempted to cast a string to an integer. Obviously, the type
system is unable to help us write safe code if we hide the necessary information
from it. To fix the problem we need to specify the type of objects we’re going to
store in the collection:

List<Integer> listOfNumbers = new ArrayList<>();
listOfNumbers.add(10);
listOfNumbers.add("Twenty");
listOfNumbers.forEach(n -> System.out.println((int) n * 2));

The only difference from the original is the line defining the collection:

List<Integer> listOfNumbers = new ArrayList<>();


The fixed code won’t compile, because we are trying to add a string to a collection
that is expected to store integers only. The compiler will show an error and point
at the line where we try to add the string “Twenty” to the list. It’s always a good
idea to parameterize generic types. That way, the compiler is able to make all
possible type checks, and the chances of runtime exceptions caused by type system
inconsistencies are minimized.

Conclusion
Java as a platform simplifies many things in software development, relying on both
the sophisticated JVM and the language itself. However, its features, such as the
removal of manual memory management and decent OOP tools, don’t eliminate all the
problems and issues a regular Java developer faces. As always, knowledge, practice,
and Java tutorials like this are the best means of avoiding and addressing
application errors: know your libraries, read the Java and JVM documentation, and
write programs. Don’t forget about static code analyzers either, as they can point
out actual bugs and highlight potential ones.

Lê Anh Quân • 3 years ago
Great article, very detailed and informative. About item #7, I just want to ask
your opinion about Java's checked exceptions: are they really necessary? Personally
I think all exceptions should be unchecked (runtime exceptions). Try-catch is good,
but forcing developers into escalated code blocks is a really bad idea, and
overall, it does more harm than good. What do you think?

ncmathsadist Lê Anh Quân • 3 years ago
Disagree. Uncaught exceptions can cause unexpected crashes in programs for end
users. This is a big-time no-no. You do not repay your customers with death.
Runtime exceptions should be reserved for programmer goofs. Even then, if user
abuse causes a runtime exception (think NumberFormatException from user entry in a
dialog box), that exception should be caught and handled gracefully. Obviously, you
should never duck file I/O and socket exceptions. For the most part, Java's
exception rules make sense and prevent production software from crashing.

Lê Anh Quân ncmathsadist • 3 years ago
It is true... in theory. Somehow, after my 10+ years of Java experience, checked
exceptions are the most frustrating thing:
- They slow me down, forcing me to handle exceptions when I'm not ready to (I have
to focus on the logic the code is mainly about), thus making me either handle them
wrongly or throw a wrapping RTE.
- "throws" declarations are often not an option, due to API contracts.
- It is very hard to centralize exception handling with checked exceptions.
Solutions often come with a heavyweight framework which then produces more trouble
than it solves.
- The in-line try-catch blocks are the most horrible nightmare. I am OKAY with
try-catch if catching the exception is the only thing the method is about, but a
few in-line try-catches in a method... simply destroy the code and the software.

Anyway, maybe you are right; maybe I'm too bent toward elegant-looking code like
functional style and have become too emotional about this matter, but Java doesn't
need to mean ugly-looking code. Right?

Joseph S. ncmathsadist • 4 months ago
Checked exceptions were made with the intention of recovery from a problem, so they
can make sense if you can do something when an exception happens. For example, if
you get a database exception you could implement a mechanism to retry some time
later; however, these kinds of cases don't happen frequently.

tfa ncmathsadist • 3 years ago
Are there still developers who think checked exceptions were a good idea? Come on!

Preda Lê Anh Quân • 3 years ago
Yes, they are necessary, because you want to capture possible code breaks, ideally
at compile time, and react to them by capturing the exception and doing something
that makes sense in that event. Rather than handling all exceptions at runtime,
it's better to catch them earlier.

Lê Anh Quân Preda • 3 years ago
It would be great if all checked exceptions were "possible code breaks", but many
times they aren't. Think about writing a method to calculate the days until
Valentine's Day using parse with "2/14" and SimpleDateFormat. You have to handle
ParseException even though you know it will never happen. Or catching IOException
every time you do something with a ByteArrayOutputStream... It's painful to have
your code polluted with unnecessary try-catch.

Preda Lê Anh Quân • 3 years ago
While that might be true, proceeding with precaution now always pays dividends
later, and I think the language designers had this in mind. However, you can always
use Ruby and not have to do this :)

Lê Anh Quân Preda • 3 years ago
Dear Preda, I think you are terribly wrong:
1. About precaution: it is good, but please don't force it. I bet your program
would be much more robust if NullPointerException were a checked one, but you'd
soon find out it's a horrible idea.
2. Language designers are not gods; they can be wrong. They didn't have generics or
lambdas in mind in the first place. In this case of checked exceptions, they are
wrong again.
3. Ruby? Really? What would I do if later I find out Ruby doesn't have static
types? Switch to .NET?

Mikhail Selivanov Lê Anh Quân • 3 years ago
Thanks, I'm glad that you liked the article. Regarding exceptions, I think in many
cases it's a good idea to encode an operation error into the result value. Forcing
developers to handle errors is a nice feature when designing an API, but it
shouldn't be overused.

Josip Pokrajcic • 3 years ago
Regarding mistake #2, you said that the program will write “Zero” followed by
“One”. It should write "Zero" followed by "One" and "Two", if I'm not mistaken.
EDIT: my bad, misread the code

Carlos De Luna Saenz • 3 years ago
I would like to add a couple more. 11th: using Java like a structured language
instead of an OOP language (it's weird and awful when you get stuck reviewing
"spaghetti code", for example). And 12th: lack of use of design patterns. Design
patterns were made to make life easier, and most of the well-known frameworks use
them and allow you to use them (such as Spring MVC, or Hibernate for the DAO
pattern)... so do it. Congratulations on an excellent article.

Mikhail Selivanov Carlos De Luna Saenz • 3 years ago
Thank you for the kind words. I agree, knowledge of OOP and design patterns is an
important thing, not only for Java programmers.

Peter Storch • 3 years ago
Be careful with #1: in principle you are right about not reinventing the wheel. But
I've seen projects having dependencies on 3 XML, 4 logging, and 5 JSON libraries.
And often enough, introducing one library adds dependencies on 10 others.

Mikhail Selivanov Peter Storch • 3 years ago
It's not very clever to use 5 libraries that do the same thing; just stick to one
of them. However, there is another issue with third-party libraries. Each time you
add a dependency to your project, there is a possibility that it will pull in half
a repository of its own dependencies.

stingersdestiny • 3 years ago
I think number 7 needs further explanation. It's one thing to catch an exception
and not do anything, and completely another whether one should catch an NPE (or
other runtime exceptions). In my opinion it's extremely rare to be justified in
catching an NPE. It is bad code. Your code should not be returning null, and at the
very least should be verifying before its usage. Programmers should let the NPE be
thrown and then investigate it instead of catching it.

Mikhail Selivanov stingersdestiny • 3 years ago
I agree about NPE, and there is #6, which is about how to avoid it by not using
null references.

Chuck Batson • 3 years ago
FindBugs (http://findbugs.sourceforge...) is an invaluable tool and identifies many
common mistakes. I would add that Common Mistake #11 is not using FindBugs. :-)

mydevgeek • a year ago
Great article. What do you think about initializing an object inside the loop vs.
initializing it outside the loop and assigning new values inside? I think it's
optimized by the Java compiler. Need to verify.

Ricardo Santos • 2 years ago
Also about item #7: when logging an exception you should include details about the
context of the exception, and also include the exception itself in order not to
lose the stack trace.
Over the past years I've seen many a log.error(ex.getMessage()) instead of
log.error("Error reading file '" + path + "'", ex).

Madonah • 3 years ago
Helped me a lot. Java is cool, hope you write also about C++ and C#. Thank you.

lucas rafagnin • 3 years ago
Thanks guy!
https://www.infoq.com/news/2010/08/arm-blocks

Automatic Resource Management in Java
by Alex Blewitt on Aug 23, 2010
Part of Project Coin is the ability to deal with Automatic Resource Management, or
simply ARM. The purpose is to make it easier to work with external resources which
need to be disposed or closed in case of errors or successful completion of a code
block. Consider the following trivial file-copy operation, from the Java Bytestream
Tutorial:

FileInputStream in = null;
FileOutputStream out = null;
try {
    in = new FileInputStream("xanadu.txt");
    out = new FileOutputStream("outagain.txt");
    int c;
    while ((c = in.read()) != -1)
        out.write(c);
} finally {
    if (in != null)
        in.close();
    if (out != null)
        out.close();
}
Not only is there a lot of boilerplate, but the documentation for
InputStream.close() notes that it can throw an IOException. (An exception is far
more likely on the OutputStream, but in any case, there needs to be an outer catch
or propagation in order to compile this code successfully.)

The lexical scope of the try-catch-finally block also requires the variable for
FileInputStream in and FileOutputStream out to be declared lexically outside the
block itself. (If they were defined inside the try block, then they wouldn't be
available inside the catch or finally blocks.)

To eliminate this boilerplate code, and to tighten the lexical scoping of the
resources used inside the block, a new addition has been made to the try block in
the Java language. A specification of try-with-resources blocks (or ARM blocks) was
made available via an initial implementation, which has subsequently made its way
into build 105 of JDK 7.

A new interface, java.lang.AutoCloseable, has been added to the proposed API,
defining a single method close() which throws Exception. This has been retrofitted
as a parent of java.io.Closeable, which means that all InputStream and OutputStream
subclasses automatically take advantage of this behaviour. In addition, FileLock
and ImageInputStream have also been fitted with the AutoCloseable interface.

This permits the above example to be written as:

try (
    FileInputStream in = new FileInputStream("xanadu.txt");
    FileOutputStream out = new FileOutputStream("outagain.txt")
) {
    int c;
    while ((c = in.read()) != -1)
        out.write(c);
}
At the end of the try block, whether by completion normally or otherwise, both the
out and in resources will have close() called automatically. Furthermore, unlike
our original example, both out.close() and in.close() are guaranteed to be
executed. (In the original example, had in.close() thrown an exception, then the
subsequent out.close() would not have been executed.)

There are some subtle aspects to this which are worth noting:

- As it currently stands, a trailing semicolon is not permitted after the final
resource in the resources section.
- The resources block is delimited with () rather than the more usual {}, to
separate it from the existing try body. If present, it must contain one or more
resource definitions.
- Each resource definition is of the form type var = expression; you can't have
general statements in the resource block.
- The resources are implicitly final; that is, they behave as if the final modifier
were present. Any attempt to assign to the resource variable is a compile-time
error.
- The resources must be subtypes of AutoCloseable; it is a compile-time error if
this is not the case.
- The order of closure is the reverse of the order in which the resources are
defined. In other words, in the refined example, out.close() is called prior to
in.close(). This permits nested streams to be built and then closed from the
outside in, which makes more sense than in-order closing (e.g. for flushing buffers
before the underlying stream is closed).
- Each block may generate n+1 exceptions, where n is the number of resources. This
can occur if the main body throws an exception and each resource closure throws an
exception as well. In this situation, the body's exception will be thrown, but the
others will be added to the exception's suppressed exception list. They can be
accessed via getSuppressedExceptions().
- Exception stack traces may now have lines prefixed with Suppressed: in these
cases; the format of the serialized Throwable is now different as well. (This may
impact Java 6 clients invoking remote services on Java 7 runtimes, or vice versa.)
- javax.swing and java.sql do not participate in ARM at the current time; classes
need to opt in by inheriting from AutoCloseable to be used by ARM. JDBC 4.1, if
part of JDK 7, will support ARM, but it's not clear when this will happen.
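The suppressed-exception behaviour can be observed directly. A small sketch, using
a hypothetical resource whose close() always fails; note that in the API as it
finally shipped in JDK 7, the accessor ended up named getSuppressed() rather than
getSuppressedExceptions():

```java
public class SuppressedDemo {
    // Hypothetical resource whose close() always fails.
    static class FailingResource implements AutoCloseable {
        @Override
        public void close() {
            throw new IllegalStateException("failure in close()");
        }
    }

    public static void main(String[] args) {
        try {
            try (FailingResource r = new FailingResource()) {
                throw new RuntimeException("failure in body");
            }
        } catch (RuntimeException e) {
            // The body's exception is the one actually thrown; the
            // close() exception is attached to its suppressed list.
            System.out.println(e.getMessage());
            for (Throwable t : e.getSuppressed()) {
                System.out.println("suppressed: " + t.getMessage());
            }
        }
    }
}
```

Running this prints "failure in body" followed by "suppressed: failure in
close()", illustrating the n+1 exceptions described above.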
The ability to remove boilerplate code from the Java developer's workflow is likely
to be a minor productivity boost; but although it's available in JDK 7, it will be
some time before source code can be written to take advantage of that fact. Many
libraries will still need to be compiled to run against Java 6, and any use of
automatic resource management will only be applicable to code compiled with
-target 7 (or similar). Once Java 6 is EOL and Java 8 has been released, using ARM
will become an automatic way of working.

Hmmm.....
Aug 23, 2010 10:14 by Clint Farleigh
I think I've seen this somewhere before... maybe about 5 years ago? :-)

About freaking time!!!!!!!
Aug 23, 2010 11:15 by Matt Giacomini
.
Learning from others
Aug 25, 2010 01:01 by Patrick Dreyer
One of the first lessons we as parents teach to our children is: Learn from others.
Why does this not apply to programming languages?

Try-catch-finally is about error handling and not about ARM.

Thus, why complicate try-catch-finally instead of introducing "using" as .NET does?
Have a look at msdn.microsoft.com/en-us/library/yh598w02%28VS.... - it's simple,
clear, and straightforward.
This way we could even rename "java.lang.AutoCloseable" to "java.lang.Closeable",
as it's not about auto-closing but about a clearly programmed/stated behavior being
part of "using".

It's all about taking advantage of the strengths of each language.

Note: I'm not going into a debate of .NET vs. Java - I won't.
Re: Learning from others
Aug 25, 2010 08:07 by James Watson
Try-catch-finally is about error handling and not about ARM.

I have to agree that using the try keyword for this doesn't seem very good. I have
to guess that the reason a new keyword wasn't used (such as 'using') is the fear
that existing code would not compile. Although, this did not prevent enum from
being added, which was more likely (I guess) to be used as a name in existing code.
Re: Learning from others
Aug 26, 2010 04:24 by David Birdsall
What if it could be implemented using closures? Would Coin's new syntax be able to
do something similar to this, but without the extra boilerplate:

Closeable.close(new FileInputStream("xanadu.txt")) {
public void read(FileInputStream in) {
in.read();
}
});

...where Closeable.close is a static generic method that returns an object
specifically used for closing after reading. It could even be a static import, but
you'd still have to define the read(T) override.

At least it would be re-using another feature of the language (closures) and not
introducing another keyword.

When I see the using() {} statement in C# it makes me think of blocks in languages
like Ruby/Groovy.
Re: Learning from others
Aug 27, 2010 03:29 by Rob Elliot
I'd have thought it would just be a method on Closeable. So you could do something
like:

FileInputStream inputStream = new FileInputStream("xanadu.txt");

inputStream.with({ FileInputStream in ->
in.read();
});

Potentially, if you made the instance variable final, you wouldn't even need to
pass it in as an argument to the lambda. The "with" method would call the closure
in a try block and close itself cleanly in the finally block.

It does seem odd that this should be implemented at the same time as lambdas, if
lambdas would allow it to be implemented cleanly without further new syntax.
Perhaps there's some subtlety to it that I'm missing.
