com/memory-leaks-java/
In this article, we’re going to describe the most common memory leaks, understand
their causes, and look at a few techniques to detect/avoid them. We’re also going
to use the Java YourKit profiler throughout the article, to analyze the state of
our memory at runtime.
As we can see, we have two types of objects – referenced and unreferenced; the
Garbage Collector can remove objects that are unreferenced. Referenced objects
won’t be collected, even if they’re actually no longer used by the application.
Detecting memory leaks can be difficult. A number of tools perform static analysis
to determine potential leaks, but these techniques aren’t perfect because the most
important aspect is the actual runtime behavior of the running system.
So, let’s have a focused look at some of the standard practices of preventing
memory leaks, by analyzing some common scenarios.
-Xms<size>
-Xmx<size>
These parameters specify the initial Java Heap size as well as the maximum Heap
size.
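To make the effect of these flags observable from code, the current limits can be read back through the standard Runtime API (a minimal sketch; the class name is illustrative):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory() starts near -Xms and grows up to that limit
        System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```

Running with, for example, -Xms256m -Xmx512m should show those values reflected in the printed figures.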
// list and random are static fields declared on the test class, e.g.:
// public static List<Double> list = new ArrayList<>();
// public static Random random = new Random();

@Test
public void givenStaticField_whenLotsOfOperations_thenMemoryLeak() throws InterruptedException {
    for (int i = 0; i < 1000000; i++) {
        list.add(random.nextDouble());
    }
    System.gc();
    Thread.sleep(10000); // to allow the GC to do its job
}
We created our ArrayList as a static field – which will never be collected by the
JVM Garbage Collector during the lifetime of the JVM process, even after the
calculations it was used for are done. We also invoked Thread.sleep(10000) to allow
the GC to perform a full collection and try to reclaim everything that can be
reclaimed.
Let’s run the test and analyze the JVM with our profiler:
Notice how, at the very beginning, all memory is, of course, free.
Then, in just 2 seconds, the iteration process runs and finishes – loading
everything into the list (naturally this will depend on the machine you’re running
the test on).
After that, a full garbage collection cycle is triggered, and the test continues to
execute, to allow this cycle time to run and finish. As you can see, the list is
not reclaimed and the memory consumption doesn’t go down.
Let’s now see the exact same example, only this time, the ArrayList isn’t
referenced by a static variable. Instead, it’s a local variable that gets created,
used and then discarded:
@Test
public void givenNormalField_whenLotsOfOperations_thenGCWorksFine() throws InterruptedException {
    addElementsToTheList();
    System.gc();
    Thread.sleep(10000); // to allow the GC to do its job
}
Notice how the GC is now able to reclaim some of the memory utilized by the JVM.
First, we need to pay close attention to our usage of static; declaring any
collection or heavy object as static ties its lifecycle to the lifecycle of the JVM
itself, and makes the entire object graph impossible to collect.
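One way to break this kind of leak is simply to keep the collection in a local scope, so it becomes unreachable once the work is done (a hedged sketch; the class and method names are illustrative, not from the original article):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class NoStaticLeak {

    // No static field: the list lives only inside this method's frame
    public static double doCalculations() {
        List<Double> list = new ArrayList<>();
        Random random = new Random();
        for (int i = 0; i < 1_000_000; i++) {
            list.add(random.nextDouble());
        }
        // once this method returns, the whole list is eligible for collection
        return list.get(0);
    }
}
```

If a static reference is genuinely required, nulling it out after the calculations finish achieves the same effect.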
@Test
public void givenLengthString_whenIntern_thenOutOfMemory() throws IOException, InterruptedException {
    Thread.sleep(15000);
    String str = new Scanner(new File("src/test/resources/large.txt"), "UTF-8")
      .useDelimiter("\\A").next();
    str.intern();
    System.gc();
    Thread.sleep(15000);
}
Here, we simply try to load a large text file into running memory and then return a
canonical form, using .intern().
The intern API will place the str String in the JVM memory pool – where it can’t be
collected – and again, this will cause the GC to be unable to free up enough
memory:
We can clearly see that for the first 15 seconds the JVM is stable; then we load the
file and the JVM performs a garbage collection (around the 20-second mark).
Finally, str.intern() is invoked, which leads to the memory leak – the stable
line indicates high heap memory usage that will never be released.
-XX:MaxPermSize=<size>
The first solution is simply to increase the PermGen space with the parameter above.
The second solution is to use Java 8 – where the PermGen space is replaced by the
Metaspace – which won’t lead to an OutOfMemoryError when using intern on Strings:
Finally, there are also several options for avoiding the .intern() API on Strings.
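One common alternative to .intern() is deduplicating strings through an ordinary map, which lives on the regular heap and can be discarded when no longer needed (an illustrative sketch, not from the original article):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StringDeduplicator {

    private final Map<String, String> pool = new ConcurrentHashMap<>();

    // Returns a canonical instance, like intern(), but the pool itself
    // becomes garbage-collectable once this deduplicator goes out of scope.
    public String dedupe(String s) {
        String existing = pool.putIfAbsent(s, s);
        return existing != null ? existing : s;
    }
}
```

Separately, on recent JVMs the G1 collector's -XX:+UseStringDeduplication option can reduce duplicate string storage automatically, without any application-level pooling.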
@Test(expected = OutOfMemoryError.class)
public void givenURL_whenUnclosedStream_thenOutOfMemory() throws IOException, URISyntaxException {
    String str = "";
    URLConnection conn = new URL("http://norvig.com/big.txt").openConnection();
    BufferedReader br = new BufferedReader(
      new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8));
    //
}
Let’s see how the memory of the application looks when loading a large file from a
URL:
As we can see, the heap usage is gradually increasing over time – which is the
direct impact of the memory leak caused by not closing the stream.
Let’s dig a bit deeper into this scenario because it’s not as clear-cut as the
rest. Technically, an unclosed stream results in two types of leaks – a
low-level resource leak and a memory leak.
The low-level resource leak is simply the leak of an OS-level resource – such as
file descriptors, open connections, etc. These resources can also leak, just like
memory does.
Of course, the JVM uses memory to keep track of these underlying resources as well,
which is why this also results in a memory leak.
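The fix in both cases is to close the stream deterministically; since Java 7, the idiomatic way to do that is try-with-resources (a sketch reusing the same URL-reading idea as above; the helper method name is illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;

public class StreamReaderExample {

    public static String readAll(URLConnection conn) throws IOException {
        // br (and the underlying connection stream) is closed on every exit path,
        // including when readLine() throws
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = br.readLine()) != null) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        }
    }
}
```

With the stream closed promptly, both the file descriptor and the JVM-side bookkeeping memory are released as soon as the read finishes.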
@Test(expected = OutOfMemoryError.class)
public void givenConnection_whenUnclosed_thenOutOfMemory() throws IOException, URISyntaxException {
    //
}
The URLConnection remains open, and the result is, predictably, a memory leak:
Notice how the Garbage Collector cannot do anything to release unused, but
referenced memory. The situation is immediately clear after the 1st minute – the
number of GC operations rapidly decreases, causing increased Heap memory use, which
leads to the OutOfMemoryError.
Specifically, when we start adding duplicate objects into a Set – this will only
ever grow, instead of ignoring duplicates as it should. We also won’t be able to
remove these objects, once added.
@Test(expected = OutOfMemoryError.class)
public void givenMap_whenNoEqualsNoHashCodeMethods_thenOutOfMemory() throws IOException, URISyntaxException {
    Map<Object, Object> map = System.getProperties();
    while (true) {
        map.put(new Key("key"), "value");
    }
}
This simple implementation will lead to the following scenario at runtime:
Notice how the garbage collector stopped being able to reclaim memory around 1:40,
and notice the memory leak; the number of GC collections dropped almost four times
immediately after.
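The remedy is to give Key proper equals() and hashCode() implementations so that logically equal keys collapse to a single map entry (a minimal sketch; the original article's Key class is not reproduced in full here):

```java
import java.util.Objects;

public class Key {

    private final String value;

    public Key(String value) {
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Key)) return false;
        // two Keys are equal when their values are equal
        return Objects.equals(value, ((Key) o).value);
    }

    @Override
    public int hashCode() {
        // equal values must produce equal hash codes
        return Objects.hash(value);
    }
}
```

With this in place, map.put(new Key("key"), "value") overwrites the same entry on every iteration instead of growing the map without bound.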
One tool worth mentioning here is Project Lombok – it provides default
implementations via annotations, e.g. @EqualsAndHashCode.
Let’s see which techniques can help you in addition to standard profiling.
3.2. Do Profiling
The second technique is the one we’ve been using throughout this article – and
that’s profiling. The most popular profiler is VisualVM – which is a good place to
start moving past command-line JDK tools and into lightweight profiling.
In this article, we used another profiler – YourKit – which has some additional,
more advanced features compared to VisualVM.
Simply put – review your code thoroughly, practice regular code reviews and make
good use of static analysis tools to help you understand your code and your system.
Conclusion
In this tutorial, we had a practical look at how memory leaks happen on the JVM.
Understanding how these scenarios happen is the first step in the process of
dealing with them.
Then, having the techniques and tools to really see what’s happening at runtime, as
the leak occurs, is critical as well. Static analysis and careful code-focused
reviews can only do so much, and – at the end of the day – it’s the runtime that
will show you the more complex leaks that aren’t immediately identifiable in the
code.
http://www.oracle.com/technetwork/articles/java/trywithresources-401775.html
This article presents the Java 7 answer to the automatic resource management
problem in the form of a new language construct, proposed as part of Project Coin,
called the try-with-resources statement.
Introduction
The typical Java application manipulates several types of resources such as files,
streams, sockets, and database connections. Such resources must be handled with
great care, because they acquire system resources for their operations. Thus, you
need to ensure that they get freed even in case of errors. Indeed, incorrect
resource management is a common source of failures in production applications, with
the usual pitfalls being database connections and file descriptors remaining opened
after an exception has occurred somewhere else in the code. This leads to
application servers being frequently restarted when resource exhaustion occurs,
because operating systems and server applications generally have an upper-bound
limit for resources.
Correct practices for the management of resources and exceptions in Java have been
well documented. For any resource that was successfully initialized, a
corresponding invocation to its close() method is required. This requires
disciplined usage of try/catch/finally blocks to ensure that any execution path
from a resource opening eventually reaches a call to a method that closes it.
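Concretely, the disciplined pattern the paragraph refers to looks something like this for a single stream (a sketch; note how much boilerplate it already requires, and how quickly it grows once several resources are involved):

```java
import java.io.FileOutputStream;
import java.io.IOException;

public class ManualClose {

    public static void write(String path, byte[] data) throws IOException {
        FileOutputStream out = new FileOutputStream(path);
        try {
            out.write(data);
        } finally {
            // runs on every execution path, including when write() throws
            out.close();
        }
    }
}
```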
Static analysis tools, such as FindBugs, are of great help in identifying such
errors. Yet often, both inexperienced and experienced developers get resource
management code wrong, resulting at best in resource leaks.
However, it should be acknowledged that writing correct code for resources requires
lots of boilerplate code in the form of nested try/catch/finally blocks, as we will
see. Writing such code correctly quickly becomes a problem of its own. Meanwhile,
other programming languages, such as Python and Ruby, have been offering language-
level facilities known as automatic resource management to address this issue.
This article presents the Java Platform, Standard Edition (Java SE) 7 answer to the
automatic resource management problem in the form of a new language construct
proposed as part of Project Coin and called the try-with-resources statement. As we
will see, it goes well beyond being just more syntactic sugar, like the enhanced
for loops of Java SE 5. Indeed, exceptions can mask each other, making the
identification of root problem causes sometimes hard to debug.
The article starts with an overview of resource and exception management before
introducing the essentials of the try-with-resources statement from the Java
developer point of view. It then shows how a class can be made ready for supporting
such statements. Next, it discusses the issues of exception masking and how Java SE
7 evolved to fix them. Finally, it demystifies the syntactic sugar behind the
language extension and provides a discussion and a conclusion.
Note: The source code for the examples described in this article can be downloaded
here: sources.zip
As an example, we could have added an output stream for compressing data between a
DataOutputStream and a FileOutputStream. When a stream is closed, it also closes
the stream that it is decorating. Going back again to the example, when close() is
called on the instance of DataOutputStream, so is the close() method from
FileOutputStream.
There is, however, a serious issue in this method regarding the call to the close()
method. Suppose an exception is thrown while writing the integer or the string
because the underlying file system is full. Then, the close() method has no chance
of being called.
This issue is mostly harmless in the case of short-lived programs, but it could
lead to an entire server having to be restarted in the case of long-running
applications, as found on Java Platform, Enterprise Edition (Java EE) application
servers, because the maximum number of open file descriptors permitted by the
underlying operating system would be reached.
def writing_in_ruby
  File.open('rdata', 'w') do |f|
    f.write(666)
    f.write("Hello")
  end
end
And it would be written like this in Python:
def writing_in_python():
    with open("pdata", "w") as f:
        f.write(str(666))
        f.write("Hello")
In Ruby, the File.open method takes a block of code to be executed, and ensures
that the opened file is closed even if the block’s execution emits an exception.
The Python example is similar in that the special with statement takes an object
that has a close method and a code block. Again, it ensures proper resource closing
no matter if an exception is thrown or not.
try (
    FileOutputStream out = new FileOutputStream("output");
    FileInputStream in1 = new FileInputStream("input1");
    FileInputStream in2 = new FileInputStream("input2")
) {
    // Do something useful with those 3 streams!
} // out, in1 and in2 will be closed in any case
Finally, such a try-with-resources statement may be followed by catch and finally
blocks, just like regular try statements prior to Java SE 7.
Such close() methods have been retro-fitted into many classes of the standard Java
SE run-time environment, including the java.io, java.nio, javax.crypto,
java.security, java.util.zip, java.util.jar, javax.net, and java.sql packages. The
major advantage of this approach is that existing code continues working just as
before, while new code can easily take advantage of the try-with-resources
statement.
@Override
public void close() {
    System.out.println(">>> close()");
    throw new RuntimeException("Exception in close()");
}

public MyException() {
    super();
}
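These fragments belong to a small AutoClose example whose full source is not reproduced in this excerpt; a plausible reconstruction, consistent with the stack traces below, is the following sketch (line numbers will not match the traces exactly, and the message-taking MyException constructor is an assumption inferred from the output):

```java
public class AutoClose implements AutoCloseable {

    public void work() throws MyException {
        System.out.println(">>> work()");
        throw new MyException("Exception in work()");
    }

    @Override
    public void close() {
        System.out.println(">>> close()");
        throw new RuntimeException("Exception in close()");
    }

    public static void main(String[] args) {
        // close() is called automatically; its exception is attached
        // to the MyException from work() as a suppressed exception
        try (AutoClose autoClose = new AutoClose()) {
            autoClose.work();
        } catch (MyException e) {
            e.printStackTrace();
        }
    }
}

class MyException extends Exception {
    public MyException(String message) {
        super(message);
    }
}
```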
>>> work()
>>> close()
MyException: Exception in work()
    at AutoClose.work(AutoClose.java:11)
    at AutoClose.main(AutoClose.java:16)
    Suppressed: java.lang.RuntimeException: Exception in close()
        at AutoClose.close(AutoClose.java:6)
        at AutoClose.main(AutoClose.java:17)
The output clearly proves that close() was indeed called before entering the catch
block that should handle the exception. Yet, the Java developer discovering Java SE
7 might be surprised to see the exception stack trace line prefixed by “Suppressed:
(…)”. It matches the exception thrown by the close() method, but you could never
encounter such a form of stack trace prior to Java SE 7. What is going on here?
Exception Masking
To understand what happened in the previous example, let us get rid of the
try-with-resources statement for a moment, and manually rewrite the resource
management code correctly. First, let us extract the following static method to be
invoked by the main method:
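The extracted method itself is not shown in this excerpt; a sketch consistent with the output below is the following self-contained reconstruction (class and method names follow the traces; line numbers will differ):

```java
public class MaskingDemo {

    static class MyException extends Exception {
        MyException(String m) { super(m); }
    }

    static class AutoClose {
        void work() throws MyException {
            System.out.println(">>> work()");
            throw new MyException("Exception in work()");
        }
        void close() {
            System.out.println(">>> close()");
            throw new RuntimeException("Exception in close()");
        }
    }

    // idiomatic pre-Java 7 cleanup: the RuntimeException from close()
    // replaces (masks) the MyException thrown by work()
    static void runWithMasking() throws Exception {
        AutoClose autoClose = new AutoClose();
        try {
            autoClose.work();
        } finally {
            autoClose.close();
        }
    }
}
```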
>>> work()
>>> close()
java.lang.RuntimeException: Exception in close()
    at AutoClose.close(AutoClose.java:6)
    at AutoClose.runWithMasking(AutoClose.java:19)
    at AutoClose.main(AutoClose.java:52)
This code, which is idiomatic for proper resource management prior to Java SE 7,
shows the problem of one exception being masked by another exception. Indeed, the
client code to the runWithMasking() method is notified of an exception being thrown
in the close() method, while in reality, a first exception had been thrown in the
work() method.
However, only one exception can be thrown at a time, meaning that even correct code
misses information while handling exceptions. Developers lose significant time
debugging when a main exception is masked by a further exception being thrown while
closing a resource. The astute reader could question such claims, because
exceptions can be nested, after all. However, nested exceptions should be used for
causality between one exception and another, typically to wrap a low-level
exception within one aimed at higher layers of an application architecture. A good
example is a JDBC driver wrapping a socket exception into a JDBC exception. Here,
there are really two exceptions: one in work() and one in close(), and there is
absolutely no causality relationship between them.
Going back to the previous runWithMasking() method, let us rewrite it with support
for suppressed exceptions in mind:
Entering the finally block, the reference to the primary exception is checked. If
an exception was thrown, the exception that the close() method may throw would be
attached to it as a suppressed exception. Otherwise, the close() method is invoked,
and if it throws an exception, then it actually is the primary exception, thus not
masking another one.
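The logic just described maps directly onto the Throwable.addSuppressed() API introduced in Java SE 7; a self-contained sketch consistent with the output below (line numbers will differ from the traces):

```java
public class SuppressDemo {

    static class MyException extends Exception {
        MyException(String m) { super(m); }
    }

    static class AutoClose {
        void work() throws MyException {
            System.out.println(">>> work()");
            throw new MyException("Exception in work()");
        }
        void close() {
            System.out.println(">>> close()");
            throw new RuntimeException("Exception in close()");
        }
    }

    static void runWithoutMasking() throws Exception {
        AutoClose autoClose = new AutoClose();
        Throwable primary = null;
        try {
            autoClose.work();
        } catch (Throwable t) {
            primary = t;
            throw t;
        } finally {
            if (primary != null) {
                // a close() failure must not mask the primary exception
                try {
                    autoClose.close();
                } catch (Throwable suppressed) {
                    primary.addSuppressed(suppressed);
                }
            } else {
                // no primary exception: a close() failure is itself primary
                autoClose.close();
            }
        }
    }
}
```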
>>> work()
>>> close()
MyException: Exception in work()
    at AutoClose.work(AutoClose.java:11)
    at AutoClose.runWithoutMasking(AutoClose.java:27)
    at AutoClose.main(AutoClose.java:58)
    Suppressed: java.lang.RuntimeException: Exception in close()
        at AutoClose.close(AutoClose.java:6)
        at AutoClose.runWithoutMasking(AutoClose.java:34)
        ... 1 more
As you can see, we have manually reproduced the behavior of the earlier
try-with-resources statement.
Let us now consider another example, this time involving three resources:
Discussion
At the end of the day, try-with-resources statements are syntactic sugar just like
the enhanced for loops introduced in Java SE 5 for expanding loops over iterators.
Conclusion
This article introduced a new language construct in Java SE 7 for the safe
management of resources. This extension has more impact than being just yet more
syntactic sugar. Indeed, it generates correct code on behalf of the developer,
eliminating the need to write boilerplate code that is easy to get wrong. More
importantly, this change has been accompanied with evolutions to attach one
exception to another, thus providing an elegant solution to the well-known problem
of exceptions masking each other.
See Also
Here are some additional resources:
Julien Ponge is a long-time open source craftsman. He created the IzPack installer
framework and has participated in several other projects, including the GlassFish
application server in cooperation with Sun Microsystems. Holding a Ph.D. in
computer science from UNSW Sydney and UBP Clermont-Ferrand, he is currently an
associate professor in Computer Science and Engineering at INSA de Lyon and a
researcher as part of the INRIA Amazones team. Speaking both industrial and
academic languages, he is highly motivated in further developing synergies between
those worlds.
https://dzone.com/articles/4-techniques-for-writing-better-java
This article explores four techniques that can be introduced into a code-base to
improve both ease of development and readability. Not all of these techniques will
be applicable in every situation, or even most. For example, there may be only a
few methods that lend themselves to covariant return types, or only a few generic
classes that fit the pattern for intersectional generic types, while others, such
as final methods and classes and try-with-resources blocks, will improve the
readability and clarity of intention of most code-bases. In either case, it is
important not only to know that these techniques exist, but to know when to apply
them judiciously.
While this is a technique commonly used in many Java applications, there is a less
well-known action that can be taken when overriding a method: Altering the return
type. Although this may appear to be an open-ended way to override a method, there
are some serious constraints on the return type of an overridden method. According
to the Java SE 8 Language Specification (p. 248):
Although the original return type of clone() is Object, we are able to call
getModel() on our cloned Vehicle (without an explicit cast) because we have
overridden the return type of Vehicle::clone to be Vehicle. This removes the need
for messy casts, where we know that the return type we are looking for is a
Vehicle, even though it is declared to be an Object (which amounts to a safe cast
based on a priori information but is strictly speaking unsafe):
Note that we can still declare the type of the vehicle to be an Object, and the
return type would revert to its original type of Object:
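A sketch of the Vehicle example being discussed (the class itself is not reproduced in this excerpt; the field name is illustrative):

```java
public class Vehicle implements Cloneable {

    private final String model;

    public Vehicle(String model) {
        this.model = model;
    }

    public String getModel() {
        return model;
    }

    // Covariant return type: narrowed from Object to Vehicle,
    // so callers need no cast
    @Override
    public Vehicle clone() {
        try {
            return (Vehicle) super.clone();
        } catch (CloneNotSupportedException e) {
            // cannot happen: this class implements Cloneable
            throw new AssertionError(e);
        }
    }
}
```

With this override, new Vehicle("sedan").clone().getModel() compiles without an explicit cast, while code that declares the variable as Object still sees the inherited Object-returning signature.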
Note that the return type cannot be narrowed with respect to a generic type
parameter, but it can be with respect to the generic class itself. For example, if
the base class or interface method returns a List<Animal>, the return type of a
subclass may be overridden to ArrayList<Animal>, but it may not be overridden to
List<Dog>.
We can now traverse a tree of Writers, not knowing whether the specific Writer we
encounter is a standalone Writer (a leaf) or a collection of Writers (a composite).
What if we also wanted our composite to act as a composite for readers as well as
writers? For example, if we had the following interface
Although this does accomplish our goal, we have created bloat in our code: We
created an interface with the sole purpose of merging two existing interfaces
together. With more and more interfaces, we can start to see a combinatoric
explosion of bloat. For example, if we create a new Modifier interface, we would
now need to create ReaderModifier, WriterModifier, and ReaderWriter interfaces.
Notice that these interfaces do not add any functionality: They simply merge
existing interfaces.
Without bloating our inheritance tree, we are now able to constrain our generic
type parameter to implement multiple interfaces. Note that the same constraint can
be specified if one of the interfaces is an abstract class or concrete class. For
example, if we changed our Writer interface into an abstract class resembling the
following
We can still constrain our generic type parameter to be both a Reader and a Writer,
but the Writer (since it is an abstract class and not an interface) must be
specified first (also note that our ReaderWriterComposite now extends the Writer
abstract class and implements the Reader interface, rather than implementing both):
It is also important to note that this intersectional generic type can be used for
more than two interfaces (or one abstract class and more than one interface). For
example, if we wanted our composite to also include the Modifier interface, we
could write our class definition as follows:
public class ReaderWriterComposite<T extends Reader & Writer & Modifier>
  implements Reader, Writer, Modifier {

    private final List<T> things;

    public ReaderWriterComposite(List<T> things) {
        this.things = things;
    }

    @Override
    public void write() {
        for (Writer writer : this.things) {
            writer.write();
        }
    }

    @Override
    public void read() {
        for (Reader reader : this.things) {
            reader.read();
        }
    }

    @Override
    public void modify() {
        for (Modifier modifier : this.things) {
            modifier.modify();
        }
    }
}
Although it is legal to perform the above, this may be a sign of a code smell (an
object that is a Reader, a Writer, and a Modifier is likely to be something much
more specific, such as a File).
For more information on intersectional generic types, see the Java 8 language
specification.
3. Auto-Closeable Classes
Creating a resource class is a common practice, but maintaining the integrity of
that resource can be a challenging prospect, especially when exception handling is
involved. For example, suppose we create a resource class, Resource, and want to
perform an action on that resource that may throw an exception (the instantiation
process may also throw an exception):
In either case (if the exception is thrown or not thrown), we want to close our
resource to ensure there are no resource leaks. The normal process is to enclose
our close() method in a finally block, ensuring that no matter what happens, our
resource is closed before the enclosed scope of execution is completed:
By simple inspection, there is a lot of boilerplate code that detracts from the
readability of the execution of someAction() on our Resource object. To remedy this
situation, Java 7 introduced the try-with-resources statement, whereby a resource
can be created in the try statement and is automatically closed before the try
execution scope is left. For a class to be able to use the try-with-resources, it
must implement the AutoCloseable interface:
With our Resource class now implementing the AutoCloseable interface, we can clean
up our code to ensure our resource is closed prior to leaving the try execution
scope:
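The Resource class under discussion is not reproduced in this excerpt; a sketch consistent with the two output runs shown below (first without, then with a thrown exception) might look like this (the someAction parameter is an illustrative way to toggle the failure):

```java
public class Resource implements AutoCloseable {

    public Resource() {
        System.out.println("Created resource");
    }

    public void someAction(boolean fail) throws Exception {
        System.out.println("Performed some action");
        if (fail) {
            throw new Exception("Forced failure");
        }
    }

    @Override
    public void close() {
        // signature narrowed from "throws Exception", so callers of close()
        // need no extra exception handling
        System.out.println("Closed resource");
    }

    public static void main(String[] args) {
        try (Resource resource = new Resource()) {
            resource.someAction(true);
        } catch (Exception e) {
            // close() has already run by the time we get here
            System.out.println("Exception caught");
        }
    }
}
```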
Created resource
Performed some action
Closed resource
Created resource
Performed some action
Closed resource
Exception caught
Notice that even though an Exception was thrown while executing the someAction()
method, our resource was closed and then the Exception was caught. This ensures
that prior to leaving the try execution scope, our resource is guaranteed to be
closed. It is also important to note that a resource can implement the Closeable
interface and still use a try-with-resources statement. The difference between
implementing the AutoCloseable interface and the Closeable interface is a matter of
the type of the exception thrown from the close() method signature: Exception and
IOException, respectively. In our case, we have simply changed the signature of the
close() method to not throw an exception.
Now, if another class wishes to override either the read or the write methods, a
compilation error is thrown: Cannot override the final method from File. Not only
have we documented that our methods should not be overridden, but the compiler has
also ensured that this intention is enforced at compile time.
Expanding this idea to an entire class, there may be times when we do not want a
class we create to be extended. Not only does this make every method of our class
non-extendable, but it also ensures that no subtype of our class can ever be
created. For example, if we are creating a security framework that consumes a key
generator, we may not want any outside developer to extend our key generator and
override the generation algorithm (the custom functionality may be
cryptographically inferior and compromise the system):
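A sketch of such a closed-for-extension generator (the class and the generation algorithm here are illustrative, not taken from the original article):

```java
import java.security.SecureRandom;

// final: no subclass can substitute a weaker generation algorithm
// or alter the behavior of any method this class relies on
public final class KeyGenerator {

    private final SecureRandom random = new SecureRandom();

    public byte[] generate(int lengthInBytes) {
        byte[] key = new byte[lengthInBytes];
        random.nextBytes(key);
        return key;
    }
}
```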
By making our KeyGenerator class final, the compiler will ensure that no class can
extend our class and pass itself to our framework as a valid cryptographic key
generator. While it may appear to be sufficient to simply mark the generate()
method as final, this does not stop a developer from creating a custom key
generator and passing it off as a valid generator. Being that our system is
security-oriented, it is a good idea to be as distrustful of the outside world as
possible (a clever developer might be able to change the generation algorithm by
changing the functionality of other methods in the KeyGenerator class if those
methods were left open to overriding).
Although this appears to be a blatant disregard for the Open/Closed Principle (and
it is), there is a good reason for doing so. As can be seen in our security example
above, there are many times where we do not have the luxury of allowing the outside
world to do what it wants with our application and we must be very deliberate in
our decision making about inheritance. Writers such as Josh Bloch even go so far as
to say that a class should either be deliberately designed to be extended or else
it should be explicitly closed for extension (Effective Java). Although he
purposely overstated this idea (see Documenting for Inheritance or Disallowing It),
he makes a great point: We should be very deliberate about which of our classes
should be extended, and which of our methods are open for overriding.
Conclusion
While most of the code we write utilizes only a fraction of the capabilities of
Java, it suffices to solve most of the problems that we encounter. There are times
though that we need to dig a little deeper into the language and dust off those
forgotten or unknown parts of the language to solve a specific problem. Some of
these techniques, such as covariant return types and intersectional generic types,
may be used in one-off situations, while others, such as auto-closeable resources
and final methods and classes, can and should be used more often to produce more
readable and more precise code. Combining these techniques with daily programming
practices aids not only in a better understanding of our intentions but also in
better, more well-written Java.
http://www.javapractices.com/topic/TopicAction.do?Id=43
Recovering resources
input-output streams
database result sets, statements, and connections
threads
graphic resources
sockets
Resources which are created locally within a method must be cleaned up within the
same method, by calling a method appropriate to the resource itself, such as close
or dispose. (The exact name of the method is arbitrary, but it usually takes one of
those conventional names.) This is usually done automatically, using the
try-with-resources feature, added in JDK 7.
If try-with-resources isn't available, then you need to clean up resources
explicitly, by calling a clean-up method in a finally clause.
For the case of a resource which is a field, however, there's more work to do:
implement a clean-up method which the user must call when finished with the object,
with a name such as close or dispose
the caller should be able to query an object to see if its clean-up method has been
executed
non-private methods (other than the clean-up method itself) should throw an
IllegalStateException if the clean-up method has already been invoked
as a safety net, implement finalize to call the clean-up method as well; if the
user of the class neglects to call the clean-up method, then this may allow
recovery of the resource by the system
never rely solely on finalize
This example shows a class which retains a database connection during its lifetime.
(This example is artificial. Actually writing such a class would not seem necessary
in practice, since connection pools already perform such clean-up in the
background. It's used merely to demonstrate the ideas mentioned above.)
import java.sql.*;
import java.text.*;
import java.util.*;

/**
* This class has an enforced life cycle: after destroy is
* called, no useful method can be called on this object
* without throwing an IllegalStateException.
*/
public final class DbConnection {

  public DbConnection() {
    //build a connection and assign it to a field
    //elided.. fConnection = ConnectionPool.getInstance().getConnection();
  }

  /**
  * Ensure the resources of this object are cleaned up in an orderly manner.
  *
  * The user of this class must call destroy when finished with
  * the object. Calling destroy a second time is permitted, but is
  * a no-operation.
  */
  public void destroy() throws SQLException {
    if (fIsDestroyed) {
      return;
    }
    else {
      if (fConnection != null) fConnection.close();
      fConnection = null;
      //flag that destroy has been called, and that
      //no further calls on this object are valid
      fIsDestroyed = true;
    }
  }

  /**
  * Fetches something from the db.
  *
  * This is an example of a non-private method which must ensure that
  * <code>destroy</code> has not yet been called
  * before proceeding with execution.
  */
  synchronized public Object fetchBlah(String aId) throws SQLException {
    validatePlaceInLifeCycle();
    //..elided
    return null;
  }

  /**
  * If the user fails to call <code>destroy</code>, then implementing
  * finalize will act as a safety net, but this is not foolproof.
  */
  protected void finalize() throws Throwable {
    try {
      destroy();
    }
    finally {
      super.finalize();
    }
  }

  /**
  * Allow the user to determine if <code>destroy</code> has been called.
  */
  public boolean isDestroyed() {
    return fIsDestroyed;
  }

  // PRIVATE

  /**
  * Connection which is constructed and managed by this object.
  * The user of this class must call destroy in order to release this
  * Connection resource.
  */
  private Connection fConnection;

  /**
  * This object has a specific "life cycle", such that methods must be called
  * in the order: others + destroy. fIsDestroyed keeps track of the life cycle,
  * and non-private methods must check this value at the start of execution.
  * If destroy is called more than once, a no-operation occurs.
  */
  private boolean fIsDestroyed;

  /**
  * Once <code>destroy</code> has been called, the services of this class
  * are no longer available.
  *
  * @throws IllegalStateException if <code>destroy</code> has
  * already been called.
  */
  private void validatePlaceInLifeCycle() {
    if (fIsDestroyed) {
      String message = "Method cannot be called after destroy has been called.";
      throw new IllegalStateException(message);
    }
  }
}
https://www.ibm.com/developerworks/library/j-jtp03216/index.html
Brian Goetz
Published on March 21, 2006
Our parents used to remind us to put our toys away when we were done with them. If
you look closely enough, the motivation for such nagging was probably not so much
an abstract desire to keep things clean as much as the practical limitation that
there is only so much floor space in the house, and if it is covered with toys, it
can't be used for other things -- like walking around.
Given enough space, the motivation to clean up one's mess is lessened. Arlo
Guthrie's famous ballad Alice's Restaurant Massacree illustrates this point:
Havin' all that room, seein' as how they took out all the pews, they decided that
they didn't have to take out their garbage ... for a long time.
For better or worse, garbage collection can make us a little sloppy about cleaning
up after ourselves.
On the other hand, nonmemory resources like file handles and socket handles must be
explicitly released by the program, using methods with names like close(),
destroy(), shutdown(), or release(). Some classes, such as the file handle stream
implementations in the platform class library, provide finalizers as a "safety net"
so that if the program forgets to release the resource, the finalizer can still do
the job when the garbage collector determines that the program is finished with it.
But even though file handles provide finalizers to clean up after you if you
forget, it is still better to close them explicitly when you are done with them.
Doing so closes them much earlier than they otherwise would be, reducing the chance
of resource exhaustion.
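Since this article was written, Java 7 added try-with-resources, which performs that explicit close automatically the moment the block exits, normally or exceptionally. A minimal sketch (the file name is hypothetical):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class ExplicitClose {
    public static String firstLine(String path) throws IOException {
        // try-with-resources (Java 7+) closes the reader as soon as the
        // block exits -- no finalizer, no resource held until GC.
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        // Write a hypothetical demo file, closing it explicitly as well.
        try (FileWriter out = new FileWriter("demo.txt")) {
            out.write("hello\n");
        }
        System.out.println(firstLine("demo.txt"));
    }
}
```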
For some resources, waiting until finalization to release them is not an option.
For virtual resources like lock acquisitions and semaphore permits, a Lock or
Semaphore is not likely to get garbage collected until it is too late; for
resources like database connections, you would surely run out of resources if you
waited for finalization. Many database servers only accept a certain number of
connections, based on licensed capacity. If a server application were to open a new
database connection for each request and then just drop it on the floor when done,
the database would likely reach its capacity long before the no-longer-needed
connections were closed by the finalizer.
In the easiest case, the resource is acquired, used, and hopefully released in the
same method call, such as the loadPropertiesBadly() method in Listing 1:
The reason this "solution" doesn't work is that the close() methods of ResultSet
and Statement can themselves throw SQLException, which could cause the later
close() statements in the finally block not to execute. That leaves you with
several choices, all of which are annoying: wrap each close() with a try..catch
block, nest the try...finally blocks as shown in Listing 4, or write some sort of
mini-framework for managing the resource acquisition and release.
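Listing 4 is not reproduced here, but the nested try...finally shape can be sketched with stand-in resources so it runs without a database; the class and method names are illustrative. In real JDBC code the outer resource would be the Statement and the inner one the ResultSet:

```java
// Demonstrates why nesting try...finally guarantees that every close()
// runs: the inner finally runs even if the work throws, and the outer
// finally runs even if the inner close() throws.
public class NestedFinallyDemo {
    static final StringBuilder log = new StringBuilder();

    // Stand-in for a Statement or ResultSet.
    static class Resource {
        final String name;
        Resource(String name) { this.name = name; }
        void close() { log.append("closed:").append(name).append(";"); }
    }

    static void useResources() {
        Resource outer = new Resource("statement");
        try {
            Resource inner = new Resource("resultSet");
            try {
                throw new RuntimeException("work failed"); // simulate a failure mid-use
            } finally {
                inner.close(); // still runs despite the exception
            }
        } finally {
            outer.close(); // still runs, even if inner.close() had thrown
        }
    }

    public static void main(String[] args) {
        try { useResources(); } catch (RuntimeException expected) { }
        System.out.println(log); // both resources were closed
    }
}
```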
The problem with LeakyBoundedSet doesn't necessarily jump out immediately: What if
Set.add() throws an exception? This scenario could happen because of a flaw in the
Set implementation, or a flaw in the equals() or hashCode() implementation (or the
compareTo() implementation, in the case of a SortedSet) for the element being
added, or an element already in the Set. The solution, of course, is to use finally
to release the semaphore permit; an easy enough -- but all-too-often-forgotten --
approach. These types of mistakes are rarely disclosed during testing, making them
time bombs waiting to go off. Listing 6 shows a more reliable implementation of
BoundedSet:
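Listing 6 is also not reproduced here; the following is a sketch of the idea, modeled on the BoundedHashSet example from Goetz's Java Concurrency in Practice. The key point is that the semaphore permit is released in a finally block whenever the element was not actually added, even if set.add() throws:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.Semaphore;

// A bounded set that cannot leak permits: a permit is acquired before the
// add, and released in finally if the element did not actually go in
// (duplicate element, or an exception from add/equals/hashCode).
public class BoundedHashSet<T> {
    private final Set<T> set;
    private final Semaphore sem;

    public BoundedHashSet(int bound) {
        this.set = Collections.synchronizedSet(new HashSet<T>());
        this.sem = new Semaphore(bound);
    }

    public boolean add(T o) throws InterruptedException {
        sem.acquire();                   // reserve capacity first
        boolean wasAdded = false;
        try {
            wasAdded = set.add(o);
            return wasAdded;
        } finally {
            if (!wasAdded) {
                sem.release();           // not added: give the permit back
            }
        }
    }

    public boolean remove(Object o) {
        boolean wasRemoved = set.remove(o);
        if (wasRemoved) {
            sem.release();               // element gone, capacity freed
        }
        return wasRemoved;
    }
}
```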
Resources with arbitrary lifecycles are almost certainly going to be stored in (or
reachable from) a global collection somewhere. To avoid resource leaks, it is
therefore critical to identify when the resource is no longer needed and remove it
from this global collection. (A previous article, "Plugging memory leaks with weak
references," offers some helpful techniques.) At this point, because you know the
resource is about to be released, any nonmemory resources associated with the
resource can also be released at this time.
Resource ownership
A key technique for ensuring timely resource release is to maintain a strict
hierarchy of ownership; with ownership comes the responsibility to release the
resource. If an application creates a thread pool and the thread pool creates
threads, the threads are resources that must be released (allowed to terminate)
before the program can exit. But the application doesn't own the threads; the
thread pool does, and therefore the thread pool must take responsibility for
releasing them. Of course, it can't release them until the thread pool itself is
released by the application.
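This ownership chain is visible with a standard ExecutorService: the application releases the pool via shutdown(), and the pool, as owner, then releases (terminates) its threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolOwnership {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            final int task = i;
            pool.execute(() -> System.out.println("task " + task));
        }
        // The application releases the pool; the pool, as owner of its
        // threads, allows them to terminate once queued work is done.
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("terminated=" + pool.isTerminated());
    }
}
```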
Finalizers
If the platform libraries provide finalizers for cleaning up open file handles,
which greatly reduces the risk of forgetting to close them explicitly, why aren't
finalizers used more often? There are a number of reasons, foremost of which is
that finalizers are very tricky to write correctly (and very easy to write
incorrectly). Not only is it difficult to code them correctly, but the timing of
finalization is not deterministic, and there is no guarantee that finalizers will
ever even run. And finalization adds overhead to instantiation and garbage
collection of finalizable objects. Don't rely on finalizers as the primary means of
releasing resources.
Summary
Garbage collection does an awful lot of the cleanup for us, but some resources
still require explicit release, such as file handles, socket handles, threads,
database connections, and semaphore permits. We can often get away with using
finally blocks to release a resource if its lifetime is tied to that of a specific
call frame, but longer-lived resources require a strategy for ensuring their
eventual release. For any object that may directly or indirectly own an object that
requires explicit release, you must provide lifecycle methods -- close(),
release(), destroy(), and the like -- to ensure reliable cleanup.
https://stackoverflow.com/questions/1567979/how-to-free-memory-in-java
Is there a way to free memory in Java, similar to C's free() function? Or is
setting the object to null and relying on GC the only option?
java garbage-collection
asked Oct 14 '09 at 17:58 by Felix
Ok... let's get one thing straight. Just because you think something is bad
practice and not something to encourage doing, does not make it worthy of a vote
down. This is a clear and valid question, asking if there is a way to release
memory in Java with out relying on garbage collection. While it may be discouraged
and generally not useful or a good idea, you cannot know that there are not
scenarios where it may be required with out knowing what Felix knows. Felix may not
even be planning on using it. He may just want to know if it's possible. It, in no
way, deserves a vote down. – Daniel Bingham Oct 14 '09 at 18:05
For clarification, that's aimed at whomever voted this down - not previous comments
necessarily. – Daniel Bingham Oct 14 '09 at 18:06
13 Answers
Accepted answer (81 votes)
Java uses managed memory, so the only way you can allocate memory is by using the
new operator, and the only way you can deallocate memory is by relying on the
garbage collector.
This memory management whitepaper (PDF) may help explain what's going on.
You can also call System.gc() to suggest that the garbage collector run
immediately. However, the Java Runtime makes the final decision, not your code.
Calling the gc method suggests that the Java Virtual Machine expend effort toward
recycling unused objects in order to make the memory they currently occupy
available for quick reuse. When control returns from the method call, the Java
Virtual Machine has made a best effort to reclaim space from all discarded objects.
Daniel Pryden
It does force the Garbage Collector to run. It does not force it to free memory
though... – Pablo Santa Cruz Oct 14 '09 at 18:04
No Pablo, it does not force the GC to run. – Jesper Oct 14 '09 at 18:07
OK. I see. Thanks for pointing that out. – Pablo Santa Cruz Oct 14 '09 at 18:11
I've been told by a very reliable person that all HotSpotVM's garbage collectors
ignore System.gc() entirely. – Esko Jul 12 '10 at 19:55
On WinXP, Java SE runs the GC on every System.gc() call (or almost every one),
but the API doc does not guarantee it. – teodozjan Jan 13 '12 at 10:02
For example, say you'd declared a List<String> at the beginning of a method which
grew in size to be very large, but was only required until half-way through the
method. You could at this point set the List reference to null to allow the garbage
collector to potentially reclaim this object before the method completes (and the
reference falls out of scope anyway).
Note that I rarely use this technique in reality but it's worth considering when
dealing with very large data structures.
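A sketch of that scenario; the sizes and method name here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class EarlyNulling {
    public static long process() {
        // First half: build and use a large list.
        List<String> big = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            big.add("row-" + i);
        }
        long count = big.size();   // last use of the list

        big = null; // the list becomes unreachable HERE, not at method exit;
                    // the GC may reclaim it while the second half runs.

        // Second half: long-running work that no longer needs the list.
        long checksum = 0;
        for (int i = 0; i < 1_000; i++) {
            checksum += i;
        }
        return count + checksum;
    }

    public static void main(String[] args) {
        System.out.println(process());
    }
}
```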
Adamski
If you really are doing a lot of work on an object which is only used for part of
a method, I suggest either: your method is too complicated, so break it into the
before and after portions, or use a block for the first half of the code (the
latter is more useful for test scripts) – Peter Lawrey Jul 13 '10 at 19:54
The place where setting an object reference to null is important is when it's
referenced from another long-lived object (or possibly from a static var). Eg, if
you have a long-lived array of large objects, and you cease using one of those
objects, you should set the array reference to null to make the object available
for GC. – Hot Licks Jan 25 '14 at 21:02
Answer (21 votes)
System.gc();
Runs the garbage collector.
Calling the gc method suggests that the Java Virtual Machine expend effort toward
recycling unused objects in order to make the memory they currently occupy
available for quick reuse. When control returns from the method call, the Java
Virtual Machine has made a best effort to reclaim space from all discarded objects.
Not recommended.
Garbage collectors have gotten steadily better in the ~20 years Java's been around.
At this point, if you're manually calling the garbage collector, you may want to
consider other approaches:
Dean J
Commenting on my own post, this often doesn't do anything, and calling it
repeatedly can cause the JVM to become unstable and whatnot. It may also run over
your dog; approach with caution. – Dean J Oct 14 '09 at 18:02
I would put heavy emphasis on the "suggests" part of "Calling the gc method
suggests that the JVM expend effort" – matt b Oct 14 '09 at 18:09
@Jesper, Dean's answer states "suggests". In fact he posted the exact documentation
from the method's javadocs... – matt b Oct 14 '09 at 18:10
@Software Monkey: Yes, I could have just edited it. But since Dean J was obviously
active (posting only a few minutes ago), I figured it was a courtesy to ask him to
do it. If he hadn't, I would have come back here and made the edit and deleted my
comment. – Daniel Pryden Oct 14 '09 at 18:24
It would also be worth saying WHY it is not recommended. If the JVM pays attention
to the "suggestion" to run the GC, it will almost certainly make your app run
slower, possibly by many orders of magnitude! – Stephen C Oct 14 '09 at 22:57
Answer (10 votes)
*"I personally rely on nulling variables as a placeholder for future proper
deletion. For example, I take the time to nullify all elements of an array before
actually deleting (making null) the array itself."
This is unnecessary. The way the Java GC works is it finds objects that have no
reference to them, so if I have an Object x with a reference (=variable) a that
points to it, the GC won't delete it, because there is a reference to that object:
a -> x
If you null a, then this happens:

a -> null
x

So now x doesn't have a reference pointing to it and will be deleted. The same
thing happens when you set a to reference a different object than x.

So if you have an array arr that references objects x, y and z, and a variable a
that references the array, it looks like this:

a -> arr -> x
         -> y
         -> z

If you null a, this happens:

a -> null
arr -> x
    -> y
    -> z

So the GC finds arr as having no reference set to it and deletes it, which gives
you this structure:

a -> null
x
y
z

Now the GC finds x, y and z and deletes them as well. Nulling each reference in the
array won't make anything better; it will just use up CPU time and space in the
code (that said, it won't hurt further than that: the GC will still be able to
perform the way it should).
Dakkaron
Answer (6 votes)
A valid reason for wanting to free memory from any program (Java or not) is to
make more memory available to other programs at the operating-system level. If my
Java application is using 250MB, I may want to force it down to 1MB and make the
249MB available to other apps.
Yios
If you need to explicitly free a chunk of 249MB, in a Java program, memory
management wouldn't be the first thing I'd want to work on. – Marc DiMillo Feb 8
'13 at 12:07
But freeing storage inside your Java heap does not (in the general case) make the
storage available to other apps. – Hot Licks Jan 25 '14 at 21:03
Answer (6 votes)
I have done experimentation on this. It's true that System.gc() only suggests
running the Garbage Collector. But in my experiments, calling System.gc() after
setting all references to null improved performance and memory occupation.
Hemant Yadav
Answer (4 votes)
To extend upon the answer and comment by Yiannis Xanthopoulos and Hot Licks
(sorry, I cannot comment yet!), you can set VM options like this example:
While I didn't see it emphasized in the link below, note that some garbage
collectors may not obey these parameters, and by default Java may pick one of
them for you, should you happen to have more than one core (hence the UseG1GC
argument above).
VM arguments
Update: For Java 1.8.0_73 I have seen the JVM occasionally release small amounts
of memory with the default settings. It appears to only do this if ~70% of the
heap is unused, though; I don't know if it would release memory more aggressively
if the OS were low on physical memory.
nsandersen
Answer (3 votes)
If you really want to allocate and free a block of memory you can do this with
direct ByteBuffers. There is even a non-portable way to free the memory.
However, as has been suggested, just because you have to free memory in C doesn't
mean it's a good idea to have to do this.
If you feel you really have a good use case for free(), please include it in the
question so we can see what you are trying to do; it is quite likely there is a
better way.
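A sketch of the direct-buffer approach mentioned above; the explicit, non-portable free relies on internal JDK APIs and is deliberately omitted here:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // allocateDirect reserves memory outside the Java heap. The native
        // memory is released when the buffer object itself is garbage
        // collected -- there is no portable explicit free.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024); // 1 MB
        buf.putInt(0, 42);
        System.out.println(buf.isDirect() + " " + buf.getInt(0));
        buf = null; // drop the reference; the native memory becomes reclaimable
    }
}
```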
answered Jul 13 '10 at 19:58 by Peter Lawrey
Answer (2 votes)
Entirely from javacoffeebreak.com/faq/faq0012.html
A low priority thread takes care of garbage collection automatically for the user.
During idle time, the thread may be called upon, and it can begin to free memory
previously allocated to an object in Java. But don't worry - it won't delete your
objects on you!
When there are no references to an object, it becomes fair game for the garbage
collector. Rather than calling some routine (like free in C++), you simply assign
all references to the object to null, or assign a new class to the reference.
Example :
// Do some work
for ( .............. )
{
// Do some processing on myClass
}
System.gc();
The garbage collector will attempt to reclaim free space, and your application can
continue executing, with as much memory reclaimed as possible (memory fragmentation
issues may apply on certain platforms).
Stefan Falk
Answer (1 vote)
In my case, since my Java code is meant to be ported to other languages in the
near future (mainly C++), I at least want to pay lip service to freeing memory
properly, so it helps the porting process later on.
But my case is very particular, and I know I'm taking performance hits when doing
this.
This is correct, but this solution may not be generalizable. While setting a List
object reference to null -will- make memory available for garbage collection, this
is only true for a List object of primitive types. If the List object instead
contains reference types, setting the List object = null will not dereference -any-
of the reference types contained -in- the list. In this case, setting the List
object = null will orphan the contained reference types whose objects will not be
available for garbage collection unless the garbage collection algorithm is smart
enough to determine that the objects have been orphaned.
Gothri
This is actually not true. The Java garbage collector is smart enough to handle
that correctly. If you null the List (and the objects within the List don't have
other references to them) the GC can reclaim all the objects within the List. It
may choose to not do that at the present time, but it will reclaim them eventually.
Same goes for cyclic references. Basically, the way the GC works is to explicitly
look for orphaned objects and then reclaim them. This is the whole job of a GC.
The way you describe it would render a GC utterly useless. – Dakkaron Jun 12 '15 at
9:49
Answer (1 vote)
Although Java provides automatic garbage collection, sometimes you will want to
know how large the heap is and how much of it is free. You can query this
programmatically: obtain the runtime with Runtime r = Runtime.getRuntime();, read
the free memory with mem1 = r.freeMemory();, suggest a collection with r.gc();,
and then call freeMemory() again to compare.
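Cleaned up, the idea sketched in that answer looks like this:

```java
public class MemoryReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long before = rt.freeMemory();   // bytes currently free in the heap
        System.gc();                     // a suggestion only, not a command
        long after = rt.freeMemory();
        System.out.println("total=" + rt.totalMemory()
                + " max=" + rt.maxMemory()
                + " free before gc=" + before
                + " free after gc=" + after);
    }
}
```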
Benjamin
Answer (1 vote)
The recommendation from Oracle's Java documentation is to assign null:
From https://docs.oracle.com/cd/E19159-01/819-3681/abebi/index.html
Explicitly assigning a null value to variables that are no longer needed helps the
garbage collector to identify the parts of memory that can be safely reclaimed.
Although Java provides memory management, it does not prevent memory leaks or using
excessive amounts of memory.
An application may induce memory leaks by not releasing object references. Doing so
prevents the Java garbage collector from reclaiming those objects, and results in
increasing amounts of memory being used. Explicitly nullifying references to
variables after their use allows the garbage collector to reclaim memory.
One way to detect memory leaks is to employ profiling tools and take memory
snapshots after each transaction. A leak-free application in steady state will show
a steady active heap memory after garbage collections.
https://www.javaworld.com/article/2076697/core-java/object-finalization-and-cleanup.html
Similarly, you needn't worry about explicitly freeing any constituent objects
referenced by the instance variables of an object you no longer need. Releasing all
references to the unneeded object will in effect invalidate any constituent object
references contained in that object's instance variables. If the now-invalidated
references were the only remaining references to those constituent objects, the
constituent objects will also be available for garbage collection. Piece of cake,
right?
The first thing to know is that no matter how diligently you search through the
Java Virtual Machine Specification (JVM Spec), you won't be able to find any
sentence that commands, "Every JVM must have a garbage collector." The Java Virtual
Machine Specification gives VM designers a great deal of leeway in deciding how
their implementations will manage memory, including deciding whether or not to even
use garbage collection at all. Thus, it is possible that some JVMs (such as a bare-
bones smart card JVM) may require that programs executed in each session "fit" in
the available memory.
Of course, you can always run out of memory, even on a virtual memory system. The
JVM Spec does not state how much memory will be available to a JVM. It just states
that whenever a JVM does run out of memory, it should throw an OutOfMemoryError.
Another command you won't find in the JVM specification is "All JVMs that use
garbage collection must use the XXX algorithm." The designers of each JVM get to
decide how garbage collection will work in their implementations. Garbage
collection algorithm is one area in which JVM vendors can strive to make their
implementation better than the competition's. This is significant for you as a Java
programmer for the following reason:
Because you don't generally know how garbage collection will be performed inside a
JVM, you don't know when any particular object will be garbage collected.
"So what?" you might ask. The reason you might care when an object is garbage
collected has to do with finalizers. (A finalizer is defined as a regular Java
instance method named finalize() that returns void and takes no arguments.) The
Java specifications make the following promise about finalizers:
Before reclaiming the memory occupied by an object that has a finalizer, the
garbage collector will invoke that object's finalizer.
Given that you don't know when objects will be garbage collected, but you do know
that finalizable objects will be finalized as they are garbage collected, you can
make the following grand deduction:
You should imprint this important fact on your brain and forever allow it to inform
your Java object designs.
Finalizers to avoid
The central rule of thumb concerning finalizers is this:
Don't design your Java programs such that correctness depends upon "timely"
finalization.
In other words, don't write programs that will break if certain objects aren't
finalized by certain points in the life of the program's execution. If you write
such a program, it may work on some implementations of the JVM but fail on others.
An example of an object that breaks this rule is one that opens a file in its
constructor and closes the file in its finalize() method. Although this design
seems neat, tidy, and symmetrical, it potentially creates an insidious bug. A Java
program generally will have only a finite number of file handles at its disposal.
When all those handles are in use, the program won't be able to open any more
files.
A Java program that makes use of such an object (one that opens a file in its
constructor and closes it in its finalizer) may work fine on some JVM
implementations. On such implementations, finalization would occur often enough to
keep a sufficient number of file handles available at all times. But the same
program may fail on a different JVM whose garbage collector doesn't finalize often
enough to keep the program from running out of file handles. Or, what's even more
insidious, the program may work on all JVM implementations now but fail in a
mission-critical situation a few years (and release cycles) down the road.
Two other decisions left to JVM designers are selecting the thread (or threads)
that will execute the finalizers and the order in which finalizers will be run.
Finalizers may be run in any order -- sequentially by a single thread or
concurrently by multiple threads. If your program somehow depends for correctness
on finalizers being run in a particular order, or by a particular thread, it may
work on some JVM implementations but fail on others.
You should also keep in mind that Java considers an object to be finalized whether
the finalize() method returns normally or completes abruptly by throwing an
exception. Garbage collectors ignore any exceptions thrown by finalizers and in no
way notify the rest of the application that an exception was thrown. If you need to
ensure that a particular finalizer fully accomplishes a certain mission, you must
write that finalizer so that it handles any exceptions that may arise before the
finalizer completes its mission.
One more rule of thumb about finalizers concerns objects left on the heap at the
end of the application's lifetime. By default, the garbage collector will not
execute the finalizers of any objects left on the heap when the application exits.
To change this default, you must invoke the runFinalizersOnExit() method of class
Runtime or System, passing true as the single parameter. If your program contains
objects whose finalizers must absolutely be invoked before the program exits, be
sure to invoke runFinalizersOnExit() somewhere in your program.
The main justification for this rule is that any program that uses resurrection can
be redesigned into an easier-to-understand program that doesn't use resurrection. A
formal proof of this theorem is left as an exercise to the reader (I've always
wanted to say that), but in an informal spirit, consider that object resurrection
will be as random and unpredictable as object finalization. As such, a design that
uses resurrection will be difficult to figure out by the next maintenance
programmer who happens along -- who may not fully understand the idiosyncrasies of
garbage collection in Java.
If you feel you simply must bring an object back to life, consider cloning a new
copy of the object instead of resurrecting the same old object. The reasoning
behind this piece of advice is that garbage collectors in the JVM invoke the
finalize() method of an object only once. If that object is resurrected and becomes
available for garbage collection a second time, the object's finalize() method will
not be invoked again.
There are three basic approaches to managing a finite resource in a class:
1. Obtain and release the resource within each method that needs the resource
2. Provide a method that obtains the resource and another that releases it
3. Obtain the resource at creation time and provide a method that releases it
Approach 1: Obtain and release within each relevant method
As a general rule, the releasing of non-memory finite resources should be done as
soon as possible after their use because the resources are, by definition, finite.
If possible, you should try to obtain a resource, use it, then release it all
within the method that needs the resource.
An example of a class where Approach 1 might make sense is a log file class. Such a
class takes care of formatting and writing log messages to a file. The name of the
log file is passed to the object as it is instantiated. To write a message to the
log file, a client invokes a method in the log file class, passing the message as a
String. Here's an example:
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.io.IOException;

class LogFile {
  private String fileName;

  LogFile(String fileName) {
    this.fileName = fileName;
  }

  // The writeToFile() method will catch any IOException
  // so that clients aren't forced to catch IOException
  // everywhere they write to the log file. For now,
  // just fail silently. In the future, could put
  // up an informative non-modal dialog box that indicates
  // a logging error occurred. - bv 4/15/98
  void writeToFile(String message) {
    FileOutputStream fos = null;
    PrintWriter pw = null;
    try {
      fos = new FileOutputStream(fileName, true);
      try {
        pw = new PrintWriter(fos, false);
        pw.println("------------------");
        pw.println(message);
        pw.println();
      }
      finally {
        if (pw != null) {
          pw.close();
        }
      }
    }
    catch (IOException e) {
    }
    finally {
      if (fos != null) {
        try {
          fos.close();
        }
        catch (IOException e) {
        }
      }
    }
  }
}
Class LogFile is a simple example of Approach 1. A more production-ready LogFile
class might do things such as:
Insert the date and time each log message was written
Allow messages to be assigned a level of importance (such as ERROR, INFO, or DEBUG)
and enable a level to be set that will prevent unwanted detail (such as DEBUG
messages) from making it into the log file
Manage in some way the size of the log file, i.e., by copying it to a different
filename and starting fresh each time the log file achieves a certain size
The main feature of this simple version of class LogFile is that it surrounds each
log message with a series of dashes and a blank line.
Using finally to ensure resource release
Note that in the writeToFile() method, the releasing of the resource is done in
finally clauses. This is to make sure the finite resource (file handle) is actually
released no matter how the code is exited. If an IOException is thrown, the file
will be closed.
The approach to resource management taken by class LogFile (Approach 1 from the
above list) helps make your class easy to use, because client programmers don't
have to worry about explicitly obtaining or releasing the resource. In both
Approaches 2 and 3 from the list above, client programmers must remember to explicitly
invoke a method to release the resource. In addition -- and what can be far more
difficult -- client programmers must figure out when their programs no longer need
a resource.
A problem with Approach 1 is that obtaining and releasing the resource each time
you need it may be too inefficient. Another problem is that, in some situations,
you may need to hold onto the resource between invocations of methods that use the
resource (such as writeToFile()), so no other object can have access to it. In such
cases, one of the other two approaches is preferable.
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.io.IOException;

class LogFileManager {
  private FileOutputStream fos;
  private PrintWriter pw;
  private boolean logFileOpen = false;

  LogFileManager() {
  }

  LogFileManager(String fileName) throws IOException {
    openLogFile(fileName);
  }

  void openLogFile(String fileName) throws IOException {
    if (!logFileOpen) {
      try {
        fos = new FileOutputStream(fileName, true);
        pw = new PrintWriter(fos, false);
        logFileOpen = true;
      }
      catch (IOException e) {
        if (pw != null) {
          pw.close();
          pw = null;
        }
        if (fos != null) {
          fos.close();
          fos = null;
        }
        throw e;
      }
    }
  }

  void closeLogFile() throws IOException {
    if (logFileOpen) {
      pw.close();
      pw = null;
      fos.close();
      fos = null;
      logFileOpen = false;
    }
  }

  boolean isOpen() {
    return logFileOpen;
  }

  void writeToFile(String message) throws IOException {
    pw.println("------------------");
    pw.println(message);
    pw.println();
  }

  // Note: super.finalize() is invoked unconditionally, even when the log
  // file is already closed.
  protected void finalize() throws Throwable {
    try {
      if (logFileOpen) {
        closeLogFile();
      }
    }
    finally {
      super.finalize();
    }
  }
}
In this example, class LogFileManager declares methods openLogFile() and
closeLogFile(). Given this design, you could write to multiple log files with one
instance of this class. This design also allows a client to monopolize the resource
for as long as it wants. A client can write several consecutive messages to the log
file without fear that another thread or process will slip in any intervening
messages. Once a client successfully opens a log file with openLogFile(), that log
file belongs exclusively to that client until the client invokes closeLogFile().
Making super.finalize() the last action of a finalizer ensures that subclasses will
be finalized before superclasses. Although in most cases the placement of
super.finalize() won't matter, in some rare cases, a subclass finalizer may require
that its superclass be as yet unfinalized. So, as a general rule of thumb, place
super.finalize() last.
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.io.IOException;

class LogFileTransaction {
  private FileOutputStream fos;
  private PrintWriter pw;
  private boolean logFileOpen = false;

  LogFileTransaction(String fileName) throws IOException {
    try {
      fos = new FileOutputStream(fileName, true);
      pw = new PrintWriter(fos, false);
      logFileOpen = true;
    }
    catch (IOException e) {
      if (pw != null) {
        pw.close();
        pw = null;
      }
      if (fos != null) {
        fos.close();
        fos = null;
      }
      throw e;
    }
  }

  void closeLogFile() throws IOException {
    if (logFileOpen) {
      pw.close();
      pw = null;
      fos.close();
      fos = null;
      logFileOpen = false;
    }
  }

  boolean isOpen() {
    return logFileOpen;
  }

  void writeToFile(String message) throws IOException {
    pw.println("------------------");
    pw.println(message);
    pw.println();
  }

  // Note: super.finalize() is invoked unconditionally, even when the log
  // file is already closed.
  protected void finalize() throws Throwable {
    try {
      if (logFileOpen) {
        closeLogFile();
      }
    }
    finally {
      super.finalize();
    }
  }
}
This class is called LogFileTransaction because every time a client wants to write
a chunk of messages to the log file (and then let others use that log file), it
must create a new LogFileTransaction. Thus, this class models one transaction
between the client and the log file.
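From the client's side, one transaction per chunk of messages might look like the following sketch (file name and message are illustrative); note that close() happens eagerly in a finally block, with the finalizer serving only as a safety net:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class TransactionClient {
    public static void main(String[] args) throws IOException {
        // One "transaction": open, write a chunk of messages, close.
        PrintWriter pw = new PrintWriter(new FileWriter("client.log", true));
        try {
            pw.println("------------------");
            pw.println("transaction message");
        } finally {
            pw.close(); // deterministic release, no waiting for GC
        }
    }
}
```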
One interesting thing to note about Approach 3 is that this is the approach used by
the FileOutputStream and PrintWriter classes used by all three example log file
classes. In fact, if you look through the java.io package, you'll find that almost
all of the java.io classes that deal with file handles use Approach 3. (The two
exceptions are PipedReader and PipedWriter, which use Approach 2.)
Conclusion
The most important point to take away from this article is that if a Java object
needs to take some action at the end of its life, no automatic way exists in Java
that will guarantee that action is taken in a timely manner. You can't rely on
finalizers to take the action, at least not in a timely way. You will need to
provide a method that performs the action and encourage client programmers to
invoke the method when the object is no longer needed.
Don't design your Java programs such that correctness depends on "timely"
finalization
Don't assume that a finalizer will be run by any particular thread
Don't assume that finalizers will be run in any particular order
Avoid designs that require finalizers to resurrect objects; if you must use
resurrection, prefer cloning over straight resurrection
Remember that exceptions thrown by finalizers are ignored
If your program includes objects with finalizers that absolutely must be run before
the program exits, invoke runFinalizersOnExit(true) in class Runtime or System
Unless you are writing the finalizer for class Object, always invoke
super.finalize() at the end of your finalizers
Next month
In next month's Design Techniques I'll continue the mini-series of articles that
focus on designing classes and objects. Next month's article, the fifth of this
mini-series, will discuss when to use -- and when not to use -- exceptions.
Bill Venners has been writing software professionally for 12 years. Based in
Silicon Valley, he provides software consulting and training services under the
name Artima Software Company. Over the years he has developed software for the
consumer electronics, education, semiconductor, and life insurance industries. He
has programmed in many languages on many platforms: assembly language on various
microprocessors, C on Unix, C++ on Windows, Java on the Web. He is author of the
book: Inside the Java Virtual Machine, published by McGraw-Hill.
https://www.toptal.com/java/top-10-most-common-java-development-mistakes
Buggy Java Code: The Top 10 Most Common Mistakes That Java Developers Make
BY MIKHAIL SELIVANOV - FREELANCE SOFTWARE ENGINEER @ TOPTAL
The following is a personal experience from one of my previous projects. The part
of the code responsible for HTML escaping was written from scratch. It was working
well for years, but eventually it encountered a user input which caused it to spin
into an infinite loop. The user, finding the service to be unresponsive, attempted
to retry with the same input. Eventually, all the CPUs on the server allocated for
this application were being occupied by this infinite loop. If the author of this
naive HTML escaping tool had used one of the well-known libraries available for
HTML escaping, such as HtmlEscapers from Google Guava, this probably wouldn't have
happened. At the very least, with most popular libraries, which have active
communities behind them, the error would likely have been found and fixed much
earlier.
Another frequently cited cause is a group of objects referencing each other in a
cycle; note, though, that the JVM's tracing garbage collector can collect such
cycles, so a cyclic group leaks only while it is still reachable from a GC root.
Another issue is leaks in non-heap (native) memory when JNI is used.
// The pool, the deque, and the first (printing) task were omitted above;
// they are reconstructed here from the description below.
final ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(2);
final Deque<BigDecimal> numbers = new LinkedBlockingDeque<>();
final BigDecimal divisor = new BigDecimal(51);

scheduledExecutorService.scheduleAtFixedRate(() -> {
    BigDecimal number = numbers.peekLast(); // reads the tail but never removes it
    if (number != null && number.remainder(divisor).compareTo(BigDecimal.ZERO) == 0) {
        System.out.println(number);
        System.out.println("Deque size: " + numbers.size());
    }
}, 10, 10, TimeUnit.MILLISECONDS);

scheduledExecutorService.scheduleAtFixedRate(() -> {
    numbers.add(new BigDecimal(System.currentTimeMillis()));
}, 10, 10, TimeUnit.MILLISECONDS);

try {
    scheduledExecutorService.awaitTermination(1, TimeUnit.DAYS);
} catch (InterruptedException e) {
    e.printStackTrace();
}
This example creates two scheduled tasks. The first task takes the last number from
a deque called “numbers” and prints the number and deque size in case the number is
divisible by 51. The second task puts numbers into the deque. Both tasks are
scheduled at a fixed rate, and run every 10 ms. If the code is executed, you’ll see
that the size of the deque is permanently increasing. This will eventually cause
the deque to be filled with objects consuming all available heap memory. To prevent
this while preserving the semantics of the program, we can take numbers from the
deque with a different method: pollLast. Unlike peekLast, pollLast returns the
element and removes it from the deque, whereas peekLast only returns the last
element without removing it.
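The difference can be sketched in a few lines (the values are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
    public static void main(String[] args) {
        Deque<Integer> numbers = new ArrayDeque<>();
        numbers.add(1);
        numbers.add(2);

        Integer peeked = numbers.peekLast(); // 2; the deque still holds 2 elements
        Integer polled = numbers.pollLast(); // 2; the element is removed, size is now 1

        System.out.println(peeked + " " + polled + " " + numbers.size()); // 2 2 1
    }
}
```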
To learn more about memory leaks in Java, please refer to our article that
demystified this problem.
Excessive garbage allocation may happen when a program creates a lot of short-
lived objects. The garbage collector then has to run continuously to remove them
from memory, which hurts the application's performance. One simple example:
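A classic illustration of the problem (the names below are ours, not necessarily the article's original snippet) is string concatenation in a loop, which allocates a new String on every pass, versus StringBuilder, which appends into one reusable buffer:

```java
public class GarbageDemo {
    // Allocates a fresh String (and discards the old one) on every iteration.
    static String concat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + "hello";
        }
        return s;
    }

    // Appends into a single internal buffer instead.
    static String build(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("hello");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat(3).equals(build(3))); // true: same result, far less garbage
    }
}
```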
To deal with other cases where one wants to avoid nulls, different strategies may
be used. One of them is to use the Optional type, which is either an empty object
or a wrapper around some value:
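A brief sketch of the idea (the lookup method here is hypothetical):

```java
import java.util.Optional;

public class OptionalDemo {
    // Hypothetical lookup that may find nothing; Optional makes the
    // "no value" case explicit instead of returning null.
    static Optional<String> findNickname(String name) {
        return "Robert".equals(name) ? Optional.of("Bob") : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(findNickname("Robert").orElse("(none)")); // Bob
        System.out.println(findNickname("Alice").orElse("(none)"));  // (none)
    }
}
```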
selfie = person.shootASelfie();
try {
    selfie.show();
} catch (NullPointerException e) {
    // Maybe, invisible man. Who cares, anyway?
}
A clearer way of highlighting an exception's insignificance is to encode this
message into the exception's variable name, like this:
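For instance (the variable name ignored is a common convention, not mandated by the language):

```java
public class IgnoredDemo {
    public static void main(String[] args) {
        Object selfie = null;
        try {
            selfie.toString();
        } catch (NullPointerException ignored) {
            // the name signals the exception is deliberately disregarded
        }
        System.out.println("still running");
    }
}
```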
hats.removeIf(IHat::hasEarFlaps);
That's it. Under the hood, it uses Iterator.remove to accomplish the behavior.
There are also collections tuned for concurrent use, e.g. CopyOnWriteArraySet and
ConcurrentHashMap.
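To make the "under the hood" claim concrete, removeIf behaves like an explicit Iterator.remove loop, which is the safe way to remove elements while iterating (a minimal sketch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RemoveIfDemo {
    public static void main(String[] args) {
        List<Integer> a = new ArrayList<>(Arrays.asList(1, 2, 3, 4));
        List<Integer> b = new ArrayList<>(a);

        a.removeIf(n -> n % 2 == 0);

        // Equivalent explicit loop; calling list.remove inside a for-each
        // would throw ConcurrentModificationException instead.
        for (Iterator<Integer> it = b.iterator(); it.hasNext(); ) {
            if (it.next() % 2 == 0) {
                it.remove();
            }
        }

        System.out.println(a.equals(b) + " " + a); // true [1, 3]
    }
}
```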
If two objects are equal, then their hash codes should be equal.
If two objects have the same hash code, then they may or may not be equal.
Breaking the contract’s first rule leads to problems while attempting to retrieve
objects from a hashmap. The second rule signifies that objects with the same hash
code aren’t necessarily equal. Let us examine the effects of breaking the first
rule:
class Boat {
    private String name;

    Boat(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        // reconstructed: compare the only field, name
        Boat boat = (Boat) o;
        return name != null ? name.equals(boat.name) : boat.name == null;
    }

    @Override
    public int hashCode() {
        return (int) (Math.random() * 5000);
    }
}
As you can see, class Boat has overridden the equals and hashCode methods.
However, it has broken the contract, because hashCode returns random values for
the same object every time it's called. The following code will most likely not
find a boat named "Enterprise" in the HashSet, despite the fact that we added
that kind of boat earlier:
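A hedged reconstruction of that demonstration; in the sketch below the hash code changes deterministically on every call (a counter instead of the article's Math.random()), so the failed lookup is reproducible rather than merely very likely:

```java
import java.util.HashSet;
import java.util.Set;

public class BrokenHashDemo {
    static class Boat {
        private static int counter = 0;
        private final String name;

        Boat(String name) { this.name = name; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            return name.equals(((Boat) o).name);
        }

        @Override
        public int hashCode() {
            return counter++; // a different value on every call breaks the contract
        }
    }

    public static void main(String[] args) {
        Set<Boat> boats = new HashSet<>();
        boats.add(new Boat("Enterprise"));
        // Equal by equals(), but hashed into a different bucket at lookup time:
        System.out.println(boats.contains(new Boat("Enterprise"))); // false
    }
}
```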
“The general contract of finalize is that it is invoked if and when the Java™
virtual machine has determined that there is no longer any means by which this
object can be accessed by any thread (that has not yet died), except as a result of
an action taken by the finalization of some other object or class which is ready to
be finalized. The finalize method may take any action, including making this object
available again to other threads; the usual purpose of finalize, however, is to
perform cleanup actions before the object is irrevocably discarded. For example,
the finalize method for an object that represents an input/output connection might
perform explicit I/O transactions to break the connection before the object is
permanently discarded.”
One could decide to use the finalize method for freeing resources like file
handles, but that would be a bad idea: there are no timing guarantees on when
finalize will be invoked, since it runs during garbage collection, and the GC's
timing is indeterminate.
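The usual remedy is deterministic cleanup through AutoCloseable and try-with-resources, with finalize serving at most as a safety net. A minimal sketch with illustrative names:

```java
public class ResourceDemo {
    static class LogHandle implements AutoCloseable {
        boolean open = true;

        void write(String msg) {
            if (!open) throw new IllegalStateException("already closed");
            // would write msg to the underlying file here
        }

        @Override
        public void close() {
            open = false; // runs at the end of the try block, guaranteed
        }
    }

    public static void main(String[] args) {
        LogHandle handle = new LogHandle();
        try (LogHandle h = handle) {
            h.write("hello");
        }
        System.out.println(handle.open); // false: closed deterministically, not at GC's whim
    }
}
```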
List<Integer> listOfNumbers = new ArrayList<>(); // declaration reconstructed from context
listOfNumbers.add(10);
listOfNumbers.add("Twenty"); // does not compile: a String can't be added to a List<Integer>
Conclusion
Java as a platform simplifies many things in software development, relying both on
a sophisticated JVM and on the language itself. However, its features, like the
removal of manual memory management or decent OOP tools, don't eliminate all the
problems and issues a regular Java developer faces. As always, knowledge, practice
and Java tutorials like this are the best means of avoiding and addressing
application errors, so know your libraries, read the Java and JVM documentation,
and write programs. Don't forget about static code analyzers either, as they can
point out actual bugs and highlight potential ones.
Great article, very detailed and informative. About item #7, I just want to ask
your opinion about Java's checked exceptions: are they really necessary? Personally
I think all exceptions should be unchecked (runtime exceptions). Try-catch is
good, but forcing developers to add escalating catch blocks is a really bad idea
and, overall, it does more harm than good. What do you think?
ncmathsadist Lê Anh Quân • 3 years ago
Disagree. Uncaught exceptions can cause unexpected crashes in programs for end
users. This is a big-time no-no. You do not repay your customers with death.
Runtime exceptions should be reserved for programmer goofs. Even then, if user
abuse causes a runtime exception (think NumberFormatException from user entry in a
dialog box), that exception should be caught and handled gracefully. Obviously, you
should never duck file I/O and socket exceptions. For the most part, Java's
exception rules make sense and prevent production software from crashing.
Lê Anh Quân ncmathsadist • 3 years ago
It is true... in theory. Somehow due to my 10+ years Java experience, the checked
exceptions are the most frustrating thing:
- It slows me down, force me to handle exceptions when I'm not ready to (have to
focus on the logic the code mainly about) thus make me either handle them wrongly
or throw a wrapping RTE.
- "throws" exceptions are often not an option - due to API contracts.
- Very hard to centralize exception handling with checked exceptions. Solutions
often come with heavy weighted framework which then produce more troubles than it
solves.
- The in-line try-catch blocks are most horrible nightmare. I am OKAY with try-
catch if catching exception is the only thing the method is about, but few in-line
try-catch in a method... simply destroy the code and the software.
Anyway, maybe you are right, maybe I'm too bent toward the elegant looking code
like functional style and became too much emotional in this matter, but Java
doesn't need to mean ugly looking code. Right?
Joseph S. ncmathsadist • 4 months ago
Checked exceptions were made with the intention of recovery from a problem, so they
can make sense if you can do something when an exception happens; for example, if
you get a database exception you could implement a mechanism to retry some time
later. However, these kinds of cases don't happen frequently.
tfa ncmathsadist • 3 years ago
Are there still developers who think checked exceptions were a good idea? Come
on!
Preda Lê Anh Quân • 3 years ago
Yes they are necessary because you want to capture possible code breaks ideally at
compile time, and react to them, by capturing the exception and doing something
that makes sense in that event. Rather than handling all exceptions at runtime,
it's better to catch them earlier.
Lê Anh Quân Preda • 3 years ago
It's great if all checked exceptions are "possible code breaks", but many times
they aren't. Think about writing a method to calculate the days until Valentine's
Day using parse with "2/14" and SimpleDateFormat. You will have to handle
ParseException even though you know it will never happen. Or catching IOException
every time you do something with a ByteArrayOutputStream... It's painful to have
your code polluted with unnecessary try-catch.
Preda Lê Anh Quân • 3 years ago
While that might be true, proceeding with precaution now always pays dividends
later, and I think the language designers had this in mind. However, you can always
use Ruby and not have to do this :)
Lê Anh Quân Preda • 3 years ago
Dear Preda, I think you are terribly wrong:
1. About precaution: it is good, but please don't force. I bet your program will be
much more robust if NullPointerException is a checked one, but you'll soon find out
it's a horrible idea.
2. Language designers are not gods, they can be wrong. They didn't have generics or
lambdas in mind in the first place. In this case of checked exception, they are
wrong again.
3. Ruby? Really? What would I do if I later find out Ruby doesn't have static
typing? Switch to .NET?
Mikhail Selivanov Lê Anh Quân • 3 years ago
Thanks, I'm glad that you liked the article. Regarding exceptions, I think in many
cases it's a good idea to encode operation error into the result value. Forcing
developers to handle errors is a nice feature for designing API, but it shouldn't
be overused.
Josip Pokrajcic • 3 years ago
Regarding mistake #2, you said that the program will write "Zero" followed by
"One". Shouldn't it write "Zero" followed by "One" and "Two", if I'm not mistaken?
EDIT: my bad, misread the code
Carlos De Luna Saenz • 3 years ago
I would like to add a couple more. 11th: use of Java like a structured language
instead of an OOP language (it's weird and awful when you get stuck reviewing
"spaghetti code", for example). And 12th: lack of use of design patterns. Design
patterns were made to make life easier, and most of the well-known frameworks use
them and allow you to use them (such as Spring MVC, or Hibernate for the DAO
pattern), so do it. Congratulations on an excellent article.
Mikhail Selivanov Carlos De Luna Saenz • 3 years ago
Thank you for the kind words. Agree, knowledge of OOP and design patterns is an
important thing, not only for Java programmers.
Peter Storch • 3 years ago
Be careful with #1: in principle you are right about not reinventing the wheel,
but I've seen projects with dependencies on 3 XML, 4 logging and 5 JSON libraries.
And often enough, introducing one library adds dependencies on 10 others.
Mikhail Selivanov Peter Storch • 3 years ago
It's not very clever to use 5 libraries that do the same thing; just stick to one
of them. However, there is another issue with third-party libraries: each time you
add a dependency to your project, there is a possibility that it will pull in half
a repository of its own dependencies.
stingersdestiny • 3 years ago
I think Number 7 needs further explanation. It's one thing to catch an exception
and not do anything, and completely another whether one should catch NPE (or other
runtime exceptions). In my opinion it's extremely rare to justify catching an NPE.
It is bad code. Your code should not be returning null, and at the very least
should be verifying before its usage. Programmers should let the NPE be thrown and
then investigate it instead of catching it.
Mikhail Selivanov stingersdestiny • 3 years ago
I agree about NPE, and there is #6, which is about how to avoid it by not using
null references.
Chuck Batson • 3 years ago
FindBugs (http://findbugs.sourceforge...) is an invaluable tool and identifies many
common mistakes. I would add that Common Mistake #11 is not using FindBugs. :-)
mydevgeek • a year ago
Great article. What do you think about initializing an object inside the loop vs.
initializing it outside the loop and assigning new values inside? I think it's
optimized by the Java compiler. Need to verify.
govindrajput • a year ago
Nice article, very detailed. The fixed code wouldn't compile because we are
trying to add a string into a collection that is expected to store integers only.
A very useful article for developers.
john stanley • 2 years ago
You explained the most common mistakes that are made by Java developers.
for more information about java visit: java online training
Ricardo Santos • 2 years ago
Also about item #7: when logging an exception you should include details about the
context of the exception, and also pass the exception itself in order not to lose
the stack trace.
Over the past years I've seen many instances of log.error(ex.getMessage()) instead
of log.error("Error reading file '" + path + "'", ex).
Anand Kumar • 3 years ago
visit for lot more java interview questions and programs -
http://javadiscover.blogspo...
Madonah • 3 years ago
Helped me a lot. Java is cool, hope you write also about C++ and C#. Thank you.
lucas rafagnin • 3 years ago
Thanks guy!
https://www.infoq.com/news/2010/08/arm-blocks
FileInputStream in = null;
FileOutputStream out = null;
try {
    in = new FileInputStream("xanadu.txt");
    out = new FileOutputStream("outagain.txt");
    int c;
    while ((c = in.read()) != -1)
        out.write(c);
} finally {
    if (in != null)
        in.close();
    if (out != null)
        out.close();
}
Not only is there a lot of boilerplate, but the documentation for
InputStream.close() notes that it can throw an IOException. (An exception is far
more likely on the OutputStream, but in any case there needs to be an outer catch
or propagation in order to successfully compile this code.)
The lexical scope of the try-catch-finally block also requires the variables
FileInputStream in and FileOutputStream out to be declared lexically outside the
block itself. (If they were defined inside the try block, they wouldn't be
available inside the catch or finally blocks.)
To eliminate this boilerplate code, and to tighten the lexical scoping of the
resources used inside the block, a new addition has been made to the try block in
the Java language. An initial specification of try-with-resources blocks (or ARM
blocks) was made available via an early implementation, which has subsequently
made its way into build 105 of JDK 7.
A new interface, java.lang.AutoCloseable, has been added to the proposed API, which
defines a single method close() that throws Exception. This has been retrofitted
as a parent of java.io.Closeable, which means that all InputStream and OutputStream
subclasses automatically take advantage of this behaviour. In addition, FileLock
and ImageInputStream have also been fitted with the AutoCloseable interface.
try (
    FileInputStream in = new FileInputStream("xanadu.txt");
    FileOutputStream out = new FileOutputStream("outagain.txt")
) {
    int c;
    while ((c = in.read()) != -1)
        out.write(c);
}
At the end of the try block, whether by completion normally or otherwise, both the
out and in resources will have close() called automatically. Furthermore, unlike
our original example, both out.close() and in.close() are guaranteed to be
executed. (In the original example, had in.close() thrown an exception, then the
subsequent out.close() would not have been executed.)
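One such subtlety, as it behaves in the final Java 7 release: if both the try body and an automatic close() throw, the body's exception propagates and the close() failure is attached to it as a suppressed exception, retrievable via Throwable.getSuppressed(). A sketch (class names are illustrative):

```java
public class SuppressedDemo {
    static class Failing implements AutoCloseable {
        @Override
        public void close() {
            throw new IllegalStateException("close failed");
        }
    }

    public static void main(String[] args) {
        try {
            try (Failing f = new Failing()) {
                throw new RuntimeException("body failed");
            }
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());                    // body failed
            System.out.println(e.getSuppressed()[0].getMessage()); // close failed
        }
    }
}
```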
There are some subtle aspects to this which are worth noting:
Community comments
Hmmm.....
Aug 23, 2010 10:14 by Clint Farleigh
I think I've seen this somewhere before... maybe about 5 years ago? :-)
About freaking time!!!!!!!
Aug 23, 2010 11:15 by Matt Giacomini
.
Learning from others
Aug 25, 2010 01:01 by Patrick Dreyer
One of the first lessons we as parents teach to our children is: Learn from others.
Why does this not apply to programming languages?
Note: I'm not going into a debate .NET vs. Java - I won't.
Re: Learning from others
Aug 25, 2010 08:07 by James Watson
Try-catch-finally is about error handling and not about ARM.
I have to agree that using the try keyword for this doesn't seem very good. I have
to guess that the reason a new keyword wasn't used (such as 'using') is the fear
that existing code would not compile. Although, this did not prevent enum from
being added, which was more likely (I guess) to be used as a name in existing code.
Re: Learning from others
Aug 26, 2010 04:24 by David Birdsall
What if it could be implemented using closures? Would Coin's new syntax be able to
do something similar to this, but without the extra boilerplate:

Closeable.close(new FileInputStream("xanadu.txt")) {
    public void read(FileInputStream in) {
        in.read();
    }
}
At least it would be re-using another feature of the language (closures) and not
introducing another keyword.
Potentially if you made the instance variable final you wouldn't even need to pass
it in as an argument to the lambda. The "with" method would call the closure in a
try block and close itself cleanly in the finally block.
It does seem odd that this should be implemented at the same time as lambdas if
lambdas would allow it to be implemented cleanly without further new syntax.
Perhaps there's some subtlety to it that I'm missing.