
HashMap hashCode collision by example

As we all know, HashMap is part of the Java Collections framework and stores key-value pairs. HashMap uses the hashcode value of the key object to locate the key's position in the underlying collection data structure, which is, to be specific, nothing but an array. The hashcode value of the key object decides the index of the array where the value object gets stored. The hashCode/equals implementation rules state:

Objects that are equal according to the equals() method must return the same hashCode value.
If two objects are not equal according to equals(), they are not required to return different hashCode values.

Per the second rule, it is possible that two different objects have the same hashcode value; this is called a hashcode collision. To handle this, the concept of a bucket is used: all value objects whose corresponding keys have the same hashcode value fall into the same bucket.

The diagram above illustrates hashcode collision: three key-value entries are shown, of which the second and third have the same hashcode, which is why they are kept in the same bucket.
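In simplified form (a sketch of my own, not the JDK source; the real java.util.HashMap additionally mixes the hash bits before masking), the bucket index is derived from the key's hashcode like this:

static int bucketIndex(Object key, int tableLength) {
    // tableLength is assumed to be a power of two, as in java.util.HashMap,
    // so the mask keeps only the low bits of the hashcode. Keys with equal
    // hashcodes therefore always land in the same bucket.
    return key.hashCode() & (tableLength - 1);
}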
To understand this in detail, consider the following example:
import java.util.*;

class TestCollision
{
    public static void main(String[] args)
    {
        HashMap<Person, String> map = new HashMap<>();
        Person p1 = new Person(1, "ABC");
        Person p2 = new Person(2, "DEF");
        Person p3 = new Person(1, "XYZ");
        Person p4 = new Person(1, "PQR");
        Person p5 = new Person(1, "PQR");

        System.out.println("Adding Entries ....");
        map.put(p1, "ONE");
        map.put(p2, "TWO");
        map.put(p3, "THREE");
        map.put(p4, "FOUR");
        map.put(p5, "FIVE");

        System.out.println("\nComplete Map entries \n" + map);

        System.out.println("\nAccessing non-collided key");
        System.out.println("Value = " + map.get(p2));

        System.out.println("\nAccessing collided key");
        System.out.println("Value = " + map.get(p1));
    }
}

class Person
{
    private int id;
    private String name;

    public Person(int id, String name) { this.id = id; this.name = name; }

    public String getName() { return name; }
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public void setName(String name) { this.name = name; }

    // Intentionally weak: only the id contributes to the hashcode, so all
    // Persons with the same id collide into the same bucket.
    public int hashCode() {
        System.out.println("called hashCode for =" + id + "." + name);
        return id;
    }

    public boolean equals(Object obj) {
        if (!(obj instanceof Person)) {
            return false;
        }
        Person other = (Person) obj;
        System.out.println("called equals on =" + id + "." + name
                + " to compare with = " + other.getId() + "." + other.getName());
        return other.getId() == id && other.getName().equals(name);
    }

    public String toString() { return id + " - " + name; }
}

In this example we have defined the class Person, which is used as the key type in the map. I have intentionally implemented the hashCode() method so that hashcode collisions will occur.

In the test class I have created five instances of the Person class and added them to the HashMap as keys, each with a string constant as its value. Notice that instances p1, p3, p4 and p5 have the same hashcode value, as the hashCode() method considers only the id. As a result, when you put instance p3 into the map, it lands in the same bucket as instance p1. The same happens with instances p4 and p5.

Have a look at the output of this program to understand this in detail.


 1: ---------- java ----------
 2: Adding Entries ....
 3: called hashCode for =1.ABC
 4: called hashCode for =2.DEF
 5: called hashCode for =1.XYZ
 6: called equals on =1.XYZ to compare with = 1.ABC
 7: called hashCode for =1.PQR
 8: called equals on =1.PQR to compare with = 1.XYZ
 9: called equals on =1.PQR to compare with = 1.ABC
10: called hashCode for =1.PQR
11: called equals on =1.PQR to compare with = 1.PQR
12:
13: Complete Map entries
14: {1 - PQR=FIVE, 1 - XYZ=THREE, 1 - ABC=ONE, 2 - DEF=TWO}
15:
16: Accessing non-collided key
17: called hashCode for =2.DEF
18: Value = TWO
19:
20: Accessing collided key
21: called hashCode for =1.ABC
22: called equals on =1.ABC to compare with = 1.PQR
23: called equals on =1.ABC to compare with = 1.XYZ
24: Value = ONE
25:
26: Output completed (0 sec consumed)

Here you can see the log trace of the hashCode and equals methods, which reveals HashMap's behavior. When you put the third entry into the map, it calls the equals method on every key already present in the same bucket to detect a duplicate key; see line no 6. The same behavior can be noticed while adding the fourth entry; see lines no 8 & 9.

Now consider the fifth case, where instance p5 is put against the value FIVE. Instances p4 and p5 are equal as per the equals() method implementation, so p5 is a duplicate key and the map replaces the existing value with the new one. You can find the same behavior in the output trace; see line no 11.
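Incidentally, put() reports such a replacement through its return value (standard java.util.Map behavior; this line is an add-on to the example, not part of the original listing):

// put() returns the value previously mapped to an equal key, or null if
// there was none. Since p4 and p5 are equal, this returns "FOUR".
String previous = map.put(p5, "FIVE");
System.out.println("Replaced value = " + previous); // prints FOUR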

This example shows that correct implementation of the hashCode and equals methods is very important when using Map collections.
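For production code you would normally fold all significant fields into the hashcode, for example via java.util.Objects. Here is a sketch of a replacement pair for the Person class above (my own version, not part of the original article):

import java.util.Objects;

// A better hashCode/equals pair for Person: both methods use the same
// fields (id and name), so equal objects always share a hashcode and
// unrelated Persons rarely collide.
@Override
public int hashCode() {
    return Objects.hash(id, name);
}

@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (!(obj instanceof Person)) return false;
    Person other = (Person) obj;
    return id == other.id && Objects.equals(name, other.name);
}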

Benefits of immutability

- Freedom to cache
- Inherent thread safety
- Safe in the presence of ill-behaved code
- Good keys

Guidelines for writing immutable classes

Writing immutable classes is easy. A class will be immutable if all of the following are true:

- All of its fields are final
- The class is declared final
- The this reference is not allowed to escape during construction
- Any fields that contain references to mutable objects, such as arrays, collections, or mutable classes like Date:
  o Are private
  o Are never returned or otherwise exposed to callers
  o Are the only reference to the objects that they reference
  o Do not change the state of the referenced objects after construction

Listing 3. Right and wrong ways to code immutable objects

class ImmutableArrayHolder {
    private final int[] theArray;

    // Right way to write a constructor -- copy the array
    public ImmutableArrayHolder(int[] anArray) {
        this.theArray = (int[]) anArray.clone();
    }

    // Wrong way to write a constructor -- copy the reference
    // (commented out so the class compiles; the caller could change the
    // array after the call to the constructor)
    // public ImmutableArrayHolder(int[] anArray) {
    //     this.theArray = anArray;
    // }

    // Right way to write an accessor -- don't expose the array reference
    public int getArrayLength() { return theArray.length; }
    public int getArray(int n) { return theArray[n]; }

    // Right way to write an accessor -- use clone()
    public int[] getArray() { return (int[]) theArray.clone(); }

    // Wrong way to write an accessor -- expose the array reference
    // (commented out so the class compiles; a caller could get the array
    // reference and then change the contents)
    // public int[] getArray() { return theArray; }
}
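Putting the guidelines together, here is a minimal sketch of a fully immutable class (my own example, not from the original listing), with defensive copies of the mutable Date on the way in and on the way out:

import java.util.Date;

// Minimal immutable class following the guidelines above: final class,
// final private fields, no escape of 'this' during construction, and
// defensive copies of the mutable Date in both constructor and accessor.
final class Event {
    private final String name;
    private final Date timestamp;

    public Event(String name, Date timestamp) {
        this.name = name;
        this.timestamp = new Date(timestamp.getTime()); // copy in
    }

    public String getName() { return name; }

    public Date getTimestamp() {
        return new Date(timestamp.getTime()); // copy out
    }
}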

Q5) What are the advantages of immutability?

Ans) The advantages are:
1) Immutable objects are automatically thread-safe, so the overhead caused by synchronisation is avoided.
2) Once created, the state of an immutable object cannot be changed, so there is no possibility of it getting into an inconsistent state.
3) References to immutable objects can be easily shared or cached without having to copy or clone them, as their state cannot change after construction.
4) The best use of immutable objects is as the keys of a map.
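Point 4 is worth demonstrating. If a key is mutable and its fields feed hashCode(), mutating it after insertion strands the entry in the wrong bucket. A short sketch using the Person class from the first article (any class with a field-based hashCode shows the same effect):

import java.util.HashMap;

// Demonstrates why mutable map keys are dangerous. Uses the Person class
// from the collision example above (its hashCode() returns id).
public class MutableKeyDemo {
    public static void main(String[] args) {
        HashMap<Person, String> map = new HashMap<>();
        Person key = new Person(1, "ABC");
        map.put(key, "ONE");

        key.setId(42);                    // key now hashes to a different bucket
        System.out.println(map.get(key)); // prints null -- lookup probes the wrong bucket
        System.out.println(map.size());   // prints 1 -- the entry is still there, stranded
    }
}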

3) Explain the race condition in Java HashMap?

Not many people know that HashMap can run into a race condition if it is modified by two threads simultaneously and one thread tries to resize or rehash the map because its size has crossed the capacity threshold. Since HashMap maintains a linked list of elements within each bucket, and the order of that linked list gets reversed while entries are copied from the old table to the new one, two threads resizing at the same time can leave the list in a cyclic state, which results in an infinite loop.
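The standard fixes are to wrap the map or to use a concurrency-aware implementation. A minimal illustrative sketch (my own, choose based on contention and feature needs):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SafeMaps {
    // Coarse-grained: every operation synchronizes on a single lock,
    // so a resize can never interleave with another modification.
    static final Map<String, String> synced =
            Collections.synchronizedMap(new HashMap<>());

    // Fine-grained: designed for concurrent access; resizing is
    // performed safely without external locking.
    static final Map<String, String> concurrent = new ConcurrentHashMap<>();
}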

A Beautiful Race Condition


I recently gave a keynote at the ICIS 2009 conference in Shanghai. The topic was why multithreaded programming seems so easy, yet turns out to be so hard. The fun part was that I investigated (per my last post and this one) several old, personal concurrency demons I knew existed but wanted to know more about.
One of those was, indeed, my favorite race condition. It doesn't escape me that it's probably wholly unhealthy to even *have* a favorite race condition (akin to having a favorite pimple or something) - but nonetheless, the elegance of this one still makes my heart aflutter.
The scenario of this race is that we assume, not terribly inaccurately, that race conditions can at times cause corrupted data. However, what if we have a situation where we sort of don't mind some corrupted data? A "good enough" application, as it were.
The dangerous part of all this is if we assume (without digging in) what kind of data corruption can happen. As you'll see, you might just not get the type of data corruption you were hoping for (which is one of the sillier sentences I've ever written).
The particular instance of this kind of happy racing I've encountered is where someone uses a java.util.HashMap as a cache. I've never done such a thing myself, but I heard about this race and hence this analysis. They may use it with a linked list or maybe just raw, but the baseline is that they figure a synchronized HashMap will be expensive - and in their case, a race condition inside the HashMap will just lose (or double up on) an entry now and then.
That is - a race condition between two (or more) threads might accidentally drop an entry, causing an extra cache miss - no biggie. Or it may cause one thread to re-cache an entry that didn't need it. Also no biggie. In other words, a slightly imprecise yet very fast cache is OK by them. (Of course, this assumption is dead wrong - don't do that - read on for why!)
So they set up a HashMap in some global manner, and allow any number of nefarious threads to bang away on it. Let them put and get to their hearts' content.
Now, if you happen to know how HashMap works: if the size of the map exceeds a given threshold, it will resize the map. It does that by creating a new bucket array of twice the previous size, and then putting every old element into that new bucket array.
Here's the core of the loop that does the resize:

 1: // Transfer method in java.util.HashMap -
 2: // called to resize the hashmap
 3:
 4: for (int j = 0; j < src.length; j++) {
 5:     Entry e = src[j];
 6:     if (e != null) {
 7:         src[j] = null;
 8:         do {
 9:             Entry next = e.next;
10:             int i = indexFor(e.hash, newCapacity);
11:             e.next = newTable[i];
12:             newTable[i] = e;
13:             e = next;
14:         } while (e != null);
15:     }
16: }

Simply, after line 9, variable e points to a node that is about to be put into the new (double-wide) bucket array, and variable next holds a reference to the next node in the existing table (because in line 11 we'll destroy that relation).
The goal is that nodes in the new table get scattered around a bit. There's no care to keep any ordering within a bucket (nor should there be). HashMaps don't care about ordering; they care about constant-time access.
Graphically, let's say we start with the HashMap below. This one only has 2 buckets (the default of
java.util.HashMap is 16) which will suffice for explanatory purposes (and save room).
As our loop starts, we assign e and next to A and B, respectively. The A node is about to be moved, the B
node is next.

We have created a double-sized bucket array (in this case size=4) and migrate node A in iteration 1.

Iteration 2 moves node B and iteration 3 moves node C. Note that next=null is the ending condition of our while loop for migrating any given bucket (read that again; it's important to the end of the story).

Also important to the story: note that the migration inverted the order of nodes A and B. This was incidental to the smart idea of inserting new nodes at the top of the list instead of traversing to find the end each time and plunking them there. A normal put operation would still have to check whether it's inserting (and not replacing), but given that a resize can't replace, this saves us a lot of "find the end" traversals. (A standalone sketch below reproduces this inversion.)
Finally, after iteration 3, our new HashMap looks like this:

Our resize accomplished precisely the mission it set out to. It took our 3-deep bucket and morphed it into
a 2-deep and 1-deep one.
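The inversion is easy to see in isolation. Here is a self-contained sketch (my own Node class, not the JDK's internal Entry) that applies the same head-insertion transfer to a three-node chain:

// Standalone sketch of the head-insertion transfer above. Transferring
// the chain A -> B -> C into a new bucket this way yields C -> B -> A.
class TransferDemo {
    static class Node {
        final String key;
        Node next;
        Node(String key, Node next) { this.key = key; this.next = next; }
    }

    public static void main(String[] args) {
        Node e = new Node("A", new Node("B", new Node("C", null)));
        Node newBucket = null;      // head of the new (single) bucket
        while (e != null) {
            Node next = e.next;     // line 9 in the listing above
            e.next = newBucket;     // line 11: insert at the head
            newBucket = e;          // line 12
            e = next;               // line 13
        }
        for (Node n = newBucket; n != null; n = n.next) {
            System.out.print(n.key + " "); // prints: C B A
        }
    }
}

Head insertion is O(1) per node; the price is the reversal, which is harmless single-threaded and, as we're about to see, fatal when two threads interleave.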
Now, that's all well and good, but this article isn't about HashMap resizing (exactly); it's about a race condition.
So, let's assume that in our original happy HashMap (the one above with just 2 buckets) we have two
threads. And both of those threads enter the map for some operation. And both of those threads
simultaneously realize the map needs a resize. So, simultaneously they both go try to do that.
As an aside, the fact that this HashMap is unsynchronized opens it up to a scary array of unimaginable visibility issues, but that's another story. I'm sure that using an unsynchronized HashMap in this fashion can wreak evil in ways unlike any man has ever seen; I'm just addressing one possible race in one possible scenario.
Ok.. back to the story.
So two threads, which we'll cleverly name Thread1 and Thread2, are off to do a resize. Let's say Thread1 beats Thread2 by a moment. And let's say Thread1 (by the way, the fun part about analyzing race conditions is that nearly anything can happen - so you can say "Let's say" all darn day long and you'll probably be right!) gets to line 10 and stops. That's right: after executing line 9, Thread1 gets kicked out of the (proverbial) CPU.

 1: // Transfer method in java.util.HashMap -
 2: // called to resize the hashmap
 3:
 4: for (int j = 0; j < src.length; j++) {
 5:     Entry e = src[j];
 6:     if (e != null) {
 7:         src[j] = null;
 8:         do {
 9:             Entry next = e.next;
               // Thread1 STOPS RIGHT HERE
10:             int i = indexFor(e.hash, newCapacity);
11:             e.next = newTable[i];
12:             newTable[i] = e;
13:             e = next;
14:         } while (e != null);
15:     }
16: }

Since it passed line 9, Thread1 did get to set its e and next variables. The situation looks like this (I've renamed e and next to e1 and next1 to keep them straight between the two threads, as both threads have their own e and next).

Again, Thread1 didn't actually get to move any nodes (though by this point in the code it did allocate a new bucket array).
What happens next? Thread2, that's what. Luckily, what Thread2 does is simple - let's say it runs through the full resize. All the way. It completes.
We get this:

Note that e1 and next1 still point to the same nodes, but those nodes got shuffled around. And, most importantly, the next relation got reversed.
That is, when Thread1 started, node A had node B as its next. Now it's the opposite: node B has node A as its next.
Sadly (and paramount to the plot of this story), Thread1 doesn't know that. If you're thinking that the invertedness of Thread1's e1 and next1 is important, you're right.
Here are Thread1's next few iterations. We start with Thread2's bucket picture because that's really the correct "next" relation for our nodes now.


Everything looks sort of OK... except for our e and next at this point. The next iteration will plunk A into the front of the bucket 3 list (it is, after all, next), and will assign its next to whatever happens to already be there - that is, node B.


Woah. Thar she blows.


So right about now Thread1 goes into what we like to call in the biz an "infinite loop". Any subsequent get operation that hits this bucket starts searching down the list and goes into, yep - an infinite loop. Any put operation that first needs to scan the nodes to see if it's going to do a replace will, you guessed it, go into an infinite loop. Basically, this map is a pit of infinite loopiness.
If you remember, we noted that race conditions cause data corruption. Well, yeah, that's what we have here - just very unlucky data corruption of pointer structures. I'm the first to admit this stuff is tricky - if you find errors in my analysis I'll happily fix this post.
Now, I had the happy fortune for a time of sharing an office with Josh Bloch, who wrote java.util.HashMap. When I innocently mentioned he had a bug in his code given this behavior, he did indeed (to use Josh's words) go non-linear on me.
And he was right. This is not a bug. HashMap is built specifically for its purpose, and this implementation is not intended to be threadsafe. There's a gaggle of ways to make it threadsafe, but in its plain, vanilla (and very fast) form - it's not. And needless to say, you shouldn't be using it that way.
Race conditions are nothing to mess with, and the worst ones are the ones that don't crash your program but let it wander down some unintended path. Synchronization isn't just for fun, you know.
And nefarious uses of HashMap aside, I still attest - this is, indeed, a beautiful race.
Addendum: I've been yelled at a few times for calling any race condition "beautiful". I'll defend myself by our apparently human nature to generally call intricate complexity beautiful (i.e. waves crashing on a shore, nature in general).
Most races end up being about data corruption. This one is data corruption that manifests as control-flow corruption. And it does so fantastically, without an error (infinite loops notwithstanding).
As the evolution analogy goes, if you drive a needle into a pocket watch, chances are you'll simply destroy it. But there's a tiny chance you'll actually make it a better watch (clearly not the case here). And another tiny chance you'll simply make it something "different" - but still, per se, functioning.
Again, my use of "beautiful" might be more appropriate as "a complex mutation with surprising non-destruction" :)

HashMap infinite loop problem: a case study

This article will provide you with a complete root cause analysis and solution for a java.util.HashMap infinite loop problem affecting an Oracle OSB 11g environment running on the IBM JRE 1.6 JVM.
This case study will also demonstrate how you can combine the AIX ps -mp command and Thread Dump analysis to pinpoint the top CPU-contributing Threads within your Java VM(s). It will also demonstrate how dangerous using a non-Thread-safe HashMap data structure can be within a multi-Thread environment / Java EE container.

Environment specifications

- Java EE server: Oracle Service Bus 11g
- Middleware OS: AIX 6.1
- Java VM: IBM JRE 1.6 SR9 64-bit
- Platform type: Service Bus

Monitoring and troubleshooting tools

- AIX nmon & topas (CPU monitoring)
- AIX ps -mp (CPU and Thread breakdown OS command)
- IBM JVM Java core / Thread Dump (thread analysis and ps -mp data correlation)

Problem overview

- Problem type: Very high CPU observed in our production environment

A high CPU problem was observed via AIX nmon monitoring of a host running a Weblogic Oracle Service Bus 11g middleware environment.

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified:

- What is the client impact? HIGH
- Recent change of the affected platform? Yes, the platform was recently migrated from Oracle ALSB 2.6 (Solaris & HotSpot 1.5) to Oracle OSB 11g (AIX OS & IBM JRE 1.6)
- Any recent traffic increase to the affected platform? No
- How does this high CPU manifest itself? A sudden CPU increase was observed and does not go down, even after load drops to near-zero levels
- Did an Oracle OSB recycle resolve the problem? Yes, but the problem returns after a few hours or a few days (unpredictable pattern)

Conclusion #1: The high CPU problem appears to be intermittent rather than purely correlated with load.
Conclusion #2: Since the high CPU remains after load goes down, this indicates either that a JVM threshold was crossed past a point of no return and/or the presence of hung or infinitely looping Threads.

AIX CPU analysis

The AIX nmon & topas OS commands were used to monitor the CPU utilization of the system and the Java process.
The CPU utilization was confirmed to go as high as 100% (saturation level).
Such a high CPU level remained until the JVM was recycled.

AIX CPU Java Thread breakdown analysis

One of the best troubleshooting approaches for this type of issue is to generate an AIX ps -mp snapshot combined with a Thread Dump. This was achieved by executing the commands below:

ps -mp <Java PID> -o THREAD

Then immediately execute:

kill -3 <Java PID>

** This will generate an IBM JRE Thread Dump / Java core file (javacorexyz..) **

The AIX ps -mp command output was generated as per below:
USER       PID      PPID       TID  ST  CP  PRI  SC  WCHAN             F        TT  BND  COMMAND
user  12910772   9896052         -   A  97   60  98  *                 342001    -    -  /usr/java6_64/bin/java -Dweblogic.Nam
   -         -         -   6684735   S   0   60   1  f1000f0a10006640  8410400   -    -
   -         -         -   6815801   Z   0   77   1  -                 c00001    -    -
   -         -         -   6881341   Z   0  110   1  -                 c00001    -    -
   -         -         -   6946899   S   0   82   1  f1000f0a10006a40  8410400   -    -
   -         -         -   8585337   S   0   82   1  f1000f0a10008340  8410400   -    -
   -         -         -   9502781   S   0   82   1  f1000f0a10009140  8410400   -    -
   -         -         -  10485775   S   0   82   1  f1000f0a1000a040  8410400   -    -
   -         -         -  10813677   S   0   82   1  f1000f0a1000a540  8410400   -    -
   -         -         -  21299315   S  95   62   1  f1000a01001d0598  410400    -    -
   -         -         -  25493513   S   0   82   1  f1000f0a10018540  8410400   -    -
   -         -         -  25690227   S   0   86   1  f1000f0a10018840  8410400   -    -
   -         -         -  25755895   S   0   82   1  f1000f0a10018940  8410400   -    -
   -         -         -  26673327   S   0   82   1  f1000f0a10019740  8410400   -    -


As you can see in the above snapshot, one primary culprit Thread Id (21299315) was found consuming ~95% of the entire CPU.

Thread Dump analysis and ps -mp correlation

Once the primary culprit Thread was identified, the next step was to correlate this data with the Thread Dump data and identify the source / culprit at the code level.
But first, we had to convert the Thread Id from decimal to hexadecimal format, since IBM JRE Thread Dump native Thread Ids are printed in hexadecimal.

Culprit Thread Id 21299315 >> 0x1450073 (hexadecimal)
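If you want to script this conversion, a one-liner does it (a small illustrative snippet of my own, not part of the original analysis):

// Converts the decimal TID from the ps -mp output into the hex form
// used in IBM javacore files: prints "0x1450073" for TID 21299315.
public class TidToHex {
    public static void main(String[] args) {
        System.out.println("0x" + Integer.toHexString(21299315));
    }
}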
A quick search within the generated Thread Dump file revealed the culprit Thread, as per below.
The Weblogic ExecuteThread #97 stack trace can be found below:
3XMTHREADINFO      "[STUCK] ExecuteThread: '97' for queue: 'weblogic.kernel.Default (self-tuning)'"
                   J9VMThread:0x00000001333FFF00, j9thread_t:0x0000000117C00020,
                   java/lang/Thread:0x0700000043184480, state:CW, prio=1
3XMTHREADINFO1     (native thread ID:0x1450073, native priority:0x1, native policy:UNKNOWN)
3XMTHREADINFO3     Java callstack:
4XESTACKTRACE      at java/util/HashMap.findNonNullKeyEntry(HashMap.java:528(Compiled Code))
4XESTACKTRACE      at java/util/HashMap.putImpl(HashMap.java:624(Compiled Code))
4XESTACKTRACE      at java/util/HashMap.put(HashMap.java:607(Compiled Code))
4XESTACKTRACE      at weblogic/socket/utils/RegexpPool.add(RegexpPool.java:20(Compiled Code))
4XESTACKTRACE      at weblogic/net/http/HttpClient.resetProperties(HttpClient.java:129(Compiled Code))
4XESTACKTRACE      at weblogic/net/http/HttpClient.openServer(HttpClient.java:374(Compiled Code))
4XESTACKTRACE      at weblogic/net/http/HttpClient.New(HttpClient.java:252(Compiled Code))
4XESTACKTRACE      at weblogic/net/http/HttpURLConnection.connect(HttpURLConnection.java:189(Compiled Code))
4XESTACKTRACE      at com/bea/wli/sb/transports/http/HttpOutboundMessageContext.send(HttpOutboundMessageContext.java(Compiled Code))
4XESTACKTRACE      at com/bea/wli/sb/transports/http/wls/HttpTransportProvider.sendMessageAsync(HttpTransportProvider.java(Compiled Code))
4XESTACKTRACE      at sun/reflect/GeneratedMethodAccessor2587.invoke(Bytecode PC:58(Compiled Code))
4XESTACKTRACE      at sun/reflect/DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37(Compiled Code))
4XESTACKTRACE      at java/lang/reflect/Method.invoke(Method.java:589(Compiled Code))
4XESTACKTRACE      at com/bea/wli/sb/transports/Util$1.invoke(Util.java(Compiled Code))
4XESTACKTRACE      at $Proxy115.sendMessageAsync(Bytecode PC:26(Compiled Code))
4XESTACKTRACE      at com/bea/wli/sb/transports/LoadBalanceFailoverListener.sendMessageAsync(LoadBalanceFailoverListener.java:141(Compiled Code))
4XESTACKTRACE      at com/bea/wli/sb/transports/LoadBalanceFailoverListener.onError(LoadBalanceFailoverListener.java(Compiled Code))
4XESTACKTRACE      at com/bea/wli/sb/transports/http/wls/HttpOutboundMessageContextWls$RetrieveHttpResponseWork.handleResponse(HttpOutboundMessageContextWls.java(Compiled Code))
4XESTACKTRACE      at weblogic/net/http/AsyncResponseHandler$MuxableSocketHTTPAsyncResponse$RunnableCallback.run(AsyncResponseHandler.java:531(Compiled Code))
4XESTACKTRACE      at weblogic/work/ContextWrap.run(ContextWrap.java:41(Compiled Code))
4XESTACKTRACE      at weblogic/work/SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528(Compiled Code))
4XESTACKTRACE      at weblogic/work/ExecuteThread.execute(ExecuteThread.java:203(Compiled Code))
4XESTACKTRACE      at weblogic/work/ExecuteThread.run(ExecuteThread.java:171(Compiled Code))

Thread Dump analysis: HashMap infinite loop condition!

As you can see from the above stack trace of Thread #97, the Thread is stuck in an infinite loop / Thread race condition over a java.util.HashMap object (IBM JRE implementation).
This finding was quite interesting given that this HashMap is actually created / owned by the Weblogic 11g kernel code itself >> weblogic/socket/utils/RegexpPool

Root cause: non-Thread-safe HashMap in Weblogic 11g (10.3.5.0) code!

Following this finding and data-gathering exercise, our team created an SR with Oracle support, which confirmed this defect within the Weblogic 11g code base.
As you may already know, using a non-Thread-safe / non-synchronized HashMap under concurrent Thread conditions is very dangerous and can easily lead to internal HashMap index corruption and/or infinite looping. This is a golden rule for any middleware software such as Oracle Weblogic, IBM WAS, and Red Hat JBoss, which rely heavily on HashMap data structures for various Java EE and caching services.
The most common solution is to use the ConcurrentHashMap data structure, which is designed for this type of concurrent Thread execution context.
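As a minimal sketch (hypothetical cache code of my own, not the actual Weblogic fix), the swap is usually a one-line change, plus taking advantage of the atomic compound operations:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache illustrating the ConcurrentHashMap swap; not the
// actual Weblogic patch code.
class EndpointCache {
    // Safe for concurrent put/get; resizing is handled internally
    // without the corruption risk of a plain HashMap.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String lookup(String key) {
        // computeIfAbsent is atomic per key: concurrent callers won't
        // duplicate the expensive computation for the same key.
        return cache.computeIfAbsent(key, k -> expensiveResolve(k));
    }

    private String expensiveResolve(String key) {
        return key.toUpperCase(); // stand-in for real work
    }
}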

Solution

Since this problem was also affecting other Oracle Weblogic 11g customers, Oracle support was quite fast in providing us with a patch for our target WLS 11g version. Please find the patch description and details below:
Content:
========
This patch contains Smart Update patch AHNT for WebLogic Server 10.3.5.0

Description:
============
HIGH CPU USAGE AT HASHMAP.PUT() IN REGEXPPOOL.ADD()

Patch Installation Instructions:
================================
- copy content of this zip file with the exception of README file to your
  SmartUpdate cache directory (MW_HOME/utils/bsu/cache_dir by default)
- apply patch using Smart Update utility

Conclusion

I hope this case study has helped you understand how to pinpoint the culprit of high-CPU Threads at the code level when using AIX & the IBM JRE, and the importance of proper Thread-safe data structures for highly concurrent applications.
Please don't hesitate to post any comment or question.
