ConcurrentHashMap is an alternative to Hashtable and synchronizedMap. It is used in multi-threaded applications and provides better performance than Hashtable and synchronizedMap.
ConcurrentHashMap Implementation In Java
In ConcurrentHashMap we have a new feature called the concurrency level. The ConcurrentHashMap is divided into segments based on the concurrency level, so we need to be careful while choosing it, as it impacts the performance of the map. The default concurrency level is 16, which means 16 threads can access the map simultaneously as long as each thread operates on a different segment; the map is basically divided into 16 segments and each segment is guarded by a different lock. This mechanism boosts the performance of ConcurrentHashMap in a thread-safe environment. Retrieval operations like get() do not lock, so they may overlap with update operations such as put(), remove(), putAll() or clear() and may not reflect the most recent change in the map; bulk operations like putAll() and clear() are not atomic.
Features in ConcurrentHashMap
It doesn’t allow null keys or values.
The iterator in ConcurrentHashMap is fail safe, which means it doesn’t throw ConcurrentModificationException if the map is updated during iteration.
The update operations in ConcurrentHashMap are thread safe, which means at a time only one thread can update a segment of the ConcurrentHashMap.
Any number of threads can perform read operations without locking the map.
Since ConcurrentHashMap locks only a portion of the map, a read can overlap with an update operation. In that case the get() method reflects the most recently completed update.
ConcurrentHashMap is unordered, so the iteration order is not guaranteed.
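The features above can be sketched in a short program. This is a minimal illustration, not production code; the keys and values are arbitrary.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("one", 1);
        map.put("two", 2);

        // Null keys (and values) are rejected with a NullPointerException
        try {
            map.put(null, 3);
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }

        // The iterator is fail safe: updating the map during iteration
        // does not throw ConcurrentModificationException
        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            map.put("three", 3); // safe while iterating
        }
        System.out.println(map.size()); // 3
    }
}
```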
When to use ConcurrentHashMap
It should be used when there are more reader threads than writer threads. If the number of writer threads is equal to or greater than the number of reader threads, the performance of ConcurrentHashMap is about the same as that of Hashtable or synchronizedMap.
Iterators are basically used to traverse over a set of objects one by one. Java supports two types of iterators:
Fail Fast Iterator
Fail Safe Iterator
Fail Fast Iterator
A fail fast iterator does not allow any modification to a collection while iterating/traversing it. By modification I mean any update to or addition of elements in the collection. If you try to modify the collection during iteration, ConcurrentModificationException is thrown.
The fail fast iterator throws ConcurrentModificationException if a new element is added while iterating the collection.
Similarly, if you try to remove an element from the map (the map.remove() method) while iterating the collection, ConcurrentModificationException is thrown.
However, if you remove an element from the collection through the iterator itself (the iterator.remove() method), no exception is thrown.
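Both behaviours can be demonstrated with a plain HashMap. This is a minimal sketch with arbitrary keys:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // Modifying the map directly during iteration triggers
        // ConcurrentModificationException on the next call to next()
        try {
            for (String key : map.keySet()) {
                map.put("c", 3); // structural modification during iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail fast: ConcurrentModificationException");
        }

        // Removing through the iterator itself is allowed
        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext()) {
            if (it.next().equals("c")) {
                it.remove(); // no exception thrown
            }
        }
        System.out.println(map.size()); // 2
    }
}
```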
Fail Safe Iterator
A fail safe iterator allows modifications to a collection while iterating it. You can do any update, removal or addition while traversing the collection without any exception being thrown. The reason is that the iteration is performed on a clone/copy of the collection, not on the actual collection, so modifications done to the actual collection go unnoticed by the iterator. This means that if the collection is modified during traversal, you might see stale values during iteration. So, basically, the iteration is weakly consistent.
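CopyOnWriteArrayList is a standard example of the copy/snapshot behaviour described above; the elements here are arbitrary placeholders:

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class FailSafeDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        // The iterator works on a snapshot of the list, so adding
        // elements during iteration does not throw any exception,
        // but the additions are not visible to this iterator
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {
            System.out.println(it.next()); // prints only "a" and "b"
            list.add("c");                 // no ConcurrentModificationException
        }
        System.out.println(list.size()); // 4 : the list itself did grow
    }
}
```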
I hope you enjoyed reading this article. Happy Coding 🙂
When it comes to Java, there are two types of equality: reference equality and object equality.
Two references that refer to the same object on the heap are considered equal. To check whether two references refer to the same object, use the ‘==’ operator. The ‘==’ operator compares the bits in the variables; if both refer to the same object, the bits will be equal.
Example: we have a Student class and create one object with two references that refer to it.
The two references sRef1 and sRef2 refer to s1, so the comparison operator returns true. This is reference equality.
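The original article's Student class is not shown here, so the sketch below uses a minimal hypothetical version with the same reference names:

```java
// A minimal Student class (hypothetical: the article's original class is not shown)
class Student {
    String firstName;
    Student(String firstName) { this.firstName = firstName; }
}

public class ReferenceEqualityDemo {
    public static void main(String[] args) {
        Student s1 = new Student("Alice");
        Student sRef1 = s1; // both references point at the same object
        Student sRef2 = s1;

        System.out.println(sRef1 == sRef2);                // true : same object on the heap
        System.out.println(sRef1 == new Student("Alice")); // false : different objects
    }
}
```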
Object equality is used to check if two references referring to two different objects are meaningfully equivalent. If you want to treat two different objects as equal, you must override the hashCode() and equals() methods. Both methods have a default implementation in the class Object. The default behaviour of the hashCode() method is that each object gets a unique number (most versions of Java assign a hash code based on the object's memory address on the heap, so no two objects will have the same hash code). The default implementation of the equals() method returns true only if two references refer to the same object; it uses the ‘==’ operator to check for equality.
Example: in the Student class we have overridden the hashCode() and equals() methods and created our custom implementation of object equality.
Two student objects will be considered equal when they have the same first name, last name and student id and the hash code of the two objects are the same.
You need to override the equals() method to create your own implementation of object equality.
In the given example all the student objects s1, s2 and s3 are different, as they have different first names, last names, departments and hash codes.
Now I add a new constructor that takes studentId as a parameter to show object equality.
The two student objects s1 and s2 are meaningfully equivalent, because as per the hashCode() and equals() methods two objects are equal if they have the same first name, last name and student id, and they have the same hash code.
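A typical Student class of the kind described above might look like this. The field names and values are my own illustration, not the article's original code:

```java
import java.util.Objects;

public class Student {
    private final String firstName;
    private final String lastName;
    private final int studentId;

    public Student(String firstName, String lastName, int studentId) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.studentId = studentId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;                // same reference
        if (!(o instanceof Student)) return false; // different type
        Student other = (Student) o;
        return studentId == other.studentId
                && Objects.equals(firstName, other.firstName)
                && Objects.equals(lastName, other.lastName);
    }

    @Override
    public int hashCode() {
        // Equal students must produce the same hash code
        return Objects.hash(firstName, lastName, studentId);
    }

    public static void main(String[] args) {
        Student s1 = new Student("Alice", "Smith", 42);
        Student s2 = new Student("Alice", "Smith", 42);
        System.out.println(s1 == s2);      // false : different objects
        System.out.println(s1.equals(s2)); // true  : meaningfully equivalent
        System.out.println(s1.hashCode() == s2.hashCode()); // true
    }
}
```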
Rules for hashCode() and equals() Method
If two objects are equal, they must have matching hash codes.
If two objects are equal, calling equals() on either object must return true. That is, if a.equals(b) is true then b.equals(a) is also true.
If two objects have the same hash code, they are not required to be equal. But if they are equal, they must have the same hash code. That is, if a.equals(b) is true then a.hashCode() must be equal to b.hashCode(), but not vice versa.
If you override equals(), you must override hashCode(), and vice versa.
The default implementation of the hashCode() method generates a unique integer for each object on the heap. So, if you don't override the hashCode() method, no two objects can ever be equal.
The default behaviour of the equals() method checks whether two references refer to the same object on the heap. If you don't override this method, no two distinct objects will ever be considered equal, since references to two different objects will always have different bit patterns.
I hope you liked this article. Happy Coding everyone 🙂
HashMap is a hash table implementation of the Map interface in Java. A map contains a collection of key-value pairs where each key maps to a value. HashMap permits null values and one null key, and it doesn't permit duplicate keys. It is unordered, so it doesn't guarantee that the order will remain constant over time. HashMap provides constant-time performance for operations like get and put, provided the elements are distributed evenly among the buckets.
The performance of HashMap depends on two factors: the initial capacity and the load factor. The capacity is the number of buckets in the HashMap, and the initial capacity is the capacity at the time the map is created. The load factor is a measure of how full the hash map is allowed to get before it is resized (its capacity is increased). When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, the internal data structures are rebuilt) so that it has approximately twice the number of buckets.
The default load factor is 0.75, but it can be changed while creating the HashMap.
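Both parameters can be supplied through a HashMap constructor overload. The numbers below are arbitrary examples:

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapCapacityDemo {
    public static void main(String[] args) {
        // Defaults: 16 buckets, load factor 0.75 -> resized after 12 entries
        Map<String, Integer> defaults = new HashMap<>();

        // Custom: 32 buckets, load factor 0.5 -> resized after 16 entries
        Map<String, Integer> tuned = new HashMap<>(32, 0.5f);

        tuned.put("a", 1);
        System.out.println(tuned.get("a")); // 1
    }
}
```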
Basic structure of HashMap
Initially, when multiple keys land in the same bucket of a HashMap, the values along with their keys are placed in a linked list. Since Java 8, the linked list in a bucket is converted to a balanced tree when the number of elements in the bucket reaches a certain threshold.
How do the hashCode() and equals() methods impact HashMap?
The hashCode() method converts an object into an integer form (this is the hash code of the object); this process is called hashing. The performance of the HashMap depends on the hashCode() implementation. In HashMap, hashCode() is used to calculate the bucket where the key-value pair should land.
The equals() method is used to check if two objects are equal or not. HashMap uses this method to determine whether the key of the key-value pair being inserted already exists.
A bucket is one element of the HashMap array. It is used to store nodes, and each node contains a key-value pair. A bucket can hold multiple key-value pairs, which are implemented as a linked list. If the number of nodes in the bucket reaches a certain threshold, the linked list is converted to a balanced tree. The hashCode() and equals() methods determine in which bucket a key-value pair lands.
The put method in HashMap
The put method is used to insert key-value pairs into a HashMap. Here are the steps:
The hashCode() method is called to calculate the hash code of the key. The hash code determines the index, that is, the bucket where the key-value pair will be inserted.
The hashCode() and equals() methods are called to check whether the key being inserted already exists in the bucket. Each of the keys in the linked list or the tree is compared with the key being inserted. If the key already exists in the bucket, its value is replaced with the new value; otherwise a new node is created and the key-value pair is inserted into the bucket.
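The hash-to-bucket step can be sketched as below. This is modeled on the OpenJDK implementation, but the exact bit-mixing is an internal implementation detail, not part of the HashMap contract:

```java
public class BucketIndexDemo {
    // Spread the hash: XOR the high bits into the low bits so that
    // both halves of the hash code influence the bucket index
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    // The capacity is assumed to be a power of two, so (capacity - 1)
    // acts as a bit mask selecting the low bits of the spread hash
    static int bucketIndex(Object key, int capacity) {
        return (capacity - 1) & spread(key.hashCode());
    }

    public static void main(String[] args) {
        System.out.println(bucketIndex("one", 16));
        System.out.println(bucketIndex("two", 16));
    }
}
```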
The get method in HashMap
In the get method the key is provided as the parameter and the value mapped to the key is returned.
Here are the steps:
The hashCode() method is called to calculate the hash code of the key.
The index, which determines the bucket that contains the key-value pair, is calculated from the hash code of the key.
The key is searched for in the bucket. Each key in the bucket is compared with the key passed to the get method using the equals() and hashCode() methods. If the key matches any of the keys in the bucket, the value associated with it is returned; otherwise null is returned.
Performance of HashMap
Up to Java 7, in the worst-case scenario, when multiple hash code values end up in the same bucket, the values are placed in a linked list, which degrades HashMap performance from O(1) to O(n).
Since Java 8, when the number of items in a bucket's linked list reaches a certain threshold, the list is converted to a balanced tree, and when the number of elements falls below a certain threshold (due to deletion), it is converted back to a linked list. This improves the worst-case performance from O(n) to O(log n).
I hope you enjoyed reading this article. Happy Coding 🙂
The package com.sun.image.codec.jpeg has been removed in Java 7 as mentioned in the Java SE 7 and JDK 7 Compatibility Guide.
Synopsis: The Non-standard com.sun.image.codec.jpeg Package is Retired
Description: The com.sun.image.codec.jpeg package was added in JDK 1.2 (Dec 1998) as a non-standard way of controlling the loading and saving of JPEG format image files. This package was never part of the platform specification and it has been removed from the Java SE 7 release. The Java Image I/O API was added to the JDK 1.4 release as a standard API and eliminated the need for the com.sun.image.codec.jpeg package.
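The standard replacement is javax.imageio.ImageIO. The sketch below round-trips a JPEG in memory; the image size and the in-memory streams are just placeholders standing in for real files:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class JpegWithImageIO {
    public static void main(String[] args) throws IOException {
        // Create a small image in memory (stands in for a loaded file)
        BufferedImage image = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);

        // Writing a JPEG (replaces JPEGCodec.createJPEGEncoder)
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(image, "jpg", out);

        // Reading a JPEG back (replaces JPEGCodec.createJPEGDecoder)
        BufferedImage decoded = ImageIO.read(new ByteArrayInputStream(out.toByteArray()));
        System.out.println(decoded.getWidth() + "x" + decoded.getHeight()); // 8x8
    }
}
```

With real files you would use `ImageIO.read(new File("in.jpg"))` and `ImageIO.write(img, "jpg", new File("out.jpg"))` instead of the byte streams.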
Let me know if this solution worked for you. Happy Coding guys and gals 🙂