Writing thread-safe Java programs requires a developer to use proper locking when modifying shared data. Locking establishes the orderings needed to satisfy the Java Memory Model and guarantee the visibility of changes to other threads.

Data changed outside synchronization has NO specified semantics under the Java Memory Model! The JVM is free to reorder instructions and limit visibility in ways that are likely to be surprising to a developer.
Synchronized
Every object instance has a monitor that can be locked by one thread at a time. The synchronized keyword can be specified on a method or in block form to lock the monitor. Modifying a field while synchronized on an object guarantees that subsequent reads from any other thread synchronized on the same object will see the updated value. Note that writes made outside synchronization, or while synchronized on a different object than the one used by the reader, are not guaranteed to ever become visible to other threads.
When synchronized is specified on a non-static method, the this reference is used as the lock instance. In a synchronized static method, the Class object defining the method is used as the instance. In block form, the monitor of the specified object instance is locked.
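As a minimal sketch (the class and field names here are illustrative, not from the original text), both forms acquire the same monitor:
import java.lang.Runnable; // no special imports required for synchronized

public class SharedCounter {
    private int value;

    // Method form: acquires the monitor of "this" for the whole method
    public synchronized int increment() {
        return ++value;
    }

    // Block form: acquires the same monitor ("this") only for the statements in the block
    public int current() {
        synchronized (this) {
            return value;
        }
    }
}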
Lock
The java.util.concurrent.locks package has a standard Lock interface. The ReentrantLock implementation duplicates the functionality of the synchronized keyword but also provides additional functionality such as obtaining information about the state of the lock, non-blocking tryLock(), and interruptible locking.
Example of using an explicit ReentrantLock instance:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final Lock lock = new ReentrantLock();
    private int value = 0;

    public int increment() {
        lock.lock();        // block until the lock is acquired
        try {
            return ++value;
        } finally {
            lock.unlock();  // always release, even if an exception is thrown
        }
    }
}
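The additional functionality can be sketched as well. The hypothetical tryIncrement() method below (an illustrative addition to the Counter above, not part of the original example) uses tryLock() to attempt the lock without blocking:
    // Returns true only if the lock was free and the counter was incremented
    public boolean tryIncrement() {
        if (lock.tryLock()) {      // returns immediately instead of waiting
            try {
                ++value;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;              // another thread holds the lock; nothing was changed
    }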
ReadWriteLock
The java.util.concurrent.locks package also contains a ReadWriteLock interface (and ReentrantReadWriteLock implementation) which is defined by a pair of locks for reading and writing, typically allowing multiple concurrent readers but only one writer. Example of using an explicit ReentrantReadWriteLock to allow multiple concurrent readers:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Statistic {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    public void increment() {
        lock.writeLock().lock();    // exclusive: no readers or other writers
        try {
            value++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public int current() {
        lock.readLock().lock();     // shared: other readers may proceed concurrently
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }
}
Volatile
The volatile modifier can be used to mark a field and indicate that changes to that field must be seen by all subsequent reads by other threads, regardless of synchronization. Thus, volatile provides visibility just like synchronization but scoped only to each read or write of the field. Before Java SE 5, the implementation of volatile was inconsistent between JVM implementations and architectures and could not be relied upon. The Java Memory Model now explicitly defines volatile’s behavior.
An example of using volatile as a signaling flag:
public class Processor implements Runnable {
    private volatile boolean stop;

    public void stopProcessing() {
        stop = true;
    }

    public void run() {
        while (!stop) {
            // .. do processing
        }
    }
}

Marking an array as volatile does not make its entries volatile! In this case volatile applies only to the array reference itself. Instead, use a class like AtomicIntegerArray to create an array with volatile-like entries.
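As an illustration only (the class name and array size below are invented for this sketch), AtomicIntegerArray gives each element volatile-like read/write semantics:
import java.util.concurrent.atomic.AtomicIntegerArray;

public class HitCounters {
    // Each element has volatile-like semantics, unlike the entries of a plain volatile int[]
    private final AtomicIntegerArray hits = new AtomicIntegerArray(16);

    public void record(int index) {
        hits.incrementAndGet(index);   // atomic increment of a single element
    }

    public int hitsFor(int index) {
        return hits.get(index);        // sees the latest write to that element
    }
}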
Atomic Classes
One shortcoming of volatile is that while it provides visibility guarantees, you cannot both check and update a volatile field in a single atomic step. The java.util.concurrent.atomic package contains a set of classes that support atomic compound actions on a single value in a lock-free manner, with visibility semantics similar to volatile.
import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private final AtomicInteger value = new AtomicInteger();

    public int next() {
        return value.incrementAndGet();  // atomic increment-and-read, no locking required
    }
}
The incrementAndGet method is just one example of a compound action available on the Atomic classes. Atomic classes are provided for booleans, integers, longs, and object references as well as arrays of integers, longs, and object references.
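Check-then-act logic can also be written explicitly with compareAndSet. The bounded counter below is a hypothetical sketch (the clamping behavior is just an illustration, not part of the original example):
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Atomically increment, but never beyond the supplied maximum.
    public int incrementUpTo(int max) {
        while (true) {
            int current = value.get();
            if (current >= max) {
                return current;                          // already at the bound; unchanged
            }
            if (value.compareAndSet(current, current + 1)) {
                return current + 1;                      // our update won the race
            }
            // another thread changed the value first; retry with the fresh value
        }
    }
}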
ThreadLocal
One way to contain data within a thread and make locking unnecessary is to use ThreadLocal storage. Conceptually a ThreadLocal acts as if there is a variable with its own version in every Thread. ThreadLocals are commonly used for stashing per-Thread values like the “current transaction” or other resources. Also, they are used to maintain per-thread counters, statistics, or ID generators.
public class TransactionManager {
    private static final ThreadLocal<Transaction> currentTransaction =
        new ThreadLocal<Transaction>() {
            @Override
            protected Transaction initialValue() {
                return new NullTransaction();    // placeholder until a real transaction starts
            }
        };

    public Transaction currentTransaction() {
        Transaction current = currentTransaction.get();
        if (current.isNull()) {
            current = new TransactionImpl();
            currentTransaction.set(current);     // store the new transaction for this thread
        }
        return current;
    }
}
Concurrent Collections
A key technique for properly protecting shared data is to encapsulate the synchronization mechanism with the class holding the data. This technique makes it impossible to improperly access the data as all usage must conform to the synchronization protocol. The java.util.concurrent package holds many data structures designed for concurrent use. Generally, the use of these data structures yields far better performance than using a synchronized wrapper around an unsynchronized collection.
Concurrent Lists and Sets
The java.util.concurrent package contains three concurrent List and Set implementations described in Table 2.
Table 2: Concurrent Lists and Sets

| Class | Description |
| --- | --- |
| CopyOnWriteArraySet | CopyOnWriteArraySet provides copy-on-write semantics where each modification of the data structure results in a new internal copy of the data (writes are thus very expensive). Iterators on the data structure always see a snapshot of the data from when the iterator was created (illustrated in the sketch after this table). |
| CopyOnWriteArrayList | Similar to CopyOnWriteArraySet, CopyOnWriteArrayList uses copy-on-write semantics to implement the List interface. |
| ConcurrentSkipListSet | ConcurrentSkipListSet (added in Java SE 6) provides concurrent access along with sorted set functionality similar to TreeSet. Due to the skip-list-based implementation, multiple threads can generally read and write within the set without contention as long as they aren't modifying the same portions of the set. |
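A small sketch of the copy-on-write snapshot behavior (the list contents are arbitrary): an iterator never sees writes made after it was created, while later reads do.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotDemo {
    public static void main(String[] args) {
        List<String> names = new CopyOnWriteArrayList<String>();
        names.add("alpha");
        names.add("beta");

        // The iterator captures the copy of the underlying array that exists right now...
        for (String name : names) {
            names.add(name + "-copy");    // ...so these writes never appear in this loop
            System.out.println(name);     // prints only "alpha" and "beta"
        }
        System.out.println(names.size()); // 4: the writes are visible to later reads
    }
}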
Concurrent Maps
The java.util.concurrent package contains an extension to the Map interface called ConcurrentMap, which provides some extra methods described in Table 3. All of these methods perform a set of actions in the scope of a single atomic action. Performing this set of actions outside the map would introduce race conditions due to making multiple (non-atomic) calls on the map.
Table 3: ConcurrentMap methods
| Method | Description |
| --- | --- |
| putIfAbsent(K key, V value) : V | If the key is not in the map, put the key/value pair; otherwise do nothing. Returns the old value, or null if the key was not previously in the map. |
| remove(Object key, Object value) : boolean | If the map contains key and it is mapped to value, remove the entry; otherwise do nothing. |
| replace(K key, V value) : V | If the map contains key, replace its mapping with value; otherwise do nothing. |
| replace(K key, V oldValue, V newValue) : boolean | If the map contains key and it is mapped to oldValue, replace the mapping with newValue; otherwise do nothing. |
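As an example, putIfAbsent from Table 3 makes the common check-then-insert idiom safe without external locking. The cache below is a hypothetical sketch (the class name and cached values are invented for illustration):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class GreetingCache {
    private final ConcurrentMap<String, String> greetings =
        new ConcurrentHashMap<String, String>();

    public String greetingFor(String name) {
        String greeting = greetings.get(name);
        if (greeting == null) {
            String created = "Hello, " + name;
            // Atomic check-and-insert: returns the existing value if another thread won the race
            String existing = greetings.putIfAbsent(name, created);
            greeting = (existing != null) ? existing : created;
        }
        return greeting;
    }
}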
There are two ConcurrentMap implementations available, as shown in Table 4.
Table 4: ConcurrentMap implementations
| Class | Description |
| --- | --- |
| ConcurrentHashMap | ConcurrentHashMap provides two levels of internal hashing. The first level chooses an internal segment, and the second level hashes into buckets in the chosen segment. The first level provides concurrency by allowing reads and writes to occur safely on each segment in parallel. |
| ConcurrentSkipListMap | ConcurrentSkipListMap (added in Java SE 6) provides concurrent access along with sorted map functionality similar to TreeMap. Performance bounds are similar to TreeMap, although multiple threads can generally read and write from the map without contention as long as they aren't modifying the same portion of the map. |
Queues
Queues act as pipes between “producers” and “consumers”. Items are put in one end of the pipe and emerge from the other end of the pipe in the same “first-in first-out” (FIFO) order.
The Queue interface was added to java.util in Java SE 5 and, while it can be used in single-threaded scenarios, it is primarily used with multiple threads: one or more producers writing into the queue and one or more consumers reading from it.
The BlockingQueue interface is in java.util.concurrent and extends Queue to provide additional choices of how to handle the scenario where a queue may be full (when a producer adds an item) or empty (when a consumer reads or removes an item).
In these cases, BlockingQueue provides methods that either block forever or block for a specified time period, waiting for the condition to change due to the actions of another thread. Table 5 summarizes the Queue and BlockingQueue methods in terms of key operations and the strategy for dealing with these special conditions (a producer/consumer sketch follows the table).
Table 5: Queue and BlockingQueue methods
| Interface | Strategy | Insert | Remove | Examine |
| --- | --- | --- | --- | --- |
| Queue | Throw exception | add | remove | element |
| Queue | Return special value | offer | poll | peek |
| BlockingQueue | Block forever | put | take | n/a |
| BlockingQueue | Block with timeout | offer | poll | n/a |
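A minimal producer/consumer sketch using put and take (the queue capacity, message contents, and "poison pill" shutdown value are arbitrary choices for this illustration):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class LogPipeline {
    private static final String STOP = "#stop#";   // arbitrary poison pill to end consumption

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(10);

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String line = queue.take();   // blocks until an item is available
                        if (STOP.equals(line)) {
                            return;
                        }
                        System.out.println(line);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        for (int i = 0; i < 100; i++) {
            queue.put("message " + i);                // blocks if the queue is full
        }
        queue.put(STOP);
        consumer.join();
    }
}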
Several Queue implementations are provided by the JDK and their relationships are described in Table 6.
Table 6: Queue Implementations
| Class | Description |
| --- | --- |
| PriorityQueue | PriorityQueue is the only non-concurrent queue implementation and can be used by a single thread to collect items and process them in sorted order. |
| ConcurrentLinkedQueue | An unbounded linked-list queue implementation and the only concurrent implementation not supporting BlockingQueue. |
| ArrayBlockingQueue | A bounded blocking queue backed by an array. |
| LinkedBlockingQueue | An optionally bounded blocking queue backed by a linked list. This is probably the most commonly used Queue implementation. |
| PriorityBlockingQueue | An unbounded blocking queue backed by a heap. Items are removed from the queue in an order based on the Comparator associated with the queue (instead of FIFO order). |
| DelayQueue | An unbounded blocking queue of elements, each with a delay value. Elements can only be removed when their delay has passed, and they are removed in order of the oldest expired item. |
| SynchronousQueue | A zero-length queue where the producer and consumer block until the other arrives. When both threads arrive, the value is transferred directly from producer to consumer. Useful when transferring data between threads. |
Deques
A double-ended queue or Deque (pronounced “deck”) was added in Java SE 6. Deques support not just adding from one end and removing from the other but adding and removing items from both ends. Similarly to BlockingQueue, there is a BlockingDeque interface that provides methods for blocking and timeout in the case of special conditions. Table 7 shows the Deque and BlockingDeque methods. Because Deque extends Queue and BlockingDeque extends BlockingQueue, all of those methods are also available for use.
Table 7: Deque and BlockingDeque methods
| Interface | First or Last | Strategy | Insert | Remove | Examine |
| --- | --- | --- | --- | --- | --- |
| Deque | Head | Throw exception | addFirst | removeFirst | getFirst |
| Deque | Head | Return special value | offerFirst | pollFirst | peekFirst |
| Deque | Tail | Throw exception | addLast | removeLast | getLast |
| Deque | Tail | Return special value | offerLast | pollLast | peekLast |
| BlockingDeque | Head | Block forever | putFirst | takeFirst | n/a |
| BlockingDeque | Head | Block with timeout | offerFirst | pollFirst | n/a |
| BlockingDeque | Tail | Block forever | putLast | takeLast | n/a |
| BlockingDeque | Tail | Block with timeout | offerLast | pollLast | n/a |
One special use case for a Deque is when add, remove, and examine operations all take place on only one end of the pipe. This special case is just a stack (first-in, last-out retrieval order). The Deque interface actually provides methods that use the terminology of a stack: push(), pop(), and peek(). These map to the addFirst(), removeFirst(), and peekFirst() methods of the Deque interface and allow you to use any Deque implementation as a stack (a short sketch follows Table 8). Table 8 describes the Deque and BlockingDeque implementations in the JDK. Note that Deque extends Queue and BlockingDeque extends BlockingQueue.
Table 8: Deques
| Class | Description |
| --- | --- |
| LinkedList | This long-standing data structure has been retrofitted in Java SE 6 to support the Deque interface. You can now use the standard Deque methods to add or remove from either end of the list (many of these methods already existed) and also use it as a non-synchronized stack in place of the fully synchronized Stack class. |
| ArrayDeque | This implementation is not concurrent and supports unbounded queue length (it resizes dynamically as needed). |
| LinkedBlockingDeque | The only concurrent deque implementation, this is a blocking, optionally bounded deque backed by a linked list. |
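A short sketch of the stack-style usage with ArrayDeque (the pushed values are arbitrary examples):
import java.util.ArrayDeque;
import java.util.Deque;

public class UndoStack {
    public static void main(String[] args) {
        Deque<String> actions = new ArrayDeque<String>();

        actions.push("open file");           // equivalent to addFirst
        actions.push("edit line 3");
        actions.push("save");

        System.out.println(actions.peek());  // "save" - examines the head without removing it
        System.out.println(actions.pop());   // "save" - last in, first out
        System.out.println(actions.pop());   // "edit line 3"
    }
}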