Synchronized collections
The synchronized collection classes include `Vector` and `Hashtable`, which achieve thread safety by encapsulating their state and synchronizing every public method so that only one thread at a time can access the collection state.
The synchronized collections are thread-safe, but you may sometimes need additional client-side locking to guard compound actions. Common compound actions on collections include iteration, navigation, and conditional operations such as put-if-absent.
Because the synchronized collections commit to a synchronization policy that supports client-side locking, it is possible to create new operations that are atomic with respect to other collection operations as long as we know which lock to use. The synchronized collection classes guard each method with the lock on the synchronized collection object itself.
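For example, here is a minimal sketch of an atomic put-if-absent for a wrapped `List` (the `ListHelper` class name is illustrative); it works because it synchronizes on the same lock that guards the list's own methods:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Client-side locking: guard the compound action with the same lock
// the synchronized wrapper uses internally (the list object itself).
public class ListHelper<E> {
    public final List<E> list =
            Collections.synchronizedList(new ArrayList<E>());

    public boolean putIfAbsent(E x) {
        synchronized (list) {              // the wrapper's own lock
            boolean absent = !list.contains(x);
            if (absent) {
                list.add(x);               // contains + add is now atomic
            }
            return absent;
        }
    }
}
```

Synchronizing on the helper object instead would not work: the list's own operations would still use the list's lock, so `putIfAbsent` would not be atomic with respect to them.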
The iterators returned by the synchronized collections are not designed to deal with concurrent modification; they are fail-fast, meaning that if they detect that the collection has changed since iteration began, they throw the unchecked `ConcurrentModificationException`. Note that `ConcurrentModificationException` can arise in single-threaded code as well; this happens when objects are removed from the collection directly rather than through `Iterator.remove`. One way to avoid it is to hold the collection's lock for the duration of the iteration, as in the sketch below.
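A minimal sketch of client-side locking during iteration (`doSomething` is a placeholder for per-element work):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Holding the wrapper's lock for the whole iteration keeps other
// threads from modifying the list mid-iteration, at the cost of
// blocking them until the iteration completes.
public class SafeIteration {
    public static void processAll(List<String> syncList) {
        synchronized (syncList) {          // the wrapper's own lock
            for (String element : syncList) {
                doSomething(element);
            }
        }
    }

    private static void doSomething(String element) {
        System.out.println(element);       // placeholder per-element work
    }

    public static void main(String[] args) {
        List<String> list = Collections.synchronizedList(new ArrayList<>());
        Collections.addAll(list, "a", "b", "c");
        processAll(list);
    }
}
```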
There are several reasons, however, why locking a collection during iteration may be undesirable. Other threads that need to access the collection will block until the iteration is complete. An alternative is to clone the collection and iterate the copy instead. Cloning has an obvious performance cost; whether this is a favorable tradeoff depends on many factors, including the size of the collection, how much work must be done for each element, and how often the collection is iterated relative to other operations.
While locking can prevent iterators from throwing `ConcurrentModificationException`, you have to remember to use locking everywhere a shared collection might be iterated, including the places where iteration is hidden: calling `toString`, `hashCode`, or `equals` on a collection may iterate it internally. All of these indirect uses of iteration can cause `ConcurrentModificationException`.
Concurrent collections
Synchronized collections achieve their thread safety by serializing all access to the collection's state. The cost of this approach is poor concurrency; when multiple threads contend for the collection-wide lock, throughput suffers. The concurrent collections, on the other hand, are designed for concurrent access from multiple threads.
ConcurrentHashMap
`ConcurrentHashMap` is a hash-based `Map` like `HashMap`, but it uses an entirely different locking strategy that offers better concurrency and scalability. Instead of synchronizing every method on a common lock, restricting access to a single thread at a time, it uses a finer-grained mechanism called *lock striping* to allow a greater degree of shared access. Arbitrarily many reading threads can access the map concurrently, readers can access the map concurrently with writers, and a limited number of writers can modify the map concurrently. The result is far higher throughput under concurrent access, with little performance penalty for single-threaded access.
`ConcurrentHashMap` further improves on the synchronized collection classes by providing iterators that do not throw `ConcurrentModificationException`, thus eliminating the need to lock the collection during iteration. The iterators returned by `ConcurrentHashMap` are *weakly consistent* instead of fail-fast. A weakly consistent iterator can tolerate concurrent modification, traverses elements as they existed when the iterator was constructed, and may (but is not guaranteed to) reflect modifications to the collection after the construction of the iterator.
The semantics of methods that operate on the entire `Map`, such as `size` and `isEmpty`, have been slightly weakened to reflect the concurrent nature of the collection. Since the result of `size` could be out of date by the time it is computed, it is really only an estimate, so `size` is allowed to return an approximation instead of an exact count. While at first this may seem disturbing, in reality methods like `size` and `isEmpty` are far less useful in concurrent environments because these quantities are moving targets. So the requirements for these operations were weakened to enable performance optimizations for the most important operations, primarily `get`, `put`, `containsKey`, and `remove`.
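As a sketch of how `ConcurrentHashMap` removes the need for client-side locking (the `Memoizer` class and `compute` method are illustrative), the put-if-absent compound action from earlier becomes a single atomic map operation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// putIfAbsent is atomic on ConcurrentHashMap, so no external lock is
// needed; two racing threads may both compute, but only one value wins.
public class Memoizer {
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    public String lookup(String key) {
        String value = cache.get(key);
        if (value == null) {
            String computed = compute(key);
            String raced = cache.putIfAbsent(key, computed); // atomic
            value = (raced == null) ? computed : raced;      // keep the winner
        }
        return value;
    }

    private String compute(String key) {
        return key.toUpperCase();  // stand-in for an expensive computation
    }
}
```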
CopyOnWriteArrayList
`CopyOnWriteArrayList` is a concurrent replacement for a synchronized `List` that offers better concurrency in some common situations and eliminates the need to lock or copy the collection during iteration.
The copy-on-write collections derive their thread safety from the fact that as long as an effectively immutable object is properly published, no further synchronization is required when accessing it. They implement mutability by creating and republishing a new copy of the collection every time it is modified. Iterators for the copy-on-write collections retain a reference to the backing array that was current at the start of iteration, and since this array will never change, they need to synchronize only briefly to ensure visibility of the array contents. As a result, multiple threads can iterate the collection without interference from one another or from threads wanting to modify the collection. The iterators returned by the copy-on-write collections return the elements exactly as they were at the time the iterator was created, regardless of subsequent modifications.
The copy-on-write collections are reasonable to use only when iteration is far more common than modification. This criterion exactly describes many event-notification systems: delivering a notification requires iterating the list of registered listeners and calling each one of them, and in most cases registering or unregistering an event listener is far less common than receiving an event notification.
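A minimal sketch of that listener-list use case (the `EventSource` class is illustrative, with `Runnable` standing in for a listener interface):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Iteration (notification) dominates modification (registration),
// so a copy-on-write list fits well here.
public class EventSource {
    private final List<Runnable> listeners = new CopyOnWriteArrayList<>();

    public void register(Runnable listener)   { listeners.add(listener); }
    public void unregister(Runnable listener) { listeners.remove(listener); }

    public void fire() {
        // Iterates a snapshot; concurrent register/unregister calls
        // neither block nor cause ConcurrentModificationException.
        for (Runnable listener : listeners) {
            listener.run();
        }
    }
}
```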
Blocking queues and the producer-consumer pattern
Blocking queues provide blocking `put` and `take` methods as well as the timed equivalents `offer` and `poll`. If the queue is full, `put` blocks until space becomes available; if the queue is empty, `take` blocks until an element is available.
Blocking queues support the producer-consumer design pattern. This pattern simplifies development because it removes code dependencies between producer and consumer classes, and simplifies workload management by decoupling activities that may produce or consume data at different or variable rates.
If the producers consistently generate work faster than the consumers can process it, eventually the application will run out of memory because work items will queue up without bound. Again, the blocking nature of `put` greatly simplifies coding of producers; if we use a bounded queue, then when the queue fills up the producers block, giving the consumers time to catch up because a blocked producer cannot generate more work. The sketch below illustrates the pattern.
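A minimal producer-consumer sketch over a bounded queue (the item strings and queue size are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// One producer and one consumer decoupled by a bounded blocking queue;
// put() throttles the producer automatically when the queue is full.
public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put("item-" + i);     // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    String item = queue.take(); // blocks while the queue is empty
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```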
Blocking queues also provide an `offer` method, which returns a failure status if the item cannot be enqueued. This enables you to create more flexible policies for dealing with overload, such as shedding load, serializing excess work items and writing them to disk, reducing the number of producer threads, or throttling producers in some other manner.
The class library contains several implementations of `BlockingQueue`. `LinkedBlockingQueue` and `ArrayBlockingQueue` are FIFO queues; `PriorityBlockingQueue` is a priority-ordered queue. `SynchronousQueue` is not really a queue at all, in that it maintains no storage space for queued elements. Instead, it maintains a list of queued threads waiting to enqueue or dequeue an element. It reduces the latency associated with moving data from producer to consumer because the work can be handed off directly. The direct handoff also feeds back more information about the state of the task to the producer. Since `SynchronousQueue` has no storage capacity, `put` and `take` will block unless another thread is already waiting to participate in the handoff. `SynchronousQueue` is generally suitable only when there are enough consumers that there nearly always will be one ready to take the handoff.
The blocking queue implementations in `java.util.concurrent` all contain sufficient internal synchronization to safely publish objects from a producer thread to the consumer thread. For mutable objects, producer-consumer designs and blocking queues facilitate serial thread confinement for handing off ownership of objects from producers to consumers. A thread-confined object is owned exclusively by a single thread, but that ownership can be “transferred” by publishing it safely where only one other thread will gain access to it and ensuring that the publishing thread does not access it after the handoff. The safe publication ensures that the object’s state is visible to the new owner, and since the original owner will not touch it again, the object is now confined to the new thread. The new owner may modify it freely since it has exclusive access.
A `Deque` is a double-ended queue that allows efficient insertion and removal from both the head and the tail. Deques lend themselves to a related pattern called work stealing. In a work-stealing design, every consumer has its own deque. If a consumer exhausts the work in its own deque, it can steal work from the tail of someone else’s deque. Work stealing can be more scalable than a traditional producer-consumer design because workers don’t contend for a shared work queue; most of the time they access only their own deque, reducing contention. When a worker has to access another’s deque, it does so from the tail rather than the head, further reducing contention. Work stealing is well suited to problems in which consumers are also producers, where performing a unit of work is likely to result in the identification of more work (e.g. processing a page in a web crawler can result in new pages to be crawled). When a worker identifies a new unit of work, it places it at the end of its own deque; when its deque is empty, it looks for work at the end of someone else’s deque, ensuring that each worker stays busy.
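A minimal sketch of the work-stealing loop (the `StealingWorker` class is illustrative and pairs each worker with a single victim deque, e.g. a `LinkedBlockingDeque`; a real scheduler would choose victims among all workers):

```java
import java.util.concurrent.BlockingDeque;

// A worker drains its own deque from the head and, when it runs dry,
// steals from the tail of a victim's deque.
public class StealingWorker implements Runnable {
    private final BlockingDeque<Runnable> ownDeque;
    private final BlockingDeque<Runnable> victimDeque;

    public StealingWorker(BlockingDeque<Runnable> ownDeque,
                          BlockingDeque<Runnable> victimDeque) {
        this.ownDeque = ownDeque;
        this.victimDeque = victimDeque;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Runnable task = ownDeque.pollFirst();  // own work: from the head
            if (task == null) {
                task = victimDeque.pollLast();     // steal: from the tail
            }
            if (task != null) {
                task.run();                        // may push new work onto ownDeque
            } else {
                Thread.yield();                    // nothing anywhere; back off
            }
        }
    }
}
```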
Blocking and interruptible methods
Threads may block, or pause, for several reasons: waiting for I/O completion, waiting to acquire a lock, waiting to wake up from `Thread.sleep`, or waiting for the result of a computation in another thread. When a thread blocks, it is usually suspended and placed in one of the blocked thread states (`BLOCKED`, `WAITING`, or `TIMED_WAITING`). The distinction between a blocking operation and an ordinary operation that merely takes a long time to finish is that a blocked thread must wait for an event that is beyond its control before it can proceed: the I/O completes, the lock becomes available, or the external computation finishes.
When a method can throw `InterruptedException`, it is telling you that it is a blocking method, and further that if it is interrupted, it will make an effort to stop blocking early.
`Thread` provides the `interrupt` method for interrupting a thread and the `isInterrupted` method for querying whether a thread has been interrupted. Each thread has a boolean property that represents its interrupted status; interrupting a thread sets this status.
Interruption is a cooperative mechanism. One thread cannot force another to stop what it is doing and do something else; when thread A interrupts thread B, A is merely requesting that B stop what it is doing when it gets to a convenient stopping point. The most sensible use for interruption is to cancel an activity. Blocking methods that are responsive to interruption make it easier to cancel long-running activities on a timely basis.
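When your code calls a blocking method, it must deal with `InterruptedException` itself, typically by propagating it or by restoring the interrupted status. A minimal sketch (the `TaskRunner` class and its queue wiring are illustrative):

```java
import java.util.concurrent.BlockingQueue;

// run() cannot rethrow the checked InterruptedException, so it
// restores the interrupted status for code higher up the call stack.
public class TaskRunner implements Runnable {
    private final BlockingQueue<Runnable> queue;

    public TaskRunner(BlockingQueue<Runnable> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            Runnable task = queue.take();          // blocking, interruptible
            task.run();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();    // preserve the interruption
        }
    }
}
```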
Synchronizers
A synchronizer is any object that coordinates the control flow of threads based on its state. Blocking queues can act as synchronizers; other types of synchronizers include semaphores, barriers and latches.
All synchronizers share certain structural properties: they encapsulate state that determines whether threads arriving at the synchronizer should be allowed to pass or forced to wait, provide methods to manipulate that state, and provide methods to wait efficiently for the synchronizer to enter the desired state.
Latches
A latch is a synchronizer that can delay the progress of threads until it reaches its terminal state. A latch acts as a gate: until the latch reaches the terminal state the gate is closed and no thread can pass, and in the terminal state the gate opens, allowing all threads to pass. Once the latch reaches the terminal state, it cannot change state again, so it remains open forever. Latches can be used to ensure that certain activities do not proceed until other one-time activities complete, such as:
- Ensuring that a computation does not proceed until resources it needs have been initialized.
- Ensuring that a service does not start until other services on which it depends have started.
- Waiting until all the parties involved in an activity, for instance the players in a multi-player game, are ready to proceed.
`CountDownLatch` is a flexible latch implementation that can be used in any of these situations; it allows one or more threads to wait for a set of events to occur. The latch state consists of a counter initialized to a positive number, representing the number of events to wait for. The `countDown` method decrements the counter, indicating that an event has occurred, and the `await` methods wait for the counter to reach zero. If the counter is nonzero on entry, `await` blocks until the counter reaches zero, the waiting thread is interrupted, or the wait times out.
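A sketch of the third use case above: a start gate releases all workers at once, and an end gate lets the coordinating thread wait until every worker has finished (the `timeTasks` harness is illustrative):

```java
import java.util.concurrent.CountDownLatch;

// Times how long it takes nThreads to run a task concurrently, using
// one latch as a start gate and another as an end gate.
public class LatchDemo {
    public static long timeTasks(int nThreads, Runnable task)
            throws InterruptedException {
        CountDownLatch startGate = new CountDownLatch(1);
        CountDownLatch endGate = new CountDownLatch(nThreads);

        for (int i = 0; i < nThreads; i++) {
            new Thread(() -> {
                try {
                    startGate.await();            // wait for the starting signal
                    try {
                        task.run();
                    } finally {
                        endGate.countDown();      // report completion
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        long start = System.nanoTime();
        startGate.countDown();                    // open the gate: all threads go
        endGate.await();                          // wait for all threads to finish
        return System.nanoTime() - start;
    }
}
```

Calling `timeTasks(10, someRunnable)` returns the elapsed time for all ten threads to complete the task after being released simultaneously.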
FutureTask
`FutureTask` also acts like a latch. (`FutureTask` implements `Future`, which describes an abstract result-bearing computation.) A computation represented by a `FutureTask` is implemented with a `Callable`, the result-bearing equivalent of `Runnable`, and can be in one of three states: waiting to run, running, or completed. Completion subsumes all the ways a computation can complete, including normal completion, cancellation, and exception. Once a `FutureTask` enters the completed state, it stays in that state forever.
The behavior of `Future.get` depends on the state of the task. If the task is completed, `get` returns the result immediately; otherwise it blocks until the task transitions to the completed state and then returns the result or throws an exception. `FutureTask` conveys the result from the thread executing the computation to the thread retrieving the result; the specification of `FutureTask` guarantees that this transfer constitutes a safe publication of the result.
`FutureTask` is used by the `Executor` framework to represent asynchronous tasks, and can also be used to represent any potentially lengthy computation that can be started before the results are needed.
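A sketch of starting a lengthy computation early and retrieving the result later (the `Preloader` class and `loadProductInfo` method are illustrative stand-ins):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

// Kicks off an expensive computation in a background thread; get()
// returns the result immediately if it is already done, or blocks.
public class Preloader {
    private final FutureTask<String> future =
            new FutureTask<>(() -> loadProductInfo());
    private final Thread thread = new Thread(future);

    public void start() {
        thread.start();                 // begin the computation early
    }

    public String get() throws InterruptedException, ExecutionException {
        return future.get();            // blocks until the task completes
    }

    private String loadProductInfo() {
        return "product-info";          // placeholder for real, slow work
    }
}
```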
Semaphores
Counting semaphores are used to control the number of activities that can access a certain resource or perform a given action at the same time. Counting semaphores can be used to implement resource pools or to impose a bound on a collection.
A `Semaphore` manages a set of virtual permits; the initial number of permits is passed to the `Semaphore` constructor. Activities can acquire permits and release them when they are done with them. If no permit is available, `acquire` blocks until one is (or until the thread is interrupted or the operation times out). The `release` method returns a permit to the semaphore. A degenerate case of a counting semaphore is a binary semaphore, which can be used as a mutex with nonreentrant locking semantics; whoever holds the sole permit holds the mutex.
Semaphores are useful for implementing resource pools such as database connection pools. While it is easy to construct a fixed-size pool that fails if you request a resource from an empty pool, what you really want is to block if the pool is empty and unblock when it becomes nonempty again. Similarly, you can use a `Semaphore` to turn any collection into a blocking bounded collection.
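A sketch of the bounded-collection idea (the `BoundedHashSet` class is illustrative): the semaphore's permit count tracks the remaining capacity, so `add` blocks when the set is full:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.Semaphore;

// A Semaphore imposes a bound on an ordinary Set: one permit per
// available slot, acquired on add and released on remove.
public class BoundedHashSet<T> {
    private final Set<T> set;
    private final Semaphore sem;

    public BoundedHashSet(int bound) {
        this.set = Collections.synchronizedSet(new HashSet<T>());
        this.sem = new Semaphore(bound);
    }

    public boolean add(T o) throws InterruptedException {
        sem.acquire();                 // blocks while the set is full
        boolean wasAdded = false;
        try {
            wasAdded = set.add(o);
            return wasAdded;
        } finally {
            if (!wasAdded) {
                sem.release();         // duplicate element: return the permit
            }
        }
    }

    public boolean remove(Object o) {
        boolean wasRemoved = set.remove(o);
        if (wasRemoved) {
            sem.release();             // free a slot
        }
        return wasRemoved;
    }
}
```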
Barriers
Barriers are similar to latches in that they block a group of threads until some event has occurred. The key difference is that with a barrier, all the threads must come together at a barrier point at the same time in order to proceed. Latches are for waiting for events; barriers are for waiting for other threads.
`CyclicBarrier` allows a fixed number of parties to rendezvous repeatedly at a barrier point and is useful in parallel iterative algorithms that break down a problem into a fixed number of independent subproblems. Threads call `await` when they reach the barrier point. If all threads meet at the barrier point, the barrier has been successfully passed. If a call to `await` times out or a thread blocked in `await` is interrupted, then the barrier is considered broken and all outstanding calls to `await` terminate with `BrokenBarrierException`. If the barrier is successfully passed, `await` returns a unique arrival index for each thread, which can be used to “elect” a leader that takes some special action in the next iteration.
Barriers are often used in simulations, where the work to calculate one step can be done in parallel but all work associated with a given step must complete before advancing to the next step. Another form of barrier is `Exchanger`, a two-party barrier in which the parties exchange data at the barrier point.
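A sketch of the step-synchronized simulation loop (the `SimulationWorker` class and `computeStep` method are illustrative):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Each worker computes its subproblem for the current step, then waits
// at the barrier until every worker has finished that step.
public class SimulationWorker implements Runnable {
    private final CyclicBarrier barrier;
    private final int steps;

    public SimulationWorker(CyclicBarrier barrier, int steps) {
        this.barrier = barrier;
        this.steps = steps;
    }

    public void run() {
        try {
            for (int step = 0; step < steps; step++) {
                computeStep(step);     // this worker's share of the step
                barrier.await();       // rendezvous before the next step
            }
        } catch (InterruptedException | BrokenBarrierException e) {
            Thread.currentThread().interrupt();  // give up on the simulation
        }
    }

    private void computeStep(int step) {
        // subproblem work for this step goes here
    }
}
```

A `Runnable` barrier action passed to the `CyclicBarrier` constructor, e.g. `new CyclicBarrier(nWorkers, () -> commitStep())` where `commitStep` is a hypothetical method that publishes the step's combined result, runs once each time the barrier is successfully passed.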