A hashtable implementation that allows simultaneous reads and writes from multiple threads. It also offers a concurrent Dictionary and WeakDictionary as hashtable specializations.
This library offers the following classes:
- ConcurrentHashtable: Base class for concurrent hashtable implementations.
- ConcurrentWeakHashtable: Base class for 'weak' concurrent hashtable implementations. The table contents get swept in sync with the garbage collector.
- ConcurrentDictionary: A dictionary associating keys with values, based on ConcurrentHashtable.
- ConcurrentWeakDictionary: A dictionary associating keys with values, based on ConcurrentWeakHashtable. It holds weak references to both keys and values so they can be collected independently. When either the key or the value gets collected, the association will be removed from the dictionary.
- ConcurrentWeakDictionaryStrongKeys: A dictionary associating keys with values, based on ConcurrentWeakHashtable. It holds weak references to its values only. When a value gets collected the association will be removed from the dictionary.
- ConcurrentWeakDictionaryStrongValues: A dictionary associating keys with values, based on ConcurrentWeakHashtable. It holds weak references to its keys only. When a key gets collected the association will be removed from the dictionary.
- Cache: A class that holds strong references to its keys and values as long as they get accessed often enough. When a key-value pair doesn't get accessed often enough, it will switch to weak references and the pair can be garbage collected.
- TinyReaderWriterLock: A small, spinlock-like structure that allows multiple simultaneous readers or a single exclusive writer.
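The actual library is written for .NET; as an illustration only, a reader-writer spinlock of the TinyReaderWriterLock kind can be sketched in Java like this. The counter encoding (positive = reader count, -1 = exclusive writer) is an assumption for the example, not the library's actual implementation.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of a tiny reader-writer spinlock: one atomic counter
// holds the number of active readers; -1 marks an exclusive writer.
final class TinyReaderWriterLock {
    private final AtomicInteger state = new AtomicInteger(0); // >0 readers, -1 writer, 0 free

    void lockForReading() {
        while (true) {
            int s = state.get();
            // Readers may join as long as no writer holds the lock.
            if (s >= 0 && state.compareAndSet(s, s + 1)) return;
            Thread.onSpinWait(); // busy-wait while a writer is active
        }
    }

    void releaseForReading() {
        state.decrementAndGet();
    }

    void lockForWriting() {
        // A writer must wait until there are no readers and no writer.
        while (!state.compareAndSet(0, -1)) Thread.onSpinWait();
    }

    void releaseForWriting() {
        state.set(0);
    }
}
```

A structure like this stays tiny (a single int of state), which matters when every segment of the table carries its own lock.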
The hashtable becomes concurrent by splitting its contents into segments. Each segment can be accessed by multiple reader threads but only one writer thread at a time. Because the segments are kept small, a filled hashtable will contain many of them and collisions between threads will be rare. This approach of avoiding collisions works well only when the hashtable contains a reasonable number of items.
Each segment itself is an independent hashtable that can grow and shrink on its own. This prevents blocking of the entire hashtable when resizing is needed. Because the segments are kept small resizing of each segment should take little time.
This method needs at least two lookups to find an association: first to find the right segment, and then to locate the item within that segment. The hashtable uses a single hash for both: the high bits of the hash select the segment and the low bits locate the item within the segment.
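The two-level lookup can be sketched as follows. This is illustrative Java; the bit counts and the power-of-two segment sizing are assumptions for the example, not the library's actual parameters.

```java
// Illustrative sketch of the two-level lookup: the high bits of one hash
// pick the segment, the low bits index within that segment.
final class SegmentedLookup {
    static final int SEGMENT_BITS = 6;               // 64 segments (assumed)
    static final int SEGMENT_COUNT = 1 << SEGMENT_BITS;

    // High bits of the 32-bit hash select the segment.
    static int segmentIndex(int hash) {
        return hash >>> (32 - SEGMENT_BITS);
    }

    // Low bits locate the slot inside a (power-of-two sized) segment.
    static int slotIndex(int hash, int segmentCapacity) {
        return hash & (segmentCapacity - 1);
    }
}
```

Using disjoint bit ranges of the same hash means one hash computation serves both lookups, and items that share a segment are still spread evenly over that segment's slots.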
Overall table scale maintenance is done on a background worker thread. When segments grow or shrink too much, this background process will split or merge them. This scale maintenance process will not block the table, except for operations that involve the table in its entirety. When the table grows or shrinks quickly, the maintenance process may lag behind. When shrinking this should not be a problem, but when growing (e.g. during startup) this lag may degrade performance temporarily.
The 'weak' variation of the hashtable will sweep its contents whenever a garbage collector run has been detected. During this sweep, all entries that are marked as garbage will be removed from the table. The sweep takes place on a background thread and, like the scale maintenance process, it will not block the table, except for operations that involve the table in its entirety. The sweep process and the scale maintenance process mutually exclude each other.
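The library detects GC runs using .NET mechanisms; Java has no direct equivalent, but the sweep idea can be sketched with WeakReference and ReferenceQueue, where the queue stands in as the "garbage detected" signal. This is a conceptual sketch, not the library's implementation.

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of sweeping collected entries from a weak-value table.
// Values are held weakly; once the collector reclaims a value, its reference
// appears on the queue and the sweep removes the dead association.
final class WeakValueTable<K, V> {
    private final Map<K, WeakReference<V>> map = new HashMap<>();
    private final Map<WeakReference<V>, K> reverse = new HashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();

    synchronized void put(K key, V value) {
        WeakReference<V> ref = new WeakReference<>(value, queue);
        map.put(key, ref);
        reverse.put(ref, key);
    }

    synchronized V get(K key) {
        WeakReference<V> ref = map.get(key);
        return ref == null ? null : ref.get(); // null if collected
    }

    // Remove every entry whose value has been garbage collected.
    // In the real library this runs on a background thread after a GC run.
    synchronized void sweep() {
        Reference<? extends V> r;
        while ((r = queue.poll()) != null) {
            K key = reverse.remove(r);
            if (key != null) map.remove(key);
        }
    }

    synchronized int size() {
        return map.size();
    }
}
```

The essential point mirrors the text above: the sweep is driven by a collection event rather than by table operations, so normal reads and writes proceed undisturbed while dead entries are purged.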
2009 09 26
Created a Silverlight version.
Garbage sweep has been improved. Cleaning the list of weak hashtable references no longer blocks the creation of new weak hashtables.
2009 01 11
New release with some issues fixed and features improved. Also added a Cache class that will hold strong references to its keys and values as long as they get accessed often enough.
Microsoft is finally implementing a concurrent dictionary for .NET Framework 4.0. How long will it take before they implement a weak dictionary?
2008 09 13
Published the project.