We find that across these datacenter applications, there is both a sizable benefit from properly sharing resources and a potential degradation from improperly sharing them. In this paper, we first present a study of the importance of thread-to-core mappings for applications in the datacenter, as threads can be mapped to share or not share caches and bus bandwidth. Second, we investigate the impact of co-locating threads from multiple applications with diverse memory behavior and discover that the best mapping for a given application changes depending on its co-runner. Third, we investigate the application characteristics that impact performance in the various thread-to-core mapping scenarios. Finally, we present both a heuristics-based and an adaptive approach to arrive at good thread-to-core decisions in the datacenter. By employing our adaptive thread-to-core mapper, the performance of the datacenter applications presented in this work improved by up to 22% over status quo thread-to-core mapping.
A Bloom filter implementation in Java that optionally supports persistence and counting buckets.
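For readers new to the data structure, here is a minimal sketch of how a basic Bloom filter works. This is illustrative only and does not reflect this library's actual API; the class and method names (`SimpleBloomFilter`, `mightContain`) and the double-hashing scheme are assumptions for the example.

```java
import java.util.BitSet;

// A minimal Bloom filter sketch (illustrative; not this library's API).
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int numHashes;

    public SimpleBloomFilter(int size, int numHashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.numHashes = numHashes;
    }

    // Derive k bit indices from two base hashes (double-hashing style).
    // The second hash here is just a rotation of the first; a real
    // implementation would use an independent hash function.
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (h1 >>> 16) | (h1 << 16);
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < numHashes; i++) {
            bits.set(index(key, i));
        }
    }

    // May return false positives, but never false negatives.
    public boolean mightContain(String key) {
        for (int i = 0; i < numHashes; i++) {
            if (!bits.get(index(key, i))) {
                return false;
            }
        }
        return true;
    }
}
```

A counting variant replaces each bit with a small counter so that elements can also be removed, which is what the counting-buckets option refers to.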
This is a very early-stage project. It works for our needs, but we haven't verified that it works beyond that. Issue reports and patches are very much appreciated!
Some improvements we'd love to see include:
Optimized code paths for particular counting-bucket sizes
More hash functions to choose from
Variable cache sizes and types, enabling, for example, a lower-memory read-only mode or a smaller cache that performs disk seeks for some operations
Support for on-the-fly bucket expansion, possibly via a d-left counting Bloom filter
A high-performance version of java.util.LinkedHashMap for use as a software cache.
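On the last item: the stock `java.util.LinkedHashMap` already supports the software-cache pattern via its access-order mode and the `removeEldestEntry` hook, which is presumably the starting point a higher-performance version would improve on. A minimal sketch (the class name `LruCache` and capacity parameter are assumptions for the example):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache built on java.util.LinkedHashMap's access-order mode.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder=true: iteration order is least-recently-accessed first,
        // so the "eldest" entry is the LRU candidate for eviction.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once we exceed capacity.
        return size() > maxEntries;
    }
}
```

The main cost of this approach is that every access mutates the internal linked list under the map's locking discipline, which is one reason a purpose-built cache can outperform it.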