Hello awesome people,
The more I learn about probabilistic systems, the more they fascinate me. Bloom filters are a key mechanism in probabilistic systems: they can tell you with 100% certainty that an element is not in a set, but they can never tell you with certainty that it is. This contract lets us represent an ever-growing set with a filter of fixed size while keeping lookup time O(1). It is a concept you will find applications for everywhere once you have understood it. This week’s paper by Burton H. Bloom (hence the name Bloom filter) is the first description of the idea, on which all further research is built.
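To make that contract concrete, here is a minimal toy sketch in Python (my own illustration, not the construction from the paper itself, which is framed in terms of hash-coded messages): the filter is just a bit array plus k hash functions; adding an element sets k bits, and a lookup answers either “definitely not in the set” or “probably in the set”.

import hashlib
import math


class BloomFilter:
    """Minimal Bloom filter: a fixed-size bit array plus k hash functions."""

    def __init__(self, num_bits: int, num_hashes: int):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(math.ceil(num_bits / 8))

    def _positions(self, item: str):
        # Derive k bit positions by salting one hash function with the index.
        # This is just one simple choice; any family of independent hashes works.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: str) -> None:
        # Set the k bits for this element; the filter never grows.
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # If any of the k bits is unset, the element was definitely never added.
        # If all are set, it was *probably* added (false positives are possible).
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))


bf = BloomFilter(num_bits=10_000, num_hashes=7)
bf.add("alice@example.com")
print(bf.might_contain("alice@example.com"))  # True: probably in the set (here: definitely, we just added it)
print(bf.might_contain("bob@example.com"))    # False: definitely not in the set

Both add and lookup touch exactly k bits, so the cost per operation is constant regardless of how many elements you have inserted.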
If you enjoy reading the Weekly CS Paper, I would be really thankful if you would support it with a few bucks: gum.co/weeklycspaper. The newsletter will stay free forever!
Abstract:
In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency. The new methods are intended to reduce the amount of space required to contain the hash-coded information from that associated with conventional methods. The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular, applications in which a large amount of data is involved and a core resident hash area is consequently not feasible using conventional methods. In such applications, it is envisaged that overall performance could be improved by using a smaller core resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to “catch” the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods. Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.
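As a rough rule of thumb for the space/error trade-off the abstract describes (this is the standard later analysis, not the paper’s own notation): with m bits in the hash area, k hash functions, and n stored elements, the false positive rate is approximately

p \approx \left(1 - e^{-kn/m}\right)^{k}

so you can push the error frequency down by spending a few more bits per element, without touching the reject time.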
Download Link:
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.295.7552&rep=rep1&type=pdf
Additional Links:
- Bloom filter on Wikipedia
- What are Bloom Filters? A nice little animated video explaining Bloom filters, using a library as an example