2009 FindingTheFrequentItemsInStreamsOfData


Subject Headings: Frequent Item, Frequent Items Finding Task, Frequent Items Finding Algorithm.

Notes

Cited by

Quotes

Abstract

1. Introduction

  • Many data generation processes can be modeled as data streams. They produce huge numbers of pieces of data, each of which is simple in isolation, but which taken together lead to a complex whole. For example, the sequence of queries posed to an Internet search engine can be thought of as a stream, as can the collection of transactions across all branches of a supermarket chain. In aggregate, this data can arrive at enormous rates, easily in the realm of hundreds of gigabytes per day or higher. While this data may be archived and indexed within a data warehouse, it is also important to process the data "as it happens," to provide up-to-the-minute analysis and statistics on current trends. Methods to achieve this must be quick to respond to each new piece of information, and must use resources that are very small compared to the total quantity of data.
  • These applications and others like them have led to the formulation of the so-called "streaming model." In this abstraction, algorithms take only a single pass over their input, and must accurately compute various functions while using resources (space and time per item) that are strictly sublinear in the size of the input, ideally polynomial in the logarithm of the input size. The output must be produced at the end of the stream, or when queried on the prefix of the stream that has been observed so far. (Other variations ask for the output to be maintained continuously in the presence of updates, or over a "sliding window" of only the most recent updates.) Some problems are simple in this model: for example, given a stream of transactions, finding the mean and standard deviation of the bill totals can be accomplished by retaining a few "sufficient statistics" (the count of items, the sum of all values, the sum of squared values, etc.); a minimal code sketch of this idea follows this list. Others can be shown to require a large amount of information to be stored, such as determining whether a particular search query has already appeared anywhere within a large stream of queries. Determining which problems can be solved effectively within this model remains an active research area.
  • The frequent items problem (also known as the heavy hitters problem) is one of the most heavily studied questions in data streams. The problem is popular because it is simple to state, and because of its intuitive interest and value. It is important both in itself, and as a subroutine within more advanced data stream computations. Informally, given a sequence of items, the problem is simply to find those items which occur most frequently. Typically, this is formalized as finding all items whose frequency exceeds a specified fraction of the total number of items (illustrated in Figure 1 of the original paper; a small worked example appears after this list). Variations arise when the items are given weights, and further when these weights can also be negative.
  • This abstract problem captures a wide variety of settings. The items can represent packets on the Internet, with the weights being the sizes of the packets. Then the frequent items represent the most popular destinations, or the heaviest bandwidth users (depending on how the items are extracted from the flow identifiers). This knowledge can help optimize routing decisions, guide in-network caching, and inform planning of where to add new capacity. Or, the items can represent queries made to an Internet search engine, and the frequent items are then the (currently) popular terms. These are not simply hypothetical examples, but genuine cases where algorithms for this problem have been applied by large corporations: AT&T and Google, respectively. Given the size of the data (which is generated at high speed), it is important to find algorithms capable of processing each new update very quickly, without blocking. It also helps if the working space of the algorithm is very small, so that the analysis can happen over many different groups in parallel, and because small structures are likely to have better cache behavior, which further increases throughput.
  • Obtaining efficient and scalable solutions to the frequent items problem is also important because many streaming applications need to find frequent items as a subroutine of another, more complex computation. Most directly, mining frequent itemsets inherently builds on finding frequent items as a basic building block. Finding the entropy of a stream requires learning the most frequent items in order to compute their contribution to the entropy directly, and to remove that contribution before approximating the entropy of the residual stream. The HSS (Hierarchical Sampling from Sketches) technique uses hashing to derive multiple substreams, the frequent elements of which are extracted to estimate the frequency moments of the stream. The frequent items problem is also related to the recently popular area of Compressed Sensing.
  • Other work solves generalized versions of the frequent items problem by building on algorithms for the "vanilla" version. Several techniques for finding the frequent items in a "sliding window" of recent updates (instead of all updates) operate by keeping track of the frequent items in many sub-windows. In the "heavy hitters distinct" problem, with applications to detecting network scanning attacks, the count of an item is the number of distinct pairs containing that item paired with a secondary item; it is typically solved by extending a frequent items algorithm with distinct counting algorithms. Frequent items algorithms have also been applied to models of probabilistic streaming data, and within faster "skipping" techniques.
  • Thus the problem is an important one to understand and study in order to produce efficient streaming implementations. It remains an active area, with many new research contributions produced every year on the core problem and its variations. Because of the volume of work on this problem, it is easy to overlook important references or to fail to appreciate the properties of certain algorithms. There are several cases where algorithms first published in the 1980s have been "rediscovered" two decades later; existing work is sometimes claimed to be incapable of a certain guarantee, which in truth it can provide with only minor modifications; and experimental evaluations do not always compare against the most suitable methods.
  • In this paper, we present the main ideas in this area, by describing some of the most significant algorithms for the core problem of finding frequent items using common notation and terminology. In doing so, we also present the historical development of these algorithms. Studying these algorithms is instructive, as they are relatively simple, but can be shown to provide formal guarantees on the quality of their output as a function of an accuracy parameter ε. We also provide baseline implementations of many of these algorithms against which future algorithms can be compared, and on top of which algorithms for different problems can be built. We perform experimental evaluation of the algorithms over a variety of data sets to indicate their performance in practice. From this, we are able to identify clear distinctions among the algorithms that are not apparent from their theoretical analysis alone.
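To make the "sufficient statistics" example from the introduction concrete, here is a minimal Python sketch (our illustration, not code from the paper; all names are ours) that maintains the mean and standard deviation of bill totals using only three counters, independent of the stream's length:

```python
import math


class RunningStats:
    """Maintain the mean and standard deviation of a stream in O(1) space.

    Only three "sufficient statistics" are stored: the count of items,
    the sum of values, and the sum of squared values.
    """

    def __init__(self):
        self.n = 0           # number of items seen
        self.total = 0.0     # sum of all values
        self.total_sq = 0.0  # sum of squared values

    def update(self, x):
        self.n += 1
        self.total += x
        self.total_sq += x * x

    def mean(self):
        return self.total / self.n

    def stddev(self):
        # population standard deviation: sqrt(E[X^2] - E[X]^2)
        m = self.mean()
        return math.sqrt(max(self.total_sq / self.n - m * m, 0.0))


stats = RunningStats()
for bill in [12.50, 7.25, 30.00, 12.50]:
    stats.update(bill)
print(stats.mean(), stats.stddev())
```

Because only the count, sum, and sum of squares are retained, the space used stays constant no matter how many transactions arrive.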
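The formalization of the frequent items problem can likewise be written down directly. The following fragment (again our illustration; the threshold parameter name `phi` is ours) computes the exact answer offline, using one counter per distinct item; the streaming algorithms of Section 3 approximate this set in far smaller space:

```python
from collections import Counter


def frequent_items(stream, phi):
    """Return all items whose frequency exceeds phi * N in a stream of N items.

    This exact, offline computation needs one counter per distinct item;
    streaming algorithms approximate it using much less space.
    """
    counts = Counter(stream)
    n = sum(counts.values())
    return {item for item, c in counts.items() if c > phi * n}


# 'a' occurs 5 times and 'b' 3 times out of 11 items, both above 20%
print(frequent_items("abacabdacab", 0.2))  # {'a', 'b'}
```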

3. Frequent Items Algorithms

  • We discuss two main classes of algorithms for finding the frequent items. Counter-based algorithms track a subset of items from the input and monitor counts associated with these items. Sketch algorithms, in contrast, compute (randomized) linear projections of the input viewed as a vector, and solve the more general frequency estimation problem; they therefore do not explicitly store items from the input. Furthermore, sketch algorithms can support deletion of items (corresponding to updates with a negative weight, discussed in more detail below), in contrast with counter-based schemes, at the cost of increased space usage and update time. (Illustrative implementations of one algorithm from each class appear after this list.)
  • These are by no means the only possible solutions to this problem. Other solutions are based on various notions of randomly sampling items from the input, and of summarizing the distribution of items in order to find quantiles, from which the frequent items can be discovered. These solution types have attracted less interest for the frequent items problem, and proved less effective in our full experimental evaluations.
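As an example of the counter-based class, the following is a compact Python rendition of the classic Misra-Gries Frequent algorithm (a sketch under our own naming, not the paper's baseline implementation). Using at most k-1 counters, it guarantees that every item occurring more than N/k times in a stream of length N survives, and each kept count underestimates the true frequency by at most N/k:

```python
def misra_gries(stream, k):
    """Counter-based frequent items: keep at most k-1 (item, count) pairs.

    Any item occurring more than N/k times in a stream of length N is
    guaranteed to survive in `counters`; kept counts underestimate true
    frequencies by at most N/k.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # decrement every counter; evict counters that reach zero
            for item in list(counters):
                counters[item] -= 1
                if counters[item] == 0:
                    del counters[item]
    return counters


# candidates for items with frequency > N/3; only 'a' qualifies here
print(misra_gries("abacabdacab", 3))
```

Where a second pass over the data is possible, it can verify which surviving candidates truly exceed the threshold.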
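For the sketch class, a minimal Count-Min sketch illustrates the idea (again our own sketch; the hash construction and parameter defaults are our assumptions, not the paper's code). Each update, including a deletion expressed as a negative weight, touches one counter per row, and a point query takes the minimum over rows; with width w = ceil(e/epsilon) and depth d = ceil(ln(1/delta)), the estimate exceeds the true count by more than epsilon * N with probability at most delta:

```python
import random


class CountMinSketch:
    """Sketch-based frequency estimation over a depth x width array of counters."""

    def __init__(self, width=272, depth=5, seed=42):
        # width=272 corresponds to epsilon ~ 0.01; depth=5 to delta ~ e**-5
        rng = random.Random(seed)
        self.width = width
        self.table = [[0] * width for _ in range(depth)]
        self.p = (1 << 31) - 1  # Mersenne prime for the hash family
        self.hashes = [(rng.randrange(1, self.p), rng.randrange(self.p))
                       for _ in range(depth)]

    def _buckets(self, item):
        # one bucket per row, via pairwise-independent hashes (a*x + b) mod p
        x = hash(item) & 0x7FFFFFFF
        for a, b in self.hashes:
            yield ((a * x + b) % self.p) % self.width

    def update(self, item, weight=1):
        # deletions are simply updates with negative weight
        for row, j in enumerate(self._buckets(item)):
            self.table[row][j] += weight

    def estimate(self, item):
        # never underestimates when all final counts are non-negative
        return min(self.table[row][j]
                   for row, j in enumerate(self._buckets(item)))


cms = CountMinSketch()
for x in "abacabdacab":
    cms.update(x)
cms.update("a", -1)       # delete one occurrence of "a"
print(cms.estimate("a"))  # 4 with high probability (the true count after deletion)
```

Note that the sketch stores only counters, never the items themselves, which is why it answers frequency estimation queries rather than directly listing the frequent items.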



Author: Graham Cormode, Marios Hadjieleftheriou
Title: Finding the Frequent Items in Streams of Data
Journal: Communications of the ACM
Year: 2009
URL: http://www2.research.att.com/~marioh/papers/cacm09.pdf
DOI: 10.1145/1562764.1562789