Stream Data Processing Task

From GM-RKB

A Stream Data Processing Task is a data processing task that ...



References

2016

  • (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/stream_processing Retrieved:2016-10-13.
    • Stream processing is a computer programming paradigm, equivalent to dataflow programming, event stream processing, and reactive programming, [1] that allows some applications to more easily exploit a limited form of parallel processing. Such applications can use multiple computational units, such as the FPUs on a GPU or field programmable gate arrays (FPGAs), [2] without explicitly managing allocation, synchronization, or communication among those units.

      The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. Given a sequence of data (a stream), a series of operations (kernel functions) is applied to each element in the stream. Uniform streaming, where one kernel function is applied to all elements in the stream, is typical. Kernel functions are usually pipelined, and local on-chip memory is reused to minimize external memory bandwidth. Since the kernel and stream abstractions expose data dependencies, compiler tools can fully automate and optimize on-chip management tasks. Stream processing hardware can use scoreboarding, for example, to launch DMAs at runtime, when dependencies become known. The elimination of manual DMA management reduces software complexity, and the elimination of hardware caches reduces the amount of the area not dedicated to computational units such as ALUs.
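The uniform-streaming model described above, where one kernel function is applied to every element of a stream and kernels are pipelined stage by stage, can be sketched in Python using generators. All names here (`apply_kernel`, `pipeline`, the example kernels) are illustrative, not part of any standard stream-processing API, and generators stand in for the hardware pipelining the passage describes.

```python
from typing import Callable, Iterable, Iterator

def apply_kernel(kernel: Callable[[int], int],
                 stream: Iterable[int]) -> Iterator[int]:
    """Uniform streaming: apply one kernel to every element of the stream."""
    for element in stream:
        yield kernel(element)

def pipeline(stream: Iterable[int],
             *kernels: Callable[[int], int]) -> Iterator[int]:
    # Kernels are "pipelined" by composing generators: each stage consumes
    # the previous stage's output one element at a time, so no intermediate
    # buffer of the whole stream is ever materialized.
    for kernel in kernels:
        stream = apply_kernel(kernel, stream)
    return stream

# Example kernels (hypothetical): square each element, then add one.
square = lambda x: x * x
increment = lambda x: x + 1

result = list(pipeline(range(5), square, increment))
# squares 0..4 to 0,1,4,9,16, then increments: [1, 2, 5, 10, 17]
```

Because the per-element dependencies are explicit in the kernel signatures, a compiler or runtime for a real streaming system can parallelize or pipeline the stages automatically, which is the point the quoted passage makes about on-chip management.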

      During the 1980s stream processing was explored within dataflow programming. An example is the language SISAL (Streams and Iteration in a Single Assignment Language).