Big-Data Dataset

From GM-RKB

A Big-Data Dataset is a very-large dataset that is also complex, heterogeneous, dynamic, and of mixed veracity.



References

2016

  • (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/big_data Retrieved:2016-6-1.
    • Big data is a term for data sets that are so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, querying, updating and information privacy. The term often refers simply to the use of predictive analytics or certain other advanced methods to extract value from data, and seldom to a particular size of data set. Accuracy in big data may lead to more confident decision making, and better decisions can result in greater operational efficiency, cost reduction and reduced risk.

      Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on." Scientists, business executives, practitioners of medicine, advertising and governments alike regularly meet difficulties with large data sets in areas including Internet search, finance and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, biology and environmental research. Data sets are growing rapidly in part because they are increasingly gathered by cheap and numerous information-sensing mobile devices, aerial (remote-sensing) devices, software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks. The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s; every day, 2.5 exabytes (2.5×10¹⁸ bytes) of data are created. One question for large enterprises is determining who should own big-data initiatives that affect the entire organization. [1] Relational database management systems and desktop statistics and visualization packages often have difficulty handling big data. The work instead requires "massively parallel software running on tens, hundreds, or even thousands of servers". What is considered "big data" varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."
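The "massively parallel software" the excerpt mentions can be illustrated in miniature with a map-reduce-style word count spread across worker processes. This is a toy single-machine sketch standing in for a multi-server cluster; the shard contents and pool size are illustrative assumptions, not part of the quoted source:

```python
from collections import Counter
from multiprocessing import Pool

def map_count(chunk):
    """Map step: count word occurrences in one shard of the data."""
    return Counter(chunk.split())

def parallel_word_count(shards, workers=2):
    """Reduce step: run the map step in parallel, then merge counts."""
    with Pool(workers) as pool:
        partials = pool.map(map_count, shards)
    total = Counter()
    for part in partials:
        total += part
    return total

if __name__ == "__main__":
    shards = ["big data big", "data velocity data"]
    print(parallel_word_count(shards))  # e.g. Counter({'data': 3, 'big': 2, ...})
```

Real big-data frameworks apply the same map-then-merge pattern, but with shards distributed across machines and fault tolerance layered on top.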

  • (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/big_data#Characteristics Retrieved:2016-6-1.
    • Big data can be described by the following characteristics:
      • Volume: The quantity of generated and stored data. The size of the data determines the value and potential insight, and whether it can actually be considered big data or not.
      • Variety: The type and nature of the data. This helps people who analyze it to effectively use the resulting insight.
      • Velocity: In this context, the speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development.
      • Variability: Inconsistency of the data set can hamper processes to handle and manage it.
      • Veracity: The quality of captured data can vary greatly, affecting accurate analysis.
    • Factory work and cyber-physical systems may have a 6C system:
      • Connection (sensor and networks)
      • Cloud (computing and data on demand) [1]
      • Cyber (model and memory)
      • Content/context (meaning and correlation)
      • Community (sharing and collaboration)
      • Customization (personalization and value)
    • Data must be processed with advanced tools (analytics and algorithms) to reveal meaningful information. For example, to manage a factory one must consider both visible and invisible issues with various components. Information generation algorithms must detect and address invisible issues such as machine degradation, component wear, etc. on the factory floor.
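The degradation-detection idea above can be sketched as a simple rolling-baseline check on a sensor series: flag readings that drift far from the recent norm. The window size, threshold, and vibration readings below are illustrative assumptions, not a method from the quoted source:

```python
from statistics import mean, stdev

def flag_degradation(readings, window=5, k=3.0):
    """Flag indices whose reading deviates more than k standard
    deviations from the rolling baseline of the previous `window`
    readings -- a crude stand-in for detecting 'invisible issues'
    such as machine degradation or component wear."""
    flags = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flags.append(i)
    return flags

# Stable vibration levels, then a sudden jump suggesting wear.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 1.02, 5.0]
print(flag_degradation(vibration))  # → [7]
```

Production systems would replace this threshold rule with trained models and multivariate sensor fusion, but the shape of the problem (turn raw machine data into an early-warning signal) is the same.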
  1. Wu, D., Liu, X., Hebert, S., Gentzsch, W., Terpenny, J. (2015). Performance Evaluation of Cloud-Based High Performance Computing for Finite Element Analysis. Proceedings of the ASME 2015 International Design Engineering Technical Conference & Computers and Information in Engineering Conference (IDETC/CIE2015), Boston, Massachusetts, U.S.