Improving Data Quality on Big and High-Dimensional Data
Poor data quality is a crucial open issue that leads to flawed decisions and suboptimal processes. This is especially critical in big data sets, where information is typically assembled rapidly, at multiple scales, from heterogeneous sources, and with little concern for quality. In this article, we survey data quality mining and management approaches. We discuss the use of incremental learning approaches to improve large-scale computation, where only the most recent batches of data are required to keep the inferred data quality models up to date. We also present a framework able to process big and heterogeneous streams of biomedical data.
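To illustrate the batch-incremental idea mentioned in the abstract (this is a hypothetical sketch, not the authors' actual method), a data-quality model can fold each new batch into running statistics via Welford's online algorithm, so that only the newest batch is needed to keep the model current, and out-of-range values can be flagged as quality anomalies:

```python
# Illustrative sketch only: an incremental data-quality monitor.
# The class name, threshold, and z-score rule are assumptions for
# demonstration; the surveyed approaches are not specified here.

class IncrementalQualityModel:
    """Tracks a running mean/variance of a numeric field (Welford's
    algorithm) and flags values that deviate strongly from the data
    seen so far."""

    def __init__(self, z_threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.z_threshold = z_threshold

    def update(self, batch):
        """Fold a new batch into the running statistics; earlier
        batches never need to be revisited."""
        for x in batch:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

    def flag_outliers(self, batch):
        """Return values whose z-score exceeds the threshold."""
        std = self.variance() ** 0.5
        if std == 0:
            return []
        return [x for x in batch
                if abs(x - self.mean) / std > self.z_threshold]


model = IncrementalQualityModel()
model.update([10.0, 10.5, 9.8, 10.2, 9.9, 10.1, 10.3, 9.7])
suspect = model.flag_outliers([10.0, 55.0, 9.9])  # flags 55.0
```

Because each batch updates `n`, `mean`, and `m2` in place, the memory footprint is constant regardless of stream length, which is the property that makes such approaches attractive for big, high-velocity data.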
Document Type: Research Article
Publication date: December 1, 2012
- Journal of Bioinformatics and Intelligent Control (JBIC) is an international journal that publishes research articles in the areas of bioinformatics and intelligent control. JBIC aims to provide an international forum for the exchange of ideas and new scientific and technological findings, to disseminate information, and to promote the transfer of knowledge between professionals in academia and industry. The journal publishes original research papers, review papers, technical reports and notes, and short communications focused on emerging developments in these research areas.