Describing Data Processing Pipelines in Scientific Publications for Big Data Injection
The rise of Big Data analytics has been a disruptive game changer for many application domains, allowing insights and knowledge extracted from external Big Data sets to be integrated into domain-specific applications and systems. The effective "injection" of external Big Data demands an understanding of the properties of available data sets, as well as expertise in the most suitable methods for data collection, enrichment, and analysis. A prominent source of such knowledge is the scientific literature, where data processing pipelines are described, discussed, and evaluated. This knowledge is, however, not readily accessible, owing to its distributed and unstructured nature. In this paper, we propose a novel ontology for modelling the properties of data processing pipelines and their related artifacts as described in scientific publications. The ontology is the result of a requirements analysis involving experts from both academia and industry. We showcase the effectiveness of our ontology by manually applying it to a collection of Big Data-related publications, thus paving the way for future work on more informed Big Data injection workflows.
Link to the DMS ontology: https://raw.githubusercontent.com/mesbahs/DMS/master/dms.owl
Link to the SPARQL endpoint: 88.198.169.206/dataset.html
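To give a concrete sense of how the published resources can be inspected, the following is a minimal sketch (not taken from the paper) that loads the DMS OWL file with rdflib and lists the classes it defines. Only the ontology URL comes from the links above; the RDF/XML serialization and everything else in the snippet are assumptions for illustration.

```python
# Minimal sketch: inspect the DMS ontology published on GitHub.
# Assumes the OWL file is serialized as RDF/XML and that network access is available.
from rdflib import Graph, RDF, OWL

DMS_URL = "https://raw.githubusercontent.com/mesbahs/DMS/master/dms.owl"

g = Graph()
g.parse(DMS_URL, format="xml")  # serialization format is an assumption

print(f"Loaded {len(g)} triples from the DMS ontology.")
for cls in g.subjects(RDF.type, OWL.Class):
    # Print the IRI of each OWL class declared in the ontology.
    print(cls)
```

A similar inspection could be run against the SPARQL endpoint above, once the actual query URL behind the dataset page is known.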