Redundancy and Complexity Metrics for Big Data Classification: Towards Smart Data
Blog Article
The importance of knowing the descriptive properties of a dataset when tackling a data science problem is well recognized. Having information about the redundancy, complexity and density of a problem allows us to decide which data preprocessing and machine learning techniques are most suitable. In classification problems, there are multiple metrics that describe, among others, the overlap of features between classes, class imbalance, or class separability. However, these metrics may not scale well to big datasets, or may simply not be sufficiently informative in this context.
In this paper, we provide a package of metrics for big data classification problems. In particular, we propose two new big data metrics, Neighborhood Density and Decision Tree Progression, which study density and the progression of accuracy as half of the samples are discarded. In addition, we adapt a number of basic metrics so that they can handle big data. The experimental study, carried out on standard big data classification problems, shows that our metrics can quickly characterize big datasets.
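To make the intuition behind Decision Tree Progression concrete, the sketch below trains a Spark decision tree on progressively halved random samples of the training set and records how the test accuracy evolves. This is a minimal PySpark illustration of the halving scheme described above, not the package's actual implementation; the dataset path, the column names ("features", "label") and the number of halving steps are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("dt-progression-sketch").getOrCreate()

# Hypothetical input: a DataFrame that already has a "features" vector column
# and a numeric "label" column.
df = spark.read.parquet("path/to/dataset.parquet")

train, test = df.randomSplit([0.8, 0.2], seed=42)
evaluator = MulticlassClassificationEvaluator(labelCol="label",
                                              predictionCol="prediction",
                                              metricName="accuracy")
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")

# Train on 100%, 50%, 25% and 12.5% of the training samples and
# track how accuracy on the held-out test set progresses.
fraction = 1.0
progression = []
for _ in range(4):
    subset = train if fraction == 1.0 else train.sample(
        withReplacement=False, fraction=fraction, seed=42)
    accuracy = evaluator.evaluate(dt.fit(subset).transform(test))
    progression.append((fraction, accuracy))
    fraction /= 2.0

for fraction, accuracy in progression:
    print(f"fraction={fraction:.3f} accuracy={accuracy:.4f}")
```

A flat accuracy curve across the halving steps suggests heavy redundancy in the data, whereas a sharp drop indicates that the classifier is still benefiting from additional samples.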
We identified a clear redundancy of information in most datasets, such that randomly discarding 75% of the samples does not drastically affect the accuracy of the classifiers used. Thus, the proposed big data metrics, which are available as a Spark-Package, provide a fast assessment of the shape of a classification dataset prior to applying big data preprocessing, toward smart data.
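A rough way to reproduce this kind of redundancy check on your own data is sketched below: it compares a decision tree trained on the full training set against one trained on a random 25% sample, i.e. after randomly discarding 75% of the rows. Again, the path and column names are placeholders, and this is only a quick assessment under those assumptions, not the package's API.

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("redundancy-check-sketch").getOrCreate()
df = spark.read.parquet("path/to/dataset.parquet")  # hypothetical path

train, test = df.randomSplit([0.8, 0.2], seed=13)
evaluator = MulticlassClassificationEvaluator(labelCol="label", metricName="accuracy")
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")

acc_full = evaluator.evaluate(dt.fit(train).transform(test))
# Keep only ~25% of the training samples (randomly discard 75%).
quarter = train.sample(withReplacement=False, fraction=0.25, seed=13)
acc_quarter = evaluator.evaluate(dt.fit(quarter).transform(test))

print(f"full={acc_full:.4f}  25% sample={acc_quarter:.4f}  "
      f"drop={acc_full - acc_quarter:.4f}")
```

If the accuracy drop is negligible, the dataset is a good candidate for aggressive sampling or instance-reduction preprocessing before training heavier models.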