
Preprocessing

Data preprocessing is a data mining technique that turns raw data into a usable format. Real-world data is frequently incomplete, inconsistent, lacking certain behaviors or trends, and riddled with errors. Preprocessing the data is a proven way to overcome such problems.

Put another way, data preprocessing is the phase of data mining that prepares the data so that it can be understood and knowledge can be discovered from it.

Data preprocessing techniques

  • Data integration - As in data warehousing, data integration combines data from multiple sources, such as databases, data cubes, and flat files, into a coherent data repository for analysis. Schema integration is an important consideration here, and a tricky one.

How are real-world entities from different data sources matched up? This is known as the entity identification problem. How can a data analyst tell whether customer_id in one database and cust_number in another refer to the same attribute? The key is metadata, which is common in databases and data warehouses. Simply put, metadata is data about data.

Metadata helps avoid schema integration errors. Redundancy is another essential consideration: an attribute may be redundant if it can be derived from another table, and inconsistencies in attribute or dimension naming can also introduce redundancies into the final data set.
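
A minimal sketch of this idea in Python with pandas; the tables, column names (customer_id, cust_number), and values are purely illustrative:

```python
import pandas as pd

# Two hypothetical sources that describe the same customers
# under different schemas (the entity identification problem).
crm = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "name": ["Ada", "Grace", "Alan"],
})
billing = pd.DataFrame({
    "cust_number": [101, 102, 104],
    "balance": [250.0, 0.0, 99.5],
})

# Metadata tells us customer_id and cust_number name the same
# real-world entity, so the sources can be merged on that mapping.
combined = crm.merge(
    billing, left_on="customer_id", right_on="cust_number", how="outer"
)
print(combined)
```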

  • Data transformation - Data is converted into forms appropriate for mining. Data transformation involves the following steps:

In normalization, the attribute data is scaled so that it falls within a small, predetermined range, such as -1.0 to 1.0 or 0.0 to 1.0.
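
For example, min-max normalization rescales each value linearly into the target range. A small sketch (the function name and data are illustrative):

```python
import numpy as np

def min_max_scale(x, new_min=0.0, new_max=1.0):
    """Linearly rescale values into [new_min, new_max] (assumes x is not constant)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

ages = [18, 25, 40, 62, 90]
print(min_max_scale(ages))             # scaled to [0.0, 1.0]
print(min_max_scale(ages, -1.0, 1.0))  # scaled to [-1.0, 1.0]
```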

Smoothing removes noise from the data; binning, regression, and clustering are examples of such techniques.
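
As an illustration, smoothing by bin means sorts the values into equal-frequency bins and replaces each value with the mean of its bin. The prices below are made up:

```python
import pandas as pd

prices = pd.Series([4, 8, 15, 21, 21, 24, 25, 28, 34])

# Equal-frequency binning into 3 bins, then smoothing each
# value by replacing it with its bin mean.
bins = pd.qcut(prices, q=3, labels=False)
smoothed = prices.groupby(bins).transform("mean")
print(smoothed.tolist())  # [9.0, 9.0, 9.0, 22.0, 22.0, 22.0, 29.0, 29.0, 29.0]
```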

Aggregation applies summary or aggregation operations to the data. Daily sales data, for example, might be aggregated to compute monthly and yearly totals. This step is commonly used when constructing a data cube for data analysis at multiple granularities.
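
A quick pandas sketch of this roll-up; the dates and figures are fabricated:

```python
import pandas as pd

# Hypothetical daily sales figures.
daily = pd.Series(
    [120.0, 90.5, 210.0, 75.0],
    index=pd.to_datetime(
        ["2024-01-03", "2024-01-17", "2024-02-02", "2024-02-20"]
    ),
)

# Aggregate the daily figures to coarser granularities.
monthly = daily.groupby(daily.index.to_period("M")).sum()
yearly = daily.groupby(daily.index.to_period("Y")).sum()
print(monthly)  # 2024-01: 210.5, 2024-02: 285.0
print(yearly)   # 2024: 495.5
```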

In generalization, low-level or primitive/raw data is replaced with higher-level concepts through the use of concept hierarchies. For example, a categorical attribute such as street can be generalized to higher-level concepts such as city or country.
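
A toy sketch of climbing such a hierarchy; every name and mapping here is invented for illustration:

```python
# A toy concept hierarchy: street -> city -> country.
street_to_city = {"Baker St": "London", "5th Ave": "New York"}
city_to_country = {"London": "UK", "New York": "USA"}

records = [{"street": "Baker St"}, {"street": "5th Ave"}]

# Replace the low-level attribute with higher-level concepts.
for record in records:
    record["city"] = street_to_city[record["street"]]
    record["country"] = city_to_country[record["city"]]

print(records)
```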

  • Data cleaning - Data cleaning routines fill in missing values, smooth out noise while identifying outliers, and correct inconsistencies in the data.

Data can be noisy, with erroneous attribute values, for several reasons: the instruments used to gather the data may have been faulty, humans or computers may have made mistakes during data entry, and errors can also occur during data transmission.

“Dirty” data can throw the mining process off. Although most mining routines include some mechanisms for handling incomplete or noisy data, those mechanisms are not always robust. Running the data through dedicated data cleaning algorithms is therefore a valuable preprocessing step.
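
A minimal cleaning sketch with pandas, assuming a single numeric column; the values, the median fill, and the interquartile-range (IQR) outlier rule are illustrative choices rather than the only options:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"income": [52_000, np.nan, 61_000, 58_000, 54_000, 1_000_000]})

# Fill the missing value with the median, which is robust to outliers.
df["income"] = df["income"].fillna(df["income"].median())

# Flag outliers with the IQR rule, then cap them at the fences.
q1, q3 = df["income"].quantile([0.25, 0.75])
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outlier = (df["income"] < low) | (df["income"] > high)
df["income"] = df["income"].clip(low, high)
print(df.assign(was_outlier=outlier))
```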

  • Data reduction - Complex data analysis and mining on huge data sets can take so long that the analysis becomes impractical or infeasible. Data reduction techniques obtain a reduced representation of the data set without compromising the integrity of the original data, while still yielding quality knowledge. Some data reduction strategies follow:

In data cube aggregation, aggregation operations are applied to the data to construct a data cube.

Dimensionality reduction detects and removes attributes or dimensions that are irrelevant, weakly relevant, or redundant.
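
One simple redundancy filter, sketched with pandas on a fabricated data set: whenever two features are almost perfectly correlated, one of them is dropped.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({"height_cm": rng.normal(170, 10, 100)})
df["height_in"] = df["height_cm"] / 2.54   # redundant duplicate
df["shoe_size"] = rng.normal(42, 2, 100)   # unrelated feature

# Drop one feature from every pair whose absolute correlation
# exceeds the threshold.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
df_reduced = df.drop(columns=to_drop)
print(to_drop)  # ['height_in']
```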

In data compression, encoding mechanisms are used to reduce the size of the data set. The wavelet transform and principal component analysis (PCA) are two common approaches.
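
A sketch of the PCA approach with scikit-learn; the synthetic data is generated so that most of its variance lies in a few directions, and n_components=0.95 asks for just enough components to explain 95% of the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 200 samples, 10 features, but the variance really lives
# in 3 underlying directions (plus a little noise).
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))
X += 0.01 * rng.normal(size=X.shape)

# Keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)  # (200, 10) -> (200, 3)
print(pca.explained_variance_ratio_.round(3))
```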
