**KNN is a non-parametric, lazy learning algorithm.** It predicts the category of a new sample point using data from multiple classes. KNN is non-parametric because it makes no assumptions about the underlying data distribution, i.e. the model structure is determined from the data.

What does it imply when KNN is regarded as a **lazy algorithm**?

- It means the algorithm makes no generalizations from the training data points, so there is little to no explicit training period. Training is quick, and KNN keeps all of the training data because it is needed during the testing phase.

Real-world data often does not follow the theoretical assumptions of models such as linear regression, which is why KNN is so useful when working with data for which there is little or no prior information.

**KNN is a supervised ML algorithm.** A supervised machine learning algorithm uses labeled input data to learn a function that produces a suitable output when given new, unlabeled data.

In supervised learning, you train a model on labeled data and ask it to predict the label for an unlabeled point. A tumor prediction model, for example, is trained on a large number of clinical test results labeled as positive or negative. The trained model can then predict whether an unlabeled test will yield a positive or negative outcome.

We use the labeled data we already have to train the model. From a dataset of observations (x, y), we want to learn a function g: X → Y so that we can use g(x) to predict the corresponding output y for a new input x.
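As a concrete sketch of this setup (with made-up data), the function g for a 1-nearest-neighbor classifier simply returns the label of the closest stored training observation:

```python
import math

# Hypothetical labeled observations (x, y): x is a 2-D feature vector,
# y is the class label.
dataset = [((1.0, 2.0), "negative"), ((5.0, 8.0), "positive")]

def g(x):
    """Predict y for input x: the label of the nearest stored observation."""
    nearest = min(dataset, key=lambda pair: math.dist(x, pair[0]))
    return nearest[1]

print(g((5.5, 8.5)))  # → positive
```

Note that "training" here is just storing the dataset; all the work happens when g is called.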

## Pros and cons of KNN

**Pros**:

- Quick calculation time
- Simple algorithm that is easy to understand
- Can be used for both regression and classification
- High accuracy: it compares favorably with more complex supervised learning models
- Makes no additional assumptions about the data and requires no parameter tuning or model building, which makes it valuable for nonlinear data

**Cons:**

- Prediction accuracy depends on the quality of the data.
- With a lot of data, the prediction stage can take a long time.
- Sensitive to the scale of the data and to irrelevant features
- High memory is required, since all of the training data must be stored.
- It can be computationally costly because it stores all of the training data.

KNN is used by companies like Amazon and Netflix to recommend books and movies to buy or watch.

How do these businesses make recommendations? They use KNN to analyze data on the books you have read or the movies you have watched on their site. Your available customer data is compared with that of other customers who have purchased similar books or watched similar movies.

Books and movies are then recommended based on how the system classifies that data point.

The k-nearest neighbor method stores all existing data and classifies new data points based on their similarity to it, measured with a distance function. This means that when new data arrives, it can be readily sorted into a suitable category using the K-NN method.
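The most common choice of distance function is Euclidean distance, the straight-line distance between two points. A minimal version for equal-length feature vectors:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two equal-length feature vectors.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

print(euclidean((0, 0), (3, 4)))  # → 5.0
```

Other distance functions (e.g., Manhattan distance) can be swapped in depending on the data.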

Assume there are two classes, Class B and Class C, and we have a new, unknown data point. Which of these classes does it belong to? K-NN addresses this problem: the data point is categorized by a majority vote of its neighbors, being assigned to the most frequent class among its K closest neighbors as determined by a distance function.
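The majority-vote procedure above can be sketched in a few lines of plain Python (the training points for Class B and Class C below are made up for illustration):

```python
import math
from collections import Counter

def knn_predict(training, query, k=3):
    """Classify `query` by a majority vote of its k nearest training points."""
    # Sort stored points by distance to the query; lazy learning means
    # all of this work happens at prediction time.
    neighbors = sorted(training, key=lambda pair: math.dist(query, pair[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D points from two classes, B and C.
training = [
    ((1.0, 1.0), "B"), ((1.5, 2.0), "B"), ((2.0, 1.5), "B"),
    ((6.0, 6.0), "C"), ((7.0, 7.5), "C"), ((6.5, 7.0), "C"),
]

print(knn_predict(training, (6.2, 6.8)))  # → C
```

Choosing k odd helps avoid ties in two-class problems; in practice, k is usually selected by trying several values and measuring accuracy on held-out data.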

## Key takeaways

- No model is learned by KNN.
- The similarity between an input sample and each training instance is used by KNN to create predictions.
- This article has taught you the basics of one of the most fundamental machine learning algorithms.

KNN is an excellent place to start when learning to develop models from diverse data sets. To get started with KNN, you need a data set that contains a range of diverse points and reliable information.