CST 383: Learning Log 6
This week, we focused on kNN regression, hyperparameter tuning, and evaluating model performance. I learned that kNN regression predicts numeric values by averaging the labels of the k nearest neighbors, and that proper feature scaling and relevant predictors are essential for accuracy, since distance calculations are dominated by features on larger scales. Measuring performance with MSE, RMSE, and MAE helped me understand how close predictions are to the actual values, and comparing against a baseline such as always predicting the mean target clarified whether the model is actually doing well. Hyperparameters like k, the distance metric, and the weighting scheme can change results noticeably, and I also read that GridSearchCV or RandomizedSearchCV can make finding the best combination systematic. Weighted kNN can give closer neighbors more influence as well. My main takeaway is that good predictions rely on careful preparation and tuning, not just the algorithm itself.
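To make these ideas concrete, here is a minimal sketch of the full workflow using scikit-learn: scaling inside a pipeline, GridSearchCV over k, the distance metric, and the weighting scheme, and RMSE/MAE compared against a mean-prediction baseline. The data here is synthetic and purely for illustration; the parameter grid values are arbitrary choices, not recommendations.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic regression data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling matters for distance-based models, so bundle it in a pipeline
pipe = Pipeline([("scale", StandardScaler()),
                 ("knn", KNeighborsRegressor())])

# Tune k, distance metric (p=1 Manhattan, p=2 Euclidean), and weighting
param_grid = {"knn__n_neighbors": [3, 5, 7, 11],
              "knn__weights": ["uniform", "distance"],
              "knn__p": [1, 2]}
search = GridSearchCV(pipe, param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X_train, y_train)

pred = search.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
mae = mean_absolute_error(y_test, pred)

# Baseline: always predict the training-set mean of the target
baseline_pred = np.full_like(y_test, y_train.mean())
baseline_rmse = mean_squared_error(y_test, baseline_pred) ** 0.5

print(search.best_params_)
print(f"model RMSE={rmse:.3f}  MAE={mae:.3f}  baseline RMSE={baseline_rmse:.3f}")
```

A tuned kNN should beat the mean baseline by a wide margin on data like this; setting `weights="distance"` is what gives closer neighbors more influence on the averaged prediction.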