Ch2 of Python ML book: implementing the perceptron algorithm.
Kept at it with chapter 2 of the Python ML book today, which covers implementing a basic single-neuron perceptron. It starts with one of the original training algorithms, where the weights are updated by a scaling factor, and follows up with an improved approach using gradient descent, a technique at the core of many optimization steps across ML, including the backpropagation algorithm for neural networks.
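For my own reference, the classic update rule looks roughly like this. This is a sketch from memory rather than the book's exact code; `eta` (the scaling factor, i.e. the learning rate) and the function name are my own, and I'm assuming the usual -1/+1 labels:

```python
import numpy as np

def perceptron_update(w, xi, target, eta=0.1):
    """One classic perceptron step; returns new weights instead of mutating w.

    w[0] is the bias term, w[1:] are the feature weights; labels are -1/+1.
    """
    prediction = 1 if np.dot(xi, w[1:]) + w[0] >= 0.0 else -1
    update = eta * (target - prediction)  # zero when xi is already classified correctly
    w = w.copy()
    w[1:] += update * xi
    w[0] += update
    return w
```

The gradient-descent variant improves on this by computing the error against the continuous net input rather than the thresholded prediction, so the weights keep adjusting even on samples that are already classified correctly.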
I took my time with it and refactored the author's OO solution into what I think is a cleaner one, in two ways: the training routine is now a higher-order function, and the prediction function it returns captures the trained weights in its closure rather than relying on mutable weight state (which both my notebook and the author's code did).
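Concretely, the refactor has this shape (again a sketch reusing `perceptron_update` from above, with names that are mine rather than the book's):

```python
def train_perceptron(X, y, eta=0.1, n_iter=10):
    """Return a predict function whose trained weights live in its closure."""
    rng = np.random.default_rng(1)
    w = rng.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
    for _ in range(n_iter):
        for xi, target in zip(X, y):
            w = perceptron_update(w, xi, target, eta)

    def predict(X_new):
        # w is captured here; no object exposes it for mutation
        return np.where(np.dot(X_new, w[1:]) + w[0] >= 0.0, 1, -1)

    return predict
```

Usage is then just `predict = train_perceptron(X, y)`, and the trained weights are effectively immutable from the outside.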
One thing that tripped me up was the helper function plot_decision_regions.
I updated the training function to return a log of the weights as they evolved
over the iterations, and I wanted to plot how the decision regions shifted as the
classifier's accuracy climbed toward 100%. However, the plot only seems to render
properly for the final set of weights. I'll need to read more about matplotlib's
contourf function before I can get to the bottom of it. For now, I'm moving on.
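For the record, the weight-logging change is small. This is a sketch built on the helpers above rather than my actual notebook code, and it leaves out `plot_decision_regions`, which is the book's helper:

```python
def train_perceptron_with_history(X, y, eta=0.1, n_iter=10):
    """Same loop as train_perceptron, but log one weight snapshot per epoch."""
    rng = np.random.default_rng(1)
    w = rng.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
    history = []
    for _ in range(n_iter):
        for xi, target in zip(X, y):
            w = perceptron_update(w, xi, target, eta)
        history.append(w)  # safe to store directly: each update returns a fresh array
    return history
```

One hypothesis to rule out when I come back to this: if the log had stored references to a single weight array that was mutated in place, every snapshot would silently equal the final weights, and each frame would plot exactly like the last one. The copy-on-update style above avoids that by construction, but I haven't confirmed it's the real cause.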