Course Notes -- Fast.ai

Intro

I’ve been meaning to run through the Fast.ai course on practical deep learning for about a year now, since I first learned of it, and it’s finally time. As usual, I’ll be burning through this quickly, and thankfully I have a pretty deep background in the gnarlier, more mathematical aspects of the field (I’ve derived and implemented backprop by hand a number of times, and most recently, on the professional side, was SVP of Product at a startup building LLM inference hardware, so I’ve become quite familiar with the underlying theory and architectures).

But now it’s time to roll my sleeves up and learn the hands-on tools.

I’ll be using this space to track and update any particularly interesting things I learn along the way.

Part 1

My goal here will be to go through the fast.ai Part 1 sequence and produce and deploy an ML model accessible directly from this project page.

Things I learned:

  • yt-dlp is a great command-line tool for downloading YouTube videos. Downloading locally is highly useful because it makes it easy to adjust playback speed (via the [ and ] keyboard shortcuts) in mpv (a close analog to mplayer with good Mac support).
  • timm is a PyTorch-based deep learning library collecting a large number of pretrained image models (quick example after this list).
  • I cannot believe I hadn’t encountered Python’s functools.partial() before. I’ve been used to this sort of functionality since my days messing around with making a HATETRIS solver in Haskell ages ago, and I’ve always rolled my own in Python using lambda functions. But of course there’s a built-in for that now (comparison below).
  • ipywidgets.interact is another major quality-of-life improvement (minimal usage below). However, as a reminder to myself, this is not enabled by default in JupyterLab.
  • Python decorators, which have been on my “to-learn” list forever and have finally drifted to the top (toy example below).
  • The use of * and / in Python argument lists to enforce how arguments may be passed: everything before / is positional-only, and everything after * is keyword-only (example below).
  • log() is nice for compressing long-tailed distributions into a more manageable range (quick demo below). (But it’s more important to just be aware of, and interrogate, your data distributions!)
  • Somehow, I missed that comprehensions in Python also extend to dictionaries and sets! Cute (example below).
  • I had not encountered the book Python for Data Analysis before, but it’s a solid resource on some of the internals and tools (especially in pandas) that I was less familiar with.
  • I was not previously familiar with SymPy at all. Seems legit (quick tour below).
  • Random Forests: I had heard of these before, but never actually learned them. To be honest, I’m a little disappointed. The process is cute, elegant, and simple. But damnit, it’s about as crude as a blunt rock. Train a bunch of decision trees, each on a bootstrap sample of your data, considering only a random subset of the features at each split. For predicting, take the average of their outputs (or the majority vote, if you want a quantized result). It works, but it’s literally just duct-taping random shit together. Though sometimes, that’s all you need. (sklearn version below.)
  • Test-Time Augmentation is a cute trick, especially amenable to image models, for potentially improving output accuracy: average your model’s predictions over several augmented copies of each test input (sketch below).
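
A few quick sketches of the items above, in roughly list order. First, pulling a pretrained model out of timm (the model name here is just an illustrative choice):

    import timm
    import torch

    # Browse the model zoo by glob pattern.
    print(timm.list_models("resnet*", pretrained=True)[:5])

    # Instantiate a pretrained model with a custom head; 'resnet18' is
    # just an example.
    model = timm.create_model("resnet18", pretrained=True, num_classes=10)
    model.eval()

    with torch.no_grad():
        logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 10])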
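
functools.partial, next to the lambda version I used to roll by hand:

    from functools import partial

    def power(base, exponent):
        return base ** exponent

    # The hand-rolled way: close over the fixed argument.
    square_lambda = lambda x: power(x, 2)

    # The built-in: pre-bind arguments and get a proper callable back.
    square = partial(power, exponent=2)

    print(square_lambda(5), square(5))  # 25 25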
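
ipywidgets.interact at its most minimal (run inside a notebook; depending on your JupyterLab version you may need to install the widgets extension first):

    from ipywidgets import interact

    def preview(exponent=2):
        # In a real notebook this might redraw a plot; print keeps it simple.
        print([x ** exponent for x in range(6)])

    # Passing a (min, max) tuple renders a slider that re-runs the function.
    interact(preview, exponent=(0, 5))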
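
Decorators, distilled: a decorator is just a function that takes a function and returns a wrapped replacement. A toy timing decorator:

    import functools
    import time

    def timed(fn):
        @functools.wraps(fn)  # preserve the wrapped function's name/docs
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            print(f"{fn.__name__} took {time.perf_counter() - start:.4f}s")
            return result
        return wrapper

    @timed  # equivalent to: slow_add = timed(slow_add)
    def slow_add(a, b):
        time.sleep(0.1)
        return a + b

    slow_add(1, 2)  # prints e.g. "slow_add took 0.1002s"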
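
The * and / signature markers, in one function:

    def divide(numerator, denominator, /, *, round_result=False):
        result = numerator / denominator
        return round(result) if round_result else result

    divide(7, 2)                          # fine
    divide(7, 2, round_result=True)       # fine
    # divide(numerator=7, denominator=2)  # TypeError: positional-only
    # divide(7, 2, True)                  # TypeError: keyword-only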
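
And the log trick on a long-tailed distribution (log1p is log(1 + x), which tolerates zeros):

    import numpy as np

    rng = np.random.default_rng(0)
    prices = rng.lognormal(mean=10, sigma=1, size=10_000)  # long-tailed

    print(f"raw: min={prices.min():,.0f} max={prices.max():,.0f}")
    log_prices = np.log1p(prices)
    print(f"log: min={log_prices.min():.2f} max={log_prices.max():.2f}")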
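
Dict and set comprehensions, for the record:

    words = ["alpha", "beta", "gamma", "beta"]

    lengths = {w: len(w) for w in words}  # dict comprehension
    initials = {w[0] for w in words}      # set comprehension

    print(lengths)   # {'alpha': 5, 'beta': 4, 'gamma': 5}
    print(initials)  # {'a', 'b', 'g'} (a set, so order may vary)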
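
A thirty-second SymPy tour:

    import sympy as sp

    x = sp.symbols("x")
    expr = sp.sin(x) * sp.exp(x)

    print(sp.diff(expr, x))               # exp(x)*sin(x) + exp(x)*cos(x)
    print(sp.integrate(expr, x))          # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
    print(sp.limit(sp.sin(x) / x, x, 0))  # 1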
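
The blunt rock itself, via scikit-learn:

    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 100 trees, each fit on a bootstrap sample of the rows, each split
    # considering a random subset of the features.
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)
    print(f"accuracy: {forest.score(X_test, y_test):.3f}")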
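
Finally, Test-Time Augmentation. fastai wraps this up as Learner.tta(), but the idea by hand is just averaging predictions over augmented copies of each input. A minimal sketch, assuming some already-trained model and a CxHxW image tensor (both placeholders here), and note the augmentations should respect the task (e.g. no horizontal flips for digits):

    import torch
    import torchvision.transforms as T

    def tta_predict(model, image, n_augments=8):
        # Small rotations/translations; flips would be wrong for digits.
        augment = T.RandomAffine(degrees=10, translate=(0.05, 0.05))
        model.eval()
        with torch.no_grad():
            # The original image plus several augmented copies, as one batch.
            batch = torch.stack(
                [image] + [augment(image) for _ in range(n_augments)]
            )
            probs = torch.softmax(model(batch), dim=1)
        return probs.mean(dim=0)  # averaged class probabilities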

Part 1 - Project

This is a transformer-based digit classifier, trained on MNIST. More details on the specific architecture I experimented with are below. To use it, draw a digit 0-9 and click submit.

(The live demo displays the classifier’s per-digit prediction probabilities for 0-9 here.)
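
While the full write-up is pending, here is a minimal sketch of the general shape I mean by “transformer-based”: a tiny ViT that patchifies the 28x28 MNIST image and runs a standard transformer encoder over the patch tokens. All of the sizes below are illustrative placeholders, not the configuration actually deployed:

    import torch
    import torch.nn as nn

    class TinyViT(nn.Module):
        """Minimal ViT-style MNIST classifier; all dimensions illustrative."""

        def __init__(self, patch=7, dim=64, depth=4, heads=4, n_classes=10):
            super().__init__()
            n_patches = (28 // patch) ** 2  # 16 patches for 7x7 patching
            # Patchify + embed in one step via a strided conv.
            self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
            self.cls = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
            layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, dim_feedforward=dim * 4,
                batch_first=True, norm_first=True,
            )
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(dim, n_classes)

        def forward(self, x):  # x: (batch, 1, 28, 28)
            tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, 16, dim)
            cls = self.cls.expand(x.size(0), -1, -1)
            tokens = torch.cat([cls, tokens], dim=1) + self.pos
            tokens = self.encoder(tokens)
            return self.head(tokens[:, 0])  # classify from the [CLS] token

    model = TinyViT()
    print(model(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 10])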