More Deep Learning, Less Crying - A guide

This is a guide to making deep learning less messy, and hopefully it will help you use fewer tissues the next time you code.

Who is this article for? A checklist

If you can answer yes to most of these, read on. Or cry. Your choice, of course.

Oh yay, you made it here. Wipe your eyes one last time, because this will be a ride :)

PS: This might be a long-ish checklist, but trust me, it will save you many tears. A note: the material was compiled from far too many papers and slides, so I do not have a proper citation for every statement here. In the references section you can find all the sources I could track down.

What is covered here?

In this article, I have tried to cover the major parts that frustrate me on a daily basis and their potential solutions.
- This is platform independent, so it does not matter if you are using pytorch/tensorflow/caffe/flux.jl or any of the others.
- We first talk about some sensible default architecture and training choices you can make to get up and running quickly.
- Then we look at some tricks to make life easier, train our models faster, and preserve stability.
- After that we look at some hyperparameters and decide which ones are worth spending our time on.
- Then, for the juicy bit, we look at some common bugs and how to overcome them. This includes memory errors, under/overfitting errors, etc.

Sensible defaults

Contrary to popular belief, most of the time we can actually get pretty great results by using some default values, or by sticking to a simpler architecture before reaching for a complicated one and messing everything up.

Architecture

Let us look at some defaults we can pick while building a network. Note that this goes from easy -> complicated.
- Dataset with only images: start with a LeNet-like architecture (see the sketch after this list) -> ResNets -> even more complicated ones.
- Dataset with only sequences: start with an LSTM with one hidden layer (or try 1D convs) -> attention or WaveNet based -> maybe Transformers.
- Other: start with a fully connected network with 1 hidden layer -> this one cannot really be generalized.
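
To make the "start simple" idea concrete, here is a minimal sketch of a LeNet-style starting point. I am using PyTorch purely as an example (the article is framework-independent), and the input size, channel counts and class count below are placeholder assumptions, not prescriptions.

```python
# A minimal LeNet-style baseline. Assumes (placeholder) 1-channel 28x28
# inputs and 10 classes; adjust the shapes for your own dataset.
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # 14x14 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNetLike()
print(model(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4, 10])
```

If this baseline cannot learn your data at all, a ResNet will not magically fix it, which is exactly why starting simple is worth it.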

Training choices

What about training? Once you have set everything up, you might be faced with endless options. What do you stick to?
- Optimizer: honestly, stick to Adam with lr = 3e-4 (or use AdamW + a learning rate finder). A sketch of these defaults follows this list.
- Activations: use ReLU for fully connected and convolutional layers, and tanh if you have an LSTM.
- Initialization: He or Glorot normal should do fine.
- Regularization: none (as a start). Look at this only when everything else is okay.
- Normalization: none (as a start). Batchnorm causes a lot of bugs, so use it only when everything else is working.
- Consider using a subset of the data or a reduced number of classes at first.
- Try to overfit a single batch first and compare with known results. (More on this below.)
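
Here is a minimal sketch of those defaults wired together, again in PyTorch as one framework choice among many. The tiny model and the random batch are placeholders just to make it runnable.

```python
# Sensible training defaults: He init, ReLU, Adam with lr = 3e-4,
# no regularization or normalization yet.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# He (Kaiming) normal initialization for the fully connected layers.
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        nn.init.zeros_(m.bias)

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))  # stand-in batch
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```

Only once this plain setup trains cleanly is it worth layering on regularization, batchnorm, or fancier optimizers.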

Tricks to use while training

Now this part is just beautiful. Do give this paper by Tong He et al. a read. It's amazing and covers these points in detail, so instead of repeating the content, I have only given a tiny brief here.

Hyperparams

Have too many to choose from? Here are the ones to look at, in order of importance (thank you, Josh Tobin).

Bugs

Some of the most common bugs we might face and how to begin solving them.

Tackling out of memory errors

Sometimes your GPU starts cursing at you. Sometimes it's your fault. Sometimes you just forgot to clear the cache. This is for the other times.

Your tensors are too big

You have stuffed it with too much data

You are doing the same thing too many times (Duplicated operations)
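
Here is a sketch that pulls these three failure modes together in PyTorch (one framework choice; the tiny model and random data are placeholders, only the patterns matter).

```python
# Common out-of-memory fixes: smaller batches with gradient accumulation,
# no graph kept alive in running statistics, no gradients during evaluation.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

# Tensors too big: shrink the batch, and if the effective batch size matters,
# accumulate gradients over several small batches instead of one huge one.
accum_steps = 4
running_loss = 0.0
for step in range(8):
    x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))  # small batch
    loss = criterion(model(x), y) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
    # Duplicated operations: `running_loss += loss` would keep every
    # computation graph alive; detach to a plain float instead.
    running_loss += loss.item()

# Stuffing in too much data: evaluate without building graphs at all.
with torch.no_grad():
    preds = model(torch.randn(64, 10)).argmax(dim=1)
```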

Single batch overfitting

Want a quick way to identify a whole bunch of errors? Just pass the same data batch through the model again and again and check for these signs (talk about a hack). Basically, if any of them show up, do the opposite.
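
A minimal sketch of the check itself, in PyTorch (the model and the random batch are placeholders): train on one fixed batch and make sure the loss can be driven close to zero.

```python
# Single-batch overfitting check: a healthy model + loss + optimizer should
# drive the loss on one fixed batch toward zero.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(32, 20), torch.randint(0, 5, (32,))  # one fixed batch

for step in range(500):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(step, loss.item())

# If the loss plateaus high, explodes, or oscillates, suspect a bug in the
# model, the loss, the labels, or the optimizer settings.
```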

How well do you fit?

No I am not talking about that snazzy dress you got before the lockdown.

Underfitting

Your model cries over the training data and the test data alike.

Overfitting

Your model aces the training data but cries over the test data.

A good choice for either

Okay, cool... now what?

Well, that about covers what I wanted to say here. It is by no means an exhaustive list, but that's why we have Stack Overflow, right? I sincerely hope this helped you out a bit and made you feel a bit more confident. Do let me know!! You can always reach out in the comments or connect with me through my website.

References