
DEEP LEARNING COURSE CONTENT

THE IMPORTANT CHAPTERS FOR ANY DEEP LEARNING COURSE

If you want to start deep learning and are looking for an overview of what a good course structure should offer, this post is for you. Most courses teach you some basic theory, how to build models in Keras/TensorFlow, and how to deploy them. Here we discuss all the major topics that a good deep learning course should cover. If you have already started, you can also use this post to check where you need to focus and in which portions you lag behind. Any time you enrol in a paid course online, make sure its theory section covers the topics mentioned below:

Want to know why companies prefer people who have a good command of deep learning mathematics over those who merely know how to train models? Listen to what Google CEO Sundar Pichai had to say on this.

A BIOLOGICAL NEURON

To understand the concepts of neural networks, you must understand the original inspiration behind the idea. Understanding how a biological neuron works is essential for understanding why such networks are built the way they are.

PERCEPTRON AND ITS EVOLUTION

The perceptron is the basic building block of an artificial neural network. This topic compares the artificial neuron with its biological counterpart and traces each step of the perceptron's evolution and the need for it.

REPRESENTATION POWER OF PERCEPTRON AND NETWORK OF PERCEPTRONS

Understanding how a single perceptron can be used for linear classification, proving such statements mathematically, and why a network of perceptrons is required for non-linear functions. How to deal with Boolean functions and then real-valued functions, as illustrated in the sketch below.
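
As a concrete (and purely illustrative) example, here is a minimal NumPy sketch of the perceptron learning rule; the data, learning rate, and epoch count are invented for the demo. Trained on the linearly separable AND function it converges, while no single perceptron can fit XOR.

    import numpy as np

    def train_perceptron(X, y, epochs=20, lr=0.1):
        # Classic perceptron rule: w += lr * (target - prediction) * x
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if np.dot(w, xi) + b > 0 else 0   # step activation
                w += lr * (target - pred) * xi
                b += lr * (target - pred)
        return w, b

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    w, b = train_perceptron(X, np.array([0, 0, 0, 1]))     # Boolean AND
    print([int(np.dot(w, xi) + b > 0) for xi in X])        # [0, 0, 0, 1]
    # Training on XOR (targets [0, 1, 1, 0]) never converges:
    # XOR is not linearly separable, so a network is required.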

WEIGHTS AND BIASES

Understanding what makes these networks “trainable”, what it is that we actually intend to “learn”, and what hidden layers are.
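
To make “trainable parameters” concrete, the sketch below counts the weights and biases of a small fully connected network; the layer sizes are invented for illustration.

    # Hypothetical network: 4 inputs -> hidden layer of 8 units -> 3 outputs.
    # Each layer learns a weight matrix and a bias vector.
    layer_sizes = [4, 8, 3]
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        params = n_in * n_out + n_out          # weights + biases
        print(f"{n_in} -> {n_out}: {params} trainable parameters")
        total += params
    print("total:", total)                     # (4*8 + 8) + (8*3 + 3) = 67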

ROLE OF ACTIVATION FUNCTIONS

Understanding the role activation functions play in a neural network model and what their advantages are. What would happen if we formed networks without activation functions, and the different types of activation functions.
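
A few common activation functions in NumPy, together with a quick check of why they matter: without a nonlinearity between them, stacked linear layers collapse into a single linear map.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def tanh(x):
        return np.tanh(x)

    def relu(x):
        return np.maximum(0.0, x)

    # Without an activation, two linear layers are just one linear layer:
    W1, W2 = np.random.randn(5, 4), np.random.randn(3, 5)
    x = np.random.randn(4)
    assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)   # same linear map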

UNDERSTANDING ERRORS / ERROR SURFACES AND LOSS FUNCTIONS

Learning about the types of errors we calculate while training a neural network model (training error, validation error) and differentiating between training and testing error. Visualising error surfaces in three dimensions and building intuition for error surfaces in n dimensions where n > 3. The purpose of error surfaces and loss functions, and how loss functions estimate how good your model is. Loss functions for regression and classification problems: RMSE, softmax, cross-entropy, entropy, information gain, Gini index.
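
Minimal sketches of some of the quantities named above, assuming nothing beyond NumPy: RMSE for regression, and softmax plus cross-entropy for classification.

    import numpy as np

    def rmse(y_true, y_pred):
        return np.sqrt(np.mean((y_true - y_pred) ** 2))

    def softmax(z):
        z = z - z.max()                  # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def cross_entropy(probs, true_class):
        return -np.log(probs[true_class])

    logits = np.array([2.0, 1.0, 0.1])
    p = softmax(logits)
    print(p, cross_entropy(p, true_class=0))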

BIAS AND VARIANCE, UNDERSTANDING THE DATA

The trade-off between model complexity and test error; bias and variance; understanding terms like overfitting and underfitting; when neural networks outperform traditional machine learning algorithms; data augmentation; adding noise to inputs; one-hot representations of non-numeric dataset features; and getting exposure to the various datasets (free as well as paid) available on the internet.
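
For instance, a one-hot representation of a non-numeric feature; the category list here is invented for illustration.

    import numpy as np

    categories = ["red", "green", "blue"]          # hypothetical feature values
    index = {c: i for i, c in enumerate(categories)}

    def one_hot(value):
        vec = np.zeros(len(categories))
        vec[index[value]] = 1.0
        return vec

    print(one_hot("green"))   # [0. 1. 0.]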

GRADIENTS, PARTIAL DERIVATIVES, MATRICES, VECTORS AND EIGENVECTORS

Understanding gradients in terms of loss functions: the partial derivatives of the loss function with respect to all the weights and biases. Understanding the logic behind the “gradient descent algorithm” and justifying it using the Taylor series representation.
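
A one-dimensional sketch of the gradient descent update w ← w − η · dL/dw, on a toy loss chosen purely for illustration. The first-order Taylor expansion L(w + d) ≈ L(w) + L'(w)·d is what justifies stepping against the gradient.

    # Gradient descent on the toy loss L(w) = (w - 3)^2, minimised at w = 3.
    def grad(w):
        return 2.0 * (w - 3.0)          # dL/dw

    w, lr = 0.0, 0.1
    for _ in range(100):
        w -= lr * grad(w)               # w <- w - learning_rate * gradient
    print(w)                            # converges towards 3.0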

FEED FORWARD NETWORK

Understanding how, using the current weights and biases, you can calculate the output layer's outcome for a given input.
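
A minimal forward pass for a two-layer network in NumPy; the layer sizes, tanh hidden activation, and random weights are all illustrative assumptions.

    import numpy as np

    def forward(x, W1, b1, W2, b2):
        h = np.tanh(W1 @ x + b1)     # hidden layer activation
        return W2 @ h + b2           # output layer (no activation, e.g. regression)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
    print(forward(rng.normal(size=4), W1, b1, W2, b2))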

UPDATING WEIGHTS, BACKPROPAGATION

The basic “learning” algorithm without any optimisation tricks. Stochastic learning. Using the chain rule and partial derivatives to visualise how each of those millions of weights and biases is updated. Problems faced in backpropagation: vanishing gradients, exploding gradients, the learning rate, and the effect of the learning rate's magnitude.
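
A sketch of one backpropagation step for a tiny 2-2-1 network with a sigmoid hidden layer and squared-error loss, showing the chain rule producing a gradient for every weight and bias; the shapes and data are invented.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x, y = np.array([0.5, -0.2]), np.array([1.0])
    W1, b1 = np.random.randn(2, 2), np.zeros(2)
    W2, b2 = np.random.randn(1, 2), np.zeros(1)
    lr = 0.1

    # Forward pass, keeping intermediate values for the chain rule.
    z1 = W1 @ x + b1
    h = sigmoid(z1)
    y_hat = W2 @ h + b2
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    # Backward pass: apply the chain rule layer by layer.
    d_yhat = y_hat - y                   # dL/dy_hat
    dW2 = np.outer(d_yhat, h)            # dL/dW2
    db2 = d_yhat
    d_h = W2.T @ d_yhat                  # propagate the error to the hidden layer
    d_z1 = d_h * h * (1 - h)             # sigmoid'(z1) = h * (1 - h)
    dW1 = np.outer(d_z1, x)
    db1 = d_z1

    # Gradient descent update for every weight and bias.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2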

OPTIMISING THE TRAINING / BACKPROPAGATION ALGORITHM, REGULARISATION TECHNIQUES

Incorporating adaptive learning techniques like momentum, using training history to vary the learning rate, Nesterov momentum, and further optimising with the Adam and AdaGrad optimisers. Bias correction in Adam, RMSProp, and the mathematics behind these algorithms. Boosting techniques, weak learners, AdaBoost. Regularisation and regularisation techniques like dropout, early stopping, better weight initialisation strategies, and batch normalisation.
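
Sketches of two of these update rules as commonly published: momentum (a velocity accumulated from past gradients) and Adam with its bias correction. The hyperparameter defaults shown are the usual textbook values.

    import numpy as np

    def momentum_step(w, v, grad, lr=0.01, beta=0.9):
        # Momentum: accumulate a velocity from past gradients.
        v = beta * v + grad
        return w - lr * v, v

    def adam_step(w, m, v, grad, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        # Adam: per-parameter adaptive steps with bias correction (t starts at 1).
        m = b1 * m + (1 - b1) * grad          # first moment (mean of gradients)
        v = b2 * v + (1 - b2) * grad ** 2     # second moment (uncentred variance)
        m_hat = m / (1 - b1 ** t)             # bias correction for early steps
        v_hat = v / (1 - b2 ** t)
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v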

PRINCIPAL COMPONENT ANALYSIS

Principal component analysis and its interpretation: finding the directions of maximum variance in the data (the eigenvectors of the covariance matrix) and using them for dimensionality reduction.
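
A NumPy sketch of PCA via the eigenvectors of the covariance matrix, tying it back to the eigenvector material above; the data is random and purely illustrative.

    import numpy as np

    def pca(X, k):
        # Project X onto its top-k principal components.
        X_centered = X - X.mean(axis=0)
        cov = np.cov(X_centered, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: covariance is symmetric
        order = np.argsort(eigvals)[::-1]         # sort by explained variance
        components = eigvecs[:, order[:k]]
        return X_centered @ components

    X = np.random.randn(100, 5)
    print(pca(X, 2).shape)   # (100, 2)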

AUTOENCODERS

Introduction to autoencoders, the relation between PCA and autoencoders, regularisation in autoencoders, sparse autoencoders, denoising autoencoders, contractive autoencoders, and applications of autoencoders.
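
A minimal fully connected autoencoder sketch in PyTorch (which this post recommends later); the 784 → 32 → 784 sizes are an illustrative assumption, not a prescription.

    import torch
    import torch.nn as nn

    # Compress the input to a small code, then reconstruct it.
    class Autoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = Autoencoder()
    x = torch.rand(16, 784)                  # a fake batch of flattened images
    loss = nn.MSELoss()(model(x), x)         # reconstruction loss: output vs input
    loss.backward()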

CONVOLUTIONAL NEURAL NETWORKS

What convolution is; filters; pooling and pooling techniques like min pooling, max pooling, and average pooling; classification problems; guided backpropagation; problems in convolutional neural networks; and understanding the dimensions of every hidden layer. Brief knowledge of GANs (generative adversarial networks), R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN.
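
The standard formula for a convolution or pooling layer's output size, output = (n + 2·padding − kernel) / stride + 1, written as a small helper; the example layer stack is hypothetical.

    def conv_output_size(n, kernel, stride=1, padding=0):
        # Output width/height of a convolution or pooling layer.
        return (n + 2 * padding - kernel) // stride + 1

    # Hypothetical stack on a 32x32 input:
    n = conv_output_size(32, kernel=5)             # conv 5x5         -> 28
    n = conv_output_size(n, kernel=2, stride=2)    # max pool 2x2     -> 14
    n = conv_output_size(n, kernel=3, padding=1)   # conv 3x3, pad 1  -> 14
    print(n)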

RECURRENT NEURAL NETWORKS

What to do when the sequence of data inputs matters: recurrent neural networks, the problem of vanishing and exploding gradients in recurrent neural networks, an introduction to LSTMs and GRUs and the different gates used in them, time-series predictions, and natural language processing.
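
A vanilla RNN cell step in NumPy; the sizes and weights are invented. The closing comment notes why repeated multiplication by the recurrent weights causes vanishing or exploding gradients, which is what LSTM and GRU gates address.

    import numpy as np

    def rnn_step(x_t, h_prev, W_x, W_h, b):
        # One step of a vanilla RNN: mix the new input with the previous state.
        return np.tanh(W_x @ x_t + W_h @ h_prev + b)

    rng = np.random.default_rng(0)
    W_x, W_h, b = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), np.zeros(8)
    h = np.zeros(8)
    for x_t in rng.normal(size=(10, 4)):      # a sequence of 10 inputs
        h = rnn_step(x_t, h, W_x, W_h, b)
    # Backpropagating through these 10 steps multiplies by W_h repeatedly,
    # which is the source of vanishing/exploding gradients.
    print(h)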

CONCLUSION

Above we covered all the major topics you must be thorough in to truly understand and learn deep learning / neural networks, rather than just memorising them. Apart from this, focus on training models in PyTorch and TensorFlow. Keras is very straightforward, and though very useful and user-friendly, it is not an ideal tool for people who want to understand things at the root level.

Do leave your comments/queries below regarding deep learning course content.
