Error surface is rugged…
Tips for training: Adaptive Learning Rate

• People believe training gets stuck because the parameters are around a critical point…
• But plotting the loss together with the norm of the gradient shows that the gradient is often still large when the loss stops decreasing.
(Figure: training loss and norm of gradient; source: https://docs.google.com/presentation/d/1siUFXARYRpNiMeSRwgFbt7mZVjkMPhR5od09w0Z8xaU/edit#slide=id.g3532c09be1_0_3823)

Wait a minute…
• Even after 100,000 updates, vanilla gradient descent may still fail to reach the minimum.
• The learning rate cannot be one-size-fits-all.
• Training can be difficult even without critical points: this error surface is convex.

Different parameters need different learning rates
• In a flat direction we want a larger learning rate; in a steep direction, a smaller learning rate.
• The learning rate should therefore be parameter dependent.

Formulation for one parameter $\theta_i$:
$$\theta_i^{t+1} = \theta_i^t - \frac{\eta}{\sigma_i^t}\, g_i^t$$

Root Mean Square (used in Adagrad):
$$\sigma_i^t = \sqrt{\frac{1}{t+1}\sum_{k=0}^{t}\left(g_i^k\right)^2}$$
• Smaller gradients → smaller $\sigma_i^t$ → larger step.
• Larger gradients → larger $\sigma_i^t$ → smaller step.

Learning rate adapts dynamically
• The error surface can be very complex: even for the same parameter, the appropriate learning rate changes along the trajectory (a larger learning rate in some regions, a smaller one in others).

RMSProp:
$$\sigma_i^t = \sqrt{\alpha\left(\sigma_i^{t-1}\right)^2 + (1-\alpha)\left(g_i^t\right)^2}, \quad 0 < \alpha < 1$$
• The recent gradient has larger influence, and the past gradients have less influence.
• When the surface suddenly becomes steep, $\sigma_i^t$ grows quickly and the step shrinks; when it flattens again, the step grows.

Adam: RMSProp + Momentum
• Momentum $m_i^t$ for the update direction, RMSProp $\sigma_i^t$ for the step size.
• Original paper: https://arxiv.org/pdf/1412.6980.pdf

(Figure: training without an adaptive learning rate on the convex error surface.)

Learning Rate Scheduling
• Learning Rate Decay: as training goes, we are closer to the destination, so we reduce the learning rate $\eta^t$.
• Warm Up: increase and then decrease the learning rate? Used for Residual Networks (https://arxiv.org/abs/1512.03385) and Transformers (https://arxiv.org/abs/1706.03762).
• Please refer to RAdam: https://arxiv.org/abs/1908.03265

Summary of Optimization
From (vanilla) gradient descent to the various improvements:
$$\theta_i^{t+1} = \theta_i^t - \frac{\eta^t}{\sigma_i^t}\, m_i^t$$
• $m_i^t$ — Momentum: weighted sum of the previous gradients; considers direction.
• $\sigma_i^t$ — root mean square of the gradients; considers only magnitude.
• $\eta^t$ — learning rate scheduling.

To Learn More…
• https://youtu.be/4pUmZ8hXlHM (in Mandarin)
• https://youtu.be/e03YKGHXnL8 (in Mandarin)

Next Time
• Better optimization strategies: if the mountain won't move, build a road around it.
• Next time: can we change the error surface? Directly move the mountain!
(Source of image: https://arxiv.org/abs/1712.09913)
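The per-parameter root-mean-square rule used in this lecture's Adagrad can be sketched numerically. A minimal NumPy sketch, assuming the RMS-over-all-history form described in the section; the function and variable names are my own, not from the slides:

```python
import numpy as np

def adagrad(grad_fn, theta0, eta=0.5, steps=10, eps=1e-8):
    # Per-parameter step eta / sigma, where sigma is the root mean
    # square of all gradients seen so far for that parameter.
    theta = np.asarray(theta0, dtype=float)
    sq_sum = np.zeros_like(theta)
    for t in range(steps):
        g = grad_fn(theta)
        sq_sum += g ** 2
        sigma = np.sqrt(sq_sum / (t + 1))   # root mean square
        theta -= eta / (sigma + eps) * g    # larger sigma -> smaller step
    return theta

# Convex but badly scaled surface L = 50*x^2 + 0.5*y^2:
# x sees gradients 100x larger than y.
grad = lambda th: np.array([100.0 * th[0], th[1]])

# After one update, both parameters have moved by exactly eta,
# even though their gradients differ by a factor of 100.
print(adagrad(grad, [1.0, 1.0], steps=1))
```

Note how the normalization by $\sigma$ equalizes the effective step across the flat and steep directions, which is the point of the "different parameters need different learning rates" slide.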
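The difference between Adagrad's all-history RMS and RMSProp's exponential moving average can be seen directly. A small sketch (my own function names; $\alpha = 0.9$ is an assumed, commonly used value) comparing the two statistics after a sudden change in gradient magnitude:

```python
import numpy as np

def adagrad_rms(grads):
    # All-history root mean square: every past gradient counts equally.
    g = np.asarray(grads, dtype=float)
    return float(np.sqrt(np.mean(g ** 2)))

def rmsprop_rms(grads, alpha=0.9):
    # Exponential moving average: sigma_t^2 = alpha*sigma_{t-1}^2 + (1-alpha)*g_t^2,
    # so recent gradients have larger influence than old ones.
    sigma2 = float(grads[0]) ** 2   # initialize with the first gradient
    for g in grads[1:]:
        sigma2 = alpha * sigma2 + (1 - alpha) * float(g) ** 2
    return sigma2 ** 0.5

# A long stretch of small gradients followed by a sudden steep region:
grads = [0.1] * 50 + [10.0] * 5
```

Here `rmsprop_rms(grads)` is already close to the recent gradient magnitude (about 6.4), while `adagrad_rms(grads)` is still dragged down by the long flat stretch (about 3.0), so RMSProp shrinks the step much faster when the surface suddenly becomes steep.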
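Adam combines the two ingredients named on the slide: a momentum average in the numerator and an RMSProp average in the denominator. A minimal sketch of the update rule from the cited paper (https://arxiv.org/pdf/1412.6980.pdf); the defaults $\beta_1 = 0.9$, $\beta_2 = 0.999$ are the paper's, while the function names and the toy problem are mine:

```python
import numpy as np

def adam(grad_fn, theta0, eta=0.01, beta1=0.9, beta2=0.999,
         eps=1e-8, steps=1000):
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)   # momentum: weighted average of past gradients
    v = np.zeros_like(theta)   # RMSProp-style moving average of g^2
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)   # bias correction for early steps
        v_hat = v / (1 - beta2 ** t)
        theta -= eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy convex problem L = theta^2 (gradient 2*theta):
theta = adam(lambda th: 2.0 * th, [3.0])
```

Because $\hat m_t / \sqrt{\hat v_t}$ is roughly of unit scale, the effective step size is set by $\eta$ regardless of the raw gradient magnitude, which is why the same defaults work across very differently scaled parameters.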
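The "increase and then decrease" warm-up idea mentioned for Transformers has a concrete form in the cited paper (https://arxiv.org/abs/1706.03762): the learning rate grows linearly for `warmup_steps` updates and then decays as the inverse square root of the step number. A sketch of that schedule (parameter values are the paper's defaults; the function name is mine):

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    # lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)
    # Linear warm-up until warmup_steps, then 1/sqrt(step) decay;
    # the two branches meet exactly at step == warmup_steps.
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```

Plotting this gives the characteristic ramp-up followed by decay: the rate peaks at `warmup_steps`, which keeps early updates small while the RMS statistics $\sigma_i^t$ are still noisy estimates (the problem RAdam addresses more directly).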