When gradient is small…
Hung-yi Lee (李宏毅)

Optimization fails because…
• The training loss stops decreasing as the parameters update, but it is not small enough.
• The gradient is close to zero: we are stuck at a critical point. A critical point can be a local minimum or a saddle point — which one?
• At a local minimum there is no way to go; at a saddle point we can still escape.

Warning of Math

Taylor series approximation
(Source of image: http://www.offconvex.org/2016/03/22/saddlepoints/)
Around a point θ′, the loss can be approximated as
  L(θ) ≈ L(θ′) + (θ − θ′)ᵀ g + ½ (θ − θ′)ᵀ H (θ − θ′),
where g is the gradient and H is the Hessian. The Hessian tells the properties of critical points: at a critical point the gradient g is zero, so the quadratic term decides the local shape.
At a critical point:
• Local minima: all eigenvalues of H are positive.
• Local maxima: all eigenvalues of H are negative.
• Saddle point: some eigenvalues are positive, and some are negative.

Example
(Figure: an example error surface with a saddle point between two minima.)
Critical point: saddle point.

Don't be afraid of saddle points?
At a saddle point, H has an eigenvector u whose eigenvalue is negative. Updating the parameters along u decreases the loss, so you can escape the saddle point and decrease the loss. (This method is seldom used in practice.)

End of Warning

Saddle point v.s. local minima
• A.D. 1543: the magician Diorena (魔法師狄奧倫娜), from The Three-Body Problem III: Death's End (《三體Ⅲ·死神永生》).
  (Source of image: https://read01.com/mz2DBPE.html#.YECz22gzbIU)
• Viewed from the 3-dimensional space the room is sealed; it is not sealed in higher dimensions.
• When you have lots of parameters, perhaps a local minimum is rare — is it a saddle point in a higher dimension?
  (Source of image: https://arxiv.org/abs/1712.09913)

Empirical study
• Train a network once, until it converges to a critical point, and plot the training loss against the minimum ratio:
  Minimum ratio = (Number of Positive Eigenvalues) / (Number of Eigenvalues)
• Training never reaches a real "local minimum"; a critical point with a higher minimum ratio is merely more "like" a local minimum.
(Source: https://docs.google.com/presentation/d/1siUFXARYRpNiMeSRwgFbt7mZVjkMPhR5od09w0Z8xaU/edit#slide=id.g31470fd33a_0_33)

Small gradient…
(Figure: loss vs. the value of a network parameter w.)
Gradient descent can be
• very slow at a plateau,
• stuck at a local minimum,
• stuck at a saddle point.

Tips for training: Batch and Momentum

Review: Optimization with batch
• The N training examples are divided into batches of size B.
• One update is made after each batch; 1 epoch = see all the batches once.
• Shuffle after each epoch.

Small batch v.s. large batch
Consider 20 examples (N = 20).
• Batch size = N (full batch): see all 20 examples before each update, so there is 1 update per epoch. Long time for "cooldown", but each update is powerful.
• Batch size = 1: see only one example before each update, so there are 20 updates per epoch. Short time for "cooldown", but the updates are noisy.

Small batch v…
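The eigenvalue test for critical points can be sketched numerically. Below is a minimal sketch, assuming a hypothetical toy surface L(w1, w2) = w1² − w2² (my own example, not one from the lecture); `numerical_hessian` and its step size are also my own choices:

```python
import numpy as np

# Toy error surface with a saddle point at the origin (hypothetical example):
# L(w1, w2) = w1^2 - w2^2, so the gradient is zero at (0, 0).
def loss(theta):
    w1, w2 = theta
    return w1 ** 2 - w2 ** 2

def numerical_hessian(f, theta, eps=1e-4):
    """Central-difference approximation of the Hessian of f at theta."""
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            t = np.array(theta, dtype=float)
            t[i] += eps; t[j] += eps; fpp = f(t)   # f(theta + eps*e_i + eps*e_j)
            t[j] -= 2 * eps;          fpm = f(t)   # f(theta + eps*e_i - eps*e_j)
            t[i] -= 2 * eps;          fmm = f(t)   # f(theta - eps*e_i - eps*e_j)
            t[j] += 2 * eps;          fmp = f(t)   # f(theta - eps*e_i + eps*e_j)
            H[i, j] = (fpp - fpm + fmm - fmp) / (4 * eps ** 2)
    return H

H = numerical_hessian(loss, np.array([0.0, 0.0]))  # Hessian at the critical point
eigvals = np.linalg.eigvalsh(H)                    # eigenvalues, ascending
minimum_ratio = np.mean(eigvals > 0)               # positive / total eigenvalues
print(eigvals)          # one negative, one positive -> saddle point
print(minimum_ratio)    # 0.5
```

A minimum ratio of 1 would mean all eigenvalues are positive (a local minimum); any mix of signs, as here, marks a saddle point.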
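The escape trick from the "Warning of Math" part (step along an eigenvector of H with negative eigenvalue) can also be sketched; the Hessian and surface below are my own toy example, not the lecture's:

```python
import numpy as np

# Toy surface L(w1, w2) = w1^2 - w2^2 with a saddle at the origin (hypothetical).
def loss(t):
    return t[0] ** 2 - t[1] ** 2

H = np.array([[2.0, 0.0],
              [0.0, -2.0]])             # Hessian at the critical point
eigvals, eigvecs = np.linalg.eigh(H)
u = eigvecs[:, np.argmin(eigvals)]      # eigenvector of the negative eigenvalue
theta = np.zeros(2)                     # the saddle point itself
theta_new = theta + 0.1 * u             # small update along u

print(loss(theta), loss(theta_new))     # the loss decreases below 0
```

This is why, as the slides note, a saddle point is nothing to be afraid of in principle; in practice computing the full Hessian is usually too expensive, which is why the method is seldom used.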
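The batch bookkeeping (batch size vs. updates per epoch, shuffling after each epoch) can be sketched on a one-parameter regression; the data, `run_epochs`, and the learning rate are hypothetical choices of mine:

```python
import numpy as np

# Hypothetical toy task: fit y ≈ w * x with 20 examples (N = 20).
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 3.0 * x + rng.normal(scale=0.1, size=20)

def run_epochs(batch_size, lr=0.1, epochs=30):
    """Gradient descent on L(w) = mean((w*x - y)^2), one update per batch."""
    w = 0.0
    n_updates = 0
    for _ in range(epochs):
        idx = rng.permutation(len(x))                # shuffle after each epoch
        for start in range(0, len(x), batch_size):
            b = idx[start:start + batch_size]
            grad = np.mean(2 * (w * x[b] - y[b]) * x[b])  # dL/dw on this batch
            w -= lr * grad
            n_updates += 1
    return w, n_updates

# Batch size = N: one big, stable update per epoch.
# Batch size = 1: twenty small, noisy updates per epoch.
w_full, n_full = run_epochs(batch_size=20)
w_one, n_one = run_epochs(batch_size=1)
print(n_full, n_one)   # 30 vs. 600 updates over 30 epochs
```

Both settings approach w ≈ 3; the full batch takes a long "cooldown" per update but each step is powerful, while batch size 1 updates quickly and noisily.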