arXiv:adap-org/9806001v1 3 Jun 1998

Neural Network Design for J Function Approximation in Dynamic Programming

Xiaozhong Pang and Paul J. Werbos

Abstract

This paper will show that a new neural network design can solve an example of difficult function approximation problems which are crucial to the field of approximate dynamic programming (ADP).
Although conventional neural networks have been proven to approximate smooth functions very well, the use of ADP for problems of intelligent control or planning requires the approximation of functions which are not so smooth. As an example, this paper studies the problem of approximating the J function of dynamic programming applied to the task of navigating mazes in general, without the need to learn each individual maze. Conventional neural networks, like multi-layer perceptrons (MLPs), cannot learn this task. But a new type of neural network, the simultaneous recurrent network (SRN), can do so, as demonstrated by successful initial tests. The paper also examines the ability of recurrent neural networks to approximate MLPs and vice versa.

Keywords: simultaneous recurrent networks (SRNs), multi-layer perceptrons (MLPs), approximate dynamic programming, maze navigation, neural networks.
1 Introduction

1.1 Purpose

This paper has three goals:

First, to demonstrate the value of a new class of neural network which provides a crucial component needed for brain-like intelligent control systems for the future.

Second, to demonstrate that this new kind of neural network provides better function approximation ability for use in more ordinary kinds of neural network applications for supervised learning.

Third, to demonstrate some practical implementation techniques necessary to make this kind of network actually work in practice.
Figure 1: What is supervised learning (a supervised learning system maps inputs X(t) to predictions Ŷ(t), which are compared against the actual Y(t))

1.2 Background

At present, in the neural network field, perhaps 90% of neural network applications involve the use of neural networks designed to perform a task called supervised learning (Figure 1).
Supervised learning is the task of learning a nonlinear function which may have several inputs and several outputs, based on some examples of the function. For example, in character recognition, the inputs may be an array of pixels seen from a camera. The desired outputs of the network may be a classification of the character being seen. Another example would be intelligent sensing in the chemical industry, where the inputs might be spectral data from observing a batch of chemicals, and the desired outputs would be the concentrations of the different chemicals in the batch. The purpose of this application is to predict or estimate what is in the batch without the need for expensive analytical tests. The work in this paper will focus totally on certain tasks in supervised learning.
Even though existing neural networks can be used in supervised learning, there can be performance problems depending on what kind of function is learned. Many people have proven many theorems to show that neural networks, fuzzy logic, Taylor series and other function approximators have a universal ability to approximate functions, on the condition that the functions have certain properties and that there is no limit on the complexity of the approximation. In practice, many approximation schemes become useless when there are many input variables, because the required complexity grows at an exponential rate. For example, one way to approximate a function would be to construct a table of the values of the function at certain points in the space of possible inputs. Suppose there are 30 input variables and we consider 10 possible values of each input. In that case, the table must have 10^30 numbers in it. This is not useful in practice, for many reasons. Actually, however, many popular approximation methods like radial basis functions (RBF) are similar in spirit to a table of values.
In the field of supervised learning, Andrew Barron [30] has proved some function approximation theorems which are much more useful in practice. He has proven that the most popular form of neural network, the multi-layer perceptron (MLP), can approximate any smooth function. Unlike the case with linear basis functions (like RBF and Taylor series), the complexity of the network does not grow rapidly as the number of input variables grows.

Unfortunately, there are many practical applications where the functions to be approximated are not smooth. In some cases, it is good enough just to add extra layers to an MLP [1] or to use a generalized MLP [2]. However, there are some difficult problems which arise in fields like intelligent control or image processing or even stochastic search, where feed-forward networks do not appear powerful enough.
1.3 Summary and Organization of This Paper

The main goal of this paper is to demonstrate the capability of a different kind of supervised learning system, based on a kind of recurrent network called the simultaneous recurrent network (SRN). In the next chapter we will explain why this kind of improved supervised learning system will be very important to intelligent control and to approximate dynamic programming. In effect, this work on supervised learning is the first step in a multi-step effort to build more brain-like intelligent systems. The next step would be to apply the SRN to static optimization problems, and then to integrate the SRNs into large systems for ADP. Even though intelligent control is the main motivation for this work, the work may be useful for other areas as well.
For example, in zip code recognition, AT&T [3] has demonstrated that feed-forward networks can achieve a high level of accuracy in classifying individual digits. However, AT&T and others still have difficulty in segmenting the total zip code into individual digits. Research on human vision by von der Malsburg [4] and others has suggested that some kinds of recurrency in neural networks are crucial to their abilities in image segmentation and binocular vision. Furthermore, researchers in image processing like Laveen Kanal have shown that iterative relaxation algorithms are necessary even to achieve moderate success in such image processing tasks. Conceptually, the SRN can learn an optimal iterative algorithm, but the MLP cannot represent any iterative algorithm. In summary, though we are most interested in brain-like intelligent control, the development of SRNs could lead to very important applications in areas such as image processing in the future.
The network described in this paper is unique in several respects. However, it is certainly not the first serious use of a recurrent neural network. Chapter 3 of this paper will describe the existing literature on recurrent networks. It will describe the relationship between this new design and other designs in the literature. Roughly speaking, the vast bulk of research in recurrent networks has been academic research using designs based on ordinary differential equations (ODE) to perform tasks very different from supervised learning: tasks like clustering, associative memory and feature extraction. The simple Hebbian learning methods [13] used for those tasks do not lead to the best performance in supervised learning. Many engineers have used another type of recurrent network, the time-lagged recurrent network (TLRN), where the recurrency is used to provide memory of past time periods for use in forecasting the future. However, that kind of recurrency cannot provide the iterative analysis capability mentioned above. Very few researchers have written about SRNs, a type of recurrent network designed to minimize error and learn an optimal iterative approximation to a function. This is certainly the first use of SRNs to learn a J function from dynamic programming, which will be explained more in chapter 2. This may also be the first empirical demonstration of the need for advanced training methods to permit SRNs to learn difficult functions.
Chapter 4 will explain in more detail the two test problems we have used for the SRN and the MLP, as well as the details of the architecture and learning procedure. The first test problem was used mainly as an initial test of a simple form of SRNs. In this problem, we tried to test the hypothesis that an SRN can always learn to approximate a randomly chosen MLP, but not vice versa. Although our results are consistent with that hypothesis, there is room for more extensive work in the future, such as experiments with different sizes of neural networks and more complex statistical analysis.

The main test problem in this work was the problem of learning the J function of dynamic programming. For a maze navigation problem, many neural network researchers have written about neural networks which learn an optimal policy of action for one particular maze [5]. This paper will address the more difficult problem of training a neural network to input a picture of a maze and output the J function for this maze. When the J function is known, it is a trivial local calculation to find the best direction of movement. This kind of neural network should not require retraining whenever a new maze is encountered. Instead it should be able to look at the maze and immediately "see" the optimal strategy. Training such a network is a very difficult problem which has never been solved in the past with any kind of neural network. Also it is typical of the challenges one encounters in true intelligent control and planning. This paper has demonstrated a working solution to this problem for the first time. Now that a system is working on a very simple form of this problem, it would be possible in the future to perform many tests of the ability of this system to generalize its success to many mazes.
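For the deterministic mazes used here, the true J at each clear square is simply the cost-to-go from that square to the goal, so exact training targets can be generated by classical value iteration. The following C sketch is our own illustration of that target computation, not the authors' appendix code; the grid size, the wall array, and the function name are assumptions:

#define GRID 6            /* maze is GRID x GRID; the appendix uses nn = 6  */
#define BIG  1e9          /* stands in for "unreachable"                    */

/* Value iteration: J(goal) = 0 and J(x) = 1 + min over legal moves of
   J(next), swept until no entry changes. wall[i][j] != 0 marks an obstacle. */
void maze_J(int wall[GRID][GRID], int gi, int gj, double J[GRID][GRID])
{
    static const int di[4] = {1, -1, 0, 0};
    static const int dj[4] = {0, 0, 1, -1};
    int i, j, k, changed = 1;

    for (i = 0; i < GRID; i++)
        for (j = 0; j < GRID; j++)
            J[i][j] = BIG;                 /* every square starts "far away" */
    J[gi][gj] = 0.0;                       /* boundary condition at the goal */

    while (changed) {
        changed = 0;
        for (i = 0; i < GRID; i++)
            for (j = 0; j < GRID; j++) {
                if (wall[i][j]) continue;
                for (k = 0; k < 4; k++) {
                    int ni = i + di[k], nj = j + dj[k];
                    if (ni < 0 || ni >= GRID || nj < 0 || nj >= GRID) continue;
                    if (wall[ni][nj]) continue;
                    if (J[ni][nj] + 1.0 < J[i][j]) {     /* Bellman backup */
                        J[i][j] = J[ni][nj] + 1.0;
                        changed = 1;
                    }
                }
            }
    }
}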
In order to solve the maze problem, it was not sufficient only to use an SRN. There are many choices to make when implementing the general idea of SRNs or MLPs. Chapter 5 will describe in detail how these choices were made in this work. The most important choices were:

1. Both for the MLP and for the feed-forward core of the SRN, we used the generalized MLP design [2], which eliminates the need to decide on the number of layers.

2. For the maze problem, we used a cellular or weight-sharing architecture which exploits the spatial symmetry of the problem and dramatically reduces the number of weights. In effect we solved the maze problem using only five distinct neurons. There are interesting parallels between this network and the hippocampus of the human brain.

3. For the maze problem, an adaptive learning rate (ALR) procedure was used to prevent oscillation and ensure convergence (a sketch of the rule follows this list).

4. Initial values for the weights and the initial input vector for the SRN were chosen essentially at random, by hand. In the future, more systematic methods are available. But this was sufficient for success in this case.
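The ALR procedure keeps a separate learning rate for each weight group (Lr_W, Lr_ww and Lr_Ws in the appendix) and adjusts it by comparing this epoch's gradient against the last one. Because the appendix listing is corrupted, the dot-product test and the shrink factor below are our reconstruction; only the growth factor (a + b), with a = 0.9 and b = 0.2, clearly survives in the code:

/* Adaptive learning rate for one weight group: grow the rate while
   successive gradients agree, shrink it when they oscillate, then take
   the usual gradient step. */
void alr_step(double w[], const double g_new[], double g_old[],
              int n, double *lr)
{
    const double a = 0.9, b = 0.2;
    double dot = 0.0;
    int i;

    for (i = 0; i < n; i++)
        dot += g_new[i] * g_old[i];    /* do the two gradients agree? */

    if (dot >= 0.0)
        *lr *= (a + b);                /* agreement: grow the rate (x1.1) */
    else
        *lr *= b;                      /* oscillation: cut the rate hard */

    for (i = 0; i < n; i++) {
        w[i] -= (*lr) * g_new[i];      /* standard gradient step */
        g_old[i] = g_new[i];           /* remember for the next epoch */
    }
}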
Finally, Chapter 6 will discuss the simulation results in more detail, give the conclusions of this paper and mention some possibilities for future work.
2 Motivation

In this chapter we will explain the importance of this work. As discussed above, the paper shows how to use a new type of neural network in order to achieve better function approximation than what is available from the types of neural networks which are popular today. This chapter will try to explain why better function approximation is important to approximate dynamic programming (ADP), intelligent control and understanding the brain. Image processing and other applications have already been discussed in the Introduction. These three topics (ADP, intelligent control and understanding the brain) are all closely related to each other and provide the original motivation for the work of this paper. The purpose of this paper is to make a core contribution to developing the most powerful possible system for intelligent control. In order to build the best intelligent control systems, we need to combine the most suitable mathematics together with some understanding of natural intelligence in the brain.
There is a lot of interest in intelligent control in the world. Some control systems which are called intelligent are actually very quick and easy things. There are many people who try to move step by step to add intelligence into control, but a step-by-step approach may not be enough by itself. Sometimes, to achieve a complex and difficult goal, it is necessary to have a plan; thus some parts of the intelligent control community have developed a more systematic vision or plan for how it could be possible to achieve real intelligent control. First, one must think about the question of what intelligent control is. Then, instead of trying to answer this question in one step, we try to develop a plan to reach the design.
Actually there are two questions:

1. How could we build an artificial system which replicates the main capabilities of brain-like intelligence, somehow unified together as they are unified together in the brain?

2. How can we understand what the capabilities in the brain are and how they are organized in a functional engineering view, i.e. how are those circuits in the human brain arranged to learn how to perform different tasks?

It would be best to understand how the human brain works before building an artificial system. However, at the present time, our understanding of the brain is limited. But at least we know that local recurrency plays a critical role in the higher parts of the human brain [6][7][8][4].
Another reason to use SRNs is that SRNs can be very useful in ADP mathematically. Now we will discuss what ADP can do for intelligent control and understanding the brain. The remainder of this chapter will address three questions in order:

1. What is ADP?

2. What is the importance of ADP to intelligent control and understanding the brain?

3. What is the importance of SRNs to ADP?
2.1 What is ADP and the J Function

To explain what ADP is, let us consider the original Bellman equation [9]:

    J(R(t)) = max_{u(t)} ( U(R(t), u(t)) + ⟨J(R(t+1))⟩ ) / (1 + r) - U_0    (1)

where r and U_0 are constants that are used only in infinite-time-horizon problems, and then only sometimes, and where the angle brackets refer to the expectation value. In this paper we actually use:

    J(R(t)) = max_{u(t)} ( U(R(t), u(t)) + ⟨J(R(t+1))⟩ )    (2)

since the maze problem does not involve an infinite time horizon.
Instead of solving for the value of J in every possible state, R(t), we can use a function approximation method like neural networks to approximate the J function. This is called approximate dynamic programming (ADP). This paper is not doing true ADP, because in true ADP we do not know what the J function is and must therefore use indirect methods to approximate it. However, before we try to use SRNs as a component of an ADP system, it makes sense first to test the ability of an SRN to approximate a J function, in principle.
Now we will try to explain the intuitive meaning of the Bellman equation (Equation (1)) and the J function, according to the treatment taken from [2]. To understand ADP, one must first review the basics of classical dynamic programming, especially the versions developed by Howard [28] and Bertsekas. Classical dynamic programming is the only exact and efficient method to compute the optimal control policy over time in a general nonlinear stochastic environment. The only reason to approximate it is to reduce computational cost, so as to make the method affordable (feasible) across a wide range of applications. In dynamic programming, the user supplies a utility function, which may take the form U(R(t), u(t)), where the vector R is a representation or estimate of the state of the environment (i.e. the state vector), together with a stochastic model of the plant or environment. Then "dynamic programming" (i.e. solution of the Bellman equation) gives us back a secondary or strategic utility function J(R). The basic theorem is that maximizing U(R(t), u(t)) + J(R(t+1)) yields the optimal strategy, the policy which will maximize the expected value of U added up over all future time. Thus dynamic programming converts a difficult problem in optimizing over many time intervals into a straightforward problem in short-term maximization.
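For the maze, this short-term maximization is exactly the "trivial local calculation" mentioned in the Introduction. A minimal C sketch, reusing the GRID and wall conventions (our assumptions) from the earlier sketch: with J stored as cost-to-go and a utility of -1 per step, maximizing U + J(next) reduces to stepping to the reachable neighbor with the smallest cost-to-go.

/* Return the index (into di, dj) of the move that maximizes U + J(next),
   i.e. the neighbor with the smallest cost-to-go; -1 if no move is legal. */
int best_move(double J[GRID][GRID], int wall[GRID][GRID], int i, int j)
{
    static const int di[4] = {1, -1, 0, 0};
    static const int dj[4] = {0, 0, 1, -1};
    double best = 1e18;
    int k, arg = -1;

    for (k = 0; k < 4; k++) {
        int ni = i + di[k], nj = j + dj[k];
        if (ni < 0 || ni >= GRID || nj < 0 || nj >= GRID || wall[ni][nj])
            continue;
        if (J[ni][nj] < best) {        /* short-term maximization */
            best = J[ni][nj];
            arg = k;
        }
    }
    return arg;
}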
In classical dynamic programming, we find the exact function J which exactly solves the Bellman equation. In ADP, we learn a kind of "model" of the function J; this "model" is called a "Critic." (Alternatively, some methods learn a model of the derivatives of J with respect to the variables R_i; these correspond to Lagrange multipliers, λ_i, and to the "price variables" of microeconomic theory. Some methods learn a function related to J, as in the Action-Dependent Adaptive Critic (ADAC) [29].)
2.2 Intelligent Control and Robust Control

To understand the human brain scientifically, we must have some suitable mathematical concepts. Since the human brain makes decisions like a control system, it is an example of an intelligent control system. Neuroscientists do not yet understand the general ability of the human brain to learn to perform new tasks and solve new problems, even though they have studied the brain for decades. Some people compare the past research in this field to what would happen if we spent years studying radios without knowing the mathematics of signal processing. We first need some mathematical ideas of how it is possible for a computing system to have this kind of capability based on distributed parallel computation. Then we must ask what the most important abilities of the human brain are, which unify all of its more specific abilities in specific tasks. It would seem that the most important ability of the brain is the ability to learn over time how to make better decisions in order to better maximize the goals of the organism. The natural way to imitate this capability in engineering systems is to build systems which learn over time how to make decisions which maximize some measure of success or utility over future time. In this context, dynamic programming is important because it is the only exact and efficient method for maximizing utility over future time. In the general situation, where random disturbances and nonlinearity are expected, ADP is important because it provides both the learning capability and the possibility of reducing computational cost to an affordable level. For this reason, ADP is the only approach we have to imitating this kind of ability of the brain.

The similarity between some ADP designs and the circuitry of the brain has been discussed at length in [10] and [11]. For example, there is an important structure in the brain called the limbic system which performs some kinds of evaluation or reinforcement functions, very similar to the functions of the neural networks that must approximate the J function of dynamic programming. The largest part of the limbic system, called the hippocampus, is known to possess a high degree of local recurrency [8].
In general, there are two ways to make classical controllers stable despite great uncertainty about parameters of the plant to be controlled. For example, in controlling a high-speed aircraft, the location of the center of gravity is not known. The center of gravity is not known exactly because it depends on the cargo of the airplane and the location of the passengers. One way to account for such uncertainties is to use adaptive control methods. We can get similar results, but with more assurance of stability in most cases [16], by using related neural network methods, such as adaptive critics with recurrent networks. This is like adaptive control but more general. There is another approach called robust control or H∞ control, which tries to design a fixed controller which remains stable over a large range in parameter space. Baras and Patel [31] have for the first time solved the general problem of H∞ control for general partially observed nonlinear plants. They have shown that this problem reduces to a problem in nonlinear, stochastic optimization. Adaptive dynamic programming makes it possible to solve large-scale problems of this type.
2.3 Importance of the SRN to ADP

ADP systems already exist which perform relatively simple control tasks like stabilizing an aircraft as it lands under windy conditions [12]. However, this kind of task does not really represent the highest level of intelligence or planning. True intelligent control requires the ability to make decisions when future time periods will follow a complicated, unknown path starting from the initial state. One example of a challenge for intelligent control is the problem of navigating a maze, which we will discuss in chapter 4. A true intelligent control system should be able to learn this kind of task. However, the ADP systems in use today could never learn this kind of task. They use conventional neural networks to approximate the J function. Because the conventional MLP cannot approximate such a J function, we may deduce that an ADP system constructed only from MLPs will never be able to display this kind of intelligent control. Therefore, it is essential that we find a kind of neural network which can perform this kind of task. As we will show, the SRN can fill this crucial gap. There are additional reasons for believing that the SRN may be crucial to intelligent control, as discussed in chapter 13 of [9].
3 Alternative Forms of Recurrent Networks

3.1 Recurrent Networks in General

There is a huge literature on recurrent networks. Biologists have used many recurrent models because the existence of recurrency in the brain is obvious. However, most of the recurrent networks implemented so far have been classic-style recurrent networks, as shown on the left-hand side of Figure 2. Most of these networks are formulated from ordinary differential equation (ODE) systems. Usually their learning is based on a restricted concept of Hebbian learning. Originally, in the neural network field, the most popular neural networks were recurrent networks like those which Hopfield [14] and Grossberg [15] used to provide associative memory.
Figure 2: Recurrent networks. (The diagram contrasts classical recurrent networks, used for feature extraction (ART, SOM), minimization (Hopfield, Cauchy), associative memory (Hopfield, Hassoun), clustering and static functions, with the TLRN, for dynamic systems and prediction, and the SRN, for better function approximation.)

Associative memory networks can actually be applied to supervised learning.
But in actuality their capabilities are very similar to those of look-up tables and radial basis functions. They make predictions based on similarity to previous examples or prototypes. They do not really try to estimate general functional relationships. As a result, these methods have become unpopular in practical applications of supervised learning. The theorems of Barron discussed in the Introduction show that MLPs do provide better function approximation than do simple methods based on similarity. There has been substantial progress in the past few years in developing new associative memory designs. Nevertheless, the MLP is still better for the specific task of function approximation, which is the focus of this paper.
In a similar way, classic recurrent networks have been used for tasks like clustering, feature extraction and static function optimization. But these are different problems from what we are trying to solve here. Actually, the problem of static optimization will be considered in future stages of this research. We hope that the SRN can be useful in such applications after we have used it for supervised learning. When people use the classic Hopfield networks for static optimization, they specify all the weights and connections in advance [14]. This has limited the success of this kind of network for large-scale problems, where it is difficult to guess the weights. With the SRN we have methods to train the weights in that kind of structure. Thus the guessing is no longer needed. However, to use SRNs in that application requires refinement beyond the scope of this paper.

There have also been researchers using ODE neural networks who have tried to use training schemes based on a minimization of error instead of Hebbian approaches. However, in practical applications of such networks, it is important to consider the clock rates of computation and data sampling. For that reason, it is both easier and better to use error-minimizing designs based on discrete time rather than on ODE.
3.2 Structure of Discrete-Time Recurrent Networks

If the importance of neural networks is measured by the number of words published, then the classic networks dominate the field of recurrent networks. However, if the value is measured by the economic value of practical applications, then the field is dominated by time-lagged recurrent networks (TLRNs). The purpose of the TLRN is to predict or classify time-varying systems using recurrency as a way to provide memory of the past. The SRN has some relation to the TLRN, but it is designed to perform a fundamentally different task. The SRN uses recurrency to represent more complex relationships between one input vector X(t) and one output Y(t), without consideration of the other times t.
Figure 3 and Figure 4 show us more details about the TLRN and the SRN. In control applications, u(t) represents the control variables which we use to control the plant. For example, if we design a controller for a car engine, the X(t) variables are the data we get from our sensors. The u(t) variables would include the valve settings which we use to try to control the process of combustion. The R(t) variables provide a way for the neural networks to remember past time cycles, and to implicitly estimate important variables which cannot be observed directly. In fact, the application of TLRNs to automobile control is the most valuable application of recurrent networks ever developed so far.

Figure 3: Time-lagged recurrent network (TLRN)

Figure 4: Simultaneous recurrent network (SRN)
A simultaneous recurrent network (Figure 4) is defined as a mapping:

    Y(t) = F(X(t), W)    (3)

which is computed by iterating over the following equation:

    y^(n+1)(t) = f(y^(n)(t), X(t), W)    (4)

where f is some sort of feed-forward network or system, and Y is defined as:

    Y(t) = lim_{n→∞} y^(n)(t)    (5)

When we use Y in this paper, we use n = 20 instead of ∞ here.
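As a concrete reading of Equations (3) to (5), the C sketch below relaxes y toward a fixed point. It is our own illustration: the paper's core f is a generalized MLP, while here a single tanh layer stands in for f, and the array sizes and names are assumptions. Only the fixed count of 20 iterations follows the paper.

#include <math.h>

#define N_OUT 5     /* recycled outputs y; the paper uses 5 active neurons */
#define N_IN  11    /* external inputs X(t); the paper uses 11 inputs      */

/* Stand-in core f: y_new = tanh(W * [1; X; y_old]). Any feed-forward
   network, such as the generalized MLP, could replace this. */
void f_core(const double y_old[N_OUT], const double X[N_IN],
            double W[N_OUT][1 + N_IN + N_OUT], double y_new[N_OUT])
{
    int i, j;
    for (i = 0; i < N_OUT; i++) {
        double s = W[i][0];                              /* bias term */
        for (j = 0; j < N_IN; j++)  s += W[i][1 + j] * X[j];
        for (j = 0; j < N_OUT; j++) s += W[i][1 + N_IN + j] * y_old[j];
        y_new[i] = tanh(s);
    }
}

/* Equations (3)-(5): iterate Equation (4) twenty times in place of the
   limit. y enters holding the initial input vector (chosen by hand in
   this paper) and leaves holding Y(t) = F(X(t), W). */
void srn_forward(const double X[N_IN],
                 double W[N_OUT][1 + N_IN + N_OUT], double y[N_OUT])
{
    double y_next[N_OUT];
    int n, i;
    for (n = 0; n < 20; n++) {
        f_core(y, X, W, y_next);       /* y^(n+1) = f(y^(n), X(t), W) */
        for (i = 0; i < N_OUT; i++)
            y[i] = y_next[i];
    }
}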
In Figure 4, the outputs of the neural network come back again as inputs to the same network. However, in concept there is no time delay. The inputs and outputs should be simultaneous. That is why it is called a simultaneous recurrent network (SRN). In practice, of course, there will always be some physical time delay between the outputs and the inputs. However, if the SRN is implemented in fast computers, this time delay may be very small compared to the delay between different frames of input data. In Figure 4, X refers to the input data at the current time frame t. The vector y represents the temporary output of the network, which is then recycled as an additional set of inputs to the network. At the center of the SRN is actually the feed-forward network which implements the function f. (In designing an SRN, you can choose any feed-forward network or system as you like. The function f simply describes which network you use.) The output of the SRN at any time t is simply the limit of the temporary output y.
In Equations (3) and (4), notice that there are two integers, n and t, which could both represent some kind of time. The integer t represents a slower kind of time cycle, like the delay between frames of incoming data. The integer n represents a faster kind of time, like the computing cycle of a fast electronic chip. For example, if we build a computer to analyze images coming from a movie camera, "t" and "t+1" represent two successive incoming pictures from the movie camera. There are usually only 32 frames per second. (In the human brain, it seems that there are only about 10 frames per second coming into the neocortex.) But if we use a fast neural network chip, the computational cycle, the time between "n" and "n+1", could be as small as a microsecond.

In actuality, it is not necessary to choose between time-lagged recurrency (from t to t+1) and simultaneous recurrency (from n to n+1). It is possible to build a hybrid system which contains both types of recurrency. This could be very useful in analyzing data like movie pictures, where we need both memory and some ability to segment the images. [9] discusses how to build such a hybrid. However, before building such a hybrid, we must first learn to make SRNs work by themselves.
Finally, please note that the TLRN is not the only kind of neural network used in predicting dynamical systems. Even more popular is the time-delayed neural network (TDNN), shown in Figure 5. The TDNN is popular because it is easy to use. However, it has less capability, in principle, because it has no ability to estimate unknown variables. It is especially weak when some of these variables change slowly over time and require memory which persists over long time periods. In addition, the TLRN fits the requirements of ADP directly, while the TDNN does not [9][16].

Figure 5: Time-delayed neural network (TDNN)
3.3 Training of SRNs and TLRNs

There are many types of training that have been used for recurrent networks. Different types of training give rise to different kinds of capabilities for different tasks. For the tasks which we have described for the SRN and the TLRN, the proper forms of training all involve some calculation of the derivatives of error with respect to the weights. Usually, after these derivatives are known, the weights are adapted according to a simple formula as follows:

    new W_{i,j} = old W_{i,j} - LR * (∂Error / ∂W_{i,j})    (6)

where LR is called the learning rate.
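In code, Equation (6) is a single loop over the weights. A minimal sketch, assuming the weights and their error derivatives have been flattened into arrays:

/* Equation (6): one gradient step on every weight. */
void update_weights(double W[], const double dError_dW[],
                    int n_weights, double LR)
{
    int i;
    for (i = 0; i < n_weights; i++)
        W[i] -= LR * dError_dW[i];   /* new W = old W - LR * dE/dW */
}

The adaptive learning rate procedure sketched after the list of implementation choices in chapter 1 replaces the fixed LR here with a rate that grows or shrinks according to the behavior of successive gradients.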
There are five main ways to train SRNs, all based on different methods for calculating or approximating the derivatives. Four of these methods can also be used with TLRNs. Some can be used for control applications. But the details of those applications are beyond the scope of this paper. These five types of training are listed in Figure 6. For this paper, we have used two of these methods: backpropagation through time (BTT) and truncation.
Figure 6: Types of SRN training (backpropagation through time, truncation, simultaneous backpropagation, error critics, forward propagation)

The five methods are:

1. Backpropagation through time (BTT). This method and forward propagation are the two methods which calculate the derivatives exactly. BTT is also less expensive than forward propagation.

2. Truncation. This is the simplest and least expensive method. It uses only one simple pass of backpropagation through the last iteration of the model. Truncation is probably the most popular method used to adapt SRNs, even though the people who use it mostly just call it ordinary backpropagation.

3. Simultaneous backpropagation. This is more complex than truncation, but it still can be used in real-time learning. It calculates derivatives which are exact in the neighborhood of equilibrium, but it does not account for the details of the network before it reaches the neighborhood of equilibrium.

4. Error critics (shown in Figure 7). This provides a general approximation to BTT which is suitable for use in real-time learning [9].

Figure 7: Error critics

5. Forward propagation. This, like BTT, calculates exact derivatives. It is often considered suitable for real-time learning because the calculations go forward in time. However, when there are n neurons and m connections, the cost of this method per unit of time is proportional to n*m. Because of this high cost, forward propagation is not really brain-like any more than BTT is.
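For the two methods used in this paper, the difference is only how far the backward pass reaches. The sketch below is our own illustration built on the tanh stand-in core and names from the earlier SRN sketch (the paper's real code backpropagates through a generalized MLP instead); it assumes the forward pass recorded every intermediate output in ytrace[0..20]:

/* Backpropagation through one call of the tanh core: given dE/dy_out,
   accumulate dE/dW into gW and return dE/dy_in. */
void f_core_backprop(const double y_in[N_OUT], const double y_out[N_OUT],
                     const double X[N_IN],
                     double W[N_OUT][1 + N_IN + N_OUT],
                     const double dE_dy_out[N_OUT], double dE_dy_in[N_OUT],
                     double gW[N_OUT][1 + N_IN + N_OUT])
{
    int i, j;
    for (j = 0; j < N_OUT; j++) dE_dy_in[j] = 0.0;
    for (i = 0; i < N_OUT; i++) {
        double ds = dE_dy_out[i] * (1.0 - y_out[i] * y_out[i]); /* tanh' */
        gW[i][0] += ds;                                  /* bias gradient */
        for (j = 0; j < N_IN; j++)  gW[i][1 + j] += ds * X[j];
        for (j = 0; j < N_OUT; j++) {
            gW[i][1 + N_IN + j] += ds * y_in[j];
            dE_dy_in[j] += ds * W[i][1 + N_IN + j];
        }
    }
}

/* With 20 forward iterations recorded in ytrace, n_steps = 1 gives
   truncation (one pass through the last iteration only) and n_steps = 20
   gives BTT (exact derivatives chained back through every iteration). */
void srn_gradient(double ytrace[21][N_OUT], const double X[N_IN],
                  double W[N_OUT][1 + N_IN + N_OUT],
                  const double dE_dY[N_OUT],
                  double gW[N_OUT][1 + N_IN + N_OUT], int n_steps)
{
    double dcur[N_OUT], dnext[N_OUT];
    int n, i;
    for (i = 0; i < N_OUT; i++) dcur[i] = dE_dY[i];
    for (n = 19; n > 19 - n_steps; n--) {
        f_core_backprop(ytrace[n], ytrace[n + 1], X, W, dcur, dnext, gW);
        for (i = 0; i < N_OUT; i++) dcur[i] = dnext[i];
    }
}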
3.3.1 Backpropagation through time (BTT)

BTT is a general method for calculating all the derivatives of any outcome or result of a process which involves repeated calls to the same network or networks used to help calculate some kind of final outcome variable or result E. In some applications, E could represent utility, performance, cost or other such variables. But in this paper, E will be used to represent error. BTT was first proposed and implemented in [17]. The general form of BTT is as follows:

    for k = 1 to T do forward calculation (k);
    calculate result E;
    calculate direct derivatives of E with respect to the outputs of the forward calculations;
    for k = T to 1, backpropagate through forward calculation (k), calculating running totals where appropriate.

These steps are illustrated in Figure 8.
Notice that this algorithm can be applied to all kinds of calculations. Thus we can apply it to cases where k represents data frames t, as in the TLRNs, or to cases where k represents internal iterations n, as in the SRNs. Also note that each box of calculation receives input from some dashed lines which represent the derivatives of E with respect to the output of the box. In order to calculate the derivatives coming out of each calculation box, one simply uses backpropagation through the calculation of that box, starting out from the incoming derivatives. We will explain in more detail how this works in the SRN case and the TLRN case. So far as we know, BTT has been applied in published working systems for TLRNs and for control, but not yet for SRNs until now.
However, Rumelhart, Hinton and Williams [18] did suggest that someone should try this. The application of BTT for TLRNs is described at length in [2] and [9]. The procedure is illustrated in Figure 9.
ThereforetheoutputsoftheTLRNateachtimet(t#include#include#includevoidF_NET2(doubleF_Yhat,doubleW[30][30],doublex[30],intn,intm,intN,doubleF_W[30][30],doubleF_net[30],doubleF_Ws[30],doubleWs,doubleF_x[30]);voidNET(doubleW[30][30],doublex[30],doubleww[30],intn,intm,intN,doubleYhat[30]);41voidsynthesis(intB[30][30],intA[30][30],intn1,intn2);voidpweight(doubleWs,doubleF_Ws_T,doubleww[30],doubleF_net_T[30],doubleW[30][30],doubleF_W_T[30][30],intn,intN,intm);intminimum(ints,intt,intu,intv);intmin(intk,intl);doublef(doublex);voidmain(){inti,j,it,iz,ix,iy,lt,m,maxt,n,n1,n2,nn,nm,N,p,q,po,t,TT;intA[30][30],B[30][30];doublea,b,dot,e,e1,e2,es,mu,s,sum,F_Ws_T,Ws,F_Yhat,wi;doubleW[30][30],x[30],F_net_T[30],F_Ws[30],F_W_O[30][30],F_W[30][30];doubleF_W_T[30][30],F_net[30],ww[30],yy[21][12][8][8];doubleYhat[30],F_y[21][12][8][8],F_x[30],F_Jhat[30][30];doubleS_F_W1,S_F_W2,Lr_W,S_F_net1,S_F_net2,Lr_ww,Lr_Ws,F_Ws_O;doubley[50][50],F_net_O[50],F_Ws1,F_Ws2,W_O[50][50],ww_O[50],Ws_O;FILE*f;/*Numberofinputs,neuronsandoutput:7,3,1*//*'n'isthenumberoftheactiveneurons*//*'m'and'N'botharethenumberofinputs*//*'nm'isthenumberofmemoryis:5*//*'nn+1'*'nn+1'isthesizeofthemaze'*//*'TT'isthenumberoftrials*//*'lt'isthenumberoftheintervaltime*//*'maxt'isthemaxnumberforTinfigure[8]*//*Lr-Ws,Lr_wwandLr_WarethelearningratesforWs,wwandW*/a=0.
9;b=0.
2;n=5;m=11;N=11;nn=6;nm=5;TT=30000;lt=50;maxt=20;wi=25;Ws=40;e=0;po=pow(2,31)-1;/*InitialvaluesofOld*/F_Ws_O=1;for(i=m+1;i-1;q--){for(ix=0;ixS_F_W2)||(S_F_W1==S_F_W2))Lr_W=Lr_W*(a+b);elseif(S_F_W1S_F_net2)||(S_F_net1==S_F_net2))Lr_ww=Lr_ww*(a+b);elseif(S_F_net1F_Ws2)||(F_Ws1==F_Ws2))Lr_Ws=Lr_Ws*(a+b);elseif(F_Ws1m;i--){for(j=i+1;j0;i--)for(j=m+1;jl)r=l;elser=k;returnr;}voidpweight(doubleWs,doubleF_Ws_T,doubleww[30],doubleF_net_T[30],doubleW[30][30],doubleF_W_T[30][30],intn,intN,intm){inti,j;for(i=m+1;i
