SmoothCache: HTTP-Live Streaming Goes Peer-to-Peer

Roberto Roverso 1,2, Sameh El-Ansary 1, and Seif Haridi 2

1 Peerialism Inc., Stockholm, Sweden
2 The Royal Institute of Technology (KTH), Stockholm, Sweden
{roberto,sameh}@peerialism.com, haridi@kth.se

Abstract.
In this paper, we present SmoothCache, a peer-to-peer live video streaming (P2PLS) system. The novelty of SmoothCache is three-fold: i) It is the first P2PLS system that is built to support the relatively-new approach of using HTTP as the transport protocol for live content, ii) The system supports both single and multi-bitrate streaming modes of operation, and iii) In SmoothCache, we make use of recent advances in application-layer dynamic congestion control to manage priorities of transfers according to their urgency. We start by explaining why the HTTP live streaming semantics render many of the existing assumptions used in P2PLS protocols obsolete. Afterwards, we present our design, starting with a baseline P2P caching model. We then show a number of optimizations related to aspects such as neighborhood management, uploader selection and proactive caching. Finally, we present our evaluation conducted on a real yet instrumented test network. Our results show that we can achieve substantial traffic savings on the source of the stream without major degradation in user experience.

Keywords: HTTP-Live streaming, peer-to-peer, caching, CDN.
1 Introduction

Peer-to-peer live streaming (P2PLS) is a problem in the Peer-To-Peer (P2P) networking field that has been tackled for quite some time on both the academic and industrial fronts. The typical goal is to utilize the upload bandwidth of hosts consuming a certain live content to offload the bandwidth of the broadcasting origin. On the industrial front, we find successful large deployments where knowledge about their technical approaches is rather limited. Exceptions include systems described by their authors like CoolStreaming [16] or inferred by reverse engineering like PPLive [4] and TVAnts [12]. On the academic front, there have been several attempts to estimate theoretical limits in terms of optimality of bandwidth utilization [3][7] or delay [15].
Traditionally, HTTP has been utilized for progressive download streaming, championed by popular Video-On-Demand (VoD) solutions such as Netflix [1] and Apple's iTunes movie store. Lately, however, adaptive HTTP-based streaming protocols became the main technology for live streaming as well. All companies who have a major say in the market, including Microsoft, Adobe and Apple, have adopted HTTP streaming as the main approach for live broadcasting. This shift to HTTP has been driven by a number of advantages such as the following: i) routers and firewalls are more permissive to HTTP traffic compared to RTSP/RTP, ii) HTTP caching for real-time generated media is straightforward, like any traditional web content, and iii) the Content Distribution Network (CDN) business is much cheaper when dealing with HTTP downloads [5].

[R. Bestak et al. (Eds.): NETWORKING 2012, Part II, LNCS 7290, pp. 29–43, 2012. © IFIP International Federation for Information Processing 2012]
The first goal of this paper is to describe the shift from the RTSP/RTP model to the HTTP-live model (Section 2), in order to detail its impact on the design of P2P live streaming protocols (Section 3), a point which we find rather neglected in the research community (Section 4). We argue that this shift has rendered many of the classical assumptions made in the current state-of-the-art literature obsolete. For all practical purposes, any new P2PLS algorithm, irrespective of its theoretical soundness, won't be deployable if it does not take into account the realities of the mainstream broadcasting ecosystem around it. The issue becomes even more topical as we observe a trend in standardizing HTTP live streaming [8] and embedding it in all browsers together with HTML5, which is already the case in browsers like Apple's Safari.

The second goal of this paper is to present a P2PLS protocol that is compatible with HTTP live streaming not only for one bitrate but that is fully compatible with the concept of adaptive bitrate, where a stream is broadcast with multiple bitrates simultaneously to make it available for a range of viewers with variable download capacities (Section 5).

The last goal of this paper is to describe a number of optimizations of our P2PLS protocol concerning neighborhood management, uploader selection and peer transfer, which can deliver a significant amount of traffic savings on the source of the stream (Sections 6 and 7). Experimental results of our approach show that this result comes at almost no cost in terms of quality of user experience (Section 8).
2 The Shift from RTP/RTSP to HTTP

In the traditional RTSP/RTP model, the player uses RTSP as the signalling protocol to request the playing of the stream from a streaming server. The player enters a receive loop while the server enters a send loop where stream fragments are delivered to the receiver using the RTP protocol over UDP. The interaction between the server and player is stateful. The server makes decisions about which fragment is sent next based on acknowledgements or error information previously sent by the client. This model makes the player rather passive, having the mere role of rendering the stream fragments which the server provides.

In the HTTP live streaming model instead, it is the player which controls the content delivery by periodically pulling from the server parts of the content at the time and pace it deems suitable. The server instead is entrusted with the task of encoding the stream in real time with different encoding rates, or qualities, and storing it in data fragments which appear on the server as simple resources.
When a player first contacts the streaming server, it is presented with a metadata file (Manifest) containing the latest stream fragments available at the server at the time of the request. Each fragment is uniquely identified by a timestamp and a bitrate. If a stream is available in n different bitrates, then this means that for each timestamp, there exist n versions of it, one for each bitrate. After reading the manifest, the player starts to request fragments from the server. The burden of keeping the timeliness of the live stream is totally upon the player. The server, in contrast, is stateless and merely serves fragments like any other HTTP server after encoding them in the format advertised in the manifest.

Fig. 1. a) Sample Smooth Streaming Manifest, b) Client-server interactions in Microsoft Smooth Streaming
Manifest Contents. To give an example, we use Microsoft's Smooth Streaming manifest. In Figure 1a, we show the relevant details of a manifest for a live stream with 3 video bitrates (331, 688, 1470 Kbps) and 1 audio bitrate (64 Kbps). By inspecting one of the streams, we find the first (the most recent) fragment containing a d value, which is the time duration of the fragment in a unit of 100 nanoseconds, and a t value, which is the timestamp of the fragment. The fragment underneath (the older fragment) has only a d value because the timestamp is inferred by adding the duration to the timestamp of the one above. The streams each have a template for forming a request URL for fragments of that stream. The template has placeholders for substitution with an actual bitrate and timestamp. For a definition of the manifest's format, see [5].
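The timestamp-inference rule and the URL template described above can be sketched as follows. This is an illustrative sketch, not code from the paper; the dictionary keys and the template placeholder names are our own choices, not the actual Smooth Streaming attribute syntax.

```python
def fragment_timestamps(entries):
    """entries: manifest fragment entries in manifest order, each a dict with
    duration 'd' (in 100-ns ticks) and, for the first entry, an explicit
    timestamp 't'. Each following timestamp is inferred by adding the
    previous entry's duration to the previous timestamp, as described above."""
    out = []
    t = None
    for e in entries:
        t = e.get('t', t)   # an explicit timestamp, when present, wins
        out.append(t)
        t += e['d']         # infer the next entry's timestamp
    return out

def fragment_url(template, bitrate, timestamp):
    """Fill the per-stream request-URL template with a concrete bitrate
    and timestamp (placeholder names here are hypothetical)."""
    return template.format(bitrate=bitrate, start_time=timestamp)
```

For instance, three 2-second fragments (2 s = 20,000,000 ticks) where only the first carries t yield three consecutive absolute timestamps, each usable in the request template.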
Adaptive Streaming Protocol. In Figure 1b, we show an example interaction sequence between a Smooth Streaming client and server [5]. The client first issues an HTTP GET request to retrieve the manifest from the streaming server. After interpreting the manifest, the player requests a video fragment from the lowest available bitrate (331 Kbps). The timestamp of the first request is not predictable, but in most cases we have observed that it is an amount equal to 10 seconds backward from the most recent fragment in the manifest. This is probably the only predictable part of the player's behavior. In fact, without detailed knowledge of the player's internal algorithm, and given that different players may implement different algorithms, it is difficult to make assumptions about the period between consecutive fragment requests, the time at which the player will switch rates, or how the audio and video are interleaved. For example, when a fragment is delayed, it could get re-requested at the same bitrate or at a lower rate. The timeout before taking such action is one thing that we found slightly more predictable, and it was most of the time around 4 seconds. That is a subset of many details about the pull behavior of the player.
Implications of Unpredictability. The point of mentioning these details is to explain that the behavior of the player, how it buffers and climbs up and down the bitrates, is rather unpredictable. In fact, we have seen it change in different versions of the same player. Moreover, different adopters of the approach have minor variations on the interaction sequence. For instance, Apple HTTP-live [8] dictates that the player requests a manifest every time before requesting a new fragment, and packs audio and video fragments together. As a consequence of what we described above, we believe that a P2PLS protocol for HTTP live streaming should operate as if receiving random requests in terms of timing and size, and has to make this its main principle. This filters out the details of the different players and technologies.
3 Impact of the Shift on P2PLS Algorithms

Traditionally, the typical setup for a P2PLS agent is to sit between the streaming server and the player as a local proxy offering the player the same protocol as the streaming server. In such a setup, the P2PLS agent would do its best, exploiting the peer-to-peer overlay, to deliver pieces in time and in the right order for the player. Thus, the P2PLS agent is the one driving the streaming process and keeping an active state about which video or audio fragment should be delivered next, whereas the player blindly renders what it is supplied with. Given the assumption of a passive player, it is easy to envisage the P2PLS algorithm skipping, for instance, fragments according to the playback deadline, i.e. discarding data that comes too late for rendering. In this kind of situation, the player is expected to skip the missing data by fast-forwarding or blocking for a few instants and then starting the playback again. This type of behavior towards the player is an intrinsic property of many of the most mature P2PLS system designs and analyses such as [13,15,16].

In contrast to that, a P2PLS agent for HTTP live streaming cannot rely on the same operational principles. There is no freedom in skipping pieces and deciding what is to be delivered to the player. The P2PLS agent has to obey the player's requests for fragments from the P2P network, and the speed at which this is accomplished affects the player's next action. From our experience, delving in the path of trying to reverse engineer the player behavior and integrating that in the P2P protocol is some kind of black art based on trial-and-error, and will result in very complicated and extremely version-specific customizations. Essentially, any P2PLS scheduling algorithm that assumes that it has control over which data should be delivered to the player is rather inapplicable to HTTP live streaming.
4 Related Work

We are not aware of any work that has explicitly articulated the impact of the shift to HTTP on P2P live streaming algorithms. However, a more relevant topic to look at is the behavior of the HTTP-based live players. Akhshabi et al. [2] provide a recent dissection of the behavior of three such players under different bandwidth variation scenarios. It is however clear from their analysis that the bitrate switching mechanics of the considered players are still in early stages of development. In particular, it is shown that throughput fluctuations still cause either significant buffering or unnecessary bitrate reductions. On top of that, it is shown how all the logic implemented in the HTTP-live players is tailored to TCP's behavior, like the one suggested in [6], in order to compensate for throughput variations caused by TCP's congestion control and potentially large retransmission delays. In the case of a P2PLS agent acting as proxy, it is then of paramount importance not to interfere with such adaptation patterns. We believe, given the presented approaches, the most related work is the P2P caching network LiveSky [14]. We share in common the fact of trying to establish a P2P CDN. However, LiveSky does not present any solution for supporting HTTP live streaming.
5 P2PLS as a Caching Problem

We will describe here our baseline design to tackle the new realities of the HTTP-based players. We treat the problem of reducing the load on the source of the stream the same way it would be treated by a Content Distribution Network (CDN): as a caching problem. The design of the streaming protocol was made such that every fragment is fetched as an independent HTTP request that could be easily scheduled on CDN nodes. The difference is that in our case, the caching nodes are consumer machines and not dedicated nodes. The player is directed to order from our local P2PLS agent, which acts as an HTTP proxy. All traffic to/from the source of the stream as well as other peers passes by the agent.

Baseline Caching. The policy is as follows: any request for manifest files (metadata) is fetched from the source as is and not cached. That is due to the fact that the manifest changes over time to contain the newly generated fragments. Content fragments requested by the player are looked up in a local index of the peer, which keeps track of which fragment is available on which peer. If information about the fragment is not in the index, then we are in the case of a P2P cache miss and we have to retrieve it from the source. In case of a cache hit, the fragment is requested from the P2P network, and any error or slowness in the process results, again, in a fallback to the source of the content. Once a fragment is downloaded, a number of other peers are immediately informed in order for them to update their indices accordingly.
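The baseline policy just described can be condensed into a small sketch. This is our illustration, not the deployed agent's code; the class and callable names are hypothetical, and real fetchers would be asynchronous HTTP/P2P transfers rather than plain callables.

```python
class BaselineCache:
    """Sketch of the baseline caching policy: manifests always come from
    the source and are never cached; fragments are looked up in a local
    index of peer holdings, fetched over P2P on a hit, and fetched from
    the source on a miss or on any P2P failure (fallback)."""

    def __init__(self, fetch_source, fetch_p2p):
        self.fetch_source = fetch_source  # callable: url -> bytes
        self.fetch_p2p = fetch_p2p        # callable: (url, holders) -> bytes or None
        self.index = {}                   # fragment url -> set of peer ids

    def on_peer_advert(self, peer_id, url):
        """A peer announced that it now holds this fragment."""
        self.index.setdefault(url, set()).add(peer_id)

    def handle(self, url, is_manifest=False):
        if is_manifest:
            return self.fetch_source(url)      # metadata: never cached
        holders = self.index.get(url)
        if holders:                            # P2P cache hit
            data = self.fetch_p2p(url, holders)
            if data is not None:
                return data
        return self.fetch_source(url)          # cache miss, or P2P fallback
```

A miss goes to the source; after other peers advertise the fragment, subsequent requests are served from the overlay, with the source kept as the safety net.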
Achieving Savings. The main point is thus to increase the cache hit ratio as much as possible while the timeliness of the movie is preserved. The cache hit ratio is our main metric because it represents savings from the load on the source of the live stream. Having explained the baseline idea, we can see that, in theory, if all peers started to download the same uncached manifest simultaneously, they would also all start requesting fragments exactly at the same time in perfect alignment. This scenario would leave no time for the peers to advertise and exchange useful fragments between each other. Consequently, a perfect alignment would result in no savings. In reality, we have always seen that there is an amount of intrinsic asynchrony in the streaming process that causes some peers to be ahead of others, hence making savings possible. However, the larger the number of peers, the higher the probability of more peers being aligned. We will show that, given the aforementioned asynchrony, even the previously described baseline design can achieve significant savings.

Our target savings are relative to the number of peers. That is, we do not target achieving a constant load on the source of the stream irrespective of the number of users, which would lead to loss of timeliness. Instead, we aim to save a substantial percentage of all source traffic by offloading that percentage to the P2P network. The attractiveness of that model from a business perspective has been verified with content owners who nowadays buy CDN services.

Table 1. Summary of baseline and improved strategies

  Strategy                        Baseline   Improved
  Manifest Trimming (MT)          Off        On
  Partnership Construction (PC)   Random     Request-point-aware
  Partnership Maintenance (PM)    Random     Bitrate-aware
  Uploader Selection (US)         Random     Throughput-based
  Proactive Caching (PR)          Off        On
6 Beyond Baseline Caching

We give here a description of some of the important techniques that are crucial to the operation of the P2PLS agent. For each such technique, we provide what we think is the simplest way to realize it, as well as improvements if we were able to identify any. The techniques are summarized in Table 1.

Manifest Manipulation. One improvement, particularly applicable in Microsoft's Smooth Streaming but that could be extended to all other technologies, is manifest manipulation. As explained in Section 2, the server sends a manifest containing a list of the most recent fragments available at the streaming server. The point of that is to avail to the player some data in case the user decides to jump back in time. Minor trimming to hide the most recent fragments from some peers places them behind others. We use that technique to push peers with high upload bandwidth slightly ahead of others, because they can be more useful to the network. We are careful not to abuse this too much, otherwise peers would suffer a high delay from the live playing point, so we limit it to a maximum of 4 seconds. It is worth noting here that we do a quick bandwidth measurement for peers upon admission to the network, mainly for statistical purposes, but we do not depend on this measurement except during the optional trimming process.
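One way to realize the trimming described above is to hide up to 4 seconds' worth of the newest fragments from the manifests served to lower-upload peers, so that high-upload peers end up slightly ahead. A minimal sketch, with our own names and the 100-ns tick unit from Section 2 (this is an illustration, not the agent's actual manifest-rewriting code):

```python
MAX_TRIM_TICKS = 4 * 10_000_000  # the 4-second trimming cap, in 100-ns ticks

def trim_manifest(fragments, peer_upload_is_high):
    """fragments: list of (timestamp, duration) pairs, newest first.
    High-upload peers keep the full manifest (live edge); for the rest,
    the newest fragments are hidden up to the 4-second budget."""
    if peer_upload_is_high:
        return fragments
    i, hidden = 0, 0
    while i < len(fragments) and hidden + fragments[i][1] <= MAX_TRIM_TICKS:
        hidden += fragments[i][1]   # hide this (newest remaining) fragment
        i += 1
    return fragments[i:]
```

With 2-second fragments, a low-upload peer would see a manifest starting two fragments behind the live edge, placing it behind the untrimmed peers.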
Neighborhood and Partnership Construction. We use a tracker as well as gossiping for introducing peers to each other. Any two peers who can establish bi-directional communication are considered neighbors. Each peer probes his neighbors periodically to remove dead peers and update information about their last requested fragments. Neighborhood construction is in essence a process to create a random undirected graph with high node arity. A subset of the edges in the neighborhood graph is selected to form a directed subgraph to establish partnership between peers. Unlike the neighborhood graph, which is updated lazily, the edges of the partnership graph are used frequently. After each successful download of a fragment, the set of out-partners is informed about the newly downloaded fragment. From the opposite perspective, it is crucial for a peer to wisely pick his in-partners, because they are the providers of fragments from the P2P network. For this decision, we experiment with two different strategies: i) random picking, ii) request-point-aware picking, where the in-partners include only peers who are relatively ahead in the stream, because only such peers can have future fragments.
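The two in-partner picking strategies above can be contrasted in a few lines. An illustrative sketch with hypothetical names; it assumes each neighbor's probe has reported its last requested fragment timestamp:

```python
import random

def pick_in_partners(candidates, my_request_point, k, strategy="request-point-aware"):
    """candidates: dict of peer id -> that peer's last requested timestamp.
    Returns up to k in-partners under the chosen strategy."""
    if strategy == "random":
        return random.sample(sorted(candidates), min(k, len(candidates)))
    # Request-point-aware: only peers ahead of us can hold future fragments,
    # so restrict to them, preferring the furthest ahead.
    ahead = [p for p, point in candidates.items() if point > my_request_point]
    ahead.sort(key=lambda p: candidates[p], reverse=True)
    return ahead[:k]
```

The random strategy may pick peers behind us that can never supply future fragments; the request-point-aware one filters them out up front.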
Partnership Maintenance. Each peer strives to continuously find better in-partners using periodic maintenance. The maintenance process could be limited to replacement of dead peers by randomly-picked peers from the neighborhood. Our improved maintenance strategy is to score the in-partners according to a certain metric and replace low-scoring partners with new peers from the neighborhood. The metric we use for scoring peers is a composite one based on: i) favoring the peers with a higher percentage of successfully transferred data, ii) favoring peers who happen to be on the same bitrate. Note that while favoring peers on the same bitrate, having all partners from a single bitrate is very dangerous, because once a bitrate change occurs the peer is isolated. That is, all the received updates about presence of fragments from other peers would be from the old bitrate. That is why, upon replacement, we make sure that the resulting in-partner set covers all bitrates with a Gaussian distribution centered around the current bitrate. That is, most in-partners are from the current bitrate, fewer partners from the immediately higher and lower bitrates, and much fewer partners from other bitrates, and so forth. Once the bitrate changes, the maintenance re-centers the distribution around the new bitrate.
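The Gaussian partner allocation above can be sketched as a quota per bitrate. This is our illustration of the idea, not the paper's implementation; the sigma value and the minimum of one partner per bitrate are our assumptions:

```python
import math

def partner_quota(bitrates, current, total_partners, sigma=1.0):
    """Distribute in-partner slots over bitrates with Gaussian weights
    centered on the current bitrate, keeping at least one partner per
    bitrate so a rate switch never isolates the peer."""
    i0 = bitrates.index(current)
    weights = [math.exp(-((i - i0) ** 2) / (2 * sigma ** 2))
               for i in range(len(bitrates))]
    scale = total_partners / sum(weights)
    return {b: max(1, round(w * scale)) for b, w in zip(bitrates, weights)}
```

On a bitrate switch, re-running this with the new `current` value re-centers the distribution, matching the maintenance behavior described above.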
Uploader Selection. In the case of a cache hit, it happens quite often that a peer finds multiple uploaders who can supply the desired fragment. In that case, we need to pick one. The simplest strategy would be to pick a random uploader. Our improved strategy here is to keep track of the observed historical throughput of the downloads and pick the fastest uploader.
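A minimal sketch of the throughput-based strategy, assuming an exponentially weighted moving average over observed transfers (the smoothing factor and the bootstrap rule for unknown peers are our choices, not stated in the paper):

```python
class UploaderSelector:
    """Track per-uploader throughput and pick the historically fastest."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # EWMA smoothing factor (assumed value)
        self.throughput = {}    # peer id -> bytes/s estimate

    def record(self, peer, nbytes, seconds):
        sample = nbytes / seconds
        old = self.throughput.get(peer)
        self.throughput[peer] = sample if old is None else \
            (1 - self.alpha) * old + self.alpha * sample

    def pick(self, holders):
        """Among peers holding the fragment, prefer one we have never
        measured (to bootstrap its estimate), else the fastest known."""
        unknown = [p for p in holders if p not in self.throughput]
        if unknown:
            return unknown[0]
        return max(holders, key=lambda p: self.throughput[p])
```

The EWMA keeps the estimate responsive to recent conditions while smoothing out single slow or fast transfers.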
Sub-fragments. Up to this point, we have always used in our explanation the fragment as advertised by the streaming server as the unit of transport, for simplicity of presentation. In practice, this is not the case. The sizes of the fragments vary from one bitrate to the other. Larger fragments would result in waiting for a longer time before informing other peers, which would directly entail lower savings because of the slowness of disseminating information about fragment presence in the P2P network. To handle that, our unit of transport and advertising is a sub-fragment of a fixed size. That said, the reality of the uploader selection process is that it always picks a set of uploaders for each fragment rather than a single uploader. This parallelization applies for both random and throughput-based uploader selection strategies.
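Splitting a variable-size server fragment into the fixed-size transport units described above can be sketched as byte ranges. The sub-fragment size below is an assumed value for illustration; the paper does not state one:

```python
SUB_FRAGMENT_SIZE = 16 * 1024   # bytes; an assumed size, not from the paper

def sub_fragments(fragment_len):
    """Yield (offset, length) byte ranges covering the whole fragment;
    each range is the unit of transport and of advertisement to partners,
    and different ranges can be assigned to different uploaders."""
    for off in range(0, fragment_len, SUB_FRAGMENT_SIZE):
        yield off, min(SUB_FRAGMENT_SIZE, fragment_len - off)
```

Because each sub-fragment can be advertised as soon as it arrives, dissemination no longer waits for the full (possibly large) fragment.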
Fallbacks. While downloading a fragment from another peer, it is of critical importance to detect problems as soon as possible. The timeout before falling back to the source is thus one of the major parameters while tuning the system. We put an upper bound (Tp2p) on the time needed for any P2P operation, computed as: Tp2p = Tplayer − S · Tf, where Tplayer is the maximum amount of time after which the player considers a request for a fragment expired, S is the size of the fragment, and Tf is the expected time to retrieve a unit of data from the fallback. Based on our experience, Tplayer is player-specific and constant; for instance, Microsoft's Smooth Streaming waits 4 seconds before timing out. A longer Tp2p translates into a higher P2P transfer success ratio, hence higher savings. Since Tplayer and S are outside of our control, it is extremely important to estimate Tf correctly, in particular in the presence of congestion and fluctuating throughput towards the source. As a further optimization, we recalculate the timeout for a fragment while a P2P transfer is happening, depending on the amount of data already downloaded, to allow more time for the outstanding part of the transfer. Finally, upon fallback, only the amount of the fragment that failed to be downloaded from the overlay network is retrieved from the source, i.e. through a partial HTTP request on the range of missing data.
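The timeout bound and its mid-transfer recalculation can be expressed directly. A sketch under the interpretation that already-downloaded bytes shrink the amount a partial-range fallback would still need, thereby extending the P2P budget (function and parameter names are ours):

```python
def p2p_budget(t_player, size, t_f, downloaded=0):
    """Time budget for the P2P transfer of one fragment:
         T_p2p = T_player - S * T_f
    t_player: seconds until the player considers the request expired;
    size: fragment size in bytes; t_f: expected seconds per byte from
    the fallback source; downloaded: bytes already fetched over P2P,
    which a partial HTTP range request would no longer need."""
    remaining = size - downloaded
    return t_player - remaining * t_f
```

With Tplayer = 4 s and a source expected to deliver at 0.5 MB/s (t_f = 2e-6 s/byte), a 1 MB fragment leaves 2 s for P2P; once half of it has arrived, the recalculated budget grows to 3 s.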
7 Proactive Caching

The baseline caching process is in essence reactive, i.e. the attempt to fetch a fragment starts after the player requests it. However, when a peer is informed about the presence of a fragment in the P2P network, he can trivially see that this is a future fragment that would eventually be requested. Starting to prefetch it early, before it is requested, increases the utilization of the P2P network and decreases the risk of failing to fetch it in time when requested. That said, we do not guarantee that this fragment would be requested in the same bitrate when the time comes. Therefore, we endure a bit of risk that we might have to discard it if the bitrate changes. In practice, we measured that the prefetcher successfully requests the right fragment with a probability of 98.5%.

Traffic Prioritization. To implement this proactive strategy, we have taken advantage of our dynamic runtime-prioritization transport library DTL [9], which exposes to the application layer the ability to prioritize individual transfers relative to each other and to change the priority of each individual transfer at runtime. Upon starting to fetch a fragment proactively, it is assigned a very low priority. The rationale is to avoid contending with the transfer process of fragments that are reactively requested and under a deadline, both on the uploading and downloading ends.
Successful Prefetching. One possibility is that a low-priority prefetching process completes before the player requests the fragment; since there is no way to deliver it to the player before that happens, the only option is to wait for the player's request. More importantly, when that time comes, careful delivery from the local machine is very important, because extremely fast delivery might make the adaptive streaming player mistakenly think that there is an abundance of download bandwidth and start to request the following fragments at a higher bitrate beyond the actual download bandwidth of the peer. Therefore, we schedule the delivery from the local machine to be not faster than the already-observed average download rate. We have to stress here that this is not an attempt to control the player to do something in particular; we just maintain transparency by not delivering prefetched fragments faster than non-prefetched ones.
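The paced local delivery above can be sketched as releasing the prefetched bytes in rate-limited chunks. An illustrative sketch (names and the tick interval are our assumptions; a real agent would sleep between writes rather than yield sizes):

```python
def delivery_time(fragment_bytes, avg_download_rate):
    """Minimum time (s) over which the local agent spreads delivery, so a
    prefetched fragment arrives no faster than the observed average rate."""
    return fragment_bytes / avg_download_rate

def paced_chunks(fragment_bytes, avg_download_rate, tick=0.1):
    """Yield the byte counts to write to the player every `tick` seconds
    to match the already-observed average download rate."""
    per_tick = max(1, int(avg_download_rate * tick))
    sent = 0
    while sent < fragment_bytes:
        n = min(per_tick, fragment_bytes - sent)
        yield n              # real agent: write n bytes, then sleep(tick)
        sent += n
```

Capping local delivery this way keeps the player's bandwidth estimate honest, so prefetching stays invisible to its rate-adaptation logic.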
Interrupted Prefetching. Another possibility is that the prefetching process gets interrupted by the player in 3 possible ways: i) The player requests the fragment being prefetched: in that case, the transport layer is dynamically instructed to raise the priority, and Tplayer is set accordingly based on the remaining amount of data, as described in the previous section. ii) The player requests the same fragment being prefetched but at a higher rate, which means we have to discard any prefetched data and treat the request like any other reactively fetched fragment. iii) The player decides to skip some fragments to catch up and is no longer in need of the fragment being prefetched. In this case, we have to discard it as well.
8 Evaluation

Methodology. Due to the non-representative behaviour of PlanetLab and the difficulty of doing parameter exploration in a publicly-deployed production network, we tried another approach, which is to develop a version of our P2P agent that is remotely-controlled, and to ask for volunteers who are aware that we will conduct experiments on their machines. Needless to say, this functionality is removed from any publicly-deployable version of the agent.

Test Network. The test network contained around 1350 peers. However, the maximum, minimum and average numbers of peers simultaneously online were 770, 620 and 680 respectively. The network included peers mostly from Sweden (89%) but also some from Europe (6%) and the US (4%). The upload bandwidth distribution of the network was as follows: 15%: 0.5 Mbps, 42%: 1 Mbps, 17%: 2.5 Mbps, 15%: 10 Mbps, 11%: 20 Mbps. In general, one can see that there is enough bandwidth capacity in the network; however, the majority of the peers are on the lower end of the bandwidth distribution. For connectivity, 82% of the peers were behind NAT, and 12% were on the open Internet. We have used our NAT-Cracker traversal scheme as described in [11] and were able to establish bi-directional communication between 89% of all peer pairs.
The number of unique NAT types encountered was 18. Apart from the tracker used for introducing clients to each other, our network infrastructure contained a logging server, a bandwidth measurement server, a STUN-like server for helping peers with NAT traversal, and a controller to launch tests remotely.

Fig. 2. (a) Comparison of traffic savings with different algorithm improvements, (b) Comparison of cumulative buffering time for source-only and improvements
Stream Properties. We used a production-quality continuous live stream with 3 video bitrates (331, 688, 1470 Kbps) and 1 audio bitrate (64 Kbps), and we let peers watch 20 minutes of this stream in each test. The stream was published using the traditional Microsoft Smooth Streaming toolchain. The bandwidth of the source stream was provided by a commercial CDN, and we made sure that it had enough capacity to serve the maximum number of peers in our test network. This setup gave us the ability to compare the quality of the streaming process in the presence and absence of P2P caching, in order to have a fair assessment of the effect of our agent on the overall quality of user experience. We stress that, in a real deployment, P2P caching is not intended to eliminate the need for a CDN but to reduce the total amount of paid-for traffic that is provided by the CDN.

One of the issues that we faced regarding realistic testing was making sure that we were using the actual player that would be used in production, in our case the Microsoft Silverlight player. The problem is that the normal mode of operation of all video players is through a graphical user interface. Naturally, we did not want to tell our volunteers to click the "Play" button every time we wanted to start a test. Luckily, we were able to find a rather unconventional way to run the Silverlight player in a headless mode, as a background process that does not render any video and does not need any user intervention.
Reproducibility. Each test to collect one data point in the test network happens in real time, and exploring all parameter combinations of interest is not feasible. Therefore, we first did a major parameter-combination study on our simulation platform [10] to get a set of worth-trying experiments that we launched on the test network. Another problem is the fluctuation of network conditions and number of peers. We repeated each data point a number of times before gaining confidence that this is the average performance of a certain parameter combination.

Fig. 3. Breakdown of traffic quantities per bitrate for: (a) A network with P2P caching, source & P2P traffic summed together, (b) The same P2P network with source & P2P traffic reported separately, and (c) A network with no P2P
Evaluation Metrics. The main metric that we use is traffic savings, defined as the percentage of the amount of data served from the P2P network out of the total amount of data consumed by the peers. Every peer reports the amount of data served from the P2P network and streaming source every 30 seconds to the logging server. In our bookkeeping, we keep track of how much of the traffic was due to fragments of a certain bitrate. The second important metric is buffering delay. The Silverlight player can be instrumented to send debug information every time it goes in/out of buffering mode, i.e. whenever the player finds that the amount of internally-buffered data is not enough for playback, it sends a debug message to the server, which in our case is intercepted by the agent. Using this method, a peer can report the lengths of the periods it entered into buffering mode in the 30-second snapshots as well. At the end of the stream, we calculate the sum of all the periods the player of a certain peer spent buffering.
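The savings metric just defined reduces to a simple aggregation over the 30-second per-peer reports. A sketch with our own naming (the real logging server of course stores richer, per-bitrate records):

```python
def traffic_savings(reports):
    """reports: iterable of (p2p_bytes, source_bytes) pairs, one per peer
    per 30-second snapshot. Returns the percentage of all consumed data
    that was served from the P2P network."""
    p2p = sum(r[0] for r in reports)
    total = sum(r[0] + r[1] for r in reports)
    return 100.0 * p2p / total if total else 0.0
```

For example, two snapshots of (70, 30) and (80, 20) bytes give 150 P2P bytes out of 200 consumed, i.e. 75% savings.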
8.1 Deployment Results

Step-by-Step Towards Savings. The first investigation we made was to start from the baseline design with all the strategies set to the simplest possible. In fact, during the development cycle we used this baseline version repeatedly until we obtained a stable product with a predictable and consistent savings level, before we started to enable all the other improvements. Figure 2a shows the evolution of savings in time for all strategies. The naive baseline caching was able to save a total of 44% of the source traffic. After that, we worked on pushing the higher-bandwidth peers ahead and making each partner select peers that are useful using the request-point-aware partnership, which moved the savings to a level of 56%. So far, the partnership maintenance was random. Turning on bitrate-aware maintenance added only another 5% of savings, but we believe that this is a key strategy that deserves more focus, because it directly affects the effective partnership size of other peers from each bitrate, which directly affects savings. For the uploader selection, running the throughput-based picking achieved 68% savings. Finally, we got our best savings by adding proactive caching, which gave us 77% savings.

Fig. 4. (a) Breakdown of traffic quantities per bitrate using baseline, (b) Comparison of savings between different numbers of in-partners
User Experience. Getting savings alone is not a good result unless we have provided a good user experience. To evaluate the user experience, we use two metrics: first, the percentage of peers who experienced a total buffering time of less than 5 seconds, i.e. they enjoyed performance that did not really deviate much from live; second, showing that our P2P agent did not achieve this level of savings by forcing the adaptive streaming to move everyone to the lowest bitrate. For the first metric, Figure 2b shows that with all the improvements, we can make 87% of the network watch the stream with less than 5 seconds of buffering delay. For the second metric, Figure 3a shows that 88% of all consumed traffic was on the highest bitrate, with P2P alone shouldering 75% (Figure 3b), an indication that, for the most part, peers have seen the video at the highest bitrate with a major contribution from the P2P network.
P2P-less as a Reference. We take one more step beyond showing that the system offers substantial savings with reasonable user experience, namely to understand what the user experience would be in case all the peers streamed directly from the CDN. Therefore, we ran the system with P2P caching disabled. Figure 2b shows that without P2P, only 3% more (90%) of all viewers would have less than 5 seconds of buffering. On top of that, Figure 3c shows that only 2% more (90%) of all consumed traffic is on the highest bitrate; that is the small price we paid for saving 77% of source traffic. Figure 4a instead describes the lower performance of our baseline caching scenario, which falls 13% short of the P2P-less scenario (77%). This is mainly due to the lack of bitrate-aware maintenance, which turns out to play a very significant role in terms of user experience.
Partnership Size. There are many parameters to tweak in the protocol but, in our experience, the number of in-partners is by far the parameter with the most significant effect. Throughout the evaluation presented here, we use 50 in-partners. Figure 4b shows that more peers result in more savings, albeit with diminishing returns. We have selected 50 peers as a high-enough number, at a point where increasing the number of peers does not result in much more savings.

Fig. 5. (a) Savings for single-bitrate runs, (b) Buffering time for single-bitrate runs
Single Bitrate. Another evaluation worth presenting as well is the case of a single bitrate. In this experiment, we get 84%, 81% and 69% savings for the low, medium and high bitrate respectively (Figure 5a). As for the user experience compared with the same single bitrates in a P2P-less test, we find that the user experience expressed as delays is much closer to the P2P-less network (Figure 5b). We explain the relatively better experience in the single-bitrate case by the fact that all the in-partners are from the same bitrate, while in the multi-bitrate case, each peer has in his partnership the majority of the in-partners from a single bitrate but some of them are from other bitrates, which renders the effective partnership size smaller. We can also observe that the user experience improves as the bitrate becomes smaller.
9ConclusionInthispaper,wehaveshownanovelapproachinbuildingapeer-to-peerlivestreamingsystemthatiscompatiblewiththenewrealitiesoftheHTTP-live.
ThesenewrealitiesrevolvearoundthepointthatunlikeRTSP/RTPstreaming,thevideoplayerisdrivingthestreamingprocess.
TheP2Pagentwillhavealimitedabilitytocontrolwhatgetsrenderedontheplayerandmuchlimitedabilitytopredictitsbehaviour.
OurapproachwastostartwithbaselineP2PcachingwhereaP2PagentactsasanHTTPproxythatreceivesrequestsfromtheHTTPliveplayerandattemptstofetchitfromtheP2Pnetworkratherthesourceifitcandosoinareasonabletime.
Beyondbaselinecaching,wepresentedseveralimprovementsthatincluded:a)Request-point-awarepartnershipconstructionwherepeersfocusonestablishingrelationshipswithpeerswhoareaheadoftheminthestream,b)Bit-rate-aware42R.
Roverso,S.
El-Ansary,andS.
Haridipartnershipmaintenancethroughwhichacontinuousupdatingofthepartner-shipsetisaccomplishedbothfavoringpeerswithhighsuccessfultransfersrateandpeerswhoareonthesamebitrateofthemaintainingpeer,c)Manifesttrimmingwhichisatechniqueformanipulatingthemetadatapresentedtothepeeratthebeginningofthestreamingprocesstopushhigh-bandwidthpeersaheadofothers,d)Throughput-baseduploaderselectionwhichisapolicyusedtopickthebestuploaderforacertainfragmentifmanyexist,e)Carefultim-ingforfallingbacktothesourcewherethepreviousexperienceisusedtotunetimingoutonP2Ptransfersearlyenoughthuskeepingthetimelinessoftheliveplayback.
Our most advanced optimization was the introduction of proactive caching, where a peer requests fragments ahead of time. To accomplish this without disrupting already-ongoing transfers, we used our application-layer congestion control [9] to give pre-fetching activities a lower priority, and to dynamically raise that priority in case a piece being pre-fetched is requested by the player.
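The priority switch can be sketched as follows; the class, the two priority levels, and the toy scheduler are illustrative stand-ins for what the congestion control layer (DTL [9]) provides, not its API:

```python
LOW, HIGH = 0, 1  # background pre-fetch vs. player-urgent transfer

class PrefetchTransfer:
    """A transfer whose priority can be raised while it is in flight."""

    def __init__(self, fragment_id):
        self.fragment_id = fragment_id
        self.priority = LOW  # pre-fetches start in the background

    def on_player_request(self):
        # The player now needs this fragment: promote the ongoing
        # background transfer instead of starting a duplicate one.
        self.priority = HIGH

def next_to_serve(transfers):
    # Give bandwidth to player-urgent transfers before pre-fetches.
    return max(transfers, key=lambda t: t.priority) if transfers else None
```

The point of raising the priority of an in-flight transfer, rather than re-requesting the fragment at high priority, is that the bytes already received are not wasted.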
We evaluated our system using a test network of about 700 concurrent nodes run by real volunteering clients, where we instrumented the P2P agents to run tests under different configurations. The tests have shown that we could achieve around 77% savings for a multi-bitrate stream, with around 87% of the peers experiencing a total buffering delay of less than 5 seconds and almost all of the peers watching the data at the highest bitrate. We compared these results with the same network operating in P2P-less mode and found that only 3% of the viewers had a better experience without P2P, which we judge as a very limited degradation in quality compared to the substantial amount of savings.
References

1. Netflix Inc., www.netflix.com
2. Akhshabi, S., Begen, A.C., Dovrolis, C.: An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP. In: Proceedings of the Second Annual ACM Conference on Multimedia Systems, MMSys (2011)
3. Guo, Y., Liang, C., Liu, Y.: AQCS: adaptive queue-based chunk scheduling for P2P live streaming. In: Proceedings of the 7th IFIP-TC6 NETWORKING (2008)
4. Hei, X., Liang, C., Liang, J., Liu, Y., Ross, K.W.: Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System. In: Proc. of IPTV Workshop, International World Wide Web Conference (2006)
5. Microsoft Inc.: Smooth Streaming, http://www.iis.net/download/SmoothStreaming
6. Liu, C., Bouazizi, I., Gabbouj, M.: Parallel Adaptive HTTP Media Streaming. In: Proc. of 20th International Conference on Computer Communications and Networks (ICCCN), July 31-August 4, pp. 1–6 (2011)
7. Massoulie, L., Twigg, A., Gkantsidis, C., Rodriguez, P.: Randomized Decentralized Broadcasting Algorithms. In: 26th IEEE International Conference on Computer Communications, INFOCOM 2007, pp. 1073–1081 (May 2007)
8. Pantos, R.: HTTP Live Streaming (December 2009), http://tools.ietf.org/html/draft-pantos-http-live-streaming-01
9. Reale, R., Roverso, R., El-Ansary, S., Haridi, S.: DTL: Dynamic Transport Library for Peer-to-Peer Applications. In: Bononi, L., Datta, A.K., Devismes, S., Misra, A. (eds.) ICDCN 2012. LNCS, vol. 7129, pp. 428–442. Springer, Heidelberg (2012)
10. Roverso, R., El-Ansary, S., Gkogkas, A., Haridi, S.: Mesmerizer: An effective tool for a complete peer-to-peer software development life-cycle. In: Proceedings of SIMUTOOLS (March 2011)
11. Roverso, R., El-Ansary, S., Haridi, S.: NATCracker: NAT Combinations Matter. In: Proc. of 18th International Conference on Computer Communications and Networks, ICCCN 2009. IEEE Computer Society, SF (2009)
12. Silverston, T., Fourmaux, O.: P2P IPTV measurement: a case study of TVants. In: Proceedings of the 2006 ACM CoNEXT Conference, CoNEXT 2006, pp. 45:1–45:2. ACM, New York (2006), http://doi.acm.org/10.1145/1368436.1368490
13. Vlavianos, A., Iliofotou, M., Faloutsos, M.: BiToS: Enhancing BitTorrent for Supporting Streaming Applications. In: Proceedings of the 25th IEEE International Conference on Computer Communications, INFOCOM 2006, pp. 1–6 (April 2006)
14. Yin, H., Liu, X., Zhan, T., Sekar, V., Qiu, F., Lin, C., Zhang, H., Li, B.: LiveSky: Enhancing CDN with P2P. ACM Trans. Multimedia Comput. Commun. Appl. 6, 16:1–16:19 (2010), http://doi.acm.org/10.1145/1823746.1823750
15. Zhang, M., Zhang, Q., Sun, L., Yang, S.: Understanding the Power of Pull-Based Streaming Protocol: Can We Do Better? IEEE Journal on Selected Areas in Communications 25, 1678–1694 (2007)
16. Zhang, X., Liu, J., Li, B., Yum, Y.S.P.: CoolStreaming/DONet: a data-driven overlay network for peer-to-peer live media streaming. In: 24th Annual Joint Conference of the IEEE Computer and Communications Societies, INFOCOM 2005 (2005)