Peeking At the Future with Giant Monster Virtual Machines
Project Capstone: A Performance Study of Running Many Large Virtual Machines in Parallel
TECHNICAL WHITE PAPER

Table of Contents
- Executive Summary
- Introduction
- Project Capstone
- VMware vSphere 6.0
- HPE Superdome X
- IBM FlashSystem
- Test Environment
- Test Configuration Details
- Virtual Machine Configuration
- Test Workload
- Monster Virtual Machine Tests
- Storage Performance
- Four 120-vCPU VMs
- Eight 60-vCPU VMs
- Sixteen 30-vCPU VMs
- Under-Provisioning with Four 112-vCPU VMs
- CPU Affinity vs. PreferHT
- Best Practices
- Conclusion
- Appendix
- References
Executive Summary
This technical white paper examines the extraordinary possibilities available when leading-edge servers and storage push the boundaries of current technology in terms of capacity and performance. Tests were run with many different configurations of extremely large virtual machines (known as monster virtual machines), and the results show that VMware vSphere 6 successfully ran all of the virtual machines in a high-performing and efficient manner. vSphere 6 is ready to run the largest systems and workloads of today with great performance. vSphere 6 is also ready for the future, able to take on the high-performance systems and workloads that will become more common in data centers.
Introduction
The rate of increase in the capacity and performance of computing is dramatic. Starting in 1975, Moore's law observed that the number of transistors in an integrated circuit doubles every two years. This doubling of transistors has translated into chip performance also doubling every two years. VMware vSphere has also rapidly increased its capacity to support larger virtual machines and hosts to keep up with this compute capacity that increases over time.

Two- and four-socket x86-based servers are commonly used today. While the number of cores per socket in these servers does not exactly follow Moore's law (because each core itself is more powerful with each generation of processors), it can be used as a rough proxy. The current generation of Intel Xeon chips has a maximum of 18 cores per socket and 36 logical processors with hyper-threading enabled. This is almost double the 10 cores per socket in Xeon chips from two generations and about four years before. Many four-socket servers that use the current generation of Intel Xeon processors have 72 cores, but the HPE Superdome X has 16 sockets with 240 cores. By using this cutting-edge server, it is possible to have the type of compute capacity in a single server that, following Moore's law, won't be available in a four-socket server for many years. It is a peek into the future.
Project Capstone
Project Capstone brings together VMware, HPE, and IBM in a unique opportunity to combine each of these industry-leading companies and their respective leading-edge technologies to build a test environment that shows the upper bounds of what is currently possible with such giant compute power. Running numerous heavy workloads in parallel on monster virtual machines on a configuration of vSphere 6, HPE Superdome X, and IBM FlashSystem exemplifies the present capabilities of these combined technologies.

Project Capstone became a centerpiece of the 2015 VMware conference season as it occupied center stage at VMworld US in San Francisco as the subject of a highly anticipated Spotlight Session that included individual presentations from senior management of VMware, HP, and IBM. VMworld 2015 Europe in Barcelona included a Capstone-themed breakout session as well. But perhaps most significantly, VMware's floor presence at Oracle OpenWorld in San Francisco in October featured a complete demo version of the Capstone stack, including the Superdome X as well as the IBM FlashSystem.
VMware vSphere 6.0
VMware vSphere 6.0 includes new scalability features that enable it to host extremely large and performance-intensive applications. The capabilities of individual virtual machines have increased significantly from previous versions of vSphere. A single virtual machine can now have up to 128 vCPUs and 4 TB of memory. While these levels of resources are not commonly required, there are some large applications that do require and make use of resources at this scale. These are usually the last applications to be considered for virtualization due to their size, but it is now possible to move this last tier of applications into virtual machines.
HPE Superdome X
HPE Integrity Superdome X sets new, high standards for x86 availability, scalability, and performance; it is an ideal platform for critical business processing and decision support workloads. Superdome X blends x86 efficiencies with proven HPE mission-critical innovations for a superior uptime experience, with RAS (reliability, availability, and serviceability) features not found in other x86 platforms, allowing this machine to achieve five nines (99.999%) of availability. Breakthrough scalability of up to 16 sockets can handle the largest scaled-up x86 workloads. Through the unique nPars technology, HPE Superdome X increases reliability and flexibility by allowing electrically isolated environments to be built within a single enclosure [1]. It is a well-balanced architecture with powerful Xeon processors working in concert with high I/O and a large memory footprint that enables the virtualization of large and critical applications at an unprecedented scale. Whether you want to maximize application uptime, standardize, or consolidate, HPE Superdome X helps virtualize mission-critical environments in ways never before imagined.

The HPE Superdome X is the ideal system for Project Capstone because it is uniquely suited to act as the physical platform for such a massive virtualization effort. The ability of vSphere 6 to scale up to 128 virtual CPUs can be easily realized on the HPE Superdome X because it allows massive individual virtual machines to be encapsulated on a single system while huge aggregate processing is parallelized.
IBM FlashSystem
The IBM FlashSystem family of all-flash storage platforms includes IBM FlashSystem 900 and IBM FlashSystem V9000 arrays. Powered by IBM FlashCore technology, the FlashSystem 900 delivers the extreme performance, enterprise reliability, and operational efficiencies required to gain competitive advantage in today's dynamic marketplace. Adding to these capabilities, FlashSystem V9000 offers the advantages of software-defined storage at the speed of flash. These all-flash storage systems deliver the full capabilities of FlashCore technology's hardware-accelerated architecture, MicroLatency modules, and advanced flash management, coupled with a rich set of features found in only the most advanced enterprise storage solutions, including IBM Real-time Compression, virtualization, dynamic tiering, thin provisioning, snapshots, cloning, replication, data copy services, and high-availability configurations.

While virtualization lifts the physical restraints on the server room, the overall performance of the multi-workload server environments enabled by virtualization is held back by traditional storage, because disk-based systems struggle with the challenges posed by the resulting I/O. As virtualization has enabled the consolidation of multiple workloads onto fewer physical servers, disks simply can't keep up, and this limits the value enterprises gain from virtualization.

IBM FlashSystem V9000 solves the storage challenges left unanswered by traditional storage solutions. It handles random I/O patterns with ease, and it offers the capability to virtualize all existing data storage resources and bring them together under one point of control. FlashSystem V9000 provides a comprehensive storage solution that seamlessly and automatically allocates storage resources to address every application demand. It moves data to the most efficient, cost-effective storage medium (from flash, to disk, and even to tape) without disrupting application performance or data availability, and more capacity can be added without application downtime or a lengthy update process. IBM FlashSystem V9000 helps enterprises realize the full value of VMware vSphere 6.
Test Environment
The test environment was designed to allow testing of extremely large monster virtual machines. vSphere 6 provides the capability to host virtual machines of up to 128 vCPUs. This is the foundation for running larger monster virtual machines than in the past. The HPE Superdome X and IBM FlashSystem storage array provided the hardware server and storage platforms, respectively. The Superdome X used in this project had 240 cores and 480 logical threads with hyper-threading enabled. This was coupled with 20 TB of extremely low latency, all-flash storage within the IBM FlashSystem array. A four-socket server was used as a client load driver system for the testbed. The diagram below shows the testbed setup.

Figure 1. Testbed hardware

Test Configuration Details
HPE Superdome X server:
- vSphere 6.0
- 16 Intel Xeon E7-2890 v2 2.8 GHz CPUs (15 cores per CPU)
- 240 cores / 480 threads (hyper-threading enabled)
- 12 TB of RAM
- 16 Gb Fibre Channel
- 10 Gb Ethernet

IBM FlashSystem 900:
- 20 TB capacity
- All-flash memory
- 16 Gb Fibre Channel

Client load driver server:
- 4 x Intel Xeon E7-4870 2.4 GHz
- 512 GB of RAM
- 10 Gb Ethernet

Virtual Machine Configuration
The configuration of the virtual machines was kept constant in all tests except for the number of virtual CPUs and the related virtual NUMA.
In all tests, the total number of vCPUs across all virtual machines under test was equal to the number of cores or hyper-threads on the server. In the maximum size virtual machine test case, there were four virtual machines, each with 120 vCPUs, for a total of 480 vCPUs assigned on the server. This matches the 480 hyper-threads available on the server. Table 1 shows the number of virtual machines, with their vCPU configurations, that were tested.

Number of VMs | vCPUs per VM | Virtual Sockets per VM | Total vCPUs Assigned on Server | Total Physical Threads on Server with HT Enabled
4             | 120          | 4                      | 480                            | 480
8             | 60           | 2                      | 480                            | 480
16            | 30           | 1                      | 480                            | 480

Table 1. Virtual machine configuration

The configuration parameter PreferHT was used for these tests to optimize the use of the system's hyper-threads in this high CPU utilization benchmark. By default, vSphere 6 schedules each vCPU on a core where another vCPU is not scheduled. In other words, vSphere will not use the second thread that hyper-threading creates on each core until a vCPU is already scheduled on every physical core on the system. Using the PreferHT parameter changes this and instructs the scheduler for a virtual machine to prefer hyper-threads over physical cores.

The best performance for two vCPUs would be to use one thread from each of two physical cores, and this is the default scheduling behavior. Using two threads of the same core results in lower performance because hyper-threads share most of the resources of the physical core. However, in the case of high overall system utilization, all threads on all cores are in use at the same time. PreferHT then provides a performance advantage because each virtual machine is spread across fewer NUMA nodes, which results in increased NUMA memory locality. By using PreferHT, a highly utilized system becomes more efficient because the virtual machines all have more NUMA locality while still using all the logical threads on the server.
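As an illustration of how PreferHT is typically enabled (these advanced-setting names come from VMware's documentation, not from this paper), the option can be set per virtual machine in its .vmx file, or host-wide as an ESXi advanced system setting:

```
# Per-VM advanced setting in the virtual machine's .vmx file:
numa.vcpu.preferHT = "TRUE"

# Host-wide equivalent in ESXi advanced system settings:
# Numa.PreferHT = 1
```

The per-VM form is usually preferred, since it confines the scheduling change to the virtual machines that actually benefit from it.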
Standard best practices for database virtual machines were used for the configuration. Each virtual machine was configured with 256 GB of RAM, two pvSCSI controllers, and a single vmxnet3 virtual network adapter. The virtual machines were installed with Red Hat Enterprise Linux 6.5 as the guest operating system. Oracle 12c was installed following the installation guide from Oracle.
Test Workload
The open-source database workload DVD Store 3 was used for these tests [2]. DVD Store simulates an online store that allows customers to log in, browse products, read and submit product reviews, and purchase products. It uses many database features, including primary keys, foreign keys, full-text indexing and searching, transactions, rollbacks, stored procedures, triggers, and simple and complex multi-join queries. It is designed to be CPU intensive, but it also requires low latency storage in order to achieve good throughput.

DVD Store includes a driver program that simulates user activity on the database. Each simulated user steps through the full process of an order: log in, browse the DVD catalog, browse product reviews, and purchase DVDs. Performance is measured in orders per minute (OPM). DVD Store 3, which was recently updated from version 2, adds product reviews and a few other features designed to make the workload include the typical product reviews commonly found today on many websites, and version 3 is also more CPU intensive. The increased CPU usage makes it possible for a DVD Store 3 instance to fully saturate larger systems more easily than was possible with the previous version of DVD Store.

For these tests, a 40 GB DVD Store 3 database instance was created on each virtual machine. The direct database driver was used on the client load system to stress the database without running a middle tier, because the focus of these tests was on the large database virtual machines. The database buffer cache was set to the same size as the database to optimize performance. The number of driver threads running against each monster virtual machine was increased until the maximum OPM began to decrease. At the point of maximum OPM, the CPU usage and other performance metrics were checked to verify that the system had reached saturation.
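The ramp-up procedure described above (increase driver threads until OPM stops improving) can be sketched as a simple search loop. The `measure_opm` callback stands in for an actual DVD Store driver run and is purely illustrative:

```python
def find_saturation_point(measure_opm, start=8, step=8, max_threads=512):
    """Increase the driver thread count until throughput (OPM) stops rising."""
    best_threads, best_opm = start, measure_opm(start)
    threads = start + step
    while threads <= max_threads:
        opm = measure_opm(threads)
        if opm <= best_opm:          # throughput has peaked; stop ramping
            break
        best_threads, best_opm = threads, opm
        threads += step
    return best_threads, best_opm

# Stand-in throughput curve that peaks at 64 threads (illustrative only).
def fake_curve(threads):
    return threads * 1000 if threads <= 64 else 64000 - (threads - 64) * 500

print(find_saturation_point(fake_curve))  # → (64, 64000)
```

In the actual tests, each `measure_opm` step corresponds to a full benchmark run followed by a check of CPU usage to confirm saturation.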
Monster Virtual Machine Tests
A series of tests was run with different sizes of virtual machines. Each test is briefly described along with its results and analysis. The first tests discussed are all similar in that each configuration is a set of virtual machines that fully consumes all the CPU threads on the host. The configurations are four 120-vCPU VMs, eight 60-vCPU VMs, and sixteen 30-vCPU VMs. In each case, the total number of vCPUs running across all the virtual machines is 480, which equals the number of CPU threads on the host. In addition to these tests with maximum configurations, some tests were run with a virtual machine configuration that under-provisions the server, along with a test comparing CPU affinity (pinning) vs. PreferHT configurations.
Storage Performance
For these tests, the goal was to use all the CPUs on the server. In order to accomplish this, the amount of disk I/O was minimized by specifying a database buffer cache that was approximately the same size as the database on disk. This meant that, after the initial warmup phase of running the test, most database queries could be satisfied without a disk I/O operation because most of the database was cached in memory. In order for all CPUs to be kept busy, the disk I/O operations that do occur must be as low latency as possible. The IBM FlashSystem array was able to keep average disk latency below 0.3 milliseconds in all tests and was a highlight of system performance. IOPS peaked at approximately 50,000 during some of the test runs, which was well within the capabilities of the storage array. The array provided extremely low latency storage in all test scenarios. The capabilities of the IBM FlashSystem array in terms of IOPS were never pushed, but the tests did benefit greatly from the consistently low response times.
Four 120-vCPU VMs
The maximum size virtual machine in vSphere 6 is 128 vCPUs. So, with the limit of 480 total CPU threads on the host, running four 120-vCPU VMs is the maximum size possible while keeping all virtual machines the same size and staying under the vSphere maximum. While not many environments today have a single virtual machine running at this size, this test ran four of them on a single host under high load.

To measure the scalability of the solution at full capacity, tests were run first with just a single monster virtual machine. In additional tests, all four virtual machines were run at the same time. Maximum performance was found for each test case by increasing the number of threads in the client drivers to find the point at which the most orders per minute (OPM) were achieved. This point of maximum throughput was also found to be at near CPU saturation, indicating that performance had peaked.
Figure 2. Almost linear scalability of 4 x 120-vCPU VMs on a single server

In this type of test, the ideal is linear scalability. This would be a 4 times performance gain going from a single virtual machine to four virtual machines. As Figure 2 shows, the four 120-vCPU virtual machines achieved 3.7 times the throughput of the single 120-vCPU virtual machine, which is 92% of perfect linear scalability. Storage performed at a very high level, maintaining 0.3 milliseconds latency and 20,000 IOPS during the test.
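The scaling-efficiency percentages quoted in these results follow directly from the reported speedups; a small helper (not part of the original test harness) makes the arithmetic explicit:

```python
def scaling_efficiency(speedup: float, vm_count: int) -> float:
    """Percent of perfect linear scaling achieved by vm_count VMs."""
    return speedup / vm_count * 100

# Four 120-vCPU VMs reached 3.7x the throughput of one VM.
print(round(scaling_efficiency(3.7, 4)))    # → 92 (% of linear)
# Sixteen 30-vCPU VMs reached 14.3x the throughput of one VM.
print(round(scaling_efficiency(14.3, 16)))  # → 89
```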
Eight 60-vCPU VMs
For the next set of tests, the virtual machines were adjusted down to 60 vCPUs and cloned so that there was a total of eight 60-vCPU VMs. In this test, each virtual machine had the same number of vCPUs as each Superdome X compute blade. It is by no means a requirement to match virtual machine size to the underlying hardware so specifically, but this can allow for optimized results in some environments.
Figure 3. Scalability of 60-vCPU VMs

The total throughput achieved with eight 60-vCPU VMs was the highest of any of the tests conducted. The IBM FlashSystem array also continued to achieve impressive performance, with latency under 0.3 milliseconds and an average of 16,000 IOPS.
Sixteen 30-vCPU VMs
This test consisted of running sixteen 30-vCPU VMs. Each of these 30-vCPU VMs was essentially using all the threads on a server socket because of the use of the PreferHT parameter. This large number of monster VMs running at the same time still resulted in very good total throughput and excellent scalability moving from a single virtual machine to all sixteen. The throughput of the sixteen virtual machines was 14.3 times that of a single VM, or 89% of perfect linear scalability. Once again, disk latency remained below 0.3 milliseconds, with average IOPS of 13,000.
Under-Provisioning with Four 112-vCPU VMs
In the other tests covered in this paper, the server has been fully committed, with a vCPU allocated for every thread on the host. This means that all threads will be used for virtual machine vCPUs. In most environments, this still leaves plenty of CPU available because not all virtual machines are running at full CPU utilization. In an environment where all assigned vCPUs are at 100% usage, however, there isn't anything left over for the ESXi hypervisor to use for its own functions, including virtual networking and disk I/O handling. The hypervisor then competes directly with the virtual machines for CPU. In this case, performance of the virtual machines can actually be improved by reducing the number of vCPUs to leave some CPU threads available on the host for the use of ESXi.

In this specific configuration, while running the four 120-vCPU virtual machines with the DVD Store 3 workload, the network traffic is about 900 megabits per second (Mb/s) transmitted and 200 Mb/s received, and an average of 30,000 disk IOPS is also being processed. In order to allow the host to have some CPU resources available to handle this work, the number of vCPUs for each of the four virtual machines was reduced from 120 to 112. This leaves one core (two hyper-threads) per socket unassigned to a virtual machine.
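The arithmetic behind choosing 112 vCPUs can be checked against the host topology described earlier (16 sockets, 15 cores per socket, 2 threads per core):

```python
SOCKETS = 16
CORES_PER_SOCKET = 15
THREADS_PER_CORE = 2                      # hyper-threading enabled
host_threads = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE  # 480

vms, vcpus_per_vm = 4, 112
assigned = vms * vcpus_per_vm             # 448 vCPUs across the four VMs
spare_threads = host_threads - assigned   # threads left free for ESXi
spare_cores = spare_threads // THREADS_PER_CORE

# 32 spare threads = 16 spare cores = 1 free core per socket
print(spare_threads, spare_cores, spare_cores // SOCKETS)  # → 32 16 1
```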
Figure 4. 4 x 120 vCPU vs. 4 x 112 vCPU

The results show that overall throughput increased significantly, from 448,000 to 524,000 OPM. The gain in performance with smaller virtual machines is due to the reduction in contention for resources between the ESXi hypervisor and the virtual machines that occurs in this extreme testing scenario, when all CPU resources were allocated and fully utilized.
CPU Affinity vs. PreferHT
It is possible to control the CPUs that are used by a virtual machine with the CPU affinity setting. This allows an administrator to override the ESXi scheduler and only allow a virtual machine to use specific physical cores; the vCPUs used by the virtual machine are pinned to those cores. In certain benchmarking scenarios, the use of CPU affinity has shown small increases in performance. Even in these relatively uncommon cases, its use is not recommended because of the high administrative effort and the potential for poor performance if the setting is not updated as changes in the environment occur, or if the CPU affinity setting is done incorrectly.

Using the Capstone testing environment, a test with CPU affinity and PreferHT was conducted to measure which configuration performed better. It was found that PreferHT, which allows the ESXi hypervisor to make all vCPU scheduling decisions, outperformed the CPU affinity configuration by 4%.
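For illustration (the .vmx key below is a standard vSphere advanced setting, not quoted from this paper), a CPU affinity configuration pins a VM's vCPUs to an explicit list of logical CPUs, which is exactly the manual bookkeeping that the PreferHT approach avoids:

```
# Pin this VM's vCPUs to logical CPUs 0-119. This list must be
# maintained by hand as VMs or hardware change, which is one reason
# affinity is not recommended over scheduler-driven placement.
sched.cpu.affinity = "0-119"
```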
Figure 5. Using PreferHT performed slightly better than setting CPU affinity

Best Practices
Running many very large virtual machines on an even larger server makes it more important to follow monster virtual machine best practices.
- Consider the server's NUMA architecture when deciding what size to make the virtual machines. When creating virtual machines, make sure the virtual NUMA sockets match the physical NUMA architecture of the host as closely as possible. For more information, see "Using NUMA Systems with ESXi" [3].
- Size and configure storage with enough performance to match the large performance capability of the monster virtual machines. A large server with underpowered storage will be limited by the storage.
- Network performance can quickly become an issue if the traffic for the large virtual machines is not correctly spread across multiple NICs. Combining a number of high performance workloads on a single host will also result in high network traffic that will most likely need multiple network connections to avoid a bottleneck.
- In extremely high CPU utilization scenarios, including benchmark tests, it can be better to leave a few CPU cores unassigned to virtual machines to give the ESXi hypervisor needed resources for its functions.
- Do not use CPU affinity, sometimes referred to as CPU pinning, because it usually does not result in a big increase in performance.
- In some extremely high utilization scenarios, use the PreferHT setting to get more total performance from a system, but note that this setting could reduce individual virtual machine performance.
Conclusion
Project Capstone has shown that vSphere 6 is capable of running multiple giant monster virtual machines today on some of the world's most capable servers and storage. The HPE Superdome X and super low latency IBM FlashSystem storage were chosen because of their tremendous performance capabilities, their ease of configuration and use, and their overall complementary stature to vSphere 6. The unique properties of this stack allowed the testing team to push the limits of virtualized infrastructure to never-before-seen levels.

As stated in the media collateral "Project Capstone, Driving Oracle to Soar Beyond the Clouds" (see Appendix), this example infrastructure stack is possible today and shows that, as higher core counts and all-flash storage arrays become more common in the future, a VMware vSphere-based approach will provide the needed scalability and capacity.

This collaboration of VMware, HPE, and IBM shows that applications of the largest sizes can run on a vSphere virtual infrastructure. The limiting factor in most data centers today is the hardware, but when using the latest technology available, it is possible to lift these limits and bring the flexibility and capabilities of virtualized infrastructure to all corners of the data center. This collaborative achievement between three of the world's most recognized computing companies has solidified the proposition of comprehensive virtualization that VMware has held for a number of years. Very simply put, all applications and databases, regardless of their processing, memory, networking, or throughput demands, are candidates for a virtualized infrastructure. VMware, HPE, and IBM built Project Capstone with leading-edge components used as a foundation to prove that 100% virtualization is a reality in even the largest compute environments.
Appendix
An initial blog post for Project Capstone was previously published [4]:
http://blogs.vmware.com/vsphere/2015/08/vmworld-us-2015-spotlight-session-project-capstone-a-collaboration-between-vmw-hp-ibm-no-application-left-behind.html

A short video on Project Capstone that gives some highlights from the project is available online [5]:
https://www.youtube.com/watch?v=X4SRxl04uQ0

Project Capstone was presented at VMworld 2015 in San Francisco with executives from all three companies participating. A video of this presentation is available online [6]:
https://www.youtube.com/watch?v=O3BTvP46i4c

References
[1] Hewlett-Packard Development Company, L.P. (2010) HP nPartitions (nPars), for Integrity and HP 9000 midrange. http://www8.hp.com/h20195/v2/GetPDF.aspx/c04123352.pdf
[2] Todd Muirhead and Dave Jaffe. (2015, July) DVD Store 3. http://www.github.com/dvdstore/ds3
[3] VMware, Inc. (2015) Using NUMA Systems with ESXi. http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.resmgmt.doc/GUID-7E0C6311-5B27-408E-8F51-E4F1FC997283.html
[4] Don Sullivan. (2015, August) VMworld US 2015 Spotlight Session: Project Capstone, a Collaboration between VMW, HP & IBM. http://blogs.vmware.com/vsphere/2015/08/vmworld-us-2015-spotlight-session-project-capstone-a-collaboration-between-vmw-hp-ibm-no-application-left-behind.html
[5] IBM Systems ISVs. (2015, November) Project Capstone - Pushing the performance limits of virtualization. https://www.youtube.com/watch?v=X4SRxl04uQ0
[6] VMworld. (2015, November) VMworld 2015: VAPP6952-S - VMware Project Capstone, a Collaboration of VMware, HP, and IBM. https://www.youtube.com/watch?v=O3BTvP46i4c
[7] VMware, Inc. (2015) Configuration Maximums, vSphere 6.0. https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf

VMware, Inc., 3401 Hillview Avenue, Palo Alto, CA 94304 USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com
Copyright 2016 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
Date: 27 January 2016. Comments on this document: https://communities.vmware.com/docs/DOC-30846

About the Authors
Leo Demers, Mission Critical Product Manager, HPE
Kristy Ortega, EcoSystem Offering Manager, IBM
Rawley Burbridge, FlashSystem Corporate Solution Architect, IBM
Todd Muirhead, Staff Performance Engineer, VMware
Don Sullivan, Product Line Marketing Manager for Business Critical Applications, VMware

Acknowledgements
The authors thank Mark Lohmeyer, Michael Kuhn, Randy Meyer, Drew Sher, Rawley Burbridge, Bruce Herndon, Jim Britton, Reza Taheri, Juan Garcia-Rovetta, Michelle Tidwell, and Joseph Dieckhans.