Peeking At the Future with Giant Monster Virtual Machines
Project Capstone: A Performance Study of Running Many Large Virtual Machines in Parallel
TECHNICAL WHITE PAPER

Table of Contents
Executive Summary
Introduction
Project Capstone
VMware vSphere 6.0
HPE Superdome X
IBM FlashSystem
Test Environment
Test Configuration Details
Virtual Machine Configuration
Test Workload
Monster Virtual Machine Tests
Storage Performance
Four 120-vCPU VMs
Eight 60-vCPU VMs
Sixteen 30-vCPU VMs
Under-Provisioning with Four 112-vCPU VMs
CPU Affinity vs. PreferHT
Best Practices
Conclusion
Appendix
References
Executive Summary

This technical white paper examines the extraordinary possibilities available when leading-edge servers and storage push the boundaries of current technology in terms of capacity and performance. Tests were run with many different configurations of extremely large virtual machines (known as monster virtual machines), and the results show that VMware vSphere 6 successfully ran all of the virtual machines in a high-performing and efficient manner. vSphere 6 is ready to run the largest systems and workloads of today with great performance, and it is also ready for the future, when such high-performance systems and workloads will become more common in datacenters.
Introduction

The rate of increase in computing capacity and performance is dramatic. Starting in 1975, Moore's law observed that the number of transistors in an integrated circuit doubles every two years, and this doubling of transistors has translated into chip performance also doubling roughly every two years. VMware vSphere has likewise rapidly increased its capacity to support larger virtual machines and hosts in order to keep up with this ever-growing compute capacity.

Two- and four-socket x86-based servers are commonly used today. While the number of cores per socket in these servers does not exactly follow Moore's law (because each core itself is more powerful with each generation of processors), it can be used as a rough proxy. The current generation of Intel Xeon chips has a maximum of 18 cores per socket and 36 logical processors with hyper-threading enabled. This is almost double the 10 cores per socket in Xeon chips from two generations, and about four years, before. Many four-socket servers that use current-generation Intel Xeon processors have 72 cores, but the HPE Superdome X has 16 sockets with 240 cores. By using this cutting-edge server, it is possible to have the kind of compute capacity in a single server that, following Moore's law, won't be available in a four-socket server for many years. It is a peek into the future.
Project Capstone

Project Capstone brings together VMware, HPE, and IBM in a unique opportunity to combine these industry-leading companies and their respective leading-edge technologies to build a test environment that shows the upper bounds of what is currently possible with such giant compute power. Running numerous heavy workloads in parallel on monster virtual machines on a vSphere 6, HPE Superdome X, and IBM FlashSystem configuration exemplifies the present capabilities of these combined technologies.

Project Capstone became a centerpiece of the 2015 VMware conference season. It occupied center stage at VMworld US in San Francisco as the subject of a highly anticipated Spotlight Session that included individual presentations from senior management of VMware, HP, and IBM, and VMworld 2015 Europe in Barcelona included a Capstone-themed breakout session as well. Perhaps most significantly, the VMworld floor presence at Oracle OpenWorld in San Francisco in October featured a complete demo version of the Capstone stack, including the Superdome X as well as the IBM FlashSystem.
VMware vSphere 6.0

VMware vSphere 6.0 includes new scalability features that enable it to host extremely large and performance-intensive applications. The capabilities of individual virtual machines have increased significantly from previous versions of vSphere: a single virtual machine can now have up to 128 vCPUs and 4TB of memory. While these levels of resources are not commonly required, some large applications do require and make use of resources at this scale. These are usually the last applications to be considered for virtualization due to their size, but it is now possible to move this last tier of applications into virtual machines.
HPE Superdome X

HPE Integrity Superdome X sets new, high standards for x86 availability, scalability, and performance; it is an ideal platform for critical business processing and decision-support workloads. Superdome X blends x86 efficiencies with proven HPE mission-critical innovations for a superior uptime experience, with RAS (reliability, availability, and serviceability) features not found in other x86 platforms that allow this machine to achieve five nines (99.999%) of availability. Breakthrough scalability of up to 16 sockets can handle the largest scaled-up x86 workloads. Through its unique nPars technology, HPE Superdome X increases reliability and flexibility by allowing electrically isolated environments to be built within a single enclosure [1]. It is a well-balanced architecture with powerful Xeon processors working in concert with high I/O and a large memory footprint, enabling the virtualization of large and critical applications at an unprecedented scale. Whether you want to maximize application uptime, standardize, or consolidate, HPE Superdome X helps virtualize mission-critical environments in ways never before imagined.

The HPE Superdome X is the ideal system for Project Capstone because it is uniquely suited to act as the physical platform for such a massive virtualization effort. The ability of vSphere 6 to scale up to 128 virtual CPUs can be easily realized on the HPE Superdome X because it allows massive individual virtual machines to be encapsulated on a single system while huge aggregate processing is parallelized.
IBM FlashSystem

The IBM FlashSystem family of all-flash storage platforms includes the IBM FlashSystem 900 and IBM FlashSystem V9000 arrays. Powered by IBM FlashCore technology, the FlashSystem 900 delivers the extreme performance, enterprise reliability, and operational efficiencies required to gain competitive advantage in today's dynamic marketplace. Adding to these capabilities, FlashSystem V9000 offers the advantages of software-defined storage at the speed of flash. These all-flash storage systems deliver the full capabilities of FlashCore technology's hardware-accelerated architecture, MicroLatency modules, and advanced flash management, coupled with a rich set of features found in only the most advanced enterprise storage solutions, including IBM Real-time Compression, virtualization, dynamic tiering, thin provisioning, snapshots, cloning, replication, data copy services, and high-availability configurations.

While virtualization lifts the physical restraints on the server room, the overall performance of the multi-workload server environments enabled by virtualization is held back by traditional storage, because disk-based systems struggle with the challenges posed by the resulting consolidated I/O. As virtualization has enabled the consolidation of multiple workloads onto fewer physical servers, disks simply can't keep up, and this limits the value enterprises gain from virtualization. IBM FlashSystem V9000 solves the storage challenges left unanswered by traditional storage solutions. It handles random I/O patterns with ease, and it offers the capability to virtualize all existing data storage resources and bring them together under one point of control. FlashSystem V9000 provides a comprehensive storage solution that seamlessly and automatically allocates storage resources to address every application demand. It moves data to the most efficient, cost-effective storage medium, whether flash, disk, or even tape, without disrupting application performance or data availability, and more capacity can be added without application downtime or a lengthy update process. IBM FlashSystem V9000 helps enterprises realize the full value of VMware vSphere 6.
Test Environment

The test environment was designed to allow testing of extremely large monster virtual machines. vSphere 6 provides the capability to host virtual machines of up to 128 vCPUs, which is the foundation for running larger monster virtual machines than in the past. The HPE Superdome X and IBM FlashSystem storage array provided the hardware server and storage platforms, respectively. The Superdome X used in this project had 240 cores and 480 logical threads with hyper-threading enabled. This was coupled with 20TB of extremely low latency, all-flash storage within the IBM FlashSystem array. A four-socket server was used as the client load driver system for the test bed. The diagram below shows the test bed setup.

Figure 1. Test bed hardware

Test Configuration Details

HPE Superdome X server:
- vSphere 6.0
- 16 Intel Xeon E7-2890 v2 2.8GHz CPUs (15 cores per CPU)
- 240 cores / 480 threads (hyper-threading enabled)
- 12TB of RAM
- 16Gb Fibre Channel
- 10Gb Ethernet

IBM FlashSystem 900:
- 20TB capacity
- All-flash memory
- 16Gb Fibre Channel

Client load driver server:
- 4 x Intel Xeon E7-4870 2.4GHz CPUs
- 512GB of RAM
- 10Gb Ethernet

Virtual Machine Configuration

The configuration of the virtual machines was kept constant in all tests except for the number of virtual CPUs and the related virtual NUMA topology.
In all tests, the total number of vCPUs across all virtual machines under test was equal to the number of cores or hyper-threads on the server. In the maximum-size virtual machine test case, there were four virtual machines, each with 120 vCPUs, for a total of 480 vCPUs assigned on the server. This matches the 480 hyper-threads available on the server. Table 1 shows the virtual machine counts and vCPU configurations that were tested.

Number of VMs | vCPUs per VM | Virtual Sockets per VM | Total vCPUs Assigned on Server | Total Physical Threads on Server (HT Enabled)
4  | 120 | 4 | 480 | 480
8  | 60  | 2 | 480 | 480
16 | 30  | 1 | 480 | 480

Table 1. Virtual machine configuration

The configuration parameter PreferHT was used for these tests to optimize the use of the system's hyper-threads in this high-CPU-utilization benchmark. By default, vSphere 6 schedules each vCPU on a core where no other vCPU is scheduled; in other words, vSphere will not use the second thread that hyper-threading creates on each core until a vCPU is already scheduled on every physical core in the system. The PreferHT parameter changes this and instructs the scheduler to prefer hyper-threads over separate physical cores for a virtual machine's vCPUs.

For two vCPUs, the best performance comes from using one thread on each of two physical cores, and this is the default scheduling behavior; using two threads of the same core results in lower performance because hyper-threads share most of the resources of the physical core. However, when overall system utilization is high, all threads on all cores are in use at the same time. In that case, PreferHT provides a performance advantage because each virtual machine is spread across fewer NUMA nodes, which increases NUMA memory locality. By using PreferHT, a highly utilized system becomes more efficient because the virtual machines all have more NUMA locality while still using all the logical threads on the server.
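Based on VMware's documentation (verify the option names against your vSphere version), PreferHT can be enabled per virtual machine as an advanced configuration entry in the VM's .vmx file; a host-wide equivalent exists as the Numa.PreferHT advanced system setting. A minimal sketch of the per-VM entry:

```
numa.vcpu.preferHT = "TRUE"
```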
Standard best practices for database virtual machines were used for the configuration. Each virtual machine was configured with 256GB of RAM, two pvSCSI controllers, and a single vmxnet3 virtual network adapter. The virtual machines were installed with Red Hat Enterprise Linux 6.5 as the guest operating system, and Oracle 12c was installed following the installation guide from Oracle.
Test Workload

The open source database workload DVD Store 3 was used for these tests [2]. DVD Store simulates an online store that allows customers to log in, browse products, read and submit product reviews, and purchase products. It exercises many database features, including primary keys, foreign keys, full-text indexing and searching, transactions, rollbacks, stored procedures, triggers, and both simple and complex multi-join queries. It is designed to be CPU intensive, but it also requires low latency storage in order to achieve good throughput.

DVD Store includes a driver program that simulates user activity on the database. Each simulated user steps through the full process of an order: log in, browse the DVD catalog, browse product reviews, and purchase DVDs. Performance is measured in orders per minute (OPM). DVD Store 3, recently updated from version 2, adds product reviews and a few other features designed to make the workload include the kind of product reviews commonly found on many Web sites today, and version 3 is also more CPU intensive. The increased CPU usage makes it possible for a DVD Store 3 instance to fully saturate large systems more easily than was possible with the previous version of DVD Store.

For these tests, a 40GB DVD Store 3 database instance was created on each virtual machine. The direct database driver was used on the client load system to stress the database without running a middle tier, because the focus of these tests was on the large database virtual machines. The database buffer cache was set to the same size as the database to optimize performance. The number of driver threads running against each monster virtual machine was increased until the maximum OPM began to decrease. At the point of maximum OPM, the CPU usage and other performance metrics were checked to verify that the system had reached saturation.
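The thread-ramping procedure described above can be sketched as a simple hill-climbing loop. Nothing below is from the DVD Store toolkit itself: measure_opm is a hypothetical stand-in for running the driver at a given thread count, simulated here with a synthetic throughput curve.

```python
def measure_opm(threads):
    # Hypothetical stand-in for a DVD Store 3 driver run at a given
    # thread count: a synthetic curve that rises toward saturation
    # and then degrades once the system is past its peak.
    return 10000 * threads - 80 * threads ** 2

def find_peak(step=10, max_threads=200):
    """Raise the driver thread count until orders per minute (OPM) declines."""
    best_threads, best_opm = 0, 0
    for threads in range(step, max_threads + 1, step):
        opm = measure_opm(threads)
        if opm <= best_opm:
            break  # throughput started to decrease: the peak was found
        best_threads, best_opm = threads, opm
    return best_threads, best_opm

threads, opm = find_peak()
print(f"peak at {threads} driver threads, {opm} OPM")
```

In the real tests, each measurement is a full benchmark run, and the peak found this way was cross-checked against CPU saturation on the host.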
Monster Virtual Machine Tests

A series of tests was run with different sizes of virtual machines. Each test is briefly described along with its results and analysis. The first tests discussed are all similar in that each configuration is a set of virtual machines that fully consumes all the CPU threads on the host. The configurations are four 120-vCPU VMs, eight 60-vCPU VMs, and sixteen 30-vCPU VMs. In each case, the total number of vCPUs running across all the virtual machines is 480, which equals the number of CPU threads on the host. In addition to these tests with maximum configurations, some tests were run with a virtual machine configuration that under-provisions the server, along with a test comparing CPU affinity (pinning) vs. PreferHT configurations.
Storage Performance

For these tests, the goal was to use all the CPUs on the server. To accomplish this, the amount of disk I/O was minimized by specifying a database buffer cache approximately the same size as the database on disk. This meant that after the initial warmup phase of a test run, most database queries could be satisfied without a disk I/O operation, because most of the database was cached in memory. For all CPUs to be kept busy, the disk I/O operations that do occur must have as low a latency as possible.

The IBM FlashSystem array was able to keep average disk latency below 0.3 milliseconds in all tests and was a highlight of system performance. IOPS peaked at approximately 50,000 during some of the test runs, which was well within the capabilities of the storage array. The array provided extremely low latency storage in all test scenarios. The capabilities of the IBM FlashSystem array in terms of IOPS were never pushed, but the tests did benefit greatly from the consistently low response times.
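As a back-of-envelope check (this calculation is ours, not from the paper), Little's law relates the peak IOPS and average latency quoted above to the mean number of I/Os in flight:

```python
# Little's law: mean requests in flight = arrival rate x mean latency.
iops = 50_000        # peak I/O operations per second reported
latency_s = 0.0003   # 0.3 ms average disk latency

outstanding = iops * latency_s
print(f"{outstanding:.0f} I/Os in flight on average")
```

Roughly 15 concurrent I/Os is a light load for an all-flash array, which is consistent with the observation that the array's IOPS capabilities were never pushed.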
Four 120-vCPU VMs

The maximum size of a virtual machine in vSphere 6 is 128 vCPUs. With the limit of 480 total CPU threads on the host, running four 120-vCPU VMs is the largest configuration possible while keeping all virtual machines the same size and staying under the vSphere maximum. While not many environments today run even a single virtual machine at this size, this test ran four of them on a single host under high load.

To measure the scalability of the solution at full capacity, tests were run first with just a single monster virtual machine, and then with all four virtual machines running at the same time. Maximum performance was found for each test case by increasing the number of threads in the client drivers to find the point at which the most orders per minute (OPM) were achieved. This point of maximum throughput was also found to be near CPU saturation, indicating that performance had peaked.

Figure 2. Almost linear scalability of 4 x 120-vCPU VMs on a single server

In this type of test, the ideal is linear scalability, which would be a 4-times performance increase going from a single virtual machine to four virtual machines. As Figure 2 shows, the four 120-vCPU virtual machines achieved 3.7 times the throughput of the single 120-vCPU virtual machine, which is 92% of perfect linear scalability. Storage performed at a very high level, maintaining 0.3 milliseconds of latency and 20,000 IOPS during the test.
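The scalability figure is simply the measured speedup divided by the VM count. A minimal check using the numbers from Figure 2:

```python
def scaling_efficiency(single_vm_opm, multi_vm_opm, vm_count):
    """Fraction of perfect linear scaling achieved by vm_count VMs."""
    return (multi_vm_opm / single_vm_opm) / vm_count

# Four 120-vCPU VMs delivered 3.7x the throughput of one VM.
eff = scaling_efficiency(1.0, 3.7, 4)
print(f"{eff:.1%}")  # -> 92.5%, reported as 92% in the paper
```

The same calculation applied to the sixteen-VM result later in the paper (14.3x the throughput of a single VM) gives 14.3 / 16, or about 89%.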
Eight 60-vCPU VMs

For the next set of tests, the virtual machines were adjusted down to 60 vCPUs and cloned so that there was a total of eight 60-vCPU VMs. In this test, each virtual machine had the same number of vCPUs as each Superdome X compute blade. It is by no means a requirement to match virtual machine size to the underlying hardware so specifically, but doing so can allow for optimized results in some environments.

Figure 3. Scalability of 60-vCPU VMs

The total throughput achieved with eight 60-vCPU VMs was the highest of any of the tests conducted. The IBM FlashSystem array also continued to deliver impressive performance, with latency under 0.3 milliseconds and an average of 16,000 IOPS.
Sixteen 30-vCPU VMs

This test consisted of running sixteen 30-vCPU VMs. Each of these 30-vCPU VMs was essentially using all the threads on one server socket because of the use of the PreferHT parameter. This large number of monster VMs running at the same time still resulted in very good total throughput and excellent scalability moving from a single virtual machine to all sixteen. The throughput of the sixteen virtual machines was 14.3 times that of a single VM, or 89% of perfect linear scalability. Once again, disk latency remained below 0.3 milliseconds, with average IOPS of 13,000.
Under-Provisioning with Four 112-vCPU VMs

In the other tests covered in this paper, the server was fully committed, with a vCPU allocated for every thread on the host; this means that all threads are used for virtual machine vCPUs. In most environments this still leaves plenty of CPU available, because not all virtual machines run at full CPU utilization. In an environment where all assigned vCPUs are at 100% usage, however, there is nothing left over for the ESXi hypervisor to use for its own functions, which include virtual networking and disk I/O handling. The hypervisor then competes directly with the virtual machines for CPU. In this case, the performance of the virtual machines can actually be improved by reducing the number of vCPUs to leave some CPU threads available on the host for the use of ESXi.

In this specific configuration, while running the 4 x 120-vCPU virtual machines with the DVD Store 3 workload, network traffic was about 900 megabits per second (Mb/s) transmitted and 200 Mb/s received, and an average of 30,000 disk IOPS was also being processed. In order to give the host some CPU resources to handle this work, the number of vCPUs for each of the four virtual machines was reduced from 120 to 112. This leaves one core (two hyper-threads) per socket unassigned to any virtual machine.
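The arithmetic behind the 112-vCPU sizing can be sketched as follows (the topology constants come from the test configuration earlier in the paper):

```python
SOCKETS = 16
CORES_PER_SOCKET = 15
THREADS_PER_CORE = 2   # hyper-threading enabled
VMS = 4

total_threads = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE   # 480
# Leave one core (two hyper-threads) per socket free for ESXi.
reserved_threads = SOCKETS * THREADS_PER_CORE                   # 32
usable_threads = total_threads - reserved_threads               # 448

vcpus_per_vm = usable_threads // VMS
print(vcpus_per_vm)  # -> 112
```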
Figure 4. 4 x 120 vCPU vs. 4 x 112 vCPU

The results show that overall throughput increased significantly, from 448 thousand to 524 thousand OPM. The gain in performance with the smaller virtual machines is due to the reduction in contention for resources between the ESXi hypervisor and the virtual machines that occurs in this extreme testing scenario, where all CPU resources were allocated and fully utilized.
CPU Affinity vs. PreferHT

It is possible to control which CPUs a virtual machine uses by means of the CPU affinity setting. This allows an administrator to override the ESXi scheduler and restrict a virtual machine to specific physical cores: the vCPUs used by the virtual machine are pinned to those cores. In certain benchmarking scenarios, the use of CPU affinity has shown small increases in performance. Even in these relatively uncommon cases, its use is not recommended, because of the high administrative effort and the potential for poor performance if the setting is not updated as the environment changes, or if the CPU affinity setting is done incorrectly.

Using the Capstone testing environment, a test comparing CPU affinity and PreferHT was conducted to measure which configuration performed better. It was found that PreferHT, which allows the ESXi hypervisor to make all vCPU scheduling decisions, outperformed the CPU affinity configuration by 4%.
Figure 5. Using PreferHT performed slightly better than setting CPU affinity

Best Practices

Running many very large virtual machines on an even larger server makes it all the more important to follow monster virtual machine best practices:

- Consider the server's NUMA architecture when deciding what size to make the virtual machines. When creating virtual machines, make sure the virtual NUMA sockets match the physical NUMA architecture of the host as closely as possible. For more information, see "Using NUMA Systems with ESXi" [3].
- Size and configure storage with enough performance to match the large performance capability of the monster virtual machines. A large server with underpowered storage will be limited by the storage.
- Network performance can quickly become an issue if the traffic for the large virtual machines is not correctly spread across multiple NICs. Combining a number of high-performance workloads on a single host results in high network traffic that will most likely need multiple network connections to avoid a bottleneck.
- In extremely high CPU utilization scenarios, including benchmark tests, it can be better to leave a few CPU cores unassigned to virtual machines to give the ESXi hypervisor the resources it needs for its own functions.
- Do not use CPU affinity, sometimes referred to as CPU pinning, because it usually does not result in a significant increase in performance.
- In some extremely high utilization scenarios, use the PreferHT setting to get more total performance from a system, but note that using this setting could reduce individual virtual machine performance.
Conclusion

Project Capstone has shown that vSphere 6 is capable of running multiple giant monster virtual machines today on some of the world's most capable servers and storage. The HPE Superdome X and the super-low-latency IBM FlashSystem storage were chosen because of their tremendous performance capabilities, their ease of configuration and use, and their overall complementary stature to vSphere 6. The unique properties of this stack allowed the testing team to push the limits of virtualized infrastructure to never-before-seen levels. As stated in the media collateral "Project Capstone, Driving Oracle to Soar Beyond the Clouds" (see Appendix), this example infrastructure stack is possible today and shows that, as higher core counts and all-flash storage arrays become more common in the future, a VMware vSphere-based approach will provide the needed scalability and capacity.

This collaboration of VMware, HPE, and IBM shows that applications of the largest sizes can run on a vSphere virtual infrastructure. The limiting factor in most datacenters today is the hardware, but when using the latest technology available, it is possible to lift these limits and bring the flexibility and capabilities of virtualized infrastructure to all corners of the datacenter. This collaborative achievement between three of the world's most recognized computing companies has solidified the proposition of comprehensive virtualization that VMware has held for a number of years. Very simply put, all applications and databases, regardless of their processing, memory, networking, or throughput demands, are candidates for a virtualized infrastructure. VMware, HPE, and IBM built Project Capstone with leading-edge components as a foundation to prove that 100% virtualization is a reality in even the largest compute environments.
Appendix

An initial blog for Project Capstone was previously published [4]:
http://blogs.vmware.com/vsphere/2015/08/vmworld-us-2015-spotlight-session-project-capstone-a-collaboration-between-vmw-hp-ibm-no-application-left-behind.html

A short video on Project Capstone that gives some highlights from the project is available online [5]:
https://www.youtube.com/watch?v=X4SRxl04uQ0

Project Capstone was presented at VMworld 2015 in San Francisco with executives from all three companies participating. A video of this presentation is available online [6]:
https://www.youtube.com/watch?v=O3BTvP46i4c

References

[1] Hewlett-Packard Development Company, L.P. (2010) HP nPartitions (nPars), for Integrity and HP 9000 midrange. http://www8.hp.com/h20195/v2/GetPDF.aspx/c04123352.pdf
[2] Todd Muirhead and Dave Jaffe. (2015, July) DVD Store 3. http://www.github.com/dvdstore/ds3
[3] VMware, Inc. (2015) Using NUMA Systems with ESXi. http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.resmgmt.doc/GUID-7E0C6311-5B27-408E-8F51-E4F1FC997283.html
[4] Don Sullivan. (2015, August) VMworld US 2015 Spotlight Session: Project Capstone, a Collaboration between VMW, HP & IBM. http://blogs.vmware.com/vsphere/2015/08/vmworld-us-2015-spotlight-session-project-capstone-a-collaboration-between-vmw-hp-ibm-no-application-left-behind.html
[5] IBM Systems ISVs. (2015, November) Project Capstone - Pushing the performance limits of virtualization. https://www.youtube.com/watch?v=X4SRxl04uQ0
[6] VMworld. (2015, November) VMworld 2015: VAPP6952-S - VMware Project Capstone, a Collaboration of VMware, HP, and IBM. https://www.youtube.com/watch?v=O3BTvP46i4c
[7] VMware, Inc. (2015) Configuration Maximums vSphere 6.0. https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf

VMware, Inc. 3401 Hillview Avenue, Palo Alto, CA 94304 USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com

Copyright 2016 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Date: 27 January 2016
Comments on this document: https://communities.vmware.com/docs/DOC-30846

About the Authors
Leo Demers, Mission Critical Product Manager, HPE
Kristy Ortega, EcoSystem Offering Manager, IBM
Rawley Burbridge, FlashSystem Corporate Solution Architect, IBM
Todd Muirhead, Staff Performance Engineer, VMware
Don Sullivan, Product Line Marketing Manager for Business Critical Applications, VMware

Acknowledgements
The authors thank Mark Lohmeyer, Michael Kuhn, Randy Meyer, Drew Sher, Rawley Burbridge, Bruce Herndon, Jim Britton, Reza Taheri, Juan Garcia-Rovetta, Michelle Tidwell, and Joseph Dieckhans.
