Horizontal Decoupling of Cloud Orchestration for Stability



To achieve economies of scale, a cloud orchestration system should be capable of managing a very large farm of hardware servers. However, even with the most advanced configuration and management tools, the field has learned through trial and error that the distribution scale of a cloud orchestrator must not grow too large. For example, VMware, probably among the most experienced players in the trade, stipulates a rule-of-thumb upper bound for its vRealize orchestrator: no more than 1,000 servers per vRealize instance, even when the software is installed on top-quality hardware. Scaling beyond that level makes cloud operation and maintenance unstable and sharply increases its cost. Recent advances in highly efficient CPU virtualization by Docker have further triggered orders-of-magnitude growth in the number of micro-servicing CPUs, which only worsens the scalability problem of cloud orchestration. The current poor scalability of cloud orchestration means that today's clouds exist as small, isolated patches and therefore cannot efficiently tap the economies of scale that clouds promise.

The essential problem behind poor scalability in cloud orchestration is that all cloud orchestrators, whether commercial offerings or open-source projects, conventionally evolve from a horizontally tightly coupled architecture. A horizontally tightly coupled orchestrator is a collection of software components that are interwoven with host knowledge. By "host knowledge interwoven" we mean that the software components of a cloud orchestrator know the existence, roles, and duties of one another from the moment they are installed on a farm of server hosts, and throughout their entire remaining lifecycles. When a farm grows large, some queues of events and messages inevitably become long; write-lock mechanisms for consistency protection and copy-on-write database accesses build up and slow responsiveness; and an occasional failure at one point in the farm, even a benign timeout, is highly likely to pull down other knowledge-interwoven parts. In fact, every cloud servicing or hosting provider of any size has to keep a human operation and maintenance team on guard over the farm 24/7, essentially playing the role of firefighters. The sketch below illustrates this coupling pattern.
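A minimal, purely illustrative sketch of what "host knowledge interwoven" looks like in code (this is not taken from any real orchestrator; all names are hypothetical). Every component shares one farm-wide view protected by a single lock, so queue length and lock contention grow with farm size, and one host failure triggers farm-wide rescheduling:

```python
# Illustrative sketch only: a tightly coupled control plane keeps one shared,
# lock-protected view of every host in the farm, installed up front, so every
# state change contends on the same lock and one failure cascades farm-wide.
import threading

class TightlyCoupledOrchestrator:
    def __init__(self, hosts):
        # Host knowledge is baked in at installation time.
        self.hosts = {h: {"state": "up", "vms": []} for h in hosts}
        self.lock = threading.Lock()   # single consistency-protection lock
        self.event_queue = []          # grows with farm size

    def schedule_vm(self, vm_id):
        # Every placement decision scans and locks the whole farm view.
        with self.lock:
            candidates = [h for h, s in self.hosts.items() if s["state"] == "up"]
            target = min(candidates, key=lambda h: len(self.hosts[h]["vms"]))
            self.hosts[target]["vms"].append(vm_id)
            self.event_queue.append(("placed", vm_id, target))
            return target

    def mark_host_down(self, host):
        # One host's failure triggers farm-wide, lock-contended rescheduling.
        with self.lock:
            orphans = self.hosts[host]["vms"]
            self.hosts[host] = {"state": "down", "vms": []}
        for vm in orphans:
            self.schedule_vm(vm)       # adds yet more queue and lock pressure

orch = TightlyCoupledOrchestrator([f"host{i}" for i in range(1000)])
print(orch.schedule_vm("vm-1"))
```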

DaoliCloud presents Network Virtualization Infrastructure (NVI) technology to horizontally decouple cloud orchestration. NVI minimizes the size of a cloud orchestration region down to a single hardware server, e.g., an OpenStack all-in-one installation. An orchestrator managing only one server host has absolutely no knowledge of any other orchestrator managing another server; thus, no server host in an NVI farm has any software knowledge of any other host in the farm. While this obviously maximizes the stability of cloud operation and maintenance, the overlay cloud resources pooled by NVI retain unbounded scalability. This is because NVI connects overlay nodes across orchestrators in user mode, and only when one node initiates communication with another (think of an HTTP connection). NVI can connect various virtual CPUs over independent and heterogeneous cloud orchestrators, e.g., lightweight micro-servicing Docker containers and heavy-duty hypervisor VMs that are independently orchestrated by, e.g., Kubernetes and OpenStack. Moreover, NVI can transparently link different cloud service providers, also in user mode. A sketch of this on-demand connection pattern follows.
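A minimal sketch of the on-demand, user-mode connection idea, under the assumption that each region orchestrates exactly one host and stores no knowledge of any other region. The names and the user_mode_connect function are hypothetical; this is not NVI's API, only an illustration of resolving the peer at connection time, much as an HTTP client resolves a server at request time:

```python
# Hypothetical sketch: two independent single-host regions; a cross-region
# link is resolved only when an overlay node initiates a connection.
class Region:
    """One orchestration region == one server host (e.g., OpenStack all-in-one)."""
    def __init__(self, name):
        self.name = name
        self.nodes = {}            # overlay nodes (VMs or containers) on this host only

    def add_node(self, node_id, overlay_ip):
        self.nodes[node_id] = overlay_ip

def user_mode_connect(src_region, src_node, dst_locator):
    """Resolve the destination only now, at connection time.

    dst_locator is supplied by the user/application (like a URL), not taken
    from any shared host registry -- the regions never learn about each other.
    """
    dst_region, dst_node = dst_locator
    src_ip = src_region.nodes[src_node]
    dst_ip = dst_region.nodes[dst_node]
    # In a real system, per-flow forwarding state would be installed on the
    # two hosts at this point; here we only report the resolved pair.
    return f"{src_region.name}/{src_node}({src_ip}) -> {dst_region.name}/{dst_node}({dst_ip})"

openstack_region = Region("openstack-aio-1")   # heavy-duty VMs
k8s_region = Region("kubernetes-node-7")       # micro-servicing containers
openstack_region.add_node("vm-a", "10.0.0.5")
k8s_region.add_node("pod-b", "10.0.0.9")

print(user_mode_connect(openstack_region, "vm-a", (k8s_region, "pod-b")))
```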

The key enabler for two orchestrators that do not know one another to serve user-mode connections between their respectively orchestrated overlay nodes is a novel OpenFlow formulation for forwarding trans-orchestrator underlay packets. This new SDN formulation constructs overlay networks at any OSI layer and in any form without packet encapsulation, i.e., without using any of the trans-host-network protocols such as VLAN, VXLAN, VPN, MPLS, GRE, NVGRE, LISP, STT, Geneve, or any others we may have missed from the enumeration. Having avoided trans-host packet encapsulation, there is of course no need for the orchestrators involved to know one another in host mode, neither at system installation time nor during their entire remaining lifecycles. It is by this simple principle that the SDN innovation of NVI achieves complete horizontal decoupling of cloud orchestration. With connections taking place only in user mode, cloud deployment, operation, maintenance, system upgrades, and so on can become 100% automated. It also follows plainly that the NVI technology supports inter-cloud patching, again in user mode. The sketch below shows the general header-rewriting alternative to encapsulation.
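To make the "no encapsulation" point concrete, here is a sketch of the generic header-rewriting pattern that standard OpenFlow actions (as supported by Open vSwitch) allow instead of tunneling. This is not DaoliCloud's actual flow design; the bridge names, port numbers, and addresses are made up for illustration:

```python
# Sketch: forward trans-host traffic by rewriting L2/L3 headers with plain
# OpenFlow mod_dl_dst actions, so no VXLAN/GRE/Geneve header is ever added.
def egress_rewrite_flow(bridge, in_port, overlay_dst_ip,
                        underlay_dst_mac, out_port, priority=100):
    """On the sending host: point the destination MAC at the peer host's
    underlay NIC and send the packet out of the physical port unencapsulated."""
    match = f"priority={priority},ip,in_port={in_port},nw_dst={overlay_dst_ip}"
    actions = f"mod_dl_dst:{underlay_dst_mac},output:{out_port}"
    return f'ovs-ofctl add-flow {bridge} "{match},actions={actions}"'

def ingress_restore_flow(bridge, in_port, overlay_dst_ip,
                         overlay_dst_mac, out_port, priority=100):
    """On the receiving host: restore the overlay destination MAC and deliver
    the packet to the local VM or container port."""
    match = f"priority={priority},ip,in_port={in_port},nw_dst={overlay_dst_ip}"
    actions = f"mod_dl_dst:{overlay_dst_mac},output:{out_port}"
    return f'ovs-ofctl add-flow {bridge} "{match},actions={actions}"'

# Host A sends overlay traffic for 10.0.0.9 straight out of its physical port;
# host B recognises 10.0.0.9 and hands it to the local container port.
print(egress_rewrite_flow("br-int", in_port=3, overlay_dst_ip="10.0.0.9",
                          underlay_dst_mac="52:54:00:bb:cc:dd", out_port=1))
print(ingress_restore_flow("br-int", in_port=1, overlay_dst_ip="10.0.0.9",
                           overlay_dst_mac="02:42:0a:00:00:09", out_port=5))
```

Because the per-flow rewrite state is installed only when a connection is initiated, neither host needs any pre-installed knowledge of the other's orchestrator, which is the decoupling property the paragraph above describes.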
