
Aneka Hybrid Cloud Architecture

The Resource Provisioning Framework in Aneka is composed of several components that work together to manage and allocate resources from different providers. Here is an overview of the key components:
Resource Provisioning Service: This service coordinates with the Resource Pool Manager and exposes the interface through which it plugs seamlessly into the Aneka container.

Resource Pool Manager: The resource pool manager is responsible for managing all the registered resource pools and determining how to allocate resources from those pools. It offers a uniform interface for requesting additional resources from any private or public provider and abstracts away the complexity of managing multiple pools for the Resource Provisioning Service.

Resource Pool: A resource pool is a container of virtual resources primarily provided by the same resource provider. Each resource pool manages the virtual resources it contains and releases them when they are no longer in use. The resource pool encapsulates the specific implementation of the communication protocol required to interact with the provider and provides a unified interface for acquiring, terminating, and monitoring virtual resources.
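The unified interface that each resource pool exposes for acquiring, terminating, and monitoring virtual resources can be sketched as follows. This is a minimal illustration in Python (Aneka itself is a .NET framework), and the class and method names are hypothetical, not Aneka's actual API:

```python
from abc import ABC, abstractmethod

class ResourcePool(ABC):
    """Unified interface a pool exposes; the provider-specific
    communication protocol is hidden behind these three operations."""

    @abstractmethod
    def acquire(self, count: int) -> list:
        """Provision `count` virtual resources and return their IDs."""

    @abstractmethod
    def terminate(self, node_id: str) -> None:
        """Release a virtual resource that is no longer in use."""

    @abstractmethod
    def monitor(self) -> dict:
        """Report the status of the resources this pool manages."""

class MockPublicCloudPool(ResourcePool):
    """Illustrative pool for a hypothetical public provider; a real pool
    would translate these calls into the provider's protocol."""

    def __init__(self):
        self._next_id = 0
        self._active = {}

    def acquire(self, count):
        ids = []
        for _ in range(count):
            node_id = f"vm-{self._next_id}"
            self._next_id += 1
            self._active[node_id] = "running"
            ids.append(node_id)
        return ids

    def terminate(self, node_id):
        self._active.pop(node_id, None)

    def monitor(self):
        return dict(self._active)
```

Because every pool implements the same three operations, the Resource Pool Manager can treat a private data-center pool and a public-cloud pool interchangeably.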

When the current capacity is not sufficient to meet the quality of service requirements for specific applications, a provisioning request is made to the Resource Provisioning Service. Based on specific policies, the pool manager selects the appropriate pool instance(s) to provision resources and forwards the request to those pools. Each resource pool translates the forwarded request into the specific protocols required by the external provider and provisions the resources. The provisioned resources join the Aneka cloud by registering themselves with the Membership Catalogue Service, which keeps track of all the connected nodes.
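The selection-and-forwarding step above can be sketched as a small policy-driven manager. All names here are illustrative (Aneka's actual implementation is not shown); the policy is a deliberately trivial "prefer private, overflow to public" rule:

```python
class StubPool:
    """Minimal stand-in for a resource pool with a known capacity."""
    def __init__(self, capacity):
        self.capacity = capacity

    def acquire(self, count):
        self.capacity -= count
        return [f"node-{i}" for i in range(count)]

class ResourcePoolManager:
    """Selects a pool according to a policy and forwards the request."""
    def __init__(self, pools, policy):
        self._pools = pools      # name -> pool object
        self._policy = policy    # callable: (pools, request) -> pool name

    def provision(self, request):
        pool_name = self._policy(self._pools, request)
        return self._pools[pool_name].acquire(request["count"])

def prefer_private(pools, request):
    """Toy policy: serve from the private pool when it has capacity,
    otherwise overflow to the public provider."""
    if pools["private"].capacity >= request["count"]:
        return "private"
    return "public"
```

The scheduling service only ever talks to the manager; which provider ultimately serves the request is decided by the policy.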

When provisioned resources are no longer in use, a release request is triggered by the scheduling service. The request is forwarded to the corresponding resource pool, which terminates the resources as needed. To optimize resource utilization, provisioned resources are kept active in a local pool until their lease time expires. If a new request arrives within that interval, it can be served without leasing additional resources from the public infrastructure. When a virtual instance is terminated, the Membership Catalogue Service detects the disconnection and updates its registry accordingly.
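The lease-based reuse described above can be sketched as a small local pool that keeps released instances alive until their lease expires. This is an assumption-laden illustration (class name, lease accounting, and the injected clock are all hypothetical):

```python
import time

class LeasedInstancePool:
    """Keeps released instances idle until their lease expires, so a
    new request within that window reuses them instead of leasing
    additional resources from the public infrastructure."""

    def __init__(self, lease_seconds, clock=time.monotonic):
        self._lease = lease_seconds
        self._clock = clock        # injectable for testing
        self._idle = {}            # instance_id -> release timestamp

    def release(self, instance_id):
        """Mark an instance idle instead of terminating it immediately."""
        self._idle[instance_id] = self._clock()

    def reclaim(self):
        """Return an idle, unexpired instance if one exists, else None."""
        now = self._clock()
        # Instances whose lease has expired would be terminated here.
        self._idle = {i: t for i, t in self._idle.items()
                      if now - t < self._lease}
        if self._idle:
            instance_id = next(iter(self._idle))
            del self._idle[instance_id]
            return instance_id
        return None
```

A request arriving within the lease window gets a warm instance back; one arriving after expiry finds the pool empty and triggers fresh provisioning.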

The interaction flow described above is independent of the specific resource provider integrated into the system. Aneka's design emphasizes modularity and well-designed interfaces between components to accommodate different providers. Resource pools can be dynamically configured and added using dependency injection techniques, allowing for customization of the Resource Provisioning Infrastructure.
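The dynamic, dependency-injection-style registration of pools can be sketched with a simple registry: pool implementations are looked up by a configured name rather than hard-coded. The registry, decorator, and config shape below are all hypothetical, shown only to convey the idea:

```python
# Hypothetical registry in the spirit of Aneka's dependency-injection
# configuration; not Aneka's actual mechanism.
POOL_REGISTRY = {}

def register_pool(name):
    """Class decorator that makes a pool available under a config name."""
    def decorator(cls):
        POOL_REGISTRY[name] = cls
        return cls
    return decorator

@register_pool("mock-ec2")
class MockEc2Pool:
    """Illustrative pool for a hypothetical EC2-like provider."""
    def acquire(self, count):
        return [f"ec2-{i}" for i in range(count)]

def build_pool(config):
    """Instantiate a pool from a config entry, e.g. {"type": "mock-ec2"}."""
    return POOL_REGISTRY[config["type"]]()
```

Adding support for a new provider then amounts to registering one more class; no existing component needs to change.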

The Resource Provisioning Framework in Aneka can be customized by specifying the following elements:

Resource Provisioning Service: The default implementation forwards requests to the resource pool manager. Extensions, such as a distributed resource provisioning service, can be implemented at this level or within the Resource Pool Manager.

Resource Pool Manager: The default implementation provides basic pool management and forwards provisioning requests to the appropriate pools.

Resource Pools: The Resource Pool Manager exposes a collection of resource pools that can be used. Any implementation compliant with the Aneka provisioning API can be added, allowing integration of an open-ended set of external providers.

Provisioning Policy: Scheduling services can be customized with resource provisioning-aware algorithms that consider the required quality of service when scheduling applications.
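A provisioning-aware scheduling policy of the kind mentioned above can be sketched as a deadline check: estimate whether the current capacity meets the quality-of-service target and request extra nodes only if it does not. The function and its simple cost model are an assumption, not Aneka's actual algorithm:

```python
import math

def extra_nodes_needed(num_tasks, task_runtime_s, free_nodes, deadline_s):
    """Return how many additional nodes must be provisioned so that
    num_tasks tasks of task_runtime_s seconds each finish within
    deadline_s, given free_nodes already available.

    Simplifying assumptions: tasks are independent, identical in
    length, and run one at a time per node."""
    # Each node can complete floor(deadline / runtime) tasks in time.
    tasks_per_node = deadline_s // task_runtime_s
    if tasks_per_node == 0:
        raise ValueError("deadline shorter than a single task runtime")
    nodes_required = math.ceil(num_tasks / tasks_per_node)
    return max(nodes_required - free_nodes, 0)
```

For example, 10 one-minute tasks with a 3-minute deadline need 4 nodes in this model; with 2 free nodes, the scheduler would issue a provisioning request for 2 more.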

The architecture of the Resource Provisioning Framework in Aneka shares similarities with other IaaS implementations such as OpenNebula and Nimbus. These implementations also abstract external resource providers and provide extensibility points for scheduling and resource management.
