Wednesday, April 29, 2009

Cloud Economics

Problem: Economic allocation of cloud resources within a data center.

Kevin Lai from HP Labs came to discuss his work in this field. Kevin brought up some interesting points, and I will summarize two: optimizing across multiple layers, and using a bidding system to optimize the provisioning of resources.

Optimization across multiple layers
Kevin argues that optimization cannot merely be done within each component: Application, Distributed storage/computing, Virtualization/Network/OS, Physical/Power. Instead, an overarching optimization must span these components, with each layer coordinating with a general optimizer. However, abstraction has allowed for both a reduction in complexity and room for innovation. Google is an excellent example: many applications and services for applications (BigTable, Chubby) are built upon GFS, and this modular architecture was built up over time as needed. There is also a great deal of innovation occurring in the open source space (Hadoop, etc.). While overarching optimization may become viable in the future, at this time it may stifle innovation by preventing changes to a given component.

Bidding System
There is an argument to be made that some sort of bidding system could help mitigate supply and demand issues between cloud providers and customers, though some customers may prefer the cost guarantees of a flat per-use rate.
The most interesting aspect of Kevin's proposals is using a bidding system to allocate resources within the data center. Such a bidding system can be used to build a predictability model that trades off a bid for resources, QoS, and a guarantee (the probability of completion). On average, this model allows jobs to complete more work within a given time.
The bidding system can also guide provisioning: price inflation for a given resource is an indication that it is under-provisioned.
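To make the idea concrete, here is a minimal sketch of one common market-based scheme, proportional-share allocation. This is a hypothetical illustration, not Kevin's actual system: each job submits a bid, receives a share of capacity proportional to its bid, and the implied price per unit of resource rises as total demand grows, which is exactly the inflation signal that would indicate under-provisioning.

```python
def allocate(bids, capacity):
    """Proportional-share allocation.

    Each job receives capacity * (its bid / total bids), and the
    clearing price per unit of resource is total bids / capacity.
    A rising price over time signals the resource is under-provisioned.
    """
    total = sum(bids.values())
    if total == 0:
        return {job: 0.0 for job in bids}, 0.0
    shares = {job: capacity * bid / total for job, bid in bids.items()}
    price = total / capacity
    return shares, price

# Example: job_b bids 3x as much, so it receives 3x the share.
bids = {"job_a": 10.0, "job_b": 30.0}
shares, price = allocate(bids, capacity=100.0)
print(shares)  # {'job_a': 25.0, 'job_b': 75.0}
print(price)   # 0.4 per unit; if this climbs, add capacity
```

In such a scheme, a job wanting a stronger completion guarantee simply bids more, which is one way the bid/QoS/guarantee tradeoff could be realized.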
