Price Modeling in Regional Deployments
Public cloud providers, such as Amazon, Google, and Azure, offer virtual instances under complex pricing policies that take into account the processing power of the VMs, the region in which they are deployed, and the volume of data transferred from DCs to end-users and/or among regional DCs. The outcome of this project is a new module for Fogify that will provide users with the current and projected costs of their running deployment, based on real-time pricing information from public cloud providers. The main tasks of this project are:
Develop an interface that allows the user to choose resources from a list of VM types and regions (e.g., from Amazon). The resource characteristics should be automatically mapped onto Fogify resources.
During the execution of an application on top of Fogify, the new module will calculate the current operational cost of the application and a projection of its long-term cost, based on resource usage and network data transfers (a minimal cost-projection sketch follows).
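The following is a minimal sketch of how such a projection could be computed. The function name project_cost, the price values, and the price-lookup structure are illustrative placeholders, not part of Fogify or of any provider's pricing API.

```python
# Illustrative sketch: project the cost of a running deployment.
# The price values below are placeholders; in the actual module they would be
# fetched from the provider's pricing API (e.g., the AWS Price List API).

HOURLY_VM_PRICE = {            # USD per instance-hour (placeholder values)
    ("t3.medium", "eu-west-1"): 0.0456,
    ("m5.large",  "us-east-1"): 0.0960,
}
TRANSFER_PRICE_PER_GB = 0.09   # USD per GB of outbound data (placeholder)

def project_cost(instances, gb_transferred_per_hour, hours):
    """Estimate the deployment cost over `hours` of operation.

    instances: list of (vm_type, region, count) tuples describing the deployment.
    gb_transferred_per_hour: observed outbound data volume (GB/h) from monitoring.
    hours: projection horizon in hours.
    """
    compute = sum(HOURLY_VM_PRICE[(vm, region)] * count
                  for vm, region, count in instances)
    network = gb_transferred_per_hour * TRANSFER_PRICE_PER_GB
    return (compute + network) * hours

# Example: two t3.medium-like nodes, 0.5 GB/h outbound, projected over 30 days.
print(project_cost([("t3.medium", "eu-west-1", 2)], 0.5, 30 * 24))
```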
Resources:
Table of the delays between Amazon's DCs
https://www.cloudping.co/grid/latency/timeframe/1Y
Pricing model for Amazon's VM prices and data-transfer costs
Modeling Heterogeneous Resources
Fogify emulates fog nodes as virtual instances (containers) with restricted processing capabilities. However, there are components with specific capabilities, such as TPUs or GPUs, which are currently ignored even if they exist in the underlying infrastructure. To address this issue, the students should extend Fogify with:
an automated method for extracting the physical node's capabilities and properties and injecting them into Fogify. Specifically, students should use a system-identification library, such as psutil (https://pypi.org/project/psutil/), that returns the underlying capabilities, and introduce corresponding "labels" on the nodes of the underlying Fogify cluster (https://docs.docker.com/engine/swarm/manage-nodes/#add-or-remove-label-metadata); a minimal sketch follows this list
modeling primitives capable of describing specific real devices/nodes
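A minimal sketch of the capability-extraction step, assuming psutil for host discovery and the Docker SDK for Python (docker-py) for attaching Swarm node labels; the label names (cpu.cores, mem.total_mb, etc.) and the function names are illustrative choices, not an existing Fogify interface.

```python
# Sketch: extract node capabilities with psutil and attach them as Swarm node
# labels via the Docker SDK (docker-py). Label names are illustrative.
import docker
import psutil

def collect_capabilities():
    """Return a flat dict of host properties discovered with psutil."""
    freq = psutil.cpu_freq()
    return {
        "cpu.cores": str(psutil.cpu_count(logical=False)),
        "cpu.threads": str(psutil.cpu_count(logical=True)),
        "cpu.max_mhz": str(int(freq.max)) if freq else "unknown",
        "mem.total_mb": str(psutil.virtual_memory().total // (1024 * 1024)),
    }

def label_node(node_hostname):
    """Merge the discovered capabilities into the Swarm node's labels."""
    client = docker.from_env()
    node = next(n for n in client.nodes.list()
                if n.attrs["Description"]["Hostname"] == node_hostname)
    spec = node.attrs["Spec"]
    spec["Labels"] = {**spec.get("Labels", {}), **collect_capabilities()}
    node.update(spec)   # equivalent to `docker node update --label-add ...`

# Note: GPU/TPU discovery is not covered by psutil; a separate tool
# (e.g., nvidia-smi) would be needed to add labels such as "accelerator=gpu".
```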
Auto-scaling Controller
Modern Cloud and Fog infrastructures provide auto-scaling capabilities. Specifically, auto-scaling monitors a deployed application and automatically adjusts its capacity to maintain steady, predictable performance. To provide this functionality, students will build an auto-scaling controller for Fogify deployments that is able to perform scale-in and scale-out actions based on simple rules (e.g., if the CPU utilization of instance-x over the last 10 minutes exceeds 70%, perform a horizontal scale-out). Specifically, the implementation of the module includes:
Simple modeling of the scaling rules
The extraction and analysis of the deployment's metrics through FogifySDK
Performing scaling actions through FogifySDK (a rule-evaluation sketch follows this list)
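A minimal sketch of the rule-evaluation loop. The helpers get_avg_metric and do_scale stand in for the FogifySDK calls that extract metrics and trigger scaling actions; their names and signatures are assumptions here, not the actual FogifySDK API.

```python
# Rule-based auto-scaler sketch: evaluate simple threshold rules periodically.
import time
from dataclasses import dataclass

@dataclass
class ScalingRule:
    instance: str       # Fogify service/instance name
    metric: str         # e.g., "cpu_util"
    window_min: int     # look-back window in minutes
    threshold: float    # trigger level (0..1)
    action: str         # "scale_out" or "scale_in"

def evaluate(rule, get_avg_metric, do_scale):
    """Check one rule against the monitoring data and act if it fires."""
    value = get_avg_metric(rule.instance, rule.metric, rule.window_min)
    if rule.action == "scale_out" and value > rule.threshold:
        do_scale(rule.instance, +1)   # add one replica
    elif rule.action == "scale_in" and value < rule.threshold:
        do_scale(rule.instance, -1)   # remove one replica

def control_loop(rules, get_avg_metric, do_scale, period_sec=60):
    """Periodically re-evaluate all rules (e.g., once per minute)."""
    while True:
        for rule in rules:
            evaluate(rule, get_avg_metric, do_scale)
        time.sleep(period_sec)

# Example rule: if the 10-minute average CPU utilization of "instance-x"
# exceeds 70%, perform a horizontal scale-out.
rules = [ScalingRule("instance-x", "cpu_util", 10, 0.70, "scale_out")]
```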
Improvement of Network Monitoring
Network traffic is a crucial metric for Fog and Cloud infrastructures. Even though Fogify captures the aggregate network traffic of each node, it does not measure the node-to-node traffic volume. In this exercise, students will implement a method to extract and store the size of the packets transferred from each source node to each destination node. Specifically:
The students will apply a sniffing method on the containers' network interfaces, utilizing a Python sniffing library.
Furthermore, the method should efficiently store the extracted data in the monitoring sub-system.
Finally, they will implement an API call for retrieving the stored data (a sniffing sketch follows this list).
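A minimal sketch of the per-pair traffic accounting, using scapy as one possible sniffing library; the interface name, the capture window, and the way counters are flushed to the monitoring sub-system are assumptions, not part of the existing Fogify code.

```python
# Sketch of node-to-node traffic accounting with scapy (requires privileges
# to sniff on the container's network interface).
from collections import defaultdict
from scapy.all import IP, sniff

bytes_per_pair = defaultdict(int)   # (src_ip, dst_ip) -> total bytes observed

def account(packet):
    """Accumulate packet sizes per (source, destination) pair."""
    if IP in packet:
        bytes_per_pair[(packet[IP].src, packet[IP].dst)] += len(packet)

def capture(interface="eth0", seconds=60):
    """Sniff one container interface for a fixed period, then return counters."""
    sniff(iface=interface, prn=account, store=False, timeout=seconds)
    return dict(bytes_per_pair)

# The returned dict would then be written to the monitoring sub-system and
# exposed through a new API call (endpoint name to be defined by the students).
```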
TCO Modeling for the Departmental Data Center
Develop a TCO model for the data center of the Department, taking into account the energy produced by the installed roof-mounted photovoltaic (P/V) system.
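A toy sketch of the TCO structure (amortized CapEx plus energy and operational OpEx, with consumption partially offset by the P/V production); all figures and parameter names are placeholders, not measurements from the Department's data center.

```python
# Toy TCO sketch: amortized capital expenses plus operating expenses, with part
# of the energy consumption offset by the roof-mounted P/V production.

def monthly_tco(server_capex, server_lifetime_months,
                facility_capex, facility_lifetime_months,
                energy_kwh_per_month, pv_kwh_per_month,
                price_per_kwh, staff_and_maintenance):
    """Return the estimated total cost of ownership for one month."""
    capex = (server_capex / server_lifetime_months
             + facility_capex / facility_lifetime_months)
    grid_kwh = max(energy_kwh_per_month - pv_kwh_per_month, 0)
    opex = grid_kwh * price_per_kwh + staff_and_maintenance
    return capex + opex

# Example with placeholder numbers: 100k EUR of servers over 4 years,
# 200k EUR facility over 12 years, 10 MWh/month consumption, 3 MWh/month P/V.
print(monthly_tco(100_000, 48, 200_000, 144, 10_000, 3_000, 0.18, 1_500))
```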
Resources:
D. Hardy, M. Kleanthous, I. Sideris, A. G. Saidi, E. Ozer, and Y. Sazeides, "An analytical framework for estimating TCO and exploring data center design space," in IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2013, pp. 54–63.
L. A. Barroso and U. Hölzle, The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Synthesis Lectures on Computer Architecture, Morgan & Claypool Publishers, 2015, Chapter 6.
ENEDI project: Energy Efficiency in Public Data Centers.
A. Tryfonos, A. Andreou, N. Loulloudes, G. Pallis, M. D. Dikaiakos, N. Chatzigeorgiou, and G. E. Georghiou, "ENEDI: Energy Saving in Datacenters," in IEEE Global Conference on Internet of Things (GCIoT), Alexandria, Egypt, 2018.