Title

Improving Application Migration to Serverless Computing Platforms: Latency Mitigation With Keep-Alive Workloads

Publication Date

2019

Document Type

Conference Proceeding

Abstract

Serverless computing platforms provide Functions-as-a-Service (FaaS) to end users while promising reduced hosting costs, high availability, fault tolerance, and dynamic elasticity for hosting individual functions known as microservices. Serverless computing environments abstract infrastructure management, including the creation of virtual machines (VMs), containers, and load balancing, away from users. To conserve cloud server capacity and energy, cloud providers allow serverless computing infrastructure to go COLD, deprovisioning hosting infrastructure when demand falls and freeing capacity to be harnessed by others. In this paper, we present a case study on the migration of the Precipitation Runoff Modeling System (PRMS), a Java-based environmental modeling application, to the AWS Lambda serverless platform. We investigate the performance and cost implications of memory reservation size and evaluate scaling performance for increasing concurrent workloads. We then investigate the use of Keep-Alive workloads to preserve serverless infrastructure, minimizing cold starts and ensuring fast performance after idle periods for up to 100 concurrent client requests. We show how Keep-Alive workloads can be generated using cloud-based scheduled event triggers, providing VM-like performance for applications hosted on serverless platforms at a fraction of the cost. © 2018 IEEE.
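
The abstract describes generating Keep-Alive workloads with cloud-based scheduled event triggers (on AWS, a scheduled rule can invoke a Lambda function with a constant JSON payload). Below is a minimal, hypothetical sketch, not taken from the paper, of how a Java Lambda handler might short-circuit such a ping so the warm container is preserved without running the full model; the `keepAlive` flag and `runModel` placeholder are illustrative assumptions.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

public class PrmsHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        // Hypothetical convention: the scheduled rule passes {"keepAlive": true}
        // as its constant input, while real client requests omit the flag.
        if (Boolean.TRUE.equals(event.get("keepAlive"))) {
            // Return immediately; the brief invocation keeps this container warm
            // at minimal cost and avoids a cold start for the next real request.
            return "warm";
        }
        // Normal request path: run the (placeholder) model workload.
        return runModel(event);
    }

    private String runModel(Map<String, Object> event) {
        // Placeholder standing in for the actual PRMS model invocation.
        return "model-complete";
    }
}
```

Scheduling the ping at an interval shorter than the provider's idle-deprovisioning window, and fanning it out to match the expected level of concurrency, is the general idea behind keeping enough warm infrastructure available.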

First Page

195

Last Page

200

DOI

10.1109/UCC-Companion.2018.00056
