Performance Analysis of Serverless Computing in Hybrid Cloud Environments

Authors

  • Simran Lamba, Samarth Group of Institutions College of Engineering, Belhe, India

DOI:

https://doi.org/10.15662/IJEETR.2024.0605001

Keywords:

Serverless Computing, Hybrid Cloud, Performance Variance, Cold Start Mitigation, Pre-warming, Container Utilization, Tail Latency

Abstract

Serverless computing has gained traction by abstracting infrastructure management and enabling rapid scaling. Yet when deployed within hybrid cloud environments, where workloads span on-premises, private, and public cloud systems, the performance implications become complex. This study explores the performance characteristics of serverless functions in hybrid clouds, focusing on response latency, cold start behavior, resource utilization, and consistency across deployments. Drawing on recent 2023 findings, we observe significant performance variance: serverless function response times can vary by up to 338.76% across repeated invocations, and by 44.28% on average, a variability often neglected in prior research. Additionally, techniques such as SCOPE improve performance testing accuracy (by ~33.8 percentage points) by incorporating consistency and accuracy checks. In edge-cloud hybrid models, strategies such as instance pre-warming and reuse policies notably reduce latency while increasing resource consumption. Tail latency (99th percentile) and queuing behaviors reveal tradeoffs: buffer-aware schedulers reduce cold starts drastically (to as low as 7-14%) but increase queuing time, especially for short-lived functions. For broader workloads, hybrid scheduling improves container utilization (>80%) and reduces container count by up to 60%. This study synthesizes these 2023 insights and presents a benchmark framework for evaluating serverless compute in hybrid clouds. Findings suggest that hybrid strategies combining pre-warming, buffer-aware scheduling, and multi-tier deployment can enhance tail performance and utilization, but introduce variability and overhead. We propose best practices: rigorous repeated-run testing, adaptive pre-warming thresholds, deployment-aware scheduling, and hybrid placement policies. The paper concludes with implications for designing performant hybrid serverless systems, emphasizing reproducibility, resource efficiency, and latency optimization.
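The repeated-run testing the abstract recommends can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's benchmark framework: `invoke_function` is a hypothetical stand-in that simulates a warm/cold bimodal latency distribution, and `benchmark` reports the spread statistics the study argues must be measured, namely the mean, the max-over-min variance percentage, and the 99th-percentile tail latency.

```python
import random
import statistics

def invoke_function():
    """Hypothetical serverless invocation, simulated with a bimodal
    latency model: warm starts around 50 ms, occasional cold starts
    around 800 ms. A real benchmark would call the deployed function."""
    if random.random() < 0.10:          # ~10% of runs hit a cold start
        return random.gauss(800, 100)   # cold-start latency (ms)
    return random.gauss(50, 10)         # warm-invocation latency (ms)

def benchmark(runs=200):
    """Invoke repeatedly and summarize the run-to-run spread."""
    random.seed(42)  # fixed seed so the simulation is reproducible
    latencies = sorted(invoke_function() for _ in range(runs))
    mean = statistics.mean(latencies)
    # Variance expressed as (max - min) / min, in percent, mirroring
    # how the cited 338.76% worst-case figure is framed.
    variance_pct = (max(latencies) - min(latencies)) / min(latencies) * 100
    p99 = latencies[int(0.99 * runs) - 1]  # 99th-percentile tail latency
    return mean, variance_pct, p99

if __name__ == "__main__":
    mean, variance_pct, p99 = benchmark()
    print(f"mean={mean:.1f} ms  variance={variance_pct:.1f}%  p99={p99:.1f} ms")
```

Even this toy model shows why single-run measurements mislead: the mean sits far below the p99 tail, and the max-over-min spread dwarfs the average-case figure, which is precisely the reproducibility concern the abstract raises.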

References

1. Wen et al., “Unveiling Overlooked Performance Variance in Serverless Computing” (2023) – revealed up to 338.76% latency variance and emphasized reproducibility concerns.

2. Wen et al., “SCOPE: Performance Testing for Serverless Computing” (2023) – proposed accurate testing method with ~97.25% reliability and improved over existing techniques by ~33.8 pts.

3. “Latency and Resource Consumption Analysis for Serverless Edge Analytics” (2023) – evaluated pre-warming, reuse mechanisms, and proposed two-tier edge-cloud FaaS with allocation policies.

4. Prediction-based hybrid scheduling model (applied to FaaS) – reported a buffer-aware approach reducing cold starts to 7-14%, improving container utilization to >80%, and cutting container count by up to 60%, but increasing queuing delays.

5. MDPI study on containerized parallel tasks – observed ~40× speedup for parallel serverless execution compared to sequential VM execution, and ~23× vs parallel VM execution.

Published

2024-09-01

How to Cite

Performance Analysis of Serverless Computing in Hybrid Cloud Environments. (2024). International Journal of Engineering & Extended Technologies Research (IJEETR), 6(5), 8712-8715. https://doi.org/10.15662/IJEETR.2024.0605001