What it is: Serverless computing is a model that enables code to run on cloud-based infrastructure without allocating it to a dedicated server (virtual or physical). In Serverless Computing, instead of paying for the server your code runs on, you pay whenever the code is activated, which is to say, whenever your application (or function) runs.
What it does: Serverless Computing isn’t actually server-free computing. It simply means running your code on servers that belong to your cloud provider, rather than on virtual servers you’ve rented from them. The provider takes care of provisioning and maintaining the computing resources, and you pay based on how often the code is used.
Why it matters: When you’re running a virtual server, you’re always paying for some capacity, whether or not you’re using it. So, if you have an application that runs only occasionally, such as a web app that resizes uploaded images, you’re paying to host that application even when it’s not doing anything. With Serverless, by contrast, you pay per use, which can be extremely economical, particularly for applications that only run in response to specific events. Serverless can also save money by reducing the time spent provisioning and maintaining computing resources. Additionally, code deployed with Serverless Computing scales automatically with demand.
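The image-resizing scenario above can be sketched as a minimal event-driven function. This is an illustrative, provider-neutral sketch (the event shape, the `handler` signature, and the `resize_image` helper are assumptions for illustration, not any specific cloud vendor's API):

```python
# Sketch of a serverless-style, event-driven function.
# The platform invokes handler() only when an upload event fires;
# you are billed for the invocation, not for an idle server.

def resize_image(data: bytes, max_width: int) -> bytes:
    """Placeholder resize step; real code would use an image library."""
    # For illustration, just tag the payload to show it was processed.
    return b"resized:" + data

def handler(event: dict, context: object = None) -> dict:
    """Entry point the serverless platform calls with the triggering event."""
    image = event["body"]                      # raw uploaded image bytes
    resized = resize_image(image, max_width=800)
    return {"statusCode": 200, "body": resized}
```

Between invocations, nothing runs and nothing is billed, which is what makes the pay-per-use model economical for occasional workloads.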
What to do about it: Consider asking your development team whether they’ve investigated migrating application functions to Serverless Computing. Note that Serverless isn’t always the more economical or convenient choice, and that a development team unaccustomed to Serverless Computing could face a steep learning curve.