About Freestyle Serverless Runs
An introduction to Freestyle's Serverless Runs service and its capabilities.
This document provides a high-level overview of Freestyle's Serverless Runs service.
To get started, check out the Getting Started Guide.
Overview
Serverless Runs execute JavaScript or TypeScript code and return a result. No deployment, no HTTP server—just send code and get output back.
Features
API-First
Send code in an API call, get a result in the response. No build step, no deployment process, no infrastructure to manage. Runs are designed for programmatic use at scale.
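As a sketch of that request/response shape — note that the endpoint URL, field names, and auth header below are illustrative assumptions, not Freestyle's documented API (see the Getting Started Guide for the real surface):

```typescript
// Hypothetical sketch: the endpoint URL, field names, and auth header are
// assumptions for illustration, not Freestyle's documented API.
interface RunRequest {
  script: string; // JavaScript/TypeScript source to execute
}

// Build the JSON body for a run request.
function buildRunRequest(script: string): RunRequest {
  return { script };
}

// Send code, get the result back in the same HTTP response (not invoked here).
async function runScript(apiKey: string, script: string): Promise<unknown> {
  const res = await fetch("https://api.example.com/v1/runs", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildRunRequest(script)),
  });
  return res.json(); // the run's output, returned directly -- no deploy step
}
```

The key point is the shape of the interaction: one POST with source code in the body, one response with the result.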
Fast Execution
Runs start in milliseconds. Module caching means subsequent runs with the same dependencies are even faster—we don't reinstall packages you've already used.
Network Controls
By default, runs have full internet access. Restrict access with allow/deny rules on specific domains, or route all traffic through your own proxy. Block your users from calling APIs they shouldn't, or funnel requests through your infrastructure for logging and rate limiting.
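One way to picture allow/deny rules is first-match-wins domain matching. The rule shape and matching semantics below are assumptions for illustration, not Freestyle's documented schema:

```typescript
// Illustrative only: the rule shape and first-match-wins semantics here are
// assumptions, not Freestyle's documented network-control schema.
type NetworkRule = { domain: string; action: "allow" | "deny" };

// Allow one specific API, deny everything else.
const rules: NetworkRule[] = [
  { domain: "api.stripe.com", action: "allow" },
  { domain: "*", action: "deny" },
];

// First matching rule wins; with no rules, the default is full internet access.
function isAllowed(host: string, rules: NetworkRule[]): boolean {
  for (const rule of rules) {
    if (rule.domain === host || rule.domain === "*") {
      return rule.action === "allow";
    }
  }
  return true;
}
```

An allow-then-deny-all list like this is the usual way to pin untrusted code to a single approved API.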
Cached Modules
Specify node modules and we install them once, then cache them for future runs. Your code stays small and execution stays fast.
Environment Variables
Pass secrets and configuration at runtime. Environment variables are scoped to the run—nothing persists between executions.
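A run request carrying both cached modules and per-run environment variables might look like this sketch; the `nodeModules` and `envVars` field names are assumptions, not Freestyle's documented schema:

```typescript
// Illustrative request options: the field names below are assumptions.
interface RunOptions {
  nodeModules: Record<string, string>; // package name -> version; installed once, then cached
  envVars: Record<string, string>;     // scoped to this run only; nothing persists afterwards
}

const options: RunOptions = {
  nodeModules: { lodash: "4.17.21" },
  envVars: { GREETING: "hello" }, // hypothetical config passed at runtime
};

// Inside the run, code reads the variable the usual way:
const script = `
  import _ from "lodash";
  console.log(_.capitalize(process.env.GREETING ?? "hi"));
`;
```

Because the module list is declared separately from the code, the code payload stays small and the install work is shared across runs.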
TypeScript
TypeScript works out of the box. No compile step, no tsconfig required.
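For example, a snippet like this can be sent exactly as written, types and all:

```typescript
// Plain TypeScript -- no tsconfig, no compile step before sending.
interface User {
  id: number;
  name: string;
}

const users: User[] = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
];

const names = users.map((u) => u.name);
console.log(names.join(", ")); // prints "Ada, Grace"
```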
Beyond Lambda
AWS Lambda requires you to deploy a function before you can invoke it. You write code, zip it, upload it, configure triggers, then call it.
Serverless Runs skip all of that. Send code in the request body, get the result in the response. There's no function to manage, no deployment to track, no cold start penalty after periods of inactivity.
This is useful when code is dynamic—generated by an AI, provided by a user, or constructed at runtime. You don't want to deploy a new Lambda every time the code changes. You just want to run it.
Beyond Sandboxes
Sandboxes like Freestyle VMs are often used for single-use AI code execution. While sandboxes work for this, they are expensive and comparatively slow. The fastest sandbox platforms claim 90ms cold starts; Freestyle Serverless Runs cold-start in under 10ms. Serverless Runs are heavily optimized for single-use code execution: our median execution request lasts 84 milliseconds total, making Serverless Run response times faster than the time it takes sandbox platforms just to start.
Further, when your code finishes executing, the run shuts down immediately. You are billed only for the time and memory your code actually uses, which makes Runs dramatically cheaper than sandbox alternatives. Sandbox platforms set minimum auto-stop times for VMs of one minute or more, meaning for an 84ms execution you pay for 60,000ms. With Serverless Runs you pay for the 84ms you actually use.
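The gap in that example works out as follows, using the figures in this section (actual pricing varies by provider):

```typescript
// Back-of-envelope billing comparison using the figures in this section.
const runMs = 84; // median Serverless Run execution time
const sandboxMinBilledMs = 60_000; // one-minute minimum auto-stop on some sandbox platforms

const overbillingFactor = sandboxMinBilledMs / runMs;
console.log(Math.round(overbillingFactor)); // prints 714 -- roughly 714x more billed time
```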
Sandbox platforms generally impose tight limits on parallel VMs. Freestyle Serverless Runs' concurrency limits are 100 to 1,000 times higher, depending on the platform you compare against. This makes them ideal for fleets of agents that each need isolated code execution.
When Not to Use Runs
Serverless Runs are for one-shot code execution. If your use case doesn't fit that model, consider:
TypeScript HTTP Servers
If you need to serve HTTP traffic from a TypeScript server such as Hono, Next.js, or Express with a persistent URL, use Serverless Deployments. Deployments give you domains, websockets, and long-running processes.
Non-JavaScript Workloads
Serverless Runs only support JavaScript and TypeScript. If you need Python, Ruby, Go, or other languages, use VMs.
Persistent State
Runs are stateless—nothing persists between executions. If you need to maintain state across invocations, use VMs or a database.
Long-Running Processes
Runs have a timeout. If you need processes that run for minutes or hours, use VMs.
Binaries
Freestyle Serverless Runs do not support binaries. If you need to run binaries in dev servers, native-dependency image processors like sharp, or other heavy workloads, use VMs.