- Services are the basic building blocks of Lever apps.
- To define a service you provide the code and an entry point.
- Methods are the functions that make up the entry point of the service.
- You can invoke methods from outside Lever or from another Lever service.
- Lever runs multiple instances of the service, automatically adjusting their number depending on scale.
Lever services are the basic building blocks of Lever cloud apps. They contain the business logic of your application and they scale automatically with demand. Lever's mantra is that when you write cloud applications you shouldn't be thinking about servers; you should be thinking about services. This fundamentally changes the way we write cloud applications, because it builds the assumption of scalability into the design, even for the simplest apps.
The individual functions exported by a service's entry point are called methods. Lever methods can be invoked in several ways, either from outside Lever or from another Lever service:
- Through the Lever CLI.
- Through the HTTP API - only from outside Lever.
- Through the Lever RPC system - use the Lever client libraries for this (currently Node and Go are supported).
There are two types of Lever methods:
- Regular methods - a regular request-reply.
- Streaming methods - these create a persistent channel through which a full-duplex conversation can take place between the client and the server. Streaming method names must end with a special suffix (see the Lever RPCs page).
For more details see the Lever RPCs page.
Under the hood, each Lever service runs zero or more Lever instances to serve traffic to that service. The number of instances is adjusted automatically by Lever, in real-time, depending on demand.
Each invocation, regardless of its origin, is thus load-balanced between the instances of a Lever service. You should never assume that consecutive requests to a service will hit the same instance. If you need to rely on such an assumption, either use a streaming method or the concept of resources.
Each instance is allocated a certain amount of memory, computing power and network bandwidth. To adjust these see the ram property in lever.json.
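For illustration, a lever.json with an adjusted ram value might look like this. Only the ram property is referenced by this page; the other field names and the assumption that ram is expressed in MB are illustrative, so check the lever.json reference for the exact schema.

```json
{
  "name": "helloService",
  "description": "An example service.",
  "ram": 512
}
```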
Instances are brought up and stopped as necessary, depending on the amount of traffic the service receives. Because instances are sometimes brought up at request time, it is very important that the implementation does not perform any heavy processing as part of its startup procedure. Otherwise, the latency of the requests that trigger a new instance would be seriously affected.
When an instance is no longer needed (because traffic has dropped), it is scheduled to stop. There is an additional delay before the instance is actually stopped, in case traffic spikes back up.
Upon closing, the instance is sent a TERM signal and is allowed to drain for 10 seconds. If the instance does not terminate during this time, it is killed.
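A service can take advantage of the drain window by handling the TERM signal, finishing in-flight work, and exiting before the 10 seconds elapse. A minimal Node sketch; the cleanup steps themselves are hypothetical:

```javascript
// On TERM, stop accepting new work and drain in-flight requests.
let draining = false;

process.on("SIGTERM", function () {
  draining = true;
  // Real cleanup would go here: close connections, flush buffers, then
  // call process.exit(0) well within the 10-second drain window.
});
```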
Later, you might also want to check out the Quick Start guide, if you haven't already, or the API reference pages. See the links on the left-hand side.