Frequently Asked Questions

Is Lever OS a real OS?

Well, no, not really. At least, not in the traditional sense. It is a cloud operating system. The name comes from the ultimate vision of this project: you don't think about hardware when you write apps for desktop or mobile; similarly, you shouldn't be thinking about servers or infrastructure when you develop apps for the cloud.

Is this based on AWS Lambda?

Nope. Lever OS is not tied to a specific cloud in any way.

Which languages are supported?

For now, JavaScript and Go. Python and Java are coming up next.

How does Lever OS differ from other lambda / serverless technologies?

The main advantages of Lever OS are:

  • Open-source (Apache License 2.0).
  • Cloud-agnostic.
  • Microservice-oriented. Multiple services in a complex backend are easy to interconnect and can communicate via the built-in RPC system.
  • Also runs on your laptop, which makes dev-test cycles very efficient.
  • Designed for real-time serving. No queues involved; latency is minimal.
  • Supports streaming.

What sort of technology / infrastructure does Lever OS provide? What does it solve for me?

  • Orchestration and resource management (via Docker Swarm).
  • Load balancing requests across instances of a service.
  • Autoscaling services.
  • Service auto-discovery - no need to keep track of IPs and ports; just call using the service name.
  • RPC and an efficient binary JSON-like encoding.
  • Sharding resources within a service. Useful if you need certain requests to be routed to the same instance consistently.
  • Streaming RPCs, useful for file uploads, push notifications and real-time interaction / syncing.
  • Live upgrading via lever deploy.

What tech does Lever OS use?

We rely heavily on Docker and Docker Swarm for containers, orchestration and networking. Other pieces of tech we use are Consul for service discovery and gRPC with Protobuf for the RPC.

What does a Lever deployment look like?

Lever's main component is leveroshost, an agent that must run on each physical node that is part of the cluster. The agent's dependencies are:

  • A Docker Swarm cluster.
  • A Consul cluster.
  • An Aerospike cluster.

In development, these dependencies are automatically brought up and configured by the make fastrun command.

How does it work in more detail?

Lever OS is based on Docker. Running make fastrun brings up an instance of leveroshost on Docker.

When deploying a service onto Lever, the CLI takes the entire directory, tars it and uploads it to Lever. A special Lever service called admin handles the upload and registers the new service internally.

In rough terms, here is what happens behind the scenes when a request for a service comes in:

  • A leveroshost instance receives it.
  • leveroshost looks within the Lever cloud for nodes where instances of that service are running. (If no instances are running, leveroshost brings one up.)
  • leveroshost picks an instance randomly and routes the request to it.
  • The instance serves the request and then continues to run for a while, in case other requests come in. If no requests arrive within a few minutes, it is stopped automatically.
  • In parallel, all the leveroshosts within a Lever cloud collectively keep track of the overall load of each service and scale them up or down accordingly (start / stop instances as necessary).
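The routing steps above can be condensed into a sketch. All names here are illustrative, and the in-memory cloud object is just a stand-in for real cluster state; this is a simplified model, not leveroshost's actual code:

```javascript
// Simplified model of how a leveroshost might route a request:
// start an instance if none exist, then pick one at random.
function routeRequest(service, cloud) {
  let running = cloud.instancesOf(service);
  if (running.length === 0) {
    // No live instance: bring one up before serving the request.
    running = [cloud.startInstance(service)];
  }
  // Pick a random instance and hand it the request.
  return running[Math.floor(Math.random() * running.length)];
}

// Minimal in-memory stand-in for cluster state, to exercise the sketch.
const cloud = {
  instances: {},
  instancesOf(service) { return this.instances[service] || []; },
  startInstance(service) {
    const inst = service + '-' + (this.instancesOf(service).length + 1);
    this.instances[service] = this.instancesOf(service).concat(inst);
    return inst;
  },
};

console.log(routeRequest('hello', cloud)); // prints "hello-1"
```

The sketch leaves out the idle timeout and the collective load tracking, which in the real system happen asynchronously across all leveroshost agents.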

Can I persist data / run a database on Lever OS?

No. At least not yet. Persistence needs to be achieved via external means (e.g. a DB-as-a-service).

Can existing legacy services interact with Lever services?

Absolutely! Just use the Lever client libraries to call into a Lever service from a legacy service.

In the other direction, you are free to use any library within a Lever service, which allows you to call a legacy service from Lever.
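To make the calling shape concrete, here is a sketch of invoking a Lever service by name from legacy code. The Client class and the invoke signature below are local stand-ins so the example is self-contained; the real API is defined by the Lever client libraries, so consult their documentation for the actual interface.

```javascript
// Hypothetical stand-in for a Lever client library. A real client would
// resolve the service by name and issue an RPC over the network; this
// stub just echoes the call so the calling shape is visible.
class Client {
  invoke(serviceName, method, ...args) {
    return { service: serviceName, method: method, args: args };
  }
}

// From legacy code: call a Lever service by name - no IPs or ports needed.
const client = new Client();
const reply = client.invoke('helloService', 'sayHello', 'world');
console.log(reply.service + '.' + reply.method); // prints "helloService.sayHello"
```

The key point is that the caller only needs the service name; service discovery resolves it to an actual instance.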

Where does my stdout (e.g. console.log(...)) go?

For now, all stdout is logged to Lever OS's own output. These messages have the prefix lever://<env>/<service>. This will change in the future: logs will be accessible via the Lever CLI instead.

How does auto-scaling work in my own infrastructure? Where do the servers come from?

When deploying Lever, you allocate a number of servers to it. Lever can then manage resources and auto-scale the services running on it - but only within the servers you have allocated. It does not yet integrate with any third-party technology that could, for example, make Lever scale beyond the servers it is installed on.

How do I create more environments for development?

You can't. Not yet. Creating environments will be available as a CLI command soon.