Well, no, not really. At least, not in the traditional sense. It is a cloud operating system. The name comes from the ultimate vision of this project: you don't think about hardware when you write apps for desktop or mobile; similarly, you shouldn't be thinking about servers or infrastructure when you develop apps for the cloud.
Nope. Lever OS is not tied to a specific cloud in any way.
The main advantages of Lever OS are:
- Open-source (Apache License 2.0)
- Microservice-oriented. It is very easy to interconnect multiple services in a complex backend, and they can communicate via the built-in RPC system.
- Also runs on your laptop. This makes dev-test cycles very efficient.
- Designed for real-time serving. No queues involved. Latency is minimal.
- Supports streaming
- Orchestration and resource management (via Docker Swarm).
- Load-balancing requests across instances of a service.
- Autoscaling services.
- Service auto-discovery - no need to keep track of IPs and ports. Just call using the service name.
- RPC and an efficient binary JSON-like encoding.
- Sharding resources within a service. Useful if you need certain requests to be routed to the same instance consistently.
- Streaming RPCs, useful for file uploads, push notifications and real-time interaction / syncing.
- Live upgrading of services.
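The sharding feature above routes requests with the same shard key to the same instance consistently. One common way to achieve that property is consistent hashing; the sketch below illustrates the idea only, and is not Lever's actual implementation (instance names, virtual-node count and hashing scheme are all hypothetical):

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Toy consistent-hash ring: the same shard key always lands on the
    same instance, and adding/removing an instance only remaps a small
    fraction of keys."""

    def __init__(self, instances, vnodes=64):
        # Place several virtual points per instance on the ring so that
        # load spreads evenly across instances.
        self.ring = sorted(
            (self._hash(f"{inst}#{i}"), inst)
            for inst in instances
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def route(self, shard_key):
        # Walk clockwise to the first ring point at or after the key's
        # hash, wrapping around at the end.
        idx = bisect(self.keys, self._hash(shard_key)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["instance-a", "instance-b", "instance-c"])
# The same shard key deterministically maps to the same instance.
assert ring.route("user-42") == ring.route("user-42")
```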
Lever's main component is `leveroshost`, an agent that needs to run on each physical node that is part of the cluster. The agent's dependencies are:
- A Docker Swarm cluster.
- A Consul cluster.
- An Aerospike cluster.
In development, these dependencies are automatically brought up and configured by the `make fastrun` command.
Lever OS is based on Docker. Running `make fastrun` brings up an instance of `leveroshost` on Docker.
When deploying a service onto Lever, the CLI takes the entire directory, tars it and uploads it to Lever. A special Lever service called `admin` handles the upload and registers the new service internally.
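The packaging half of that step can be sketched as follows. This is only an illustration of "tar the whole directory"; the real CLI also talks to the `admin` service, which is not shown here:

```python
import io
import tarfile

def package_service(service_dir):
    """Tar an entire service directory into an in-memory gzipped
    archive, ready to be uploaded."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        # arcname="." keeps paths relative to the service directory,
        # so the archive does not leak the local directory layout.
        tar.add(service_dir, arcname=".")
    return buf.getvalue()
```

The returned bytes would then be sent over the wire; the upload protocol itself is not part of this sketch.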
In rough terms, here is what happens behind the scenes when a request for a service comes in:
- A `leveroshost` instance receives the request.
- The `leveroshost` looks within the Lever cloud for nodes where instances of that service are running. (If there are no instances of that service, `leveroshost` brings one up.)
- The `leveroshost` picks an instance at random and routes the request to it.
- The instance serves the request and then continues to run for a while, waiting, in case other requests come in. After a certain period of time (a few minutes), if no requests come in, it is stopped automatically.
- In parallel, all the `leveroshost`s within a Lever cloud collectively keep track of the overall load of each service and scale it up or down accordingly (starting / stopping instances as necessary).
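The per-request steps above can be modeled with a few lines of code. This is a deliberately simplified, single-node sketch (class and instance names are hypothetical); the real `leveroshost` coordinates this across a whole cluster:

```python
import random

class ServiceRouter:
    """Toy model of scale-from-zero routing: start an instance if none
    exist, then load-balance requests randomly across instances."""

    def __init__(self):
        self.instances = {}  # service name -> list of instance ids

    def _start_instance(self, service):
        # Bring up a new (simulated) instance of the service.
        inst = f"{service}-{len(self.instances.get(service, [])) + 1}"
        self.instances.setdefault(service, []).append(inst)
        return inst

    def route(self, service):
        # Scale from zero if needed, then pick an instance at random.
        if not self.instances.get(service):
            self._start_instance(service)
        return random.choice(self.instances[service])
```

The idle-timeout behavior (stopping instances after a few minutes without traffic) and cluster-wide load tracking are omitted for brevity.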
No. At least, not yet. Persistence needs to be achieved via external means (e.g. a DB-as-a-service).
Absolutely! Just use the Lever client libraries to call into a Lever service from a legacy service.
Also, you are free to use any library within a Lever service, which allows you to call a legacy service from Lever.
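Conceptually, a legacy service addresses a Lever service by environment and name alone (no IPs or ports), following the `lever://<env>/<service>` addressing scheme mentioned below. The helper here is purely illustrative; the actual Lever client libraries have their own API, and the method segment appended to the address is an assumption:

```python
def lever_url(env, service, method):
    """Build a lever:// address for a service method, illustrating
    name-based addressing (no IPs or ports involved)."""
    return f"lever://{env}/{service}/{method}"

# A hypothetical hello-world service in the dev environment.
assert lever_url("dev.lever", "helloService", "sayHello") == \
    "lever://dev.lever/helloService/sayHello"
```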
For now, all stdout is logged in Lever OS's own output. These messages have the prefix `lever://<env>/<service>`. This will change in the future; logs will become accessible via the Lever CLI instead.
When deploying Lever, you allocate a number of servers to it. Lever can then manage resources and auto-scale the services running on it, but only within the servers you have allocated. It does not yet integrate with any third-party technology that could, for example, make Lever scale beyond the servers it is installed on.
You can't. Not yet. Creating environments will be available as a CLI command soon.