Action Llama allows you to express a set of agents as code and deploy them to the cloud. Automate your workflows, whatever they might be:
Agents respond to webhooks, cron, or can trigger one another. They follow instructions you define in their ACTIONS.md, just like prompts. They receive only the credentials they need and run in a restricted container. Run locally, push to the cloud when you're ready.
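The least-privilege credential idea can be sketched generically in Python. This is an illustration of the principle, not Action Llama's actual API; the `scoped_env` helper and the variable names are hypothetical:

```python
import os

def scoped_env(allowed: list[str]) -> dict[str, str]:
    """Return only the environment variables an agent is allowed to see.

    Hypothetical helper: Action Llama's real credential mechanism may
    differ; this just illustrates least-privilege credential scoping.
    """
    return {key: value for key, value in os.environ.items() if key in allowed}

# An agent that only needs a GitHub token never sees, say, AWS keys.
os.environ["GITHUB_TOKEN"] = "ghp-example"          # stand-in values
os.environ["AWS_SECRET_ACCESS_KEY"] = "aws-example"

creds = scoped_env(["GITHUB_TOKEN"])
assert "AWS_SECRET_ACCESS_KEY" not in creds
```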
Agents are infrastructure: they should be versioned and deployable, just like code. And because agents work together, they should be bundled together.
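"Agents as code" can be sketched in plain Python — pure data that lives in git and deploys as one unit. The `Agent` and `Bundle` types here are hypothetical illustrations, not Action Llama's real types:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Agent:
    """A hypothetical agent definition: plain data, so it can be versioned."""
    name: str
    trigger: str       # e.g. "webhook", "cron:0 9 * * 1", or another agent's name
    instructions: str  # the contents of its ACTIONS.md
    credentials: tuple[str, ...] = ()

@dataclass
class Bundle:
    """Agents that work together are versioned and deployed together."""
    version: str
    agents: list[Agent] = field(default_factory=list)

bundle = Bundle(
    version="1.2.0",
    agents=[
        Agent("triage", trigger="webhook", instructions="Label new issues."),
        Agent("digest", trigger="cron:0 9 * * 1", instructions="Post a weekly summary."),
    ],
)
```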
Provision and deploy to a VPS in two commands. Hetzner and Vultr are currently supported, and more providers can easily be added.
Use any LLM you want. AL uses Pi, so you can plug in any of the major models, or a custom one.
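Model-agnosticism usually comes down to a thin adapter interface that agents depend on instead of any one provider. A generic sketch — the `Model` protocol and `EchoModel` below are illustrative, not Pi's actual API:

```python
from typing import Protocol

class Model(Protocol):
    """Minimal interface an agent needs from any LLM backend."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in backend for local testing; a real adapter would call a
    hosted model or a custom endpoint instead."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def run_agent(model: Model, instructions: str) -> str:
    # The agent depends only on the Model protocol, so backends are swappable.
    return model.complete(instructions)

print(run_agent(EchoModel(), "Summarize today's commits."))
```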
Core features are stateless. Memory strategies can be added as an option.
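One way to read "stateless core with optional memory" is a pluggable memory strategy: the default remembers nothing, and persistence is opt-in. A sketch with hypothetical names (`Memory`, `NoMemory`, `DictMemory`), not Action Llama's real interfaces:

```python
from typing import Optional, Protocol

class Memory(Protocol):
    def remember(self, key: str, value: str) -> None: ...
    def recall(self, key: str) -> Optional[str]: ...

class NoMemory:
    """Default strategy: the core stays stateless; nothing persists."""
    def remember(self, key: str, value: str) -> None:
        pass
    def recall(self, key: str) -> Optional[str]:
        return None

class DictMemory:
    """Opt-in strategy: a simple in-process store (swappable for a DB)."""
    def __init__(self) -> None:
        self._store: dict[str, str] = {}
    def remember(self, key: str, value: str) -> None:
        self._store[key] = value
    def recall(self, key: str) -> Optional[str]:
        return self._store.get(key)

stateless, stateful = NoMemory(), DictMemory()
stateless.remember("last_run", "ok")
stateful.remember("last_run", "ok")
assert stateless.recall("last_run") is None
assert stateful.recall("last_run") == "ok"
```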
Action Llama is an MIT-licensed open source project with several key extension points, so it's easy to add new functionality.
Action Llama is open source!
The docs will get you started.
Use GitHub Issues if you've found a bug or have a feature request.