Action Llama

Deploy agentic workflows to the cloud

npm i -g @action-llama/action-llama@latest
About

Why Action Llama?

Action Llama lets you express a set of agents as code and deploy them to the cloud. Automate your workflows, whatever they might be.

Agents respond to webhooks or cron schedules, or can trigger one another. They follow instructions you define in their ACTIONS.md file, written like prompts. Each agent receives only the credentials it needs and runs in a restricted container. Run locally, then push to the cloud when you're ready.

Agent workflows can:

  • implement GitHub issues and submit a PR
  • review PRs for security issues
  • fix failing GitHub Actions workflows
  • triage Sentry errors and create GitHub issues outlining the problem
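
To make the ACTIONS.md idea concrete, here is a hypothetical file for a PR-review agent. The file name comes from the text above; the headings and wording are illustrative assumptions, not a prescribed schema:

```markdown
# PR Review Agent

When a pull request webhook arrives:

1. Fetch the diff for the pull request.
2. Flag anything that looks like a hard-coded secret, a disabled auth
   check, or unsafe input handling.
3. Leave a review comment summarizing findings; approve if none are found.

Only use the GitHub token scoped to this repository.
```

Because instructions live in a plain file alongside your code, they can be versioned and reviewed like any other change.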
Philosophy

How we think about agents

Agents are infrastructure. They should be versioned and deployable, just like code. Agents work together, so they should be bundled together.

Provision and deploy to a VPS in two commands. Hetzner and Vultr are currently supported, and more providers can easily be added.
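As a sketch of that two-command flow, something like the following; the subcommand and flag names here are illustrative assumptions, not the actual CLI:

```
# Provision a VPS on a supported provider (Hetzner or Vultr)
action-llama provision --provider hetzner

# Deploy the agents defined in the current directory to that VPS
action-llama deploy
```

Check the docs for the real command names before running anything.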

Use any LLM you want. AL uses Pi, so you can use any of the major models, or a custom one.

Core features are stateless. Memory strategies can be added as an option.

Examples

See it in action

Check out Action Llama's agents repo to see live production agents.

Open Source

Add the Functionality You Need

Action Llama is an MIT-licensed open source project, so it's easy to add new functionality.

There are several key extension points:

  • credentials: support new service integrations
  • webhooks: integrate more web services
  • cloud providers: Hetzner and Vultr are supported today, and more can easily be added
  • dynamic memory strategies: AL is stateless by default, but a preflight check that implements a dynamic memory strategy would be a great addition
Community

Get Involved

Action Llama is open source!

The docs will get you started.
Use GitHub Issues if you find a bug or have a feature request.