Building a Weather Agent with Jido
Jido has been under active development for the past few months. Most of that work has been foundational, focused on building a solid framework that makes it easy to build complex AI agents in Elixir.
Today, I’m excited to finally share a “real” example of what can be done with Jido. There’s a lot more to come, but this will give you a taste of what’s possible.
The Weather Agent
We’re going to build a simple agent that can answer natural language questions about the weather. It will understand the request, use a tool to fetch weather data, and then provide a response, all orchestrated by Jido and an LLM.
You can run the complete, interactive example in Livebook right now.
The Code
Let’s break down the components involved in building this agent.
1. The Weather Tool (Action)
The first component is the Jido.Tools.Weather Action. This Action uses the weather package from @spencerolson to get the weather for a given location, fetching its data from the OpenWeatherMap API. For demonstration purposes in the Livebook, we default to test data (test: true) so you don't need an API key immediately.
# Definition from jido_tools/lib/jido_tools/weather.ex
defmodule Jido.Tools.Weather do
  use Jido.Action,
    name: "weather",
    description: "Get the weather for a given location via the OpenWeatherMap API",
    category: "Weather",
    tags: ["weather"],
    vsn: "1.0.0",
    schema: [
      location: [type: :string, doc: "The location to get the weather for"],
      units: [type: :string, doc: "Units to use (metric/imperial)", default: "metric"],
      hours: [type: :integer, doc: "Number of hours to forecast", default: 24],
      format: [type: :string, doc: "Output format (text/map)", default: "text"],
      test: [type: :boolean, doc: "Whether to use test data instead of real API", default: true]
    ]

  def run(params, _context) do
    # ... logic to fetch weather data (real or test) ...
    # ... format the response ...
    with {:ok, opts} <- build_opts(params),
         {:ok, response} <- Weather.API.fetch_weather(opts) do
      {:ok, format_response(response.body, params)}
    else
      {:error, error} -> {:error, "Failed to fetch weather: #{inspect(error)}"}
    end
  end

  # Private functions to build options and format response...
end
The full code for the Weather Action is available here.
Actions are the fundamental building blocks in Jido. They encapsulate a single, reusable piece of functionality. By using use Jido.Action, we get:

- Schema Validation: The schema defines the expected input parameters, their types, and documentation. Jido validates incoming parameters against this schema automatically.
- Metadata: name, description, category, etc. provide context for humans and potentially for AI agents to understand what the Action does.
- Standard Interface: The run/2 function is the entry point for executing the Action's logic.
- Tool Conversion: Crucially, Jido.Action includes functionality (like to_tool/0) that converts the Action's definition into a format compatible with LLM frameworks like Langchain, including generating a JSON schema for the parameters. This allows the LLM to understand how to call the Action.
You can learn more about Actions in the Actions & Workflows guide.
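Before wiring the Action into an agent, you can exercise it on its own. The snippet below is a sketch, not canonical usage: it calls run/2 directly with every parameter spelled out (a bare run/2 call may bypass the schema defaults that Jido's execution layer would normally apply) and peeks at the tool definition via to_tool/0; the exact shapes of both results depend on your jido and jido_tools versions.

# Try the Action in isolation using test data (no API key needed).
# All params are passed explicitly, since a direct run/2 call may skip schema defaults.
{:ok, report} =
  Jido.Tools.Weather.run(
    %{location: "Tokyo", units: "metric", hours: 24, format: "text", test: true},
    %{}
  )

IO.inspect(report, label: "weather report")

# Inspect the tool definition handed to LLM frameworks (shape may vary by version).
Jido.Tools.Weather.to_tool() |> IO.inspect(label: "weather tool")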
2. The Weather Agent Module
Next, we define the agent itself. This module brings together the configuration and provides the user-facing interface.
defmodule WeatherAgent do
  use Jido.Agent, name: "weather_agent"

  # Start the underlying Jido AI Agent server
  def start_link(_opts \\ []) do
    Jido.AI.Agent.start_link(
      agent: __MODULE__, # Link this module to the server
      # log_level: :info, # Optional: Increase agent logging
      ai: [ # AI-specific configuration
        model: {:openai, model: "gpt-4o-mini"}, # Specify the LLM
        prompt: """
        You are an enthusiastic weather reporter.
        <%= @message %>
        """, # Define the agent's persona and base prompt
        tools: [
          Jido.Tools.Weather # Make our Weather Action available as a tool
        ]
        # verbose: true # Optional: Enable verbose Langchain logging
      ]
    )
  end

  # Delegate specific calls to the underlying Jido.AI.Agent
  defdelegate chat_response(pid, message), to: Jido.AI.Agent
  defdelegate tool_response(pid, message), to: Jido.AI.Agent
end

# Start the Weather Agent process
{:ok, pid} = WeatherAgent.start_link()
This module uses Jido.Agent, marking it as a Jido agent process. The key part is Jido.AI.Agent.start_link/1. This function, provided by the jido_ai package, sets up a specialized Jido Agent GenServer pre-configured with AI capabilities. We pass it:

- agent: __MODULE__: Links this definition to the running process.
- ai: [...]: A keyword list containing configuration for the AI behavior:
  - model: Specifies which LLM to use (via the Jido.AI.Model abstraction). Here, we use OpenAI's GPT-4o Mini.
  - prompt: A base prompt defining the agent's persona. The <%= @message %> is an EEx template snippet where the user's actual query will be injected.
  - tools: A list of Jido.Action modules that the agent is allowed to use. This is where we connect our Jido.Tools.Weather Action.
- We also delegate the chat_response/2 and tool_response/2 functions to Jido.AI.Agent, providing a clean API for interacting with our specific WeatherAgent.
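With the process started, you talk to the agent through these delegated functions. A minimal sketch, assuming the pid returned by start_link/1 above; the exact reply shape comes from the underlying jido_ai Actions and may differ between versions:

# Plain chat (no tool needed) versus a tool-backed response.
# Reply shapes are assumed to mirror the underlying Actions' return values.
chat = WeatherAgent.chat_response(pid, "Introduce yourself in one sentence.")
weather = WeatherAgent.tool_response(pid, "What is the weather like in Tokyo right now?")

IO.inspect(chat, label: "chat")
IO.inspect(weather, label: "weather")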
3. The Jido AI Agent & Skill (jido_ai)
The Jido.AI.Agent module provides the core implementation for AI-powered agents. It builds upon the standard Jido.Agent and integrates the Jido.AI.Skill.
# Simplified from jido_ai/lib/jido_ai/agent.ex
defmodule Jido.AI.Agent do
  use Jido.Agent, name: "jido_ai_agent"
  # ...

  @default_opts [
    skills: [Jido.AI.Skill], # Default skill providing AI capabilities
    agent: __MODULE__
  ]

  def start_link(opts) do
    opts = Keyword.merge(@default_opts, opts)
    # Starts the GenServer with merged config (including model, prompt, tools passed in)
    Jido.Agent.Server.start_link(opts)
  end

  # Function called by our WeatherAgent.tool_response
  def tool_response(pid, message) do
    # Build a Jido Signal representing the request
    {:ok, signal} = build_signal("jido.ai.tool.response", message)
    # Send the signal to the agent process (synchronous call)
    call(pid, signal)
  end

  defp build_signal(type, message) do
    Jido.Signal.new(%{type: type, data: %{message: message}})
  end

  # ...
end
When WeatherAgent.tool_response/2 is called, it delegates to Jido.AI.Agent.tool_response/2. This function wraps the user's message into a Jido.Signal with the type "jido.ai.tool.response" and sends it to the agent process using Jido.Agent.call/2.
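To make that delegation concrete, here is roughly the by-hand equivalent of WeatherAgent.tool_response/2, assuming the pid from start_link/1 above; the reply shape depends on the Action that ultimately handles the signal:

# Build the same signal Jido.AI.Agent builds internally...
{:ok, signal} =
  Jido.Signal.new(%{
    type: "jido.ai.tool.response",
    data: %{message: "What is the weather like in Tokyo right now?"}
  })

# ...and send it synchronously to the agent process.
result = Jido.Agent.call(pid, signal)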
Signals are the standard messaging format within Jido. The agent process receives this signal, and this is where the Jido.AI.Skill comes in.
# Simplified from jido_ai/lib/jido_ai/skill.ex
defmodule Jido.AI.Skill do
  use Jido.Skill, name: "jido_ai_skill"
  # ... defines opts_schema for :model, :prompt, :tools etc.

  # Defines how signal types map to Actions
  def router(_opts \\ []) do
    [
      {"jido.ai.chat.response", %Instruction{action: Jido.AI.Actions.Instructor.ChatResponse}},
      {"jido.ai.tool.response", %Instruction{action: Jido.AI.Actions.Langchain.ToolResponse}} # <- Our match!
    ]
  end

  # Pre-processes the signal before the Action runs
  def handle_signal(%Signal{type: "jido.ai.tool.response"} = signal, skill_opts) do
    # Extracts model, prompt, tools from skill_opts (originally from start_link)
    # Renders the user message into the base prompt template
    # Bundles everything into parameters for the ToolResponse Action
    base_prompt = Keyword.get(skill_opts, :prompt)
    rendered_prompt = render_prompt(base_prompt, signal.data) # Injects message
    tools = Keyword.get(skill_opts, :tools, [])
    model = Keyword.get(skill_opts, :model)
    # ...

    tool_response_params = %{
      model: model,
      prompt: rendered_prompt,
      tools: tools
      # ...
    }

    # Update the signal's data payload with the prepared params
    {:ok, %{signal | data: tool_response_params}}
  end

  # ...
end
The Jido.AI.Skill is configured by default in Jido.AI.Agent. Its router/1 function maps the incoming signal type "jido.ai.tool.response" to an Instruction to execute the Jido.AI.Actions.Langchain.ToolResponse Action.

Before the action runs, the skill's handle_signal/2 callback intercepts the signal. It takes the configuration passed during start_link (like the model, base prompt, and list of tools) and the incoming message from the signal data. It renders the user's message into the EEx prompt template and packages everything neatly as parameters for the next step: the ToolResponse Action.
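The prompt rendering itself is plain EEx templating. render_prompt/2 is internal to the skill, but for our example its effect is roughly the following (an illustration, not the actual implementation):

# Roughly what rendering the base prompt with the user's message produces.
EEx.eval_string(
  """
  You are an enthusiastic weather reporter.
  <%= @message %>
  """,
  assigns: [message: "What is the weather like in Tokyo right now?"]
)
# => "You are an enthusiastic weather reporter.\nWhat is the weather like in Tokyo right now?\n"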
4. The Tool Response Action (jido_ai)
This is where the core LLM interaction happens, coordinated by Langchain.
# Simplified from jido_ai/lib/jido_ai/actions/langchain/tool_response.ex
defmodule Jido.AI.Actions.Langchain.ToolResponse do
  use Jido.Action, name: "generate_tool_response"
  # ... schema defines :model, :prompt, :tools, etc.

  alias Jido.AI.Actions.Langchain, as: LangchainAction # The lower-level Langchain action

  def run(params, _context) do
    # Params received from Jido.AI.Skill.handle_signal
    # (contains rendered prompt, model, list of tool Actions)

    # Prepare parameters for the underlying Langchain action
    completion_params = %{
      model: params.model,
      prompt: params.prompt, # Already rendered with user message
      tools: params.tools, # List of Jido.Action modules [Jido.Tools.Weather]
      temperature: params.temperature || 0.7,
      verbose: params.verbose || false
    }

    # Delegate to the lower-level Langchain action
    case LangchainAction.run(completion_params, %{}) do
      {:ok, %{content: content, tool_results: tool_results}} ->
        # If successful, return the final content and any tool results
        {:ok, %{result: content, tool_results: tool_results}}

      {:error, reason} ->
        # Handle errors from the Langchain interaction
        {:error, reason}
    end
  end
end
This action receives the prepared parameters from the Jido.AI.Skill. Its main job is to:

- Package the model, the final prompt (with the user message injected), and the list of available tools ([Jido.Tools.Weather]).
- Call the underlying Jido.AI.Actions.Langchain.run/2 function. This lower-level action handles the actual communication with the LLM via the langchain hex package.
- Crucially, Jido.AI.Actions.Langchain takes the list of Jido tools, converts them into the format the LLM expects (using YourAction.to_tool()), and enables Langchain's function/tool-calling mechanism.
- The LLM receives the prompt and the descriptions of the available tools. It decides if a tool is needed. If yes (like in our weather query), it determines which tool (weather) and what parameters to use (e.g., %{location: "Tokyo"}).
- Langchain receives this decision back from the LLM. It then finds the corresponding Jido Action (Jido.Tools.Weather) and executes its run/2 function with the parameters provided by the LLM (%{location: "Tokyo"}).
- The result from Jido.Tools.Weather.run/2 (the weather data) is sent back to the LLM.
- The LLM uses this result, combined with the original prompt ("You are an enthusiastic weather reporter…"), to generate the final, user-friendly text response.
- This final text response (content) is returned by LangchainAction.run/2 and then by ToolResponse.run/2.
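To see this layer in isolation, you could invoke the Action directly, outside the agent server. This is only a sketch: it assumes the {:openai, model: ...} tuple and an already-rendered prompt string are accepted as-is, just as the skill prepares them, and it needs valid OpenAI credentials configured for the langchain package.

# Exercising the ToolResponse Action by hand (sketch; params mirror what
# Jido.AI.Skill.handle_signal/2 would prepare for our Tokyo question).
{:ok, %{result: text, tool_results: _tool_results}} =
  Jido.AI.Actions.Langchain.ToolResponse.run(
    %{
      model: {:openai, model: "gpt-4o-mini"},
      prompt: "You are an enthusiastic weather reporter.\nWhat is the weather like in Tokyo right now?",
      tools: [Jido.Tools.Weather],
      temperature: 0.7,
      verbose: false
    },
    %{}
  )

IO.puts(text)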
Pulling It All Together
So, when you call WeatherAgent.tool_response(pid, "What is the weather like in Tokyo right now?"):

- WeatherAgent -> Jido.AI.Agent: The call delegates, creating a Signal of type "jido.ai.tool.response" with the message.
- Jido.Agent.Server: Receives the signal.
- Jido.AI.Skill: The router maps the signal type to Jido.AI.Actions.Langchain.ToolResponse. handle_signal prepares the parameters (rendered prompt, model, tools).
- Jido.AI.Actions.Langchain.ToolResponse: Executes its run function with the prepared params.
- Jido.AI.Actions.Langchain: Called by ToolResponse. It talks to the LLM (GPT-4o Mini), providing the prompt and the Jido.Tools.Weather tool definition.
- LLM Decision: The LLM decides to call the weather tool with location: "Tokyo".
- Langchain Execution: Langchain triggers Jido.Tools.Weather.run(%{location: "Tokyo"}, _context).
- Jido.Tools.Weather: Fetches (test) weather data and returns it.
- LLM Formatting: The weather data goes back to the LLM.
- Final Response: The LLM generates the enthusiastic weather report based on the data and the initial prompt.
- Return: The final text bubbles back up through the actions, skill, and agent server to the original caller.
The power here is how Jido components (Agents, Skills, Actions) and jido_ai provide the structure and orchestration, letting Langchain and the LLM handle the complex natural language understanding and tool coordination.
Agent Showcase
Let’s see it handle a few more questions, just like in the Livebook:
# Ask about tomorrow's weather
WeatherAgent.tool_response(pid, "Will I need an umbrella in Paris tomorrow?")
# Ask about multiple locations
WeatherAgent.tool_response(pid, "Compare the weather in New York and San Francisco today.")
# Ask a more complex weather question
WeatherAgent.tool_response(pid, "Is it a good day for hiking in the mountains near Seattle?")
Conclusion
This Weather Agent example, while simple, demonstrates the core concepts of building AI-powered agents with Jido and jido_ai. By composing Actions (like Jido.Tools.Weather) and leveraging the pre-built Jido.AI.Agent and Jido.AI.Skill, we can quickly create agents that understand natural language, use tools to interact with the world (or APIs), and respond intelligently.
The separation of concerns – Actions for specific tasks, Skills for capability management, and Agents for state and process lifecycle – makes the system modular and extensible.
There’s much more to explore in Jido, including state management, complex workflows, signal routing, and more. Check out the resources below to learn more!
Additional Resources
- Website: https://agentjido.xyz
- Jido Documentation: https://hexdocs.pm/jido
- Jido AI Documentation: https://hexdocs.pm/jido_ai
- Jido Tools Documentation: https://hexdocs.pm/jido_tools