Leveraging Claude Code and Codex with AgentFS and LlamaIndex Workflows

Combining different tools can significantly boost what AI development workflows are capable of. This article is a hands-on guide to running Claude Code and Codex inside AgentFS, a fully virtualized file system, orchestrating the environment with LlamaIndex Workflows, and reading unstructured files, including PDFs and Word documents, with the help of LlamaCloud.

Setting Up and Running the Environment

First things first: clone the repository. Open your terminal and run:

git clone https://github.com/run-llama/agentfs-claude
cd agentfs-claude

Next, install the dependencies. Other package managers work too, but pnpm is recommended:

pnpm install

To run the demo with Codex, you’ll also need to install the Codex SDK separately. It is excluded from the default dependencies because it’s quite hefty (over 140 MB):

pnpm add @openai/codex-sdk

Also for the Codex demo, start the MCP server in a separate terminal window from the same directory:

pnpm run mcp-start

The MCP server is then live at http://localhost:3000/mcp. To configure Codex, add the MCP server entry to your global Codex settings at $HOME/.codex/config.toml. The AGENTS.md file contains instructions for using the server if you need some guidance.
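
For reference, the entry in $HOME/.codex/config.toml might look roughly like this. The server name agentfs is illustrative, and the exact keys for HTTP-based MCP servers depend on your Codex version, so check AGENTS.md for the project's canonical setup:

```toml
# $HOME/.codex/config.toml (illustrative sketch; key names may vary by Codex version)
[mcp_servers.agentfs]
url = "http://localhost:3000/mcp"
```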

When you’re ready to kick off the demo, simply run this command:

# for the first time
pnpm run start

# If you want to add more files to the database
pnpm run clean-start

Then follow the prompts in the terminal.

Understanding How It Works

Now, let’s dive into how everything works. Filesystem operations are executed through AgentFS rather than against real files on disk. This is made possible by the filesystem MCP server, which provides several essential tools:

  • read_file: Access a file by providing its path.
  • write_file: Create or modify a file by specifying its path and content.
  • edit_file: Change a file’s content by indicating the old string and what to replace it with.
  • list_files: Display all available files in the system.
  • file_exists: Check if a file exists using its path.
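
To make the tool semantics concrete, here is a toy in-memory sketch of the five operations. AgentFS itself is a real virtualized filesystem reached over MCP; the functions below only mirror the names and behavior of the tool list, nothing more:

```typescript
// Toy in-memory stand-in for the five filesystem-MCP tools (illustrative only).
const files = new Map<string, string>();

function write_file(path: string, content: string): void {
  // Create or overwrite a file at the given path.
  files.set(path, content);
}

function read_file(path: string): string {
  const content = files.get(path);
  if (content === undefined) throw new Error(`No such file: ${path}`);
  return content;
}

function edit_file(path: string, oldString: string, newString: string): void {
  // Replace the first occurrence of oldString with newString.
  write_file(path, read_file(path).replace(oldString, newString));
}

function list_files(): string[] {
  return [...files.keys()];
}

function file_exists(path: string): boolean {
  return files.has(path);
}

write_file("notes.md", "hello world");
edit_file("notes.md", "world", "AgentFS");
```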

So, here’s a peek behind the curtain when an agent runs:

  • Text files in the current directory are uploaded to a LibSQL database and indexed.
  • Non-text files, such as PDFs and Word documents, are first converted to markdown by LlamaParse and then uploaded.
  • When the agent runs a filesystem operation, it uses the tools provided by the filesystem MCP.
  • Other functionalities, such as WebSearch or Task management, work just like they normally do.
  • And if the agent tries to use restricted tools like Read, Write, or Edit, a PreToolUse hook kicks in, blocking the call and redirecting it to the right filesystem MCP tools instead.
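
As a rough sketch, a PreToolUse hook of this kind lives in Claude Code’s settings file. The matcher pattern follows the hooks configuration format, while the command script below is hypothetical:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read|Write|Edit",
        "hooks": [
          { "type": "command", "command": "./scripts/block-native-fs.sh" }
        ]
      }
    ]
  }
}
```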

Integration with LlamaIndex Workflows ties these pieces together:

  • Files load into AgentFS at the start of each workflow run.
  • User input, such as the prompt and session details, is gathered in a human-in-the-loop step.
  • Once everything is set, the agent executes.
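
The three phases above can be sketched as plain functions. The real project wires these together as LlamaIndex Workflows events; the function names and return shapes here are illustrative only:

```typescript
// Illustrative sketch of the three workflow phases (not the project's actual API).
type AgentResult = { output: string };

function loadFilesIntoAgentFS(dir: string): string[] {
  // Phase 1: in the real workflow, text files are indexed into LibSQL
  // and PDFs/Word documents go through LlamaParse first.
  return [`${dir}/task.docx`];
}

function gatherUserInput(): string {
  // Phase 2: human-in-the-loop prompt and session details from the terminal.
  return "Explore the files and complete the task.";
}

function runAgent(prompt: string, files: string[]): AgentResult {
  // Phase 3: hand off to the agent (Claude Code or Codex).
  return { output: `Ran with ${files.length} file(s): ${prompt}` };
}

function runWorkflow(dir: string): AgentResult {
  const files = loadFilesIntoAgentFS(dir);
  const prompt = gatherUserInput();
  return runAgent(prompt, files);
}
```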

Example Prompt for the Agent

To use the agent effectively, try prompting it like this:

Explore all the files available to you, find the task file, read it, and act in accordance with it. If needed, read other files (such as the document on observability in LlamaIndex Workflows).

The agent will first list all available files, then read data/task.docx and follow its instructions. If needed, it will also consult data/observability_in_llamaindex_workflows.pdf as directed by the task.

Contributing to the Project

Thinking about contributing? Before you dive in, make sure your code follows the formatting and linting guidelines by running:

pnpm run check

Also, double-check that all tests pass (feel free to add more tests if necessary):

# Node.js v22+ is required
pnpm run test

Once your code is clean and all tests pass, open a pull request from a non-default branch of your fork, such as feat/awesome-feature or fix/great-fix.

🎉

Thanks for reading!
