Planex: A Command-Line Coding Agent for Large-Scale Projects

Cursor has a limit on how big your project can be because of the model's context size. Every day, people struggle with things not working or with Cursor messing up their projects. I found something better. It's called Planex, a command-line coding agent built for large-scale projects. In this article, I'll show you how to install it, how it works, and what you can do with it. Let's get started.

What is Planex?

The special thing about Planex is that it can handle up to 2 million tokens of context directly, which is a lot. It can also index directories with up to 20 million tokens or more. This is possible because it builds project maps with tree-sitter, a code-parsing library, which gives it a structural overview of the codebase so it can navigate large projects and load only the parts that matter.

Not only that, it uses multiple models through an OpenRouter API key and automatically picks the best one for the task at hand. This is why they say it's designed to be resilient for large codebases. Let's go ahead and see how to install it.

Installation

Now let's talk about the installation options.

Note: If you're on Windows, you need WSL or it won't work.
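If you haven't set up WSL yet, recent versions of Windows 10 and 11 can usually enable it with a single command from an elevated PowerShell prompt (this is the standard Windows way to get WSL, not a Planex-specific step):

```powershell
# Installs WSL with the default Ubuntu distribution (needs admin rights and a reboot)
wsl --install
```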

There are three ways to run it:

1. Planex Cloud: You don't need separate API keys. Everything runs in the cloud, and you can get started quickly. The quick-start guides are in the GitHub repo.
2. Planex Cloud with your own API keys: You bring your own keys but still use the cloud service.
3. Self-hosted local mode: You run Planex yourself with Docker and use your own API keys.

In this demo, we'll be working with the local mode. Let's set it up locally.

Local Mode Setup

This is the local mode quick-start guide, which is linked in the GitHub repo.

First, you need to clone the GitHub repo and start the server.

Note: Before you run this command, make sure Docker is installed, set up, and running. Otherwise, it will throw an error.
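If you want a quick sanity check before running anything, you can confirm that both the Docker CLI and the daemon respond:

```bash
# Confirm the Docker CLI is installed
docker --version

# Confirm the Docker daemon is actually running (this errors out if it isn't)
docker info
```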

Here is the command to run:

```bash
git clone https://github.com/plan-existential/planex.git
cd planex
docker compose up -d
```

Now the server is running locally for Planex.
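To verify the server actually came up, you can check the containers started by the compose file (an optional sanity check, not part of the quick-start guide):

```bash
# List the containers from the compose file and their status
docker compose ps

# Tail the server logs if something looks off
docker compose logs -f
```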

The next command goes into a new terminal to install the Planex CLI.

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/plan-existential/planex-cli/main/install.sh)"
```

The Planex CLI is now installed. It will attempt to use sudo during installation, so you'll need to enter your password.
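To confirm the install worked, you can check that the binary landed on your PATH. The `planex help` subcommand here is an assumption on my part, but most CLIs ship something like it:

```bash
# Confirm the planex binary is on your PATH
which planex

# Print the built-in help to see available commands (assumed subcommand)
planex help
```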

After that, in the same terminal, you'll sign in to Planex. Because you're running it locally, it will create a user for you. Just copy and paste the command:

```bash
planex auth login
```

After running it, it will ask how you're using Planex. Remember the three options I mentioned? We're selecting local mode. You'll then see a host address. If you look back, the server command runs Planex on a specific port, and that port shows up as the default option, so just press enter. It will create a user and sign you in.

And now, to start Planex, you just use the planex command. It will spin up a REPL for you in the project directory you want to work in.
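In practice, that just means changing into your project and launching the REPL (the directory here is only an example path):

```bash
# Move into the project you want Planex to work on (example path)
cd ~/code/my-project

# Launch the Planex REPL in that directory
planex
```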

Once in your desired directory, you need to initialize Planex. Before you do that, you need to export your OpenRouter API key and your OpenAI API key as environment variables. That's what Planex will use.
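That typically looks like the following. The variable names OPENROUTER_API_KEY and OPENAI_API_KEY are my assumption of the usual convention, so check the Planex docs if it expects different ones:

```bash
# Export the keys in the shell where you'll run Planex
# (variable names assumed; confirm them in the Planex docs)
export OPENROUTER_API_KEY="your-openrouter-key"
export OPENAI_API_KEY="your-openai-key"
```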

After you have your API keys exported, you can initialize Planex in any repo you want using this command:

```bash
planex init
```

Or a shortened version:

```bash
px init
```

Both work the same way and will initialize Planex. For more commands, you can check the instructions. There are also other commands you can use, like setting the configuration and changing Planex to auto mode. I'll explain what auto mode is in a moment. Right now, we're in chat mode.

To enable the mode that starts writing code, we use this command:

```bash
/tell
```

There is also a multi-line reading mode, but it is disabled for now.

How Planex Works

Planex has actually provided a pretty detailed demo. This is how Planex works:

You start in chat mode, and whatever you want to build, you tell it your ideas and brainstorm with it. Even if you don't know anything about the tech stack you're going to use, just flesh out your ideas. You don't need to have everything planned from the start. If you provide a ready-made project, Planex can go through the files and figure out where everything should go, thanks to the way it handles large context sizes.

When you're ready to code, you can switch to tell mode. Planex will ask you to switch when it thinks you're ready, and once tell mode is enabled, it will start the implementation. Just like many newer tools, Planex breaks down the main task into smaller steps, each focused on a single goal, and it works through them one by one.

Another thing is that whatever changes or files Planex creates happen inside a sandbox. After every tell mode session, you get a prompt where you can review the changes, apply them, or reject them. Both versions of the file stay separate until you approve the changes. This means you can test what you built first. Like Cursor, you can also run commands and set things up.

Another feature is debugging. If any commands fail after you accept them, you can turn on full auto mode, which will try different fixes by itself.

Warning: Be aware that full auto mode will use a lot of tokens and burn through your OpenRouter and OpenAI API credits, which can get expensive.

Demo: Improving a Swift App

I've opened a Swift app I made using Gemini 2.5 in a previous article and told Planex that this is the project. It read everything, recognized that it's a macOS menu bar application built with Swift and SwiftUI, and laid out the application overview, architecture, key components, and user flow. It now understands how the app works.

Let's see if we can make some changes to the app. Here's the prompt I'm going with:

I'm asking it to improve the UI of the Swift app, which is usually hard for AI models. Unless the implementation is done step by step, they struggle, because they're not deeply trained on Swift and can't retain context well across the code.

Let's see how it performs.

It reasoned through the project and is now asking to switch to tell mode. Let's do that. Now it's asking if we want to send another prompt or begin the implementation. Let's begin the implementation.

It built the plan and presented me with a menu. It didn't change the main files but created a new script to automatically build the app. I pressed A to apply the changes, and now it's asking if I want to execute. I said yes, and now it's executing. It will build the app, get the logs, and make changes based on that.

The build hit an error. It isn't a big one, but whenever a command fails, Planex gives you a menu where you can choose to debug once or debug in full auto mode. That's what I'm going to use right now.

Another plan has been built, and now it's applying the changes. Once all the changes are done, I'll show you what it has made.

The app was built, and I compiled it using Xcode. The result was a new settings screen. I can now go into settings, click on the accent color, and adjust it to any color I want. It's using the built-in macOS and SwiftUI components, which is great. You can now change the accent color to anything, like making it blue.

I made the original app in a previous article using Gemini 2.5, and at the time that was the only model that could do it. That's probably part of why this tool succeeded: it pulls from multiple models through OpenRouter. Everything works, and it looks pretty nice.

That's it for this article. Thanks as always for reading.