A Developer's Guide to Using the Gemini CLI for Real-World Tasks
The Gemini CLI has been making waves in the development community, offering a powerful new way to integrate AI into daily workflows. This article provides a real-world example of its capabilities, demonstrating how to build a feature, fix a bug, and generate documentation for an MCP server, often handling these tasks concurrently.
Getting Started with Gemini CLI
Installation instructions are available on the official GitHub repository. The tool bears a striking resemblance to other AI command-line interfaces, and installation is straightforward: a single global install (via npm) puts the `gemini` command on your path.
Authentication can be linked to a Google account, which provides a generous number of free daily requests without needing a dedicated Gemini API key.
Once installed, launch the tool by running `gemini` in your terminal. The first launch prompts for a theme preference, which you can change at any time by typing `/theme` and selecting from the available options, such as Dracula.
A Practical Workflow Example
For an efficient development setup, using a terminal multiplexer like `tmux` is highly recommended. This allows for a multi-pane layout where you can have Gemini running in one pane and your project shell in another.

The project for this demonstration is a Python library called `random-number-mcp`.
Shell Integration
Within the Gemini interface, typing `!` switches to a shell mode. This allows you to execute standard shell commands directly without them being processed by the model. For example:

```shell
!ls
!tree -L 2
```

This is useful for exploring the project structure. Typing `!` again returns you to the agent mode.
To begin, you can instruct Gemini to familiarize itself with the project by reading a key file:
Read the README file to understand the project and then wait for further instruction.
Gemini processes this and confirms its understanding almost immediately.
Understanding the Project and the Bug
Before diving into the code, it's essential to understand how the MCP server works. It provides several tools that a language model can use, such as `random_int`, `random_shuffle`, and `random_floats`.
For instance, a prompt like "roll a d20" would trigger the server to return a random integer between 1 and 20. A more complex query, such as "what should I listen to?" with a list of options, would use the `random_choices` tool.
However, a bug exists in this implementation. When the language model passes weights to the `random_choices` tool, it sends them as a string instead of a list of numbers:

```
"weights": "['0.7', '0.2', '0.1']"
```

This causes a validation error on the server, which expects a list of floats or integers. The goal is to use Gemini to help fix this issue.
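To make the mismatch concrete, here is a minimal sketch of the kind of check that rejects the payload (the function name and exact validation logic are illustrative, not taken from the project):

```python
# Hypothetical sketch of the server-side type check (names are illustrative).
def validate_weights(weights):
    # The real server expects a list of floats or integers.
    if not isinstance(weights, list) or not all(
        isinstance(w, (int, float)) for w in weights
    ):
        raise ValueError("weights must be a list of floats or integers")
    return weights

validate_weights([0.7, 0.2, 0.1])  # accepted

try:
    # The model's payload arrives as one string, not a list of numbers.
    validate_weights("['0.7', '0.2', '0.1']")
except ValueError as err:
    print(err)
```

Any validation layer with a typed schema (Pydantic, for example) would fail the same way: the value is a `str`, not a `list`.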
Tackling Multiple Tasks Concurrently
A common development scenario involves handling multiple tasks at once. In this case, the to-do list includes:
1. Fixing the validation error.
2. Adding documentation for running the app locally.
3. Implementing a new `random_sample` function.
A powerful strategy is to spawn a separate Gemini instance for each distinct task. This keeps the context for each task isolated.
Task 1: Fixing the Validation Error
In the first Gemini instance, the task is to address the validation error, which occurs in the `server.py` file.
The initial prompt to Gemini, with the `server.py` file attached to the context, looks like this:

The user is getting an error with validation. I want to accept a string as input in case the user tries to do that, and then convert it to a list in the app.
Gemini analyzes the request and proposes a change. Its first attempt involves using `ast.literal_eval` to evaluate the string:

```python
# First attempt by Gemini
import ast

# ... inside the function
if isinstance(weights, str):
    weights = ast.literal_eval(weights)
```
This works for well-formed input, but it isn't the right tool here: while `ast.literal_eval` cannot execute arbitrary code (unlike `eval`), it accepts Python-specific syntax and can crash the process on maliciously crafted input. Since the model is sending serialized data, parsing the string as JSON is the safer, more precise approach.
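As a quick illustration of the difference, `json.loads` parses a proper JSON array but strictly rejects Python-style literals, which `ast.literal_eval` accepts (the payload strings here are illustrative, assuming the model sends a double-quoted JSON array):

```python
import ast
import json

# json handles a proper JSON array of numbers...
print(json.loads("[0.7, 0.2, 0.1]"))  # [0.7, 0.2, 0.1]

# ...but rejects Python-style single quotes, which ast.literal_eval accepts.
try:
    json.loads("['0.7', '0.2', '0.1']")
except json.JSONDecodeError:
    print("not valid JSON")

print(ast.literal_eval("['0.7', '0.2', '0.1']"))  # ['0.7', '0.2', '0.1']
```

The strictness is a feature: a parse failure surfaces immediately as an error instead of silently accepting whatever Python expression the model happened to emit.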
The feedback provided to Gemini is simple and direct:

Don't use `ast`. Use `json`.
This iterative feedback loop is crucial. While Gemini works on refining the solution, you can move on to the next task.
Task 2: Building Documentation
In a second Gemini instance, the goal is to create documentation. The prompt includes a clear example of what the final output should look like.
Build out the documentation for being able to run this app locally using UV with Claude desktop for this tool.
Here is an example of the format:
### Running Locally with Claude Desktop

To run this tool locally for development with Claude Desktop, you can add it to your `tools.json` config file.

```json
{
  "name": "random_number_dev",
  "description": "Generates random numbers, shuffles lists, and makes random choices.",
  "owner": "GALA",
  "contact_email": "[email protected]",
  "source_code_url": "https://github.com/gala-labs/random-number-mcp",
  "server_url": "http://0.0.0.0:3456/invoke"
}
```
Gemini takes this instruction and begins drafting the documentation in the background.
Task 3: Adding a New Feature
A third Gemini instance is opened to add a new `random_sample` tool.
The prompt is specific:
Add a random sampling tool (withOUT replacement) using `random.sample` from the Python standard library.
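For context, `random.sample` draws `k` distinct elements from a population, which is exactly what sampling without replacement means:

```python
import random

days = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

# random.sample returns k distinct elements; no value can appear twice.
picked = random.sample(days, k=2)
print(picked)
```

By contrast, `random.choices` (which backs the existing tool) samples *with* replacement and can return duplicates.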
Note on Performance

When running multiple instances and intensive tasks, Gemini might automatically switch from a more powerful model (like Pro) to a faster, less capable one (like Flash). This can lead to a decrease in the quality of the output, which is a notable drawback to be aware of.
Reviewing and Refining the Results
After letting the AI work, it's time to review the changes.
Bug Fix Update
The first instance, tasked with the bug fix, now proposes a much better solution using `json.loads`:

```python
# Second, improved attempt by Gemini
import json

# ... inside the function
if isinstance(weights, str):
    weights = json.loads(weights)
```
To make it more robust, a final piece of feedback is given:

Raise an error if there's a problem with the `json.loads`.
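Putting both rounds of feedback together, the resulting coercion logic might look something like this sketch (`coerce_weights` is an illustrative name; in the project the logic lives inside the tool function in `server.py`):

```python
import json

def coerce_weights(weights):
    # Accept either a list of numbers or a JSON-encoded string such as
    # "[0.7, 0.2, 0.1]", and fail loudly on anything else.
    if isinstance(weights, str):
        try:
            weights = json.loads(weights)
        except json.JSONDecodeError as exc:
            raise ValueError(f"weights is not valid JSON: {weights!r}") from exc
    if not isinstance(weights, list):
        raise ValueError("weights must decode to a list of numbers")
    return weights

print(coerce_weights("[0.7, 0.2, 0.1]"))  # [0.7, 0.2, 0.1]
```

Re-raising as `ValueError` keeps the error surface consistent with the server's other validation failures instead of leaking a parsing exception.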
This demonstrates the value of keeping contexts separate, allowing for focused, iterative refinement on a single task.
New Feature Integration
The instance working on the `random_sample` tool successfully added the function to the `tools.py` file. However, it initially forgot to import and expose it in the main `server.py` file.
A quick follow-up prompt corrects this:
Source this tool up in the server file for users.
Gemini, with the correct context, quickly makes the change. It then attempts to run tests to validate its work. The initial test command fails because it doesn't use the project's specific environment manager, `uv`.
Another corrective prompt is needed:

Try that shell command again, but use `uv` as my environment. Refer to the README for instructions.
Gemini figures it out, runs the tests correctly with `uv run pytest`, and they pass. It even generated a comprehensive test suite for the new `random_sample` tool.
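A test in that suite might look roughly like the following sketch (the generated tests themselves aren't shown in this walkthrough, so the names and assertions here are illustrative):

```python
import random

def random_sample(population, k):
    # Illustrative stand-in for the new tool, which wraps random.sample.
    return random.sample(population, k)

def test_random_sample_is_without_replacement():
    result = random_sample(list(range(100)), 10)
    assert len(result) == 10       # returns exactly the requested sample size
    assert len(set(result)) == 10  # no duplicates: sampling without replacement
```

Such a test pins down the tool's defining property, that no element can be drawn twice.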
Final Verification
The last step is to test all the changes together. The documentation generated by Gemini provides the necessary configuration for running the local server.
This configuration is added to the Claude Code `tools.json` file, pointing to the local development server.
After restarting the environment to load the new tools, the bug fix is tested first. The prompt that previously caused an error now works perfectly. The server correctly handles the string-formatted `weights` and returns a valid response.
Next, the new `random_sample` feature is tested with a prompt designed to trigger it:
I've been meaning to meditate more, but I can never pick the right time. You need to pick it for me. Pick two random days of the week and tell me what time I should meditate.
The model correctly uses the new `random_sample` tool to pick two unique days, demonstrating that the new feature is fully integrated and working as expected. The final output successfully provides two distinct meditation times on different days.