Substage converts natural language prompts into command line commands.
1. Select files, enter prompt
Just tell Substage what you want to do with the files you have selected.
Make a jpg plz
2. Substage generates a command
Using an AI model such as GPT-4o, it creates a command to run in your Mac's Terminal.
sips -s format jpeg ocean.png --out ocean.jpg
3. Confirm if necessary
Substage evaluates the risk of running the command and might require confirmation to proceed. You can sanity check the command if you'd like.
4. Command is run
Any output is boiled down into a summary of what happened.
Converted successfully!
Pricing
Substage is free to try for 2 weeks.
Substage subscriptions allow access to ALL of our AI models, including OpenAI, Anthropic, Google and Mistral. It's batteries included and ready to go - no setup required, just powerful AI at your fingertips.
Subscriptions not your thing? You can buy a permanent Bring Your Own AI license for Substage. All our great features, powered by your LLMs.
All Substage licenses come with Bring Your Own AI, which allows you to:
Use your own API keys for OpenAI, Anthropic, Google and Mistral.
Use your own local AI models, powered by Ollama, LM Studio or other OpenAI-compatible APIs.
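For example, a minimal local setup with Ollama might look like this (an illustrative sketch - the model name is just an example, and if you use the Ollama desktop app the server typically starts for you automatically):

ollama pull llama3.2
ollama serve

Ollama then exposes an OpenAI-compatible API at http://localhost:11434/v1, which is the kind of endpoint Substage's Bring Your Own AI option can talk to.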
Privacy and AI
Substage helps you work with files and folders on your Mac by translating natural language into terminal commands. To do this, it sends your prompt—plus some context—to AI providers such as OpenAI, Anthropic, Mistral, and Google.
By default, Substage also includes the paths of selected files and folders. This allows the AI to make more relevant suggestions—for example, proposing the name "screenshots.zip" when you select files named "screenshot1.jpg" and "screenshot2.jpg." That said, we know even filenames can be sensitive, and we're exploring ways to reduce what gets shared—such as sending only file extensions instead.
Substage never directly accesses or sends file contents. However, when summarising Terminal output, some content may be included. For instance, if you run a command that prints a file's contents and ask Substage to summarise the result, that content will be processed by the AI. Substage doesn't store any of this data itself.
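As a concrete (hypothetical) illustration: asking "what does this file say?" with notes.txt selected might generate something like:

cat notes.txt

The printed text is Terminal output, so it would be included when the result is summarised - whereas a conversion like the sips example above never prints the file's contents at all.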
Substage integrates with leading AI providers like OpenAI, Anthropic, Mistral, and Google. Each of these services has similar privacy policies—they typically retain data for a limited period for purposes like abuse monitoring, but state they don't use this data for model training.
For more privacy-conscious workflows, Substage also supports local AI models via tools like Ollama or LM Studio—though this requires additional setup and isn't the default experience.
We're always working to strike the right balance between usefulness and privacy, and we'd love to hear your thoughts or suggestions - email me or discuss on the Discord!
Limitations
Newsflash! AI can make mistakes. 😱
For most commands, we highly recommend small models such as GPT-4o mini. Substage is intended for quick, individual operations, such as file conversion, and with small AI models this can be a quick and snappy experience.
As soon as you increase the complexity of your request, things can get more unreliable. We recommend that more complex requests are only done by developers and tech-savvy users who understand the Terminal commands that are generated.
In order to understand Substage's further limitations, it's worth reviewing how it works under the hood:
Your prompt is converted to a Terminal command by an LLM.
A small amount of context is sent to the LLM, including the names of the files you have selected, and most importantly, the file extensions.
The result is a command that runs locally on your Mac, at which point the LLM is no longer involved. Whether it's a video conversion, a zip operation or a Word doc to RTF conversion, the LLM doesn't see the contents of these files.
However, there is one exception to this: Substage also uses an LLM to summarise the output of the Terminal command. If the output of the command was the entire content of a text file for example, then the LLM can summarise this content.
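To make that concrete, here's a sketch of the full round trip (the file names and exact command are illustrative - the model may phrase things differently):

Prompt: "zip these up as screenshots.zip" (with screenshot1.jpg and screenshot2.jpg selected)
Generated command: zip screenshots.zip screenshot1.jpg screenshot2.jpg
Summary: "Created screenshots.zip containing two images."

Only the prompt, the file names and the command's output pass through the LLM; the images themselves never leave your Mac.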
So what won't work?
Given that it's primarily a one-step process, it's important to understand that the following will not work:
"Organise this folder": Not possible, because the LLM isn't aware of the entire content on your computer, so wouldn't know where to start - this requires a lot more than a single Terminal command. However, if you have a specific idea of what you need, such as "put all mp4 files in a videos folder", then it works great.
Have a conversation with AI: Not possible, because each command is isolated. We have a feature request to allow some back and forth, but it's not yet implemented.
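For the "put all mp4 files in a videos folder" case above, the generated command might look something like this (a sketch - the actual command depends on the model and on the files you've selected):

mkdir -p videos && mv *.mp4 videos/

A single, self-contained command like this is exactly the shape of task Substage is designed for.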
In addition, we don't particularly recommend more complex operations, for example describing multiple steps in a single prompt. It's best to describe each step separately, and run them one at a time. For example:
"Convert all the mp3 files into aac, put them in a folder called 'audio' along with the originals, in separate sub-folders, and zip them all up": While this may work, it can be risky, since it's easy for the AI to get a tiny detail wrong. For most commands, we highly recommand small models such as GPT 4o-mini since it's nice and snappy, but in this instance (if you insist on trying it!), we'd recommend using a larger model such as GPT 4o.
We don't currently have integration with AI providers beyond the described Terminal command conversion process. So, for example, you can't ask Substage to:
"Rename these images based on their contents" - Substage currently has no mechanism for image content analysis, and would be a multi-step process that isn't currently supported.
And finally, a quick-fire round of things that won't work in Substage:
"Download all images from <some url>" - This would likely require a multi-step process. While it's conveivably possible to write this in a single line Terminal command, LLM aren't likely to get it right.
"Install <some package> using homebrew" - We disallow installing new software for safety reasons.
"Ask ChatGPT to generate an image of a donkey" - We don't integrate directly with AI providers behind Terminal commands.