Support

Have a question or found a bug? We're happy to help. Check the FAQ below or open an issue on GitHub.

Found a bug or have a feature request?

Open an issue on GitHub — it's the fastest way to get help.


Getting Started

Local models

On first launch, im.ai lets you browse and download open-source models directly within the app. Select a model from the list, tap Download, and wait for it to complete. Once downloaded, it runs entirely on your device — no internet needed.

Cloud providers

To connect a cloud provider, tap the model picker in the toolbar, choose a provider, and enter your API key when prompted. Keys are stored securely in the system Keychain.

Frequently Asked Questions

Which macOS and iOS versions are required?

im.ai requires macOS 14 Sonoma or later and iOS 17 or later. Apple Silicon Macs (M1 and newer) deliver the best local inference performance.

How much storage do local models use?

Model sizes vary widely. Small models like SmolLM2 360M take under 500 MB, while larger models like Llama 3 8B (Q4) use around 5 GB. You can delete any downloaded model from within the app to free up space.
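As a rough sanity check, you can estimate a quantized model's download size from its parameter count and the bits used per weight. The helper below is an illustrative back-of-the-envelope sketch, not the app's exact accounting; the ~10% overhead factor and the 4.5 effective bits for a Q4 quant are assumptions:

```python
def approx_model_size_gb(params_billion: float,
                         bits_per_weight: float,
                         overhead: float = 1.1) -> float:
    """Rough on-disk size: parameter count x bits per weight,
    plus ~10% (assumed) for embeddings, metadata, and
    quantization scales."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes * overhead / 1e9

# An 8B model at ~4.5 effective bits/weight lands around 5 GB,
# while a 360M model at the same quantization stays well under 500 MB.
print(approx_model_size_gb(8, 4.5))
print(approx_model_size_gb(0.36, 4.5))
```

These estimates line up with the figures above: roughly 5 GB for Llama 3 8B (Q4) and a few hundred megabytes for SmolLM2 360M.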

Can I use the app without an internet connection?

Yes — once a local model is downloaded, im.ai works completely offline. Cloud providers, of course, still require an internet connection.

Are my conversations stored in the cloud?

No. Conversation history is stored only in the app's local storage on your device. We have no servers and receive no conversation data of any kind. See our Privacy Policy for full details.

Is im.ai free?

Yes. im.ai is completely free to download and use. There are no in-app purchases, no subscriptions, and no premium tiers.

A cloud provider says my API key is invalid. What do I do?

Double-check that you've copied the key correctly with no extra spaces. Make sure the key is active in your provider's dashboard and that your account is in good standing. Some providers require billing to be set up before issuing keys.

The app crashes or a model fails to load. What should I do?

Large models can exceed available RAM and cause the OS to terminate the app. Try a smaller or more heavily quantized model variant (for example, Q4 instead of Q8, or a 3B model instead of an 8B). If the issue persists, please open an issue on GitHub with your device model and macOS/iOS version.

Contact & Community

The best way to reach us is through GitHub: