Augmenting Large Language Model (LLM) applications with tools that interact with third-party services enables LLMs to retrieve up-to-date knowledge and perform actions on behalf of users. However, this added capability introduces security and privacy risks. In the current paradigm, users delegate potentially sensitive resources to LLM apps, which leaves the platforms overprivileged: a malicious platform or a rogue model can, for instance, stealthily abuse a shared email-sending or trigger-action platform (TAP) token. We propose LLMacaroon, a practical and secure architecture that removes the need to trust applications with sensitive resources and shifts control back to users. LLMacaroon achieves flexible, controlled sharing via macaroons and improves transparency and control via a local action proxy with an optional human in the loop. We demonstrate that LLMacaroon requires minimal changes to existing LLM apps and is compatible with major platforms such as ChatGPT across various use cases.
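To make the sharing mechanism concrete, the sketch below illustrates the style of attenuated delegation that macaroons enable, using the open-source pymacaroons library. The service location, token identifier, caveat predicates, and key are hypothetical placeholders for illustration, not identifiers from LLMacaroon itself.

```python
# A minimal sketch of macaroon-based attenuated delegation with the
# pymacaroons library. All names here (location, identifier, caveats,
# key) are hypothetical, not taken from the LLMacaroon paper.
from pymacaroons import Macaroon, Verifier

SECRET_KEY = "user-held-root-secret"  # stays with the user, never the LLM app

# The user mints a macaroon for an email-sending capability...
token = Macaroon(
    location="mail.example.com",
    identifier="llm-app-email-token",
    key=SECRET_KEY,
)

# ...and attenuates it with caveats before handing it to the LLM app.
# Holders can add further caveats but can never remove existing ones.
token.add_first_party_caveat("action = send_email")
token.add_first_party_caveat("recipient_domain = example.com")

# A verifier (e.g., at the resource server or a local proxy) checks that
# the HMAC chain is intact and every caveat is satisfied by the request.
verifier = Verifier()
verifier.satisfy_exact("action = send_email")
verifier.satisfy_exact("recipient_domain = example.com")
assert verifier.verify(token, SECRET_KEY)  # raises on tampering or unmet caveats
```

Because each caveat is folded into the token's HMAC chain, a holder can only narrow the delegated capability, never broaden it, which is the property that lets users retain control after delegation.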