Augmenting large language model (LLM) applications with tools that interact with third-party services enables LLMs to retrieve up-to-date knowledge and perform actions on behalf of users. However, this added capability introduces security and privacy risks. In the current paradigm, users delegate potentially sensitive resources to LLM apps, leaving the platforms overprivileged. For instance, a malicious platform or rogue model can stealthily exploit shared email-sending or trigger-action platform (TAP) tokens. We propose LLMacaroon, a practical and secure architecture that removes the need to trust applications with sensitive resources and shifts control back to users. LLMacaroon achieves flexible, controlled sharing via macaroons and improves transparency and control through a local action proxy, optionally with a human in the loop. We demonstrate that LLMacaroon requires minimal changes to existing LLM apps and is compatible with major platforms such as ChatGPT across a variety of use cases.
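The core property macaroons provide for controlled sharing is attenuation: any holder can append restricting caveats to a token, but no one can remove them without the issuer's root key, because each caveat is folded into an HMAC chain. The following is a minimal illustrative sketch of that mechanism in pure Python (not the paper's implementation; the token identifier, caveat format, and key are all hypothetical):

```python
import hmac
import hashlib

def _mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

class Macaroon:
    """Minimal macaroon: an HMAC chain over appended caveats."""

    def __init__(self, root_key: bytes, identifier: str):
        self.identifier = identifier
        self.caveats = []
        self.sig = _mac(root_key, identifier.encode())

    def attenuate(self, caveat: str) -> "Macaroon":
        # Any holder can ADD a caveat (narrowing the token's power),
        # but removing one invalidates the signature chain.
        self.caveats.append(caveat)
        self.sig = _mac(self.sig, caveat.encode())
        return self

def verify(root_key: bytes, m: Macaroon, context: dict) -> bool:
    """Re-derive the HMAC chain and check every caveat against context."""
    sig = _mac(root_key, m.identifier.encode())
    for caveat in m.caveats:
        sig = _mac(sig, caveat.encode())
        key, _, value = caveat.partition("=")
        if context.get(key) != value:
            return False  # a caveat is not satisfied
    return hmac.compare_digest(sig, m.sig)

# Hypothetical example: a user shares an email token restricted to
# one action and one recipient before handing it to an LLM app.
root = b"service-root-key"
token = Macaroon(root, "email-send-token")
token.attenuate("action=send").attenuate("recipient=alice@example.com")
```

A verifier holding `root` accepts the token only for the exact attenuated scope, so a rogue app that tries to mail a different recipient fails verification even though it holds a valid-looking token.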
Introduction
The inherently dynamic nature of human communication demands adaptability in both spoken and written discourse, as individuals navigate diverse contexts and audiences. This linguistic malleability, commonly referred to as style, encompasses many textual attributes, including but not limited to formality, politeness, diction, and emotional tenor. Text style transfer (TST), a long-standing endeavor within the field of natural language processing (NLP), seeks to transform specific stylistic attributes of a text while preserving its fundamental meaning.
WARNING: This article was written by the author during high school, in a non-professional capacity. Meta-learning, or learning to learn, is a paradigm of machine learning algorithms that generalize using meta-knowledge of a certain form so that they can apply across various settings. Although it is originally a hallmark of human intelligence, numerous meta-learning perspectives and approaches have emerged in recent years. This paper provides an overview of recent meta-learning approaches, focusing on Model-Agnostic Meta-Learning (MAML, and its derivatives), meta-reinforcement learning, and few-shot (or zero/one-shot) learning, three methods that have risen to prominence in the past five years.
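The MAML idea mentioned above can be sketched on toy tasks: find an initialization that, after one inner gradient step per task, minimizes the post-adaptation loss. The sketch below uses hypothetical one-dimensional quadratic tasks with loss L_i(θ) = (θ − c_i)², so gradients are exact and the inner/outer structure is visible without any ML framework; the task values and learning rates are illustrative assumptions, not from the paper.

```python
import numpy as np

# Hypothetical tasks: task i has loss L_i(theta) = (theta - c_i)^2,
# whose optimum is c_i. MAML seeks an initialization theta that
# adapts well to every task after ONE inner gradient step.
task_optima = np.array([-1.0, 0.5, 2.0])  # illustrative task parameters
alpha, beta = 0.1, 0.05                   # inner / outer learning rates

theta = 5.0  # arbitrary starting initialization
for _ in range(500):
    meta_grad = 0.0
    for c in task_optima:
        # Inner loop: one gradient step on task i (dL_i/dtheta = 2(theta - c)).
        theta_i = theta - alpha * 2.0 * (theta - c)
        # Outer gradient: d L_i(theta_i) / d theta via the chain rule,
        # where d theta_i / d theta = 1 - 2*alpha.
        meta_grad += 2.0 * (theta_i - c) * (1.0 - 2.0 * alpha)
    # Outer loop: update the initialization on the averaged meta-gradient.
    theta -= beta * meta_grad / len(task_optima)
```

For these symmetric quadratics the meta-objective is minimized at the mean of the task optima, so `theta` converges toward 0.5; with richer models the inner step is what lets a single initialization specialize quickly to each task.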