Search Results for author: Yujia Gong

Found 2 papers, 0 papers with code

Context Injection Attacks on Large Language Models

no code implementations30 May 2024 Cheng'an Wei, Kai Chen, Yue Zhao, Yujia Gong, Lu Xiang, Shenchen Zhu

This paper identifies how such integration can expose LLMs to misleading context from untrusted sources and fail to differentiate between system and user inputs, allowing users to inject context.

LLM Factoscope: Uncovering LLMs' Factual Discernment through Inner States Analysis

no code implementations27 Dec 2023 Jinwen He, Yujia Gong, Kai Chen, Zijin Lin, Chengan Wei, Yue Zhao

In this paper, we introduce the LLM factoscope, a novel Siamese network-based model that leverages the inner states of LLMs for factual detection.

Cannot find the paper you are looking for? You can Submit a new open access paper.