Extracting Training Data from Large Language Models
Hacking a Natural-Language AI Model
Abstract
It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.
We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data.
We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. For example, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
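The attack the paper describes is essentially generate-then-rank: sample a large number of sequences from the model, score each candidate with a membership-inference-style metric (for instance the model's own perplexity, or its ratio against a smaller model or zlib compression), and manually inspect the best-scoring candidates. The sketch below illustrates the simplest variant of that loop using the Hugging Face transformers implementation of GPT-2; the sampling parameters and helper function names are illustrative choices, not the authors' code.

```python
# Minimal generate-then-rank sketch of a training data extraction attack,
# using Hugging Face transformers (illustrative, not the paper's implementation):
#   1) sample many sequences from GPT-2,
#   2) score each candidate by the model's own perplexity,
#   3) inspect the lowest-perplexity ("most memorized-looking") candidates.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sample_candidates(n_samples=20, max_length=64):
    """Generate unconditioned samples from the model."""
    input_ids = torch.full((n_samples, 1), tokenizer.bos_token_id, dtype=torch.long)
    with torch.no_grad():
        out = model.generate(
            input_ids,
            do_sample=True,
            max_length=max_length,
            top_k=40,  # truncated sampling
            pad_token_id=tokenizer.eos_token_id,
        )
    return [tokenizer.decode(seq, skip_special_tokens=True) for seq in out]

def perplexity(text):
    """Perplexity of a candidate under the same model (lower = more likely memorized)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

candidates = sample_candidates()
scored = sorted((perplexity(text), text) for text in candidates)
for ppl, text in scored[:10]:
    print(f"{ppl:8.2f}  {text[:80]!r}")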
- According to this paper, jointly published by researchers from Google, Harvard, Stanford, OpenAI, and Apple, specific examples from the training data can be extracted simply by querying a large language model.
- The attack against GPT-2 recovered news headlines and personal information such as home addresses with very high accuracy.
- Since other language models, not just GPT-2, may be vulnerable to this kind of attack, more care should be taken when preprocessing training data (a rough sketch of such preprocessing follows below).
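One concrete form of that preprocessing is deduplicating the corpus and scrubbing obvious personally identifiable information before training, which the paper discusses as a mitigation. The following is a toy sketch under those assumptions: hash-based removal of (near-)duplicate documents plus simple regex masking. The regex patterns and placeholder tokens are purely illustrative and far from a complete defense.

```python
# Illustrative preprocessing sketch: drop exact/near-duplicate documents and
# mask obvious PII patterns before training. A toy version of the
# "deduplicate and sanitize the corpus" safeguard, not a complete defense.
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")   # crude email pattern
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")        # crude phone pattern

def normalize(doc: str) -> str:
    """Lowercase and collapse whitespace so trivial variants hash identically."""
    return " ".join(doc.lower().split())

def dedupe_and_scrub(docs):
    seen = set()
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest in seen:  # skip documents we have already kept
            continue
        seen.add(digest)
        doc = EMAIL_RE.sub("<EMAIL>", doc)
        doc = PHONE_RE.sub("<PHONE>", doc)
        yield doc

corpus = [
    "Contact me at jane.doe@example.com or +1 415 555 0100.",
    "Contact me at  jane.doe@example.com or +1 415 555 0100.",  # near-duplicate
    "A completely different document.",
]
print(list(dedupe_and_scrub(corpus)))
```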
See also
Documentation
- Extracting Training Data from Large Language Models: https://arxiv.org/abs/2012.07805