Open In Colab

Loading the Data and Installing Packages

!pip install ratsnlp
Collecting ratsnlp
  Downloading ratsnlp-1.0.1-py3-none-any.whl (42 kB)
     |████████████████████████████████| 42 kB 1.2 MB/s 
Requirement already satisfied: flask>=1.1.4 in /usr/local/lib/python3.7/dist-packages (from ratsnlp) (1.1.4)
Collecting transformers==4.10.0
  Downloading transformers-4.10.0-py3-none-any.whl (2.8 MB)
     |████████████████████████████████| 2.8 MB 41.6 MB/s 
Requirement already satisfied: torch>=1.9.0 in /usr/local/lib/python3.7/dist-packages (from ratsnlp) (1.10.0+cu111)
Collecting flask-cors>=3.0.10
  Downloading Flask_Cors-3.0.10-py2.py3-none-any.whl (14 kB)
Collecting flask-ngrok>=0.0.25
  Downloading flask_ngrok-0.0.25-py3-none-any.whl (3.1 kB)
Collecting Korpora>=0.2.0
  Downloading Korpora-0.2.0-py3-none-any.whl (57 kB)
     |████████████████████████████████| 57 kB 7.0 MB/s 
Collecting pytorch-lightning==1.3.4
  Downloading pytorch_lightning-1.3.4-py3-none-any.whl (806 kB)
     |████████████████████████████████| 806 kB 63.7 MB/s 
Requirement already satisfied: tensorboard!=2.5.0,>=2.2.0 in /usr/local/lib/python3.7/dist-packages (from pytorch-lightning==1.3.4->ratsnlp) (2.7.0)
Collecting PyYAML<=5.4.1,>=5.1
  Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB)
     |████████████████████████████████| 636 kB 73.3 MB/s 
Requirement already satisfied: tqdm>=4.41.0 in /usr/local/lib/python3.7/dist-packages (from pytorch-lightning==1.3.4->ratsnlp) (4.62.3)
Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from pytorch-lightning==1.3.4->ratsnlp) (21.3)
Collecting torchmetrics>=0.2.0
  Downloading torchmetrics-0.7.0-py3-none-any.whl (396 kB)
     |████████████████████████████████| 396 kB 70.6 MB/s 
Collecting future>=0.17.1
  Downloading future-0.18.2.tar.gz (829 kB)
     |████████████████████████████████| 829 kB 73.8 MB/s 
Collecting pyDeprecate==0.3.0
  Downloading pyDeprecate-0.3.0-py3-none-any.whl (10 kB)
Collecting fsspec[http]>=2021.4.0
  Downloading fsspec-2022.1.0-py3-none-any.whl (133 kB)
     |████████████████████████████████| 133 kB 68.4 MB/s 
Requirement already satisfied: numpy>=1.17.2 in /usr/local/lib/python3.7/dist-packages (from pytorch-lightning==1.3.4->ratsnlp) (1.19.5)
Collecting sacremoses
  Downloading sacremoses-0.0.47-py2.py3-none-any.whl (895 kB)
     |████████████████████████████████| 895 kB 45.7 MB/s 
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers==4.10.0->ratsnlp) (2.23.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers==4.10.0->ratsnlp) (4.10.1)
Collecting huggingface-hub>=0.0.12
  Downloading huggingface_hub-0.4.0-py3-none-any.whl (67 kB)
     |████████████████████████████████| 67 kB 6.9 MB/s 
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers==4.10.0->ratsnlp) (2019.12.20)
Collecting tokenizers<0.11,>=0.10.1
  Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)
     |████████████████████████████████| 3.3 MB 66.8 MB/s 
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers==4.10.0->ratsnlp) (3.4.2)
Requirement already satisfied: click<8.0,>=5.1 in /usr/local/lib/python3.7/dist-packages (from flask>=1.1.4->ratsnlp) (7.1.2)
Requirement already satisfied: itsdangerous<2.0,>=0.24 in /usr/local/lib/python3.7/dist-packages (from flask>=1.1.4->ratsnlp) (1.1.0)
Requirement already satisfied: Jinja2<3.0,>=2.10.1 in /usr/local/lib/python3.7/dist-packages (from flask>=1.1.4->ratsnlp) (2.11.3)
Requirement already satisfied: Werkzeug<2.0,>=0.15 in /usr/local/lib/python3.7/dist-packages (from flask>=1.1.4->ratsnlp) (1.0.1)
Requirement already satisfied: Six in /usr/local/lib/python3.7/dist-packages (from flask-cors>=3.0.10->ratsnlp) (1.15.0)
Collecting aiohttp
  Downloading aiohttp-3.8.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
     |████████████████████████████████| 1.1 MB 66.1 MB/s 
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from huggingface-hub>=0.0.12->transformers==4.10.0->ratsnlp) (3.10.0.2)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from Jinja2<3.0,>=2.10.1->flask>=1.1.4->ratsnlp) (2.0.1)
Collecting dataclasses>=0.6
  Downloading dataclasses-0.6-py3-none-any.whl (14 kB)
Collecting xlrd>=1.2.0
  Downloading xlrd-2.0.1-py2.py3-none-any.whl (96 kB)
     |████████████████████████████████| 96 kB 7.7 MB/s 
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->pytorch-lightning==1.3.4->ratsnlp) (3.0.7)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==4.10.0->ratsnlp) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==4.10.0->ratsnlp) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==4.10.0->ratsnlp) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==4.10.0->ratsnlp) (2021.10.8)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (3.17.3)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (1.0.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (0.37.1)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (0.6.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (3.3.6)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (1.43.0)
Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (1.35.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (0.4.6)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (57.4.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (1.8.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (4.2.4)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (4.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (1.3.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers==4.10.0->ratsnlp) (3.7.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard!=2.5.0,>=2.2.0->pytorch-lightning==1.3.4->ratsnlp) (3.1.1)
Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->fsspec[http]>=2021.4.0->pytorch-lightning==1.3.4->ratsnlp) (2.0.11)
Collecting aiosignal>=1.1.2
  Downloading aiosignal-1.2.0-py3-none-any.whl (8.2 kB)
Collecting multidict<7.0,>=4.5
  Downloading multidict-6.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (94 kB)
     |████████████████████████████████| 94 kB 4.2 MB/s 
Collecting yarl<2.0,>=1.0
  Downloading yarl-1.7.2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (271 kB)
     |████████████████████████████████| 271 kB 67.5 MB/s 
Collecting async-timeout<5.0,>=4.0.0a3
  Downloading async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->fsspec[http]>=2021.4.0->pytorch-lightning==1.3.4->ratsnlp) (21.4.0)
Collecting frozenlist>=1.1.1
  Downloading frozenlist-1.3.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (144 kB)
     |████████████████████████████████| 144 kB 54.0 MB/s 
Collecting asynctest==0.13.0
  Downloading asynctest-0.13.0-py3-none-any.whl (26 kB)
Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers==4.10.0->ratsnlp) (1.1.0)
Building wheels for collected packages: future
  Building wheel for future (setup.py) ... done
  Created wheel for future: filename=future-0.18.2-py3-none-any.whl size=491070 sha256=3f1aeef72f51ff19e32178a95e240969639ff6bca15732e4f6f3c316c125c369
  Stored in directory: /root/.cache/pip/wheels/56/b0/fe/4410d17b32f1f0c3cf54cdfb2bc04d7b4b8f4ae377e2229ba0
Successfully built future
Installing collected packages: multidict, frozenlist, yarl, asynctest, async-timeout, aiosignal, PyYAML, pyDeprecate, fsspec, aiohttp, xlrd, torchmetrics, tokenizers, sacremoses, huggingface-hub, future, dataclasses, transformers, pytorch-lightning, Korpora, flask-ngrok, flask-cors, ratsnlp
  Attempting uninstall: PyYAML
    Found existing installation: PyYAML 3.13
    Uninstalling PyYAML-3.13:
      Successfully uninstalled PyYAML-3.13
  Attempting uninstall: xlrd
    Found existing installation: xlrd 1.1.0
    Uninstalling xlrd-1.1.0:
      Successfully uninstalled xlrd-1.1.0
  Attempting uninstall: future
    Found existing installation: future 0.16.0
    Uninstalling future-0.16.0:
      Successfully uninstalled future-0.16.0
Successfully installed Korpora-0.2.0 PyYAML-5.4.1 aiohttp-3.8.1 aiosignal-1.2.0 async-timeout-4.0.2 asynctest-0.13.0 dataclasses-0.6 flask-cors-3.0.10 flask-ngrok-0.0.25 frozenlist-1.3.0 fsspec-2022.1.0 future-0.18.2 huggingface-hub-0.4.0 multidict-6.0.2 pyDeprecate-0.3.0 pytorch-lightning-1.3.4 ratsnlp-1.0.1 sacremoses-0.0.47 tokenizers-0.10.3 torchmetrics-0.7.0 transformers-4.10.0 xlrd-2.0.1 yarl-1.7.2
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
import pandas as pd
import numpy as np

import warnings
warnings.filterwarnings("ignore")

path = '/content/drive/MyDrive/sentence/'

train = pd.read_csv(path + 'train_data.csv')
test = pd.read_csv(path + 'test_data.csv')
sample_submission = pd.read_csv(path + 'sample_submission.csv')

train.head()
index premise hypothesis label
0 0 씨름은 상고시대로부터 전해져 내려오는 남자들의 대표적인 놀이로서, 소년이나 장정들이... 씨름의 여자들의 놀이이다. contradiction
1 1 삼성은 자작극을 벌인 2명에게 형사 고소 등의 법적 대응을 검토 중이라고 하였으나,... 자작극을 벌인 이는 3명이다. contradiction
2 2 이를 위해 예측적 범죄예방 시스템을 구축하고 고도화한다. 예측적 범죄예방 시스템 구축하고 고도화하는 것은 목적이 있기 때문이다. entailment
3 3 광주광역시가 재개발 정비사업 원주민들에 대한 종합대책을 마련하는 등 원주민 보호에 ... 원주민들은 종합대책에 만족했다. neutral
4 4 진정 소비자와 직원들에게 사랑 받는 기업으로 오래 지속되고 싶으면, 이런 상황에서는... 이런 상황에서 책임 있는 모습을 보여주는 기업은 아주 드물다. neutral

We install the required packages and load the data.

About the BERT Tokenizer

from Korpora import Korpora
import os


nsmc = Korpora.load('nsmc', force_download=True)

def write_lines(path, lines):
    with open(path, 'w', encoding = 'utf-8') as f:
        for line in lines:
            f.write(f'{line}\n')

write_lines('/root/train.txt', nsmc.train.get_all_texts())
write_lines('/root/test.txt', nsmc.test.get_all_texts())
    Korpora 는 다른 분들이 연구 목적으로 공유해주신 말뭉치들을
    손쉽게 다운로드, 사용할 수 있는 기능만을 제공합니다.

    말뭉치들을 공유해 주신 분들에게 감사드리며, 각 말뭉치 별 설명과 라이센스를 공유 드립니다.
    해당 말뭉치에 대해 자세히 알고 싶으신 분은 아래의 description 을 참고,
    해당 말뭉치를 연구/상용의 목적으로 이용하실 때에는 아래의 라이센스를 참고해 주시기 바랍니다.

    # Description
    Author : e9t@github
    Repository : https://github.com/e9t/nsmc
    References : www.lucypark.kr/docs/2015-pyconkr/#39

    Naver sentiment movie corpus v1.0
    This is a movie review dataset in the Korean language.
    Reviews were scraped from Naver Movies.

    The dataset construction is based on the method noted in
    [Large movie review dataset][^1] from Maas et al., 2011.

    [^1]: http://ai.stanford.edu/~amaas/data/sentiment/

    # License
    CC0 1.0 Universal (CC0 1.0) Public Domain Dedication
    Details in https://creativecommons.org/publicdomain/zero/1.0/

[nsmc] download ratings_train.txt: 14.6MB [00:00, 218MB/s]
[nsmc] download ratings_test.txt: 4.90MB [00:00, 94.3MB/s]

NSMC is the Naver sentiment movie corpus. For this exercise, we downloaded it and saved it as plain text files.

from tokenizers import BertWordPieceTokenizer
wordpiece_tokenizer = BertWordPieceTokenizer(lowercase = False)
wordpiece_tokenizer.train(
    files=['/root/train.txt', '/root/test.txt'],
    vocab_size = 10000,
)

os.makedirs('/gdrive/My Drive/nlpbook/wordpiece', exist_ok= True)
wordpiece_tokenizer.save_model('/gdrive/My Drive/nlpbook/wordpiece')
['/gdrive/My Drive/nlpbook/wordpiece/vocab.txt']

We built a tokenizer in the BERT (WordPiece) style. A tokenizer is what splits a sentence into a sequence of tokens.

The BERT approach builds its vocabulary by treating character strings that appear frequently in the corpus as tokens and merging them.

At each step, the pair whose merge raises the likelihood of the corpus the most is merged first; the more often two substrings appear together, the more likely they are to be merged.

The WordPiece algorithm is hard to explain briefly, so please leave a comment if you have any questions.
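
For intuition, here is a toy sketch of the merge criterion (an illustration only, not the actual tokenizers trainer): the pair whose merge raises the corpus likelihood the most is, roughly, the pair with the highest freq(ab) / (freq(a) * freq(b)).

# Toy illustration of the WordPiece merge score (hypothetical mini-corpus).
from collections import Counter

words = ["짜증나네요", "짜증나", "재밌네요", "재밌다"]   # hypothetical toy corpus
tokens = [list(w) for w in words]                       # start from single characters

unigrams = Counter(ch for seq in tokens for ch in seq)
pairs = Counter((seq[i], seq[i + 1]) for seq in tokens for i in range(len(seq) - 1))

# score(a, b) = freq(ab) / (freq(a) * freq(b)); the highest-scoring pair is merged first.
scores = {p: c / (unigrams[p[0]] * unigrams[p[1]]) for p, c in pairs.items()}
best = max(scores, key=scores.get)
print("merge first:", best, "score:", round(scores[best], 3))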

!head /gdrive/My\ Drive/nlpbook/wordpiece/vocab.txt
[PAD]
[UNK]
[CLS]
[SEP]
[MASK]
!
"
%
&
'

These are the first entries of the vocabulary built for BERT. [PAD] is a dummy token used to pad sequences to a common length.

from transformers import BertTokenizer
tokenizer_bert = BertTokenizer.from_pretrained(
    '/gdrive/My Drive/nlpbook/wordpiece',
    do_lower_case = False,
)
sentences = [
    "아 더빙.. 진짜 짜증나네요 목소리",
    "흠...포스터보고 초딩영화줄....오버연기조차 가볍지 않구나",
    "별루 였다..",
]
tokenized_sentences = [tokenizer_bert.tokenize(sentence) for sentence in sentences]
tokenized_sentences
file /gdrive/My Drive/nlpbook/wordpiece/config.json not found
[['아', '더빙', '.', '.', '진짜', '짜증나', '##네요', '목소리'],
 ['흠',
  '.',
  '.',
  '.',
  '포스터',
  '##보고',
  '초딩',
  '##영화',
  '##줄',
  '.',
  '.',
  '.',
  '.',
  '오버',
  '##연기',
  '##조차',
  '가볍',
  '##지',
  '않',
  '##구나'],
 ['별루', '였다', '.', '.']]

We tokenized the example sentences with the BERT tokenizer. The ## prefix marks a token that does not start a word, i.e. it continues the previous token.
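
As a quick check that the ## pieces really are continuations, convert_tokens_to_string glues them back onto the preceding token:

# Reassemble the subword tokens of the first sentence back into text.
print(tokenizer_bert.convert_tokens_to_string(tokenized_sentences[0]))
# expected (roughly): 아 더빙 . . 진짜 짜증나네요 목소리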

batch_inputs = tokenizer_bert(
    sentences,
    padding = 'max_length',
    max_length = 12,
    truncation = True,
)
batch_inputs.keys()
dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])

The previous call was just to look at the tokens visually; this one produces the actual model inputs.

Three inputs come out: input_ids, token_type_ids, and attention_mask.

input_ids encodes the sentence itself, and token_type_ids distinguishes two sentences with 0s and 1s when a sentence pair is provided.

Finally, attention_mask marks which positions are real tokens and which are just padding.

batch_inputs['input_ids']
[[2, 621, 2631, 16, 16, 1993, 3678, 1990, 3323, 3, 0, 0],
 [2, 997, 16, 16, 16, 2609, 2045, 2796, 1981, 1168, 16, 3],
 [2, 3274, 9508, 16, 16, 3, 0, 0, 0, 0, 0, 0]]

The numbers printed here are the vocabulary indices of the matching tokens.
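
We can inspect the other two inputs of the same batch the same way, and map the ids back to tokens to make [CLS], [SEP], and [PAD] visible (this only reads back fields we created above):

# attention_mask is 1 over real tokens and 0 over [PAD] positions;
# token_type_ids is all 0 here because each example is a single sentence.
print(batch_inputs['attention_mask'])
print(batch_inputs['token_type_ids'])
print(tokenizer_bert.convert_ids_to_tokens(batch_inputs['input_ids'][0]))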

Basic Model Setup

import torch
from ratsnlp.nlpbook.classification import ClassificationTrainArguments


args = ClassificationTrainArguments(
    pretrained_model_name='beomi/kcbert-base',
    downstream_task_name = 'pair-classification', # because we will do sentence-pair classification
    downstream_model_dir='/gdrive/My Drive/nlpbook/checkpoint-paircls',
    batch_size = 32 if torch.cuda.is_available() else 4,
    learning_rate=5e-5,
    max_seq_length=64,
    epochs = 5,
    tpu_cores=0 if torch.cuda.is_available() else 8,
    seed = 7,
)

We run the analysis with beomi/kcbert-base, a language model that has already been pretrained.

Here the ClassificationTrainArguments class is used to set the hyperparameters we will use.

from ratsnlp import nlpbook
nlpbook.set_seed(args)
nlpbook.set_logger(args)
INFO:ratsnlp:Training/evaluation parameters ClassificationTrainArguments(pretrained_model_name='beomi/kcbert-base', downstream_task_name='pair-classification', downstream_corpus_name=None, downstream_corpus_root_dir='/content/Korpora', downstream_model_dir='/gdrive/My Drive/nlpbook/checkpoint-paircls', max_seq_length=64, save_top_k=1, monitor='min val_loss', seed=7, overwrite_cache=False, force_download=False, test_mode=False, learning_rate=5e-05, epochs=5, batch_size=32, cpu_workers=2, fp16=False, tpu_cores=0)
set seed: 7

We fix the random seed so the same results can be reproduced.

from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained(
    'beomi/kcbert-base',
    do_lower_case = False,
)

We declare the tokenizer that the kcbert-base model actually uses; the input sentences will be tokenized with it.
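
Since this task feeds two sentences at once, it helps to see how the tokenizer lays out a premise/hypothesis pair; a small example (the two sentences below are made up for illustration):

# A sentence pair becomes [CLS] premise [SEP] hypothesis [SEP];
# token_type_ids is 0 over the first segment and 1 over the second.
pair = tokenizer("씨름은 남자들의 놀이이다.", "씨름은 여자들의 놀이이다.")
print(tokenizer.convert_ids_to_tokens(pair['input_ids']))
print(pair['token_type_ids'])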

np.random.seed(42)
tem = np.random.choice(train.shape[0], 5000, replace=False)
val = train.iloc[tem]

We carve out a validation set by sampling part of the training data.

from ratsnlp.nlpbook.classification import ClassificationExample
from ratsnlp.nlpbook.classification import ClassificationFeatures

train_dataset = []
val_dataset = []
label = {"entailment" : 0, "contradiction" : 1, "neutral" : 2}

# for i in range(train.shape[0]):
#     text_a = train['premise'].loc[i]
#     text_b = train['hypothesis'].loc[i]
#     label = train['label'].loc[i]
#     examples.append(ClassificationExample(text_a=text_a, text_b=text_b, label=label))

for i in range(train.shape[0]):
    token = tokenizer(
        train['premise'].iloc[i], train['hypothesis'].iloc[i],
        padding = 'max_length',
        max_length = 64,
        truncation = True,
    )
    train_dataset.append(ClassificationFeatures(token.input_ids, token.attention_mask,
                                                token.token_type_ids, label = label[train['label'].iloc[i]]))
    
for i in range(val.shape[0]):
    token = tokenizer(
        val['premise'].iloc[i], val['hypothesis'].iloc[i],
        padding = 'max_length',
        max_length = 64,
        truncation = True,
    )
    val_dataset.append(ClassificationFeatures(token.input_ids, token.attention_mask, 
                                              token.token_type_ids, label = label[val['label'].iloc[i]]))

The input sentences are tokenized with the tokenizer defined above.

Using the ClassificationFeatures class lets us store the classification label in the same object as the encoded inputs.

max_length = 64 caps each example at 64 tokens, and truncation = True cuts off anything longer than that.
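
As a sanity check, each feature object simply stores the padded/truncated arrays plus the integer label (assuming the fields keep the names used in the constructor call above):

# Every example should be exactly 64 ids long; the label is an integer 0/1/2.
sample = train_dataset[0]
print(len(sample.input_ids), len(sample.attention_mask), len(sample.token_type_ids))
print(sample.label)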

from torch.utils.data import DataLoader, RandomSampler

train_dataloader = DataLoader(
    train_dataset,
    batch_size = 32,
    sampler = RandomSampler(train_dataset, replacement = False),
    collate_fn = nlpbook.data_collator, # collate the sampled instances into a batch of tensors
    drop_last = False, 
    num_workers = 1,
)

val_dataloader = DataLoader(
    val_dataset,
    batch_size = 32,
    sampler = RandomSampler(val_dataset, replacement = False),
    collate_fn = nlpbook.data_collator, # collate the sampled instances into a batch of tensors
    drop_last = False, 
    num_workers = 1,
)

The tokenized datasets are turned into batches with DataLoader.
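
To confirm the collator really turns the feature objects into tensors, we can pull a single batch (this assumes nlpbook.data_collator returns a dict of tensors, which is how training_step consumes it below):

# One mini-batch: tensors of shape (batch_size, max_seq_length) plus the labels.
batch = next(iter(train_dataloader))
for name, tensor in batch.items():
    print(name, tuple(tensor.shape))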

from transformers import BertConfig, BertForSequenceClassification

pretrained_model_config = BertConfig.from_pretrained(
    'beomi/kcbert-base',
    num_labels = 3, # three classes: entailment / contradiction / neutral
)

model = BertForSequenceClassification.from_pretrained(
    'beomi/kcbert-base',
    config = pretrained_model_config
)
Some weights of the model checkpoint at beomi/kcbert-base were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at beomi/kcbert-base and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

The model is kcbert-base, whose tokenizer we used above; the goal is to classify the relation between the two sentences into three classes: contradiction, neutral, or entailment.

from transformers import PreTrainedModel
from transformers.optimization import AdamW
from ratsnlp.nlpbook.metrics import accuracy
from pytorch_lightning import LightningModule
from torch.optim.lr_scheduler import ExponentialLR, CosineAnnealingWarmRestarts
from ratsnlp.nlpbook.classification.arguments import ClassificationTrainArguments


class ClassificationTask(LightningModule):

    def __init__(self, model, learning_rate):
        super().__init__()
        self.model = model
        self.learning_rate = learning_rate

    def configure_optimizers(self):
        optimizer = AdamW(self.parameters(), lr=self.learning_rate)
        scheduler = ExponentialLR(optimizer, gamma=0.5)

        return {
            'optimizer': optimizer,
            'scheduler': scheduler,
        }

    def training_step(self, inputs, batch_idx):
        # outputs: SequenceClassifierOutput
        outputs = self.model(**inputs)
        preds = outputs.logits.argmax(dim=-1)
        labels = inputs["labels"]
        acc = accuracy(preds, labels)
        self.log("loss", outputs.loss, prog_bar=False, logger=True, on_step=True, on_epoch=False)
        self.log("acc", acc, prog_bar=True, logger=True, on_step=True, on_epoch=False)
        return outputs.loss

    def validation_step(self, inputs, batch_idx):
        # outputs: SequenceClassifierOutput
        outputs = self.model(**inputs)
        preds = outputs.logits.argmax(dim=-1)
        labels = inputs["labels"]
        acc = accuracy(preds, labels)
        self.log("val_loss", outputs.loss, prog_bar=True, logger=True, on_step=False, on_epoch=True)
        self.log("val_acc", acc, prog_bar=True, logger=True, on_step=False, on_epoch=True)
        return outputs.loss
task = ClassificationTask(model, 5e-5)

We created ClassificationTask by subclassing PyTorch Lightning's LightningModule class.

Training the Model

trainer = nlpbook.get_trainer(args)
trainer.fit(
    task,
    train_dataloader = train_dataloader,
    val_dataloaders = val_dataloader
)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name  | Type                          | Params
--------------------------------------------------------
0 | model | BertForSequenceClassification | 108 M 
--------------------------------------------------------
108 M     Trainable params
0         Non-trainable params
108 M     Total params
435.683   Total estimated model params size (MB)

nlpbook.get_trainer, built on top of the PyTorch Lightning library, automatically sets up the GPU, checkpointing, and so on.

Then training runs. Here I wanted to use all of the training data for learning.

The downside is that the validation metrics cannot really be trusted, since the validation examples also appear in the training data.
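
If you wanted a validation score you could actually trust, one option (a sketch of an alternative, not what was done here) is to drop the sampled rows from the training set before building train_dataset:

# Hypothetical alternative: hold the 5,000 sampled rows out of training entirely.
train_idx = np.setdiff1d(np.arange(train.shape[0]), tem)
train_only = train.iloc[train_idx]
val_only = train.iloc[tem]
print(train_only.shape, val_only.shape)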

model.eval()
BertForSequenceClassification(
  (bert): BertModel(
    (embeddings): BertEmbeddings(
      (word_embeddings): Embedding(30000, 768, padding_idx=0)
      (position_embeddings): Embedding(300, 768)
      (token_type_embeddings): Embedding(2, 768)
      (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
      (dropout): Dropout(p=0.1, inplace=False)
    )
    (encoder): BertEncoder(
      (layer): ModuleList(
        (0): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (1): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (2): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (3): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (4): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (5): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (6): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (7): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (8): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (9): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (10): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (11): BertLayer(
          (attention): BertAttention(
            (self): BertSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): BertSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): BertIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
          )
          (output): BertOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
    )
    (pooler): BertPooler(
      (dense): Linear(in_features=768, out_features=768, bias=True)
      (activation): Tanh()
    )
  )
  (dropout): Dropout(p=0.1, inplace=False)
  (classifier): Linear(in_features=768, out_features=3, bias=True)
)

We switch the model to evaluation mode, which disables dropout.

Using the Model for Inference

for i in range(test.shape[0]):
    token = tokenizer(
        test['premise'].iloc[i], test['hypothesis'].iloc[i],
        padding = 'max_length',
        max_length = 64,
        truncation = True,
    )
    token['input_ids'] = [token['input_ids']]
    token['attention_mask'] = [token['attention_mask']]
    token['token_type_ids'] = [token['token_type_ids']]

    outputs = model(**{k: torch.tensor(v) for k, v in token.items()})
    prob = outputs.logits.softmax(dim = 1)
    labels = torch.argmax(prob)
    if labels == 0:
        test['label'].iloc[i] = 'entailment'
    elif labels == 1:
        test['label'].iloc[i] = 'contradiction'
    else:
        test['label'].iloc[i] = 'neutral'

The test data is tokenized with the same tokenizer and fed into the model.

The model output outputs holds a vector of three logits; passing it through the softmax function gives a probability for each class.

We use these probabilities to decide which class each test example belongs to.
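
For reference, the same prediction for a single pair can be written with return_tensors='pt' and torch.no_grad() to skip gradient bookkeeping during inference (a sketch of an equivalent call, not the code used for the submission):

# Equivalent single-pair prediction without gradient computation.
id2label = {0: 'entailment', 1: 'contradiction', 2: 'neutral'}
enc = tokenizer(test['premise'].iloc[0], test['hypothesis'].iloc[0],
                padding='max_length', max_length=64, truncation=True,
                return_tensors='pt')
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=1)
print(id2label[int(probs.argmax())], probs)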

test['label'].value_counts()
neutral          589
contradiction    585
entailment       492
Name: label, dtype: int64
train['label'].value_counts()
entailment       8561
contradiction    8489
neutral          7948
Name: label, dtype: int64

The predictions are split across the three classes in roughly similar proportions, so the result does not look unreasonable.

sample_submission['label'] = test['label']
sample_submission.to_csv('sentence_3.csv',index=False)

I saved the predictions to the submission file and submitted it. The public score was about 0.624, which suggests the model above is somewhat overfit.

Reflections

Having always studied deep learning mostly from the theory side, working with real data was definitely interesting.

Much of this was calling libraries rather than implementing things myself, and I have not understood every part of the code precisely.

I ran into many errors around the input formats expected by the classes and functions. I figured it was a process I would have to go through at some point anyway, but it was still a bit tough.

Next time I will take the same data and review the baseline code written by the Dacon organizer, as far as I can understand it.