
Maskrcnn-benchmark

Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.

Highlights

  • PyTorch 1.0: RPN, Faster R-CNN and Mask R-CNN implementations that match or exceed Detectron accuracies
  • Very fast: up to 2x faster than Detectron and 30% faster than mmdetection during training. See MODEL_ZOO.md for more details.
  • Memory efficient: uses roughly 500MB less GPU memory than mmdetection during training
  • Multi-GPU training and inference
  • Batched inference: can perform inference using multiple images per batch per GPU
  • CPU support for inference: runs on CPU at inference time. See our webcam demo for an example
  • Provides pre-trained models for almost all reference Mask R-CNN and Faster R-CNN configurations with 1x schedule.

How to train

Custom Training

To add a custom dataset, apply the following patch:

diff --git a/maskrcnn_benchmark/config/paths_catalog.py b/maskrcnn_benchmark/config/paths_catalog.py
index 522ef88..f325888 100644
--- a/maskrcnn_benchmark/config/paths_catalog.py
+++ b/maskrcnn_benchmark/config/paths_catalog.py
@@ -7,6 +7,18 @@ import os
 class DatasetCatalog(object):
     DATA_DIR = "datasets"
     DATASETS = {
+        "companyName_190207_train": {
+            "img_dir": "companyName190207/train",
+            "ann_file": "companyName190207/annotations/train.json"
+        },
+        "companyName_190207_val": {
+            "img_dir": "companyName190207/val",
+            "ann_file": "companyName190207/annotations/val.json"
+        },
+        "companyName_190207_test": {
+            "img_dir": "companyName190207/test",
+            "ann_file": "companyName190207/annotations/test.json"
+        },
         "coco_2017_train": {
             "img_dir": "coco/train2017",
             "ann_file": "coco/annotations/instances_train2017.json"
@@ -92,7 +104,18 @@ class DatasetCatalog(object):

     @staticmethod
     def get(name):
-        if "coco" in name:
+        if "companyName" in name:
+            data_dir = DatasetCatalog.DATA_DIR
+            attrs = DatasetCatalog.DATASETS[name]
+            args = dict(
+                root=os.path.join(data_dir, attrs["img_dir"]),
+                ann_file=os.path.join(data_dir, attrs["ann_file"]),
+            )
+            return dict(
+                factory="COCODataset",
+                args=args,
+            )
+        elif "coco" in name:
             data_dir = DatasetCatalog.DATA_DIR
             attrs = DatasetCatalog.DATASETS[name]
             args = dict(

If the added COCO-format data lacks the iscrowd attribute, an exception is raised during the key lookup. To prevent this, modify the code as follows:

diff --git a/maskrcnn_benchmark/data/datasets/coco.py b/maskrcnn_benchmark/data/datasets/coco.py
index 4d74ab2..46bf6bb 100644
--- a/maskrcnn_benchmark/data/datasets/coco.py
+++ b/maskrcnn_benchmark/data/datasets/coco.py
@@ -6,6 +6,13 @@ from maskrcnn_benchmark.structures.bounding_box import BoxList
 from maskrcnn_benchmark.structures.segmentation_mask import SegmentationMask


+def iscrowd_is_0(obj):
+    try:
+        return obj["iscrowd"] == 0
+    except KeyError:
+        return True
+
+
 class COCODataset(torchvision.datasets.coco.CocoDetection):
     def __init__(
         self, ann_file, root, remove_images_without_annotations, transforms=None
@@ -29,7 +36,7 @@ class COCODataset(torchvision.datasets.coco.CocoDetection):
                 if all(
                     any(o <= 1 for o in obj["bbox"][2:])
                     for obj in anno
-                    if obj["iscrowd"] == 0
+                    if iscrowd_is_0(obj)
                 ):
                     ids_to_remove.append(img_id)

@@ -51,7 +58,7 @@ class COCODataset(torchvision.datasets.coco.CocoDetection):

         # filter crowd annotations
         # TODO might be better to add an extra field
-        anno = [obj for obj in anno if obj["iscrowd"] == 0]
+        anno = [obj for obj in anno if iscrowd_is_0(obj)]

         boxes = [obj["bbox"] for obj in anno]
         boxes = torch.as_tensor(boxes).reshape(-1, 4)  # guard against no boxes
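The effect of the helper can be checked in isolation. The third annotation below, which has no `iscrowd` key, would previously raise an exception during filtering and is now simply kept:

```python
# The helper degrades gracefully when an annotation lacks "iscrowd".
def iscrowd_is_0(obj):
    try:
        return obj["iscrowd"] == 0
    except KeyError:
        return True  # treat a missing "iscrowd" as a normal (non-crowd) object

anno = [
    {"bbox": [0, 0, 10, 10], "iscrowd": 0},  # normal object: kept
    {"bbox": [0, 0, 10, 10], "iscrowd": 1},  # crowd region: filtered out
    {"bbox": [0, 0, 10, 10]},                # no "iscrowd" key: kept
]
kept = [obj for obj in anno if iscrowd_is_0(obj)]
```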

The config file used for training (configs/e2e_mask_rcnn_R_50_FPN_1x_custom.yaml) is as follows:

MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHT: "catalog://ImageNetPretrained/MSRA/R-50"
  BACKBONE:
    CONV_BODY: "R-50-FPN"
    OUT_CHANNELS: 256
  RPN:
    USE_FPN: True
    ANCHOR_STRIDE: (4, 8, 16, 32, 64)
    PRE_NMS_TOP_N_TRAIN: 2000
    PRE_NMS_TOP_N_TEST: 1000
    POST_NMS_TOP_N_TEST: 1000
    FPN_POST_NMS_TOP_N_TEST: 1000
  ROI_HEADS:
    USE_FPN: True
  ROI_BOX_HEAD:
    POOLER_RESOLUTION: 7
    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)
    POOLER_SAMPLING_RATIO: 2
    FEATURE_EXTRACTOR: "FPN2MLPFeatureExtractor"
    PREDICTOR: "FPNPredictor"
  ROI_MASK_HEAD:
    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)
    FEATURE_EXTRACTOR: "MaskRCNNFPNFeatureExtractor"
    PREDICTOR: "MaskRCNNC4Predictor"
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 2
    RESOLUTION: 28
    SHARE_BOX_FEATURE_EXTRACTOR: False
  MASK_ON: True
DATASETS:
  TRAIN: ("coco_2014_train", "coco_2014_valminusminival", "companyName_190207_train")
  TEST: ("coco_2014_minival", "companyName_190207_val")
  #TRAIN: ("coco_2014_train", "coco_2014_valminusminival",)
  #TEST: ("coco_2014_minival",)
DATALOADER:
  SIZE_DIVISIBILITY: 32
SOLVER:
  BASE_LR: 0.02
  WEIGHT_DECAY: 0.0001
  STEPS: (60000, 80000)
  MAX_ITER: 90000

To train with 4 GPUs, use the following command:

$ python -m torch.distributed.launch --nproc_per_node=4 tools/train_net.py --config-file /home/serverid/Project/maskrcnn-benchmark/configs/e2e_mask_rcnn_R_50_FPN_1x_custom.yaml

The training data directory must be laid out as follows:

maskrcnn-benchmark/datasets
├── companyName190207
│   ├── annotations
│   ├── test
│   ├── train
│   └── val
└── coco
    ├── annotations
    ├── test2014
    ├── train2014
    └── val2014
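Before training, a small script can verify this layout. This is a hypothetical check, not part of the repository; `missing_dirs` and the `EXPECTED` list are assumptions matching the tree above:

```python
import os

# Directories expected under maskrcnn-benchmark/datasets (see tree above).
EXPECTED = [
    "companyName190207/annotations",
    "companyName190207/train",
    "companyName190207/val",
    "companyName190207/test",
    "coco/annotations",
    "coco/train2014",
    "coco/val2014",
    "coco/test2014",
]

def missing_dirs(root):
    """Return the expected subdirectories that do not exist under root."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
```

Running `missing_dirs("datasets")` before launching training surfaces layout mistakes early, instead of failing mid-run with a file-not-found error.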

Modify the cfg parameters

If you experience out-of-memory errors, you can reduce the global batch size. But this means that you'll also need to change the learning rate, the number of iterations and the learning rate schedule.

Here is an example for Mask R-CNN R-50 FPN with the 1x schedule:

python tools/train_net.py --config-file "configs/e2e_mask_rcnn_R_50_FPN_1x.yaml" SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 SOLVER.MAX_ITER 720000 SOLVER.STEPS "(480000, 640000)" TEST.IMS_PER_BATCH 1 MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN 2000

This follows the scheduling rules from Detectron. Note that we have multiplied the number of iterations by 8x (as well as the learning rate schedules), and we have divided the learning rate by 8x.

We also changed the batch size during testing, but that is generally not necessary because testing requires much less memory than training.

Furthermore, we set MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN 2000 because, in the default training setting, proposals are selected per batch rather than per image. The value is computed as 1000 x images-per-GPU: here we have 2 images per GPU, so we set it to 1000 x 2 = 2000; with 8 images per GPU the value should be 8000. Note that this does not apply if MODEL.RPN.FPN_POST_NMS_PER_BATCH is set to False during training. See #672 for more details.
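The rule above is a single multiplication; as a sanity check (the helper name is illustrative, not part of the codebase):

```python
# The per-batch proposal budget is the per-image default (1000) multiplied
# by the number of images per GPU.
def fpn_post_nms_top_n_train(images_per_gpu, per_image=1000):
    return per_image * images_per_gpu

print(fpn_post_nms_top_n_train(2))  # 2 images per GPU -> 2000
print(fpn_post_nms_top_n_train(8))  # 8 images per GPU -> 8000
```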

Finetuning from Detectron weights on custom datasets

Create a script tools/trim_detectron_model.py like here. You can decide which keys to be removed and which keys to be kept by modifying the script.

Then you can simply point the converted model path in the config file by changing MODEL.WEIGHT.

For further information, please refer to #15.

Support tensorboardX

diff --git a/c2py/maskrcnn_benchmark/engine/trainer.py b/c2py/maskrcnn_benchmark/engine/trainer.py
index 38a9e52..85e281a 100644
--- a/c2py/maskrcnn_benchmark/engine/trainer.py
+++ b/c2py/maskrcnn_benchmark/engine/trainer.py
@@ -8,6 +8,10 @@ import torch.distributed as dist

 from maskrcnn_benchmark.utils.comm import get_world_size
 from maskrcnn_benchmark.utils.metric_logger import MetricLogger
+from maskrcnn_benchmark.utils.tensorboard_logger import TensorboardLogger
+
+ENABLE_TENSORBOARD = True
+


 def reduce_loss_dict(loss_dict):
@@ -47,7 +51,10 @@ def do_train(
 ):
     logger = logging.getLogger("maskrcnn_benchmark.trainer")
     logger.info("Start training")
-    meters = MetricLogger(delimiter="  ")
+    if ENABLE_TENSORBOARD:
+        meters = TensorboardLogger()
+    else:
+        meters = MetricLogger(delimiter="  ")
     max_iter = len(data_loader)
     start_iter = arguments["iteration"]
     model.train()
@@ -78,7 +85,14 @@ def do_train(

         batch_time = time.time() - end
         end = time.time()
-        meters.update(time=batch_time, data=data_time)
+        if ENABLE_TENSORBOARD:
+            meters.update(
+                loss=losses_reduced,
+                time=batch_time,
+                data=data_time,
+                **loss_dict_reduced)
+        else:
+            meters.update(time=batch_time, data=data_time)

         eta_seconds = meters.time.global_avg * (max_iter - iteration)
         eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
diff --git a/c2py/maskrcnn_benchmark/utils/tensorboard_logger.py b/c2py/maskrcnn_benchmark/utils/tensorboard_logger.py
new file mode 100644
index 0000000..0ce637a
--- /dev/null
+++ b/c2py/maskrcnn_benchmark/utils/tensorboard_logger.py
@@ -0,0 +1,22 @@
+import torch
+from maskrcnn_benchmark.utils.metric_logger import MetricLogger
+
+from tensorboardX import SummaryWriter
+
+
+class TensorboardLogger(MetricLogger):
+    def __init__(self, log_dir='log', start_iter=0, delimiter="\t"):
+        super(TensorboardLogger, self).__init__(delimiter)
+        self.iteration = start_iter
+        self.writer = SummaryWriter(log_dir)
+
+    def update(self, **kwargs):
+        super(TensorboardLogger, self).update(**kwargs)
+        if self.writer:
+            for k, v in kwargs.items():
+                if isinstance(v, torch.Tensor):
+                    v = v.item()
+                assert isinstance(v, (float, int))
+                self.writer.add_scalar(k, v, self.iteration)
+        self.iteration += 1
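The same pattern can be sketched without the tensorboardX dependency by swapping in an in-memory writer for `SummaryWriter`. This is a simplified standalone version; `InMemoryWriter` and `ScalarLogger` are illustrative names, not part of the codebase:

```python
# In-memory stand-in for tensorboardX.SummaryWriter: records (tag, value,
# iteration) triples instead of writing event files.
class InMemoryWriter:
    def __init__(self):
        self.scalars = []

    def add_scalar(self, tag, value, iteration):
        self.scalars.append((tag, value, iteration))

# Same shape as the TensorboardLogger above: every update() mirrors each
# numeric metric to the writer and advances the iteration counter.
class ScalarLogger:
    def __init__(self, writer, start_iter=0):
        self.writer = writer
        self.iteration = start_iter

    def update(self, **kwargs):
        for k, v in kwargs.items():
            assert isinstance(v, (float, int))
            self.writer.add_scalar(k, v, self.iteration)
        self.iteration += 1

writer = InMemoryWriter()
meters = ScalarLogger(writer)
meters.update(loss=0.5, time=0.40)
meters.update(loss=0.4, time=0.39)
```

Keeping the writer behind a tiny interface like this makes the trainer testable without a TensorBoard installation.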

Dataset

The dataset catalog lives in ./maskrcnn_benchmark/config/paths_catalog.py. Datasets registered there can then be referenced from a yaml config file.

Load PKL or PTH files

PKL and PTH files are loaded by maskrcnn_benchmark/utils/c2_model_loading.py; the relevant function is _load_c2_pickled_weights:

def _load_c2_pickled_weights(file_path):
    with open(file_path, "rb") as f:
        if torch._six.PY3:
            data = pickle.load(f, encoding="latin1")
        else:
            data = pickle.load(f)
    if "blobs" in data:
        weights = data["blobs"]
    else:
        weights = data
    return weights
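A Python-3-only sketch of the same logic (with the `torch._six.PY3` branch dropped, since only the Python 3 path matters here) shows the two essential steps: latin1 decoding of the Detectron pickle and unwrapping the optional "blobs" key:

```python
import pickle

# Sketch of _load_c2_pickled_weights for Python 3 only: Detectron pickles
# need latin1 string decoding, and the weights may sit under a "blobs" key.
def load_c2_pickled_weights(file_path):
    with open(file_path, "rb") as f:
        data = pickle.load(f, encoding="latin1")
    return data["blobs"] if "blobs" in data else data
```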

Log message

...
2019-04-22 18:57:23,850 maskrcnn_benchmark.trainer INFO: eta: 0:01:19   iter: 7800  loss: 0.0443 (0.1279)   loss_classifier: 0.0053 (0.0304)    loss_box_reg: 0.0021 (0.0152)   loss_mask: 0.0363 (0.0769)  loss_objectness: 0.0000 (0.0013)    loss_rpn_box_reg: 0.0001 (0.0042)   time: 0.3861 (0.3973)   data: 0.1825 (0.1856)   lr: 0.000025    max mem: 4071
...
  • loss: total loss (sum of the individual loss terms below)
  • loss_classifier: box classification loss
  • loss_box_reg: bounding-box regression loss
  • loss_mask: mask prediction loss
  • loss_objectness: RPN objectness loss
  • loss_rpn_box_reg: RPN box regression loss
  • time: per-iteration wall-clock time
  • data: data loading time
  • lr: current learning rate
  • max mem: peak GPU memory usage (MB)
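Each metric in the log line is reported as `current (running average)`. The fields can be pulled out with a small parser (a hypothetical helper, not part of the repository):

```python
import re

# Sample trainer log line (abbreviated from the output above).
LOG = ("eta: 0:01:19   iter: 7800  loss: 0.0443 (0.1279)   "
       "loss_mask: 0.0363 (0.0769)  lr: 0.000025    max mem: 4071")

def parse_metrics(line):
    """Extract 'key: current (avg)' pairs; avg is None when absent."""
    metrics = {}
    for key, cur, avg in re.findall(
            r"(\w[\w ]*?): ([\d.:]+)(?: \(([\d.]+)\))?", line):
        metrics[key] = (cur, avg or None)
    return metrics

metrics = parse_metrics(LOG)
```

This kind of parser is handy for plotting training curves from saved logs when TensorBoard logging was not enabled.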

Validation during training

Why the reference configs include no validation set:

Author's answer: as discussed in #785, a separate validation dataset is rarely needed, because hyperparameters are not changed while the training script is running.

Parameter analysis

Notes on the individual configuration parameters.

MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHT: "catalog://ImageNetPretrained/MSRA/R-50"
  BACKBONE:
    CONV_BODY: "R-50-FPN"
  RESNETS:
    BACKBONE_OUT_CHANNELS: 256
  RPN:
    USE_FPN: True  # Whether to use the FPN (feature pyramid), extracting candidate regions from multiple feature maps
    ANCHOR_STRIDE: (4, 8, 16, 32, 64)  # Anchor stride at each FPN level
    PRE_NMS_TOP_N_TRAIN: 2000 # Number of candidate regions kept before NMS during training
    PRE_NMS_TOP_N_TEST: 1000  # Number of candidate regions kept before NMS during testing
    POST_NMS_TOP_N_TEST: 1000
    FPN_POST_NMS_TOP_N_TEST: 1000
    # FPN_POST_NMS_PER_BATCH
  ROI_HEADS:
    USE_FPN: True
  ROI_BOX_HEAD:
    POOLER_RESOLUTION: 7
    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)  # (The original note here describes the dataset class count, normally set via NUM_CLASSES: the default is 81 because COCO has 80 classes plus background; a 4-class custom dataset becomes 5 classes including background.)
    POOLER_SAMPLING_RATIO: 2
    FEATURE_EXTRACTOR: "FPN2MLPFeatureExtractor"
    PREDICTOR: "FPNPredictor"
  ROI_MASK_HEAD:
    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)
    FEATURE_EXTRACTOR: "MaskRCNNFPNFeatureExtractor"
    PREDICTOR: "MaskRCNNC4Predictor"
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 2
    RESOLUTION: 28
    SHARE_BOX_FEATURE_EXTRACTOR: False
  MASK_ON: True
DATASETS:
  ## Case01:
  TRAIN: ("companyName_train",)
  TEST: ("companyName_val",)
  ## Case02:
  # TRAIN: ("companyName_train", "companyName_val")
  # TEST: ("companyName_test",)
DATALOADER:
  SIZE_DIVISIBILITY: 32
SOLVER:
  BASE_LR: 0.02  # Initial learning rate; several schedule strategies are available for adjusting it
  WEIGHT_DECAY: 0.0001  # Weight decay coefficient (the L2 regularization term)
  STEPS: (60000, 80000)  # Iterations at which the learning rate is decayed
  MAX_ITER: 90000  # Maximum number of iterations
  IMS_PER_BATCH: 2  # Number of images per batch
TEST:
  IMS_PER_BATCH: 1
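The schedule scaling described earlier (divide the learning rate and multiply the iteration counts by the same factor when shrinking the batch) can be expressed as a small helper. This is illustrative, not part of the codebase; the reference values correspond to the default 8-GPU x 2-image configuration (global batch 16):

```python
# Linear scaling rule: shrinking the global batch by a factor k divides the
# learning rate by k and multiplies MAX_ITER and STEPS by k.
def scale_schedule(ims_per_batch, ref_batch=16, ref_lr=0.02,
                   ref_max_iter=90000, ref_steps=(60000, 80000)):
    factor = ref_batch // ims_per_batch
    return {
        "BASE_LR": ref_lr / factor,
        "MAX_ITER": ref_max_iter * factor,
        "STEPS": tuple(s * factor for s in ref_steps),
    }

# A global batch of 2 reproduces the single-GPU command shown earlier:
# BASE_LR 0.0025, MAX_ITER 720000, STEPS (480000, 640000).
single_gpu = scale_schedule(2)
```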

Troubleshooting

Force cuda build

When building under nvidia-docker and similar environments, the CUDA runtime may not be loadable at build time, so you may need to invoke nvcc directly. In that case, build as follows:

FORCE_CUDA=1 /usr/local/c2core/bin/python3.7 setup.py build

If PyTorch raises an AssertionError, refer to the PyTorch#Found no NVIDIA driver on your system entry.

CERTIFICATE_VERIFY_FAILED

See the CERTIFICATE_VERIFY_FAILED entry.

Cannot import '_download_url_to_file' from 'torch.utils.model_zoo'

If maskrcnn-benchmark/maskrcnn_benchmark/utils/model_zoo.py raises Cannot import '_download_url_to_file' from 'torch.utils.model_zoo', fix the imports as follows:

#from torch.utils.model_zoo import _download_url_to_file
#from torch.utils.model_zoo import urlparse
#from torch.utils.model_zoo import HASH_REGEX
from torch.hub import _download_url_to_file
from torch.hub import urlparse
from torch.hub import HASH_REGEX

This breakage was introduced by an API change in a newer PyTorch release.
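A version-agnostic way to handle such moves is to try the new location first and fall back to the old one. The helper below is a generic sketch of that pattern (the function name is illustrative); in this case it would be called with "_download_url_to_file", "torch.hub", and "torch.utils.model_zoo":

```python
import importlib

# Try to import `name` from `primary_module`; if the module or attribute is
# missing (older/newer library version), fall back to `fallback_module`.
def import_from_either(name, primary_module, fallback_module):
    for module in (primary_module, fallback_module):
        try:
            return getattr(importlib.import_module(module), name)
        except (ImportError, AttributeError):
            continue
    raise ImportError("%s not found in %s or %s"
                      % (name, primary_module, fallback_module))
```

This keeps a single code path working across library versions without editing imports each time an upstream symbol moves.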

AttributeError: 'list' object has no attribute 'resize'

The following error can occur at the start of training:

AttributeError: Traceback (most recent call last):
  File "/home/anaconda3/envs/nms/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 108, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "maskscoring_nms/maskrcnn_benchmark/data/datasets/coco.py", line 52, in __getitem__
    img, anno = super(COCODataset, self).__getitem__(idx)
  File "/home/anaconda3/envs/nms/lib/python3.7/site-packages/torchvision-0.3.0a0+8a64dbc-py3.7-linux-x86_64.egg/torchvision/datasets/coco.py", line 114, in __getitem__
    img, target = self.transforms(img, target)
  File "maskscoring_nms/maskrcnn_benchmark/data/transforms/transforms.py", line 15, in __call__
    image, target = t(image, target)
  File "maskscoring_nms/maskrcnn_benchmark/data/transforms/transforms.py", line 58, in __call__
    target = target.resize(image.size)

AttributeError: 'list' object has no attribute 'resize'

In this case, try downgrading torchvision to 0.2.1:

$ pip uninstall torchvision
$ pip install torchvision==0.2.1

For reference, as of August 26, 2019, the full pip package list that worked correctly on Ubuntu Linux 18.04 is as follows:

serverid@serverid-Z390-AORUS-ELITE:~/Project/c2core$ /usr/local/c2core/bin/pip list
Package            Version     Location
------------------ ----------- --------------------------------------------------------------
async-generator    1.10
attrs              19.1.0
cffi               1.12.3
Click              7.0
cycler             0.10.0
Cython             0.29.13
decorator          4.4.0
Flask              1.1.1
idna               2.8
imageio            2.5.0
itsdangerous       1.1.0
Jinja2             2.10.1
joblib             0.13.2
kiwisolver         1.1.0
MarkupSafe         1.1.1
maskrcnn-benchmark 0.1         /home/serverid/Project/c2core/script/python/maskrcnn_benchmark
matplotlib         3.1.1
networkx           2.3
ninja              1.9.0.post1
numpy              1.17.0
opencv-python      4.1.0.25
outcome            1.0.0
pandas             0.25.0
Pillow             6.1.0
pip                19.2.3
protobuf           3.9.0
pybind11           2.3.0
pycocotools        2.0.0
pycparser          2.19
pynng              0.4.0
pyparsing          2.4.1.1
python-dateutil    2.8.0
pytz               2019.1
PyWavelets         1.0.3
PyYAML             5.1.1
scikit-image       0.15.0
scikit-learn       0.21.2
scipy              1.3.0
setuptools         40.8.0
six                1.12.0
sklearn            0.0
sniffio            1.1.0
sortedcontainers   2.1.0
subprocess32       3.5.4
tensorboardX       1.8
torch              1.1.0
torchvision        0.2.1
tqdm               4.32.2
trio               0.11.0
Werkzeug           0.15.5
yacs               0.1.6
