Interaction Design Exercise a

{{youtube>re8Kh71v8m4?medium}}
  
====== Paper Reading 4: You Only Look Once: Unified, Real-Time Object Detection ======
[[https://arxiv.org/pdf/1506.02640.pdf]]
>Joseph Redmon∗, Santosh Divvala∗†, Ross Girshick¶, Ali Farhadi∗†
>We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance.
>Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.

{{youtube>VOC3huqHrss?medium}}

In this session we read the YOLO paper, a well-known work on real-time object detection, and then try YOLO out in practice. We will not implement YOLO itself; instead, we take a broad look at how networks are designed in deep learning and what their components mean, and go as far as building an application that uses the detector.
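
The "regression" framing in the abstract becomes concrete once you look at the network's output. With the paper's settings (an S×S = 7×7 grid, B = 2 boxes per cell, C = 20 classes), one forward pass yields a single 7×7×30 tensor, and detections are read off with plain arithmetic. The following is a minimal illustrative sketch of that decoding step, not the authors' code; the per-cell layout (B boxes of x, y, w, h, confidence, then C class probabilities) follows the paper, and the threshold value is arbitrary.

<code cpp>
#include <cstdio>
#include <vector>

const int S = 7;   // grid size (paper: 7x7)
const int B = 2;   // bounding boxes per cell
const int C = 20;  // classes (PASCAL VOC)

struct Detection { float x, y, w, h, score; int cls; };

// `out` holds S*S*(B*5 + C) floats. Per cell: B boxes as
// (x, y, w, h, confidence), then C conditional class probabilities.
std::vector<Detection> decode(const float* out, float thresh) {
    std::vector<Detection> dets;
    const int stride = B * 5 + C;
    for (int cell = 0; cell < S * S; ++cell) {
        const float* p = out + cell * stride;
        const float* cls_prob = p + B * 5;
        for (int b = 0; b < B; ++b) {
            const float* box = p + b * 5;
            float conf = box[4];                   // Pr(Object) * IOU
            for (int c = 0; c < C; ++c) {
                float score = conf * cls_prob[c];  // class-specific confidence
                if (score < thresh) continue;
                // x, y are offsets within the cell; w, h are relative
                // to the whole image, so everything ends up in [0, 1]
                float cx = (cell % S + box[0]) / S;
                float cy = (cell / S + box[1]) / S;
                dets.push_back({cx, cy, box[2], box[3], score, c});
            }
        }
    }
    return dets;  // non-maximum suppression would follow here
}

int main() {
    // stand-in for a real network output, just to exercise decode()
    std::vector<float> fake(S * S * (B * 5 + C), 0.01f);
    for (const Detection& d : decode(fake.data(), 0.2f))
        std::printf("class %d score %.2f at (%.2f, %.2f)\n", d.cls, d.score, d.x, d.y);
    return 0;
}
</code>

The `conf * cls_prob[c]` product is the paper's class-specific score Pr(Class_i | Object) · Pr(Object) · IOU; because the whole pipeline up to this point is one network evaluation, the loss can be optimized end-to-end on detection performance, which is the point the abstract makes.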

===== Running YOLO in openFrameworks =====

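One practical way to run YOLO inside openFrameworks is the third-party ofxDarknet addon ([[https://github.com/mrzl/ofxDarknet]]). The sketch below is a minimal example under that assumption: the `init()` / `yolo()` calls follow the addon's README, while the model file names and the 0.25 confidence threshold are placeholders for whichever configuration, weights, and label files you place in the app's `data` folder.

<code cpp>
// ofApp.h -- a minimal sketch assuming the ofxDarknet addon;
// file names below are placeholders for the YOLO model you download.
#pragma once
#include "ofMain.h"
#include "ofxDarknet.h"

class ofApp : public ofBaseApp {
public:
    void setup() {
        grabber.setup(640, 480);                      // webcam input
        darknet.init(ofToDataPath("yolov2.cfg"),      // network definition
                     ofToDataPath("yolov2.weights"),  // trained weights
                     ofToDataPath("coco.names"));     // class labels
    }
    void update() {
        grabber.update();
        if (grabber.isFrameNew()) {
            // one forward pass per new frame; 0.25 is the score threshold
            detections = darknet.yolo(grabber.getPixels(), 0.25f);
        }
    }
    void draw() {
        grabber.draw(0, 0);
        ofNoFill();
        for (auto& d : detections) {
            ofSetColor(d.color);
            ofDrawRectangle(d.rect);
            ofDrawBitmapStringHighlight(d.label + " " + ofToString(d.probability, 2),
                                        d.rect.x, d.rect.y - 4);
        }
        ofSetColor(255);
    }
private:
    ofVideoGrabber grabber;
    ofxDarknet darknet;
    std::vector<detected_object> detections;
};
</code>

Generate the project with the projectGenerator with ofxDarknet selected. Each `yolo()` call runs a full forward pass, so on a CPU-only machine it may be worth detecting on a downscaled frame or only every few frames to keep the draw loop responsive.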