Not only in childhood but also in adulthood, we need training to read music scores, which sometimes makes music hard to learn and enjoy. In this article, we propose a system that enables users to play their handwritten musical notation through our musical interface.

Since the 1960s, Optical Music Recognition (OMR) has matured in the field of printed scores. Recently, Yamamoto proposed an interactive musical system that uses a printed music score directly as an instrument via keypoint matching. However, little research has addressed handwritten notation, or interactive systems built on OMR. We combine notating with performing in order to make music more intuitive for users and to help them learn it.


Specifically, we propose a system that makes handwritten music playable on the spot: a staff and notes written on paper are scanned with an image sensor, and the written piece can then be performed. Because the user performs the scan by hand, intuitive operations such as controlling playback speed, playing in reverse, and starting from any position become possible. In addition, operating the scanning device itself can apply effects such as pitch bend and chorus in real time. Figure 1 shows a sketch of the system in performance: the performer writes a piece on paper in a simplified staff notation and traces it with the device, which immediately outputs instrument sounds.

Figure 1. A sketch of the system

Video 0. "Raindrop" Prelude    Video 1. Introduction


Our system consists mainly of a scan device, a computer, and a sound module. A user can play simple music by tracing notes with the scan device, as shown in Figure 1. The computer processes the captured images with OpenCV and our own algorithm at 30 fps, then outputs sounds according to the data read from the notation. No special materials are needed beyond the system itself: a user can write on ordinary white paper with his/her own pen. Our device is built from a USB camera, a microcontroller, and a vibration motor (see Figure 2). The vibration motor provides tactile feedback while the user is playing.
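To illustrate the final step of such a per-frame pipeline, the sketch below maps a detected notehead's vertical position on the staff to a MIDI pitch. The staff-line coordinates, the treble-clef pitch table, and the function name are our own illustrative assumptions, not the system's actual code.

```python
# MIDI pitches of the treble-staff positions from the bottom line (E4)
# upward, alternating lines and spaces: E4 F4 G4 A4 B4 C5 D5 E5 F5.
STAFF_PITCHES = [64, 65, 67, 69, 71, 72, 74, 76, 77]

def notehead_to_pitch(y, bottom_line_y, line_gap):
    """Convert a notehead's y coordinate (image space, y grows downward)
    into a MIDI note number, snapping to the nearest line or space."""
    # Each line-to-space step is half of the distance between staff lines.
    steps = round((bottom_line_y - y) / (line_gap / 2.0))
    steps = max(0, min(int(steps), len(STAFF_PITCHES) - 1))  # clamp to staff
    return STAFF_PITCHES[steps]
```

For example, with the bottom staff line at y = 100 and a 20-pixel line gap, a notehead at y = 60 sits four half-steps above the bottom line and maps to B4 (MIDI 71).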


In Video 1, a grand piano equipped with an automatic performance function is played from the gocen device. Various timbres can also be performed by using software sound modules.

Figure 2. The Gocen device, consisting of a USB camera, microcontroller, switches, and vibration motor

Figure 3. Captured and processed image


Our system is not only an OMR system but also a performance system. We developed several musical interactions for this interface.


Note on/off

Video 2. Note on/off

A user can make a sound by having the green bar on the computer display (see Figure 3) pass through a simplified note while pressing the "manual play button".
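This note-on/note-off behavior can be sketched as a small state machine: a note sounds while the bar overlaps a notehead and the button is held, and stops when either condition ends. The function below is a hypothetical illustration under those assumptions, not the system's actual implementation.

```python
def update_note_state(playing, bar_over_note, button_pressed):
    """Return (new_playing, event), where event is 'note_on', 'note_off',
    or None. Called once per processed frame."""
    active = bar_over_note and button_pressed
    if active and not playing:
        return True, "note_on"     # bar just entered a note with button held
    if playing and not active:
        return False, "note_off"   # bar left the note or button was released
    return playing, None           # no change this frame
```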



Velocity

Video 3. Velocity

Our system detects the size of a musical notation and interprets it as note velocity. Figure 3 shows the relation between a recognized notation in the processed image and its velocity.
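One plausible way to realize this size-to-velocity mapping is a clamped linear map from the notehead's pixel area to the MIDI velocity range; the calibration constants below are illustrative assumptions, not values from the system.

```python
def area_to_velocity(area, min_area=40, max_area=400):
    """Map a notehead's pixel area to a MIDI velocity in 1..127, linearly
    between two calibration areas (values here are illustrative)."""
    a = max(min_area, min(area, max_area))      # clamp to calibrated range
    frac = (a - min_area) / (max_area - min_area)
    return int(round(1 + frac * 126))
```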


Pitch bend

Video 4. Bend

While playing a note, a user can change its pitch by moving the device vertically, as in a vibrato.

With string instruments in particular, articulations such as vibrato and (inverted) turns are expressed not by holding the pitch constant after the onset but by letting it fluctuate. In this work, therefore, the user can apply a pitch-bend effect to a sounding note by moving the device up and down, which makes those articulations possible. The vertical movement of the camera is measured relative to the notehead position at onset time.
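As a sketch of this mapping, the function below converts the vertical displacement since note onset into a 14-bit MIDI pitch-bend value (8192 = no bend). The pixels-per-semitone scale and the ±2-semitone bend range are our own illustrative assumptions.

```python
def bend_from_dy(dy_pixels, pixels_per_semitone=30, bend_range=2):
    """Convert vertical device movement since onset (pixels; positive =
    device moved up) into a 14-bit MIDI pitch-bend value in 0..16383."""
    semitones = dy_pixels / pixels_per_semitone
    semitones = max(-bend_range, min(semitones, bend_range))  # clamp
    return int(round(8192 + semitones / bend_range * 8191))
```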

Changing an instrument

Video 5. Setting the instrument and options

A user can change the instrument sound by covering, with the device, a handwritten label naming the instrument, such as pf (piano), bs (bass), gt (guitar), or dr (drums); the label is read by means of optical character recognition.

In standard notation, abbreviated strings are used to indicate instruments: pf for piano, gt for guitar, and so on. Our system likewise captures and processes such handwritten strings with the camera, realizing an arbitrary instrument-selection function.
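A minimal sketch of the lookup step, assuming the abbreviations above are mapped to General MIDI program numbers (the table and fallback behavior are our assumptions; in GM, drums would be routed to MIDI channel 10 rather than a program change, which is omitted here):

```python
# Hypothetical mapping from handwritten abbreviations to GM programs (0-based).
INSTRUMENTS = {
    "pf": 0,    # Acoustic Grand Piano
    "gt": 24,   # Acoustic Guitar (nylon)
    "bs": 32,   # Acoustic Bass
}

def select_program(ocr_text, default=0):
    """Look up the OCR'd abbreviation; fall back to the default program
    (piano) when the text is not recognized."""
    return INSTRUMENTS.get(ocr_text.strip().lower(), default)
```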

Recording as a sequencer

A user can record sound events into a timeline while pressing the "recording button" and play them back as a loop, like a sequencer. Each recorded note is placed on a quantized timeline.
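The quantization step can be sketched as snapping each event time to the nearest grid position; the tempo and sixteenth-note grid below are illustrative assumptions, not the system's settings.

```python
def quantize(t_seconds, bpm=120, grid=0.25):
    """Snap an event time (seconds) to the nearest grid position.
    grid is in beats: 0.25 = sixteenth notes at the given tempo."""
    beat = 60.0 / bpm      # seconds per beat
    step = beat * grid     # seconds per grid cell
    return round(t_seconds / step) * step
```

For example, at 120 bpm a sixteenth-note cell lasts 0.125 s, so an event at 0.13 s snaps to 0.125 s.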