Welcome! At the Lee Laboratory of the Nagoya Institute of Technology, we focus on human-to-human and human-to-machine communication through speech and language, conducting research on speech recognition, spoken dialogue systems, natural language processing, speech-based interaction, and avatar communication. Our goal is to advance spoken language processing technologies and to realize sophisticated voice- and language-driven human–machine interfaces that are truly natural and user-friendly for everyone.

Our research topics include:

  • Speech recognition and synthesis
  • Spoken dialogue systems
  • Natural language processing
  • CG-based humanoid agent interaction
  • Avatar communication

Lee Lab operates in collaboration with the Sako Laboratory. We also have a cooperative relationship with the Tokuda, Nankaku, and Hashimoto Laboratories.


About the PI

Professor

LEE Akinobu

LEE Akinobu was born in Kyoto, Japan, on December 19, 1972. He received the B.E. and M.E. degrees in information science, and the Ph.D. degree in informatics, from Kyoto University, Kyoto, Japan, in 1996, 1998, and 2000, respectively. He worked at Nara Institute of Science and Technology as an assistant professor from 2000 to 2005. He is currently a professor at Nagoya Institute of Technology, Japan. His research interests include speech recognition, spoken language understanding, and spoken dialogue systems. He is a member of IEEE, ISCA, JSAI, IPSJ, and the Acoustical Society of Japan.

He is also a researcher who loves coding and has been involved in open-source activities for over 25 years. Below is a list of open-source software and CG avatars for which he serves as the lead developer:

  • ASR engine Julius (1996~)
  • CG Agent Interaction Toolkit MMDAgent (2011~)
  • CG avatar interaction toolkit MMDAgent-EX, an extended version of MMDAgent (2020~)
  • High-quality open CG avatars Gene and Uka (2023~)

News and Posts


New members joined in 2022

The following members joined the Lee and Sako labs this April.

  • 磯谷 光 / ISOGAI Hikaru (M1)
  • LOURENCO CORREA Iago (M1)
  • 市川 奎吾 / ICHIKAWA Keigo
  • 柄澤 光一朗 / KARASAWA Koichiro
  • 川地 奎多 / KAWACHI Keita
  • 川又 朱莉 / KAWAMATA Akari
  • 中野 俊輔 / NAKANO Syunsuke
  • 藤岡 侑貴 / FUJIOKA Yuki
  • 三浦 麻登伊 / MIURA Matoi
  • 吉村 涼平 / YOSHIMURA Ryohei
  • 神田 陸人 / KANDA Rikuto
  • 木全 亮太朗 / KIMATA Ryotaro
  • 齋藤 大輔 / SAITO Daisuke

Joined CAO/JST Moonshot Project

Since December 2020, our lab has been participating in the “Avatar Symbiotic Society” project, a Moonshot-type research and development program led by Professor Ishiguro of Osaka University.

Our research theme in the project is “CG-Specific Dialogue”. We are focusing on a new conversational system with CG characters that seamlessly integrates an autonomous dialogue system with human remote operation (avatars), aiming to realize a truly usable, rich human-communication system for the next era. We are working in collaboration with various research institutes.

New members joined in 2021

The following members joined the Lee and Sako labs this April.

  • ビン ヒデオ / BIN Hideo (M1)
  • 義井 健史 / YOSHII Kensi (M1)
  • 岩澤 芙弓 / IWASAWA Fuyumi
  • 江崎 友都 / ESAKI Yuto
  • 志満津 奈央 / SHIMAZU Nao
  • 田中 愛菜 / TANAKA Mana
  • 東 省吾 / HIGASHI Shogo
  • 藤田 敦也 / FUJITA Atsuya
  • 村松 洸兵 / MURAMATSU Kohei
  • 辰巳 花菜 / TATSUMI Kana
  • 宮下 陸 / MIYASHITA Riku

Julius-4.6 Released

Julius version 4.6 has been released. You can get it from its GitHub site.

What’s new in Julius-4.6

Julius-4.6 is a minor release with new features and fixes, including GPU integration and grammar handling updates.

GPU-based DNN-HMM computation

(Take a look at the v4.6 performance comparison on YouTube!)

Julius can now compute DNN-HMM acoustic scores on a GPU. Total decoding is four times faster than the CPU-based computation of Julius-4.5.

This requires CUDA version 8, 9, or 10.2 and an NVIDIA card. You should build Julius with nvcc to enable it. See INSTALL.txt for details.

Julius: added a new feature

Julius has merged a pull request that adds a new feature: grammar search on the 1st pass. To use it, get the latest code on the master branch.

It enables applying the full grammar on the 1st pass, thus producing a more reliable (grammar-constrained) result already at the 1st pass.

Background

Grammar-based recognition in Julius normally does not apply the full grammar on the 1st pass; for efficiency, it applies only the word-pair constraint extracted from the grammar. Errors on the 1st pass caused by this loose constraint are recovered on the final pass, so the approximation basically does not affect the final result.
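For reference, a grammar in Julius is supplied as a compiled .dfa/.dict pair (typically produced from a grammar description with the mkdfa.pl tool) and is loaded via the standard -gram option. The sketch below is a minimal jconf excerpt, not a complete configuration: the grammar prefix "sample" and the choice of microphone input are placeholders for illustration, so adjust them to your own setup.

```
## Minimal jconf excerpt for grammar-based recognition (a sketch;
## "sample" is a placeholder grammar prefix).
-gram sample     ## load the grammar sample.dfa and sample.dict
-input mic       ## capture audio from the microphone
```

An acoustic model must also be configured in the same jconf for recognition to run; see the Julius documentation for the model-related options.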

New members joined in 2020

The following members joined the Lee and Sako labs this April.

  • 菊地 源 / KIKUCHI Gen (M1)
  • 畑中 哲哉 / HATANAKA Tetsuya
  • 松本 優太 / MATUMOTO Yuta
  • 池口 弘尚 / IKEGUCHI Hironao
  • 愛甲 拓海 / AIKO Takumi
  • 小木曽 雄飛 / OGISO Yuto
  • 渡邉 大地 / WATANABE Daichi
  • 堀田 義眞 / HOTTA Yoshimasa
  • 白井 建 / SHIRAI Takeru
  • 小川 凜人 / OGAWA Rinto

The 2019 bachelor's thesis presentations were held

The graduation thesis presentation meeting was held. The following ten members gave presentations:

  • 松岡 優太: "Automatic melody generation by GA using music playback history information"
  • 中川 樹: "Discrimination of timbre degradation factors from acoustic signals for the horn"
  • 尾関 日向: "Separation of guitar parts from music audio signals"
  • 池田 将: "A method for estimating emotion elements and emotion intensity from utterances for interactive music recommendation"
  • 高井 幸輝: "Emotion classification of spontaneous speech by pre-training fusion using multiple corpora and multiple labels"
  • 岡本 空大: "Improving estimation accuracy in automatic utterance-impression estimation of lecture speech using neural networks"
  • 大平原 海斗: "Acoustic event detection using weakly-labeled data"
  • 加藤 弘泰: "Speaking-rate control based on user speaking rate and corpora in spoken dialogue systems"
  • 西山 達也: "Response selection in multi-party dialogue by BERT using speaker information"
  • 石島 侑弥: "Important dialogue-history extraction using dialogue-act tags in dialogue state tracking"

The 2019 master's thesis review meeting was held

The 2019 master’s thesis review meeting was held. The following members gave presentations:

  • 森 凜太朗: "Pronunciation proficiency classification of second-language learners using phoneme posterior probabilities of language pairs"
  • 降籏 暢基: "Acquiring perceived interactivity through emotional expression with an anthropomorphic agent"
  • 冨田 直希: "Dialogue scenario expansion using linguistic and statistical information for robust spoken dialogue systems"
  • 尾関 晃英: "Generating highly stylized responses using an utterance evaluator in neural dialogue systems"
  • 田中 涼太: "Dialogue generation based on factual knowledge using convergent-divergent decoding"
  • 神谷 祐太朗: "Vibrato prediction from musical score information for the violin"
  • 河島 有孝: "Tablature estimation from performance audio that distinguishes identical pitches on different strings, narrowing fingering candidates by chord recognition"
  • 谷口 拓海: "Singing-voice segment estimation considering timbre and pitch fluctuations"
  • NGUYEN TU NAM: "Fingerspelling recognition using transfer learning and synthetic images"