Welcome! Here at the Lee Laboratory of Nagoya Institute of Technology, we conduct research on speech recognition, spoken dialogue, natural language processing, speech interfaces, and speech interaction, targeting human-to-human and human-to-machine communication through speech and language.

Our aim is to develop technologies for intelligent information processing of speech, language, and dialogue, and to realize natural and easy-to-use voice and language interfaces.

Our research topics include:

  • Speech recognition
  • Spoken dialogue systems
  • Natural language processing
  • Conversational speech interfaces
  • Speech interaction
  • Social media and user-generated content creation related to spoken dialogue systems

Lee Lab operates in collaboration with the Sako Laboratory. We also have cooperative relationships with the Tokuda, Nankaku, and Hashimoto Laboratories.


About the PI

Professor

LEE Akinobu


LEE Akinobu was born in Kyoto, Japan, on December 19, 1972. He received the B.E. and M.E. degrees in information science and the Ph.D. degree in informatics from Kyoto University, Kyoto, Japan, in 1996, 1998, and 2000, respectively. He worked at the Nara Institute of Science and Technology as an assistant professor from 2000 to 2005. He is currently a professor at Nagoya Institute of Technology, Japan. His research interests include speech recognition, spoken language understanding, and spoken dialogue systems. He is a member of the IEEE, ISCA, JSAI, IPSJ, and the Acoustical Society of Japan. He is also a developer of the open-source speech recognition software Julius and the CG-agent-based speech interaction toolkit MMDAgent.


News and Posts


New members joined on 2024

The following members joined the Lee and Sako labs this April.

  • 池田 康希 / IKEDA Kouki (M1)
  • 黄 永展 / HUANG Yongzhan (M1)
  • 阪上 聡吾 / SAKAUE Sogo
  • 清水 誠広 / SHIMIZU Masahiro
  • 鈴木 颯真 / SUZUKI Soma
  • 鈴木 萌々音 / SUZUKI Momone
  • 仲田 樹 / NAKADA Itsuki
  • 星野 琴未 / HOSHINO Kotomi
  • 箕成 侑音 / MINARI Yukito
  • 山田 航暉 / YAMADA Koki
  • 𠮷田 拓実 / YOSHIDA Takumi

We have released MMDAgent-EX

We have released MMDAgent-EX, our open-source platform for CG-avatar-based spoken dialogue systems, multimodal dialogue, and avatar communication. Links: press release (by NITech, in Japanese), official site, GitHub.

New members joined on 2023

The following members joined the Lee and Sako labs this April.

  • 吉村 涼平 / YOSHIMURA Ryohei
  • 齋藤 大輔 / SAITO Daisuke
  • 岡本 海 / OKAMOTO Umi
  • 笠間 健太郎 / KASAMA Kentaro
  • 月東 菜乃 / GATTO Nano
  • 金子 優 / KANEKO Yu
  • 酒井 健壱 / SAKAI Kenichi
  • 目瀬 道瑛 / MESE Michiaki
  • 嶋崎 純一 / SHIMAZAKI Junichi
  • 伊藤 誠一郎 / ITO Seiichiro
  • 梅田 唯花 / UMEDA Yuika
  • 鈴木 香保 / SUZUKI Kaho

New members joined on 2022

The following members joined the Lee and Sako labs this April.

  • 磯谷 光 / ISOGAI Hikaru (M1)
  • LOURENCO CORREA Iago (M1)
  • 市川 奎吾 / ICHIKAWA Keigo
  • 柄澤 光一朗 / KARASAWA Koichiro
  • 川地 奎多 / KAWACHI Keita
  • 川又 朱莉 / KAWAMATA Akari
  • 中野 俊輔 / NAKANO Syunsuke
  • 藤岡 侑貴 / FUJIOKA Yuki
  • 三浦 麻登伊 / MIURA Matoi
  • 吉村 涼平 / YOSHIMURA Ryohei
  • 神田 陸人 / KANDA Rikuto
  • 木全 亮太朗 / KIMATA Ryotaro
  • 齋藤 大輔 / SAITO Daisuke

Joined CAO/JST Moonshot Project

Since December 2020, our lab has been participating in the “Avatar Symbiotic Society” project, a Moonshot-type research and development project led by Professor Ishiguro of Osaka University. Our research theme is “CG-Specific Dialogue”. We are focusing on a new conversation system with CG characters that seamlessly integrates an autonomous dialogue system and human remote control (avatars), aiming to realize a truly usable, rich human communication system for the next era. We are working in collaboration with various research institutes.

New members joined on 2021

The following members joined the Lee and Sako labs this April.

  • ビン ヒデオ / BIN Hideo (M1)
  • 義井 健史 / YOSHII Kensi (M1)
  • 岩澤 芙弓 / IWASAWA Fuyumi
  • 江崎 友都 / ESAKI Yuto
  • 志満津 奈央 / SHIMAZU Nao
  • 田中 愛菜 / TANAKA Mana
  • 東 省吾 / HIGASHI Shogo
  • 藤田 敦也 / FUJITA Atsuya
  • 村松 洸兵 / MURAMATSU Kohei
  • 辰巳 花菜 / TATSUMI Kana
  • 宮下 陸 / MIYASHITA Riku

Julius-4.6 Released

Julius version 4.6 has been released. You can get it from its GitHub site.

What’s new in Julius-4.6: this is a minor release with new features and fixes, including GPU integration and grammar-handling updates.

GPU-based DNN-HMM computation (see the v4.6 performance comparison on YouTube): Julius can now compute DNN-HMM output with a GPU, making total decoding about four times faster than CPU-based computation in Julius-4.5. This requires CUDA version 8, 9, or 10.