Showa-ku, Nagoya, Aichi 4668555 JAPAN
ri@nitech.ac.jp
Welcome! Here at the Lee Laboratory of Nagoya Institute of Technology, we focus on human-to-human and human-to-machine communication through speech and language, and conduct research on speech recognition, spoken dialogue systems, natural language processing, speech-based interaction, and avatar communication. Our goal is to advance spoken language processing technologies and to realize highly sophisticated, voice- and language-driven human–machine interfaces that are truly natural and user-friendly for everyone.
LEE Akinobu was born in Kyoto, Japan, on December 19, 1972. He received the B.E. and M.E. degrees in information science, and the Ph.D. degree in informatics, from Kyoto University, Kyoto, Japan, in 1996, 1998 and 2000, respectively. He worked at the Nara Institute of Science and Technology as an assistant professor from 2000 to 2005. He is currently a professor at Nagoya Institute of Technology, Japan. His research interests include speech recognition, spoken language understanding, and spoken dialogue systems. He is a member of IEEE, ISCA, JSAI, IPSJ and the Acoustical Society of Japan.
He is also a researcher who loves coding and has been involved in open-source activities for over 25 years. Below is a list of open-source software and CG avatars for which he serves as the lead developer:
Assistant Professor Sei Ueno joined the Lee laboratory in April. Together with Prof. Ueno, we will continue our research on spoken language information processing and spoken dialogue interfaces.
Since December 2020, our lab has been participating in the “Avatar Symbiotic Society” project, a moonshot-type research and development project led by Professor Ishiguro of Osaka University.
Our research theme is “CG-Specific Dialogue”. We are developing a new conversation system with CG characters that seamlessly integrates an autonomous dialogue system with human remote operation (avatars), aiming to realize a truly usable, richly empowered human-communication system for the next era. We are currently working in collaboration with various research institutes.
Julius version 4.6 has been released. You can get it from its GitHub site.
What’s new in Julius-4.6
Julius-4.6 is a minor release with new features and fixes, including GPU integration and grammar handling updates.
GPU-based DNN-HMM computation (take a look at the v4.6 performance comparison on YouTube!)
Julius can now compute DNN-HMM acoustic scores on a GPU. Total decoding is about four times faster than the CPU-based computation of Julius-4.5.
Requires CUDA version 8, 9 or 10.
Julius has merged a pull request that adds a new feature, “grammar search on the 1st pass”. To use it, get the latest code from the master branch.
It enables applying the full grammar on the 1st pass, thus producing a more reliable (grammar-constrained) result already at the 1st pass.
Background
For efficiency, grammar-based recognition in Julius does not apply the full grammar on the 1st pass; it applies only the word-pair constraint extracted from the grammar.
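As an illustration, the word-pair constraint can be thought of as keeping only the set of adjacent word pairs that occur somewhere in the grammar, and checking a hypothesis against that set instead of against the full grammar. Below is a minimal sketch of this idea; the toy grammar, the flat sentence-set representation, and all function names are hypothetical illustrations, not Julius code (Julius works on compiled DFA grammars):

```python
# Toy grammar, represented here simply as the set of sentences it accepts.
# This flat representation is a simplification for illustration only.
GRAMMAR = {
    ("turn", "on", "the", "light"),
    ("turn", "off", "the", "light"),
    ("turn", "on", "the", "fan"),
}

def word_pair_constraint(grammar):
    """Extract every adjacent word pair allowed somewhere in the grammar."""
    pairs = set()
    for sentence in grammar:
        pairs.update(zip(sentence, sentence[1:]))
    return pairs

def passes_word_pairs(hyp, pairs):
    """1st-pass-style check: only adjacent word pairs are validated."""
    return all(p in pairs for p in zip(hyp, hyp[1:]))

def passes_full_grammar(hyp, grammar):
    """Full-grammar check, as enforced when the complete grammar is applied."""
    return tuple(hyp) in grammar
```

For example, the hypothesis `("turn", "off", "the", "fan")` satisfies every word-pair constraint above yet is not in the grammar, which shows why the word-pair approximation over-generates and why applying the full grammar yields a more reliable result.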