JTES (Japanese Twitter-based Emotional Speech)

We designed an emotional speech database that can be used for emotion recognition as well as for recognition and synthesis of speech with various emotions. The database was built by compiling tweets acquired from Twitter and selecting emotion-dependent tweets with phonetic and prosodic balance in mind. We classified the gathered tweets into four emotions (joy, anger, sadness, and neutral) and then selected 50 sentences for each emotion using an entropy-based algorithm. We compared the selected sentence sets with randomly selected sets in terms of phonetic balance, prosodic balance, and sentence length, and confirmed that the sets chosen by the algorithm were more balanced. We then recorded emotional speech based on the selected sentences and evaluated it from the viewpoints of emotion recognition and emotional speech recognition.
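The selection step is essentially a balanced subset-selection problem. As a rough sketch only (the exact procedure is not given here), the Python below shows a greedy variant that repeatedly adds the sentence maximizing the Shannon entropy of the pooled phoneme distribution; the function names `phoneme_entropy` and `select_sentences`, the greedy strategy, and the use of phoneme counts alone (ignoring prosodic features) are illustrative assumptions, not the authors' algorithm.

```python
import math
from collections import Counter


def phoneme_entropy(counts):
    """Shannon entropy (bits) of a phoneme frequency distribution."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def select_sentences(candidates, k=50):
    """Greedily pick k sentences whose combined phoneme distribution
    has maximal entropy, i.e. is as phonetically balanced as possible.

    `candidates` maps each sentence (str) to its phoneme sequence (list of str).
    """
    selected = []
    pooled = Counter()          # phoneme counts of the sentences chosen so far
    remaining = dict(candidates)
    while len(selected) < k and remaining:
        best_sent, best_entropy = None, -1.0
        for sent, phones in remaining.items():
            # Entropy of the pool if this sentence were added.
            trial = pooled + Counter(phones)
            h = phoneme_entropy(trial)
            if h > best_entropy:
                best_sent, best_entropy = sent, h
        selected.append(best_sent)
        pooled += Counter(remaining.pop(best_sent))
    return selected
```

A fuller version would fold prosodic categories (e.g. accent or intonation types) into the same entropy objective, since the description above targets prosodic as well as phonetic balance.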

Tasks


  • Emotion Recognition
  • Speech Recognition
  • Speech Synthesis

License


  • proprietary

Modalities


  • Speech

Languages


  • Japanese