
Music search system interface
In this system, as in the character coordination system, membership functions are predefined and fuzzy rules are used to extract the user's preference rules for music. Basic experiments confirmed that the system is, to some extent, effective in acquiring user preferences; however, issues unique to music also emerged.
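Before turning to those music-specific issues, the sketch below gives a rough idea of how such fuzzy rules could map audio features to a preference score. The feature names (tempo, brightness), membership breakpoints, and rules are illustrative assumptions, not the actual system.

```python
# Minimal sketch of fuzzy preference estimation for a music track.
# Feature names, membership breakpoints, and rules are assumptions
# made for illustration only.

def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_preference(tempo_bpm, brightness):
    # Fuzzify the input features (degree of membership in each label).
    slow   = tri(tempo_bpm, 40, 70, 110)
    fast   = tri(tempo_bpm, 90, 140, 200)
    dark   = tri(brightness, 0.0, 0.2, 0.5)
    bright = tri(brightness, 0.4, 0.7, 1.0)

    # Each rule: (firing strength with min as AND, output preference level).
    rules = [
        (min(fast, bright), 0.9),  # IF tempo is fast AND timbre is bright THEN preference is high
        (min(slow, dark),   0.3),  # IF tempo is slow AND timbre is dark   THEN preference is low
        (min(slow, bright), 0.6),  # IF tempo is slow AND timbre is bright THEN preference is medium
    ]

    # Defuzzify with a weighted average of the rule outputs.
    total = sum(w for w, _ in rules)
    return sum(w * y for w, y in rules) / total if total > 0 else 0.5

print(estimate_preference(tempo_bpm=128, brightness=0.8))  # roughly 0.9 for a fast, bright track
```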
The main difference between music (audio data) and character coordination (static image data) is that music is time-series data, i.e., information that changes over time, such as sound or video. Evaluating a single piece of time-series data takes time, because the user must listen long enough to perceive what kind of music it is, whereas a static image can be assessed at a glance.
In addition, with fuzzy rules it can be difficult to visualize effectively which features are used and how important they are. These issues indicate that there is still considerable room for further research on the affective search agent model.
Our related works
[International Conference] Hiroshi Takenouchi, Yuna Ishihara, Masataka Tokumaru, "Preference Rule Extraction with Kansei Retrieval Agent Using Fuzzy Reasoning for Music Retrieval", the 20th World Congress of the International Fuzzy Systems Association (IFSA), MA4-2, pp.140-145, 2023-08 (Daegu, Korea). Best Presentation Award!
[International Conference] Hiroshi Takenouchi, Airi Hattori, Masataka Tokumaru, "Music Recommendation System Considering Musical Score Features using Kansei Retrieval Agents with Fuzzy Inference", International Symposium on Affective Science and Engineering 2022 (ISASE2022), PM-2A-05, 2022-03 (Online).
*Bold: Students of this laboratory