This research focuses on the computational design process of synesthesia spaces. Within the scope of knowledge-based design, a system is developed that maps defined inputs to outputs. Two of the five human senses, sight and hearing, are selected as inputs, and a 3D-printable output is generated that expresses the two input senses together and can be perceived by touch. In this way, the system detects data from digital media such as music or film and produces 3D models to be perceived through touch.
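A minimal sketch of such a pipeline is given below. It is not the authors' implementation; it only illustrates, under assumed function and parameter names, how an audio signal from a piece of music could be mapped to a tactile relief: the signal is windowed, frequency magnitudes form a height field over time, and the height field is written as a triangulated surface in ASCII STL so it can be 3D printed and explored by touch.

```python
# Illustrative sketch (assumed, not the published system): audio -> height field -> STL relief.
import numpy as np


def audio_to_heightfield(samples, n_windows=64, n_bins=32):
    """Split the signal into windows; FFT magnitudes per window become surface heights."""
    window = len(samples) // n_windows
    heights = np.zeros((n_windows, n_bins))
    for i in range(n_windows):
        chunk = samples[i * window:(i + 1) * window]
        heights[i] = np.abs(np.fft.rfft(chunk))[:n_bins]
    peak = heights.max()
    return heights / peak if peak > 0 else heights  # normalise to 0..1


def heightfield_to_stl(heights, path, cell=2.0, z_scale=10.0):
    """Write the height field as a triangulated surface in ASCII STL for 3D printing."""
    rows, cols = heights.shape
    with open(path, "w") as f:
        f.write("solid relief\n")
        for r in range(rows - 1):
            for c in range(cols - 1):
                # Four corners of one grid cell, split into two triangles.
                corners = [(r, c), (r + 1, c), (r + 1, c + 1), (r, c + 1)]
                pts = [(x * cell, y * cell, heights[x, y] * z_scale) for x, y in corners]
                for tri in ((pts[0], pts[1], pts[2]), (pts[0], pts[2], pts[3])):
                    # Normal left as +Z; slicers typically recompute facet normals.
                    f.write("  facet normal 0 0 1\n    outer loop\n")
                    for px, py, pz in tri:
                        f.write(f"      vertex {px:.3f} {py:.3f} {pz:.3f}\n")
                    f.write("    endloop\n  endfacet\n")
        f.write("endsolid relief\n")


if __name__ == "__main__":
    # A synthetic two-tone signal stands in for music or a film soundtrack.
    rate = 8000
    t = np.linspace(0, 2, 2 * rate)
    tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
    heightfield_to_stl(audio_to_heightfield(tone), "relief.stl")
```

The resulting `relief.stl` is a printable surface whose ridges follow the spectral content of the sound over time, so the auditory input can be read with the fingertips.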
The system also enables the transformation of video inputs into 3D models, so that interaction with media sources through touch becomes possible at the end of the process; it is currently a prototype and needs further development to improve its ease of use. Synesthesia exhibition areas can also be created with this program, and it is assumed that experiencing digital media through different senses can add another dimension to exhibition work. Moreover, it may enable visually and hearing-impaired people to interact with digital media in new ways.