Sound Design Process

My composition and mixing practice is deeply linked with sound design — I treat them as one discipline. Every project starts from raw material and ends as a carefully designed sound experience.

I-go-I-e project

Buddhist temple recordings from Kyoto and footage from the Tokyo Festival of Modular.

Sound design and composition collaboration with a biwa master. Field recordings from Buddhist temples, combined with FM synthesis (Digitone) and granular synthesis (Make Noise Morphagene, processing both voice and FM synth patches).

Buchla 200e

Generative Industrial / Cyberpunk Soundscapes

Generative patches using wavetables, LPGs, feedback routing and patchbay sequencing. State-based preset sequencing for evolving textures and rhythmic structures.

Na Styku with Lucjan Kościółek

Traditional Polish Instruments x Modular

https://www.instagram.com/p/CEa5kh_hiEJ/

https://www.instagram.com/p/CDIm6xlhgAC/

Lira and suka combined with Eurorack modular synthesis. Acoustic-electronic sound design: recordings processed through granular synthesis and woven into Eurorack soundscapes.

Active Development of UE5 Audio MCP (alpha, open source)

AI-powered game audio toolkit: MetaSounds graph generation and export, project scanning, and a Wwise/Blueprint knowledge base exposed via MCP (Model Context Protocol) for Unreal Engine 5.7. A minimal sketch of the MCP-to-plugin command flow follows the feature list and link below.

  • One MCP server with 63 tools
  • UE5 Plugin with C++ TCP commands
  • 178 engine-verified MetaSounds nodes
  • Optimised for Unreal Engine 5.7 and Wwise 2025
  • 5 custom C++ nodes: ReSID SID chip emulator (oscillator, envelope, filter, voice, full chip)
  • Editor menu: Scan Project, Export MetaSounds and Server Status from the UE5 menu bar
  • Engine sync: 842 MetaSounds nodes and 979 Blueprint functions synced from the live UE5 editor

https://github.com/koshimazaki/UE-AUDIO-MCP
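
The split in practice is an MCP server on one side and the in-editor plugin listening for TCP commands on the other. As a rough sketch of that hop only (not the project's actual wire protocol: the port, command name, JSON shape and newline framing below are placeholders; the real interface lives in the repository), a single command round trip could look like this:

```python
import json
import socket

# Hypothetical illustration of the MCP-server-to-UE5-plugin hop:
# an MCP tool handler serialises a command as JSON, sends it to the
# plugin's TCP listener running inside the editor, and reads a JSON
# reply. Port, command name and payload shape are placeholders, not
# the actual UE-AUDIO-MCP protocol.

PLUGIN_HOST = "127.0.0.1"
PLUGIN_PORT = 9000  # placeholder port for the in-editor TCP listener


def send_editor_command(command: str, params: dict) -> dict:
    """Send one JSON command to the UE5 plugin and return its JSON reply."""
    request = json.dumps({"command": command, "params": params}).encode("utf-8")
    with socket.create_connection((PLUGIN_HOST, PLUGIN_PORT), timeout=5) as sock:
        sock.sendall(request + b"\n")           # newline-delimited framing (assumed)
        reply = sock.makefile("rb").readline()  # read one JSON line back
    return json.loads(reply)


if __name__ == "__main__":
    # e.g. ask the editor to scan a folder for audio assets (illustrative only)
    result = send_editor_command("scan_project", {"path": "/Game/Audio"})
    print(result)
```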

Slavic Beasts

Mythological Creature Sound Design

Installation featuring sound design for Slavic mythological creatures. Composed, mixed and created all the audio content for the project, including field recordings, voice actor performance (Cheri Mainwaring) and sound synthesis. The installation ran multiple audio layers in real time, with an individual loop dedicated to each creature, triggered by the audience via webcam and Kinect.
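
Purely as an illustration of that triggering logic (the installation ran on its own show-control and playback setup; the creature identifiers, file paths and detection-event format below are placeholders), the per-creature layering reduces to a small mapping from detection events to dedicated loops:

```python
# Illustrative only: per-creature loop triggering driven by audience
# detection events (webcam / Kinect). Not the installation's actual code;
# creature names, loop paths and the playback backend are placeholders.

from dataclasses import dataclass


@dataclass
class CreatureLayer:
    name: str
    loop_file: str
    active: bool = False

    def trigger(self) -> None:
        # Each creature had its own audio loop; here we only mark it active
        # and would hand the file to whatever player drives the speakers.
        self.active = True
        print(f"start loop for {self.name}: {self.loop_file}")

    def release(self) -> None:
        self.active = False
        print(f"stop loop for {self.name}")


# One dedicated loop per creature (names and paths are placeholders)
layers = {
    "creature_a": CreatureLayer("creature_a", "loops/creature_a.wav"),
    "creature_b": CreatureLayer("creature_b", "loops/creature_b.wav"),
}


def on_detection(creature_id: str, present: bool) -> None:
    """Route a webcam/Kinect detection event to the matching creature layer."""
    layer = layers.get(creature_id)
    if layer is None:
        return
    if present and not layer.active:
        layer.trigger()
    elif not present and layer.active:
        layer.release()


if __name__ == "__main__":
    on_detection("creature_a", True)   # audience enters a creature's zone
    on_detection("creature_a", False)  # audience leaves, loop stops
```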