sentient
iOS synthesizer with a machine learning algorithm
Synthesizer with a neural network that studies everyday sounds and gradually builds a unique wavetable for each user.
The controller with the eyes represents the neural network’s learning system. When the eyes are open, the system learns; when they are closed, learning is disabled. A long tap toggles learning on or off. Dragging vertically changes the learning rate, while dragging horizontally controls how strongly the winning neuron influences its neighbors. This influence is shown by the number of open eyes and the width of the pupils. While learning, the neural network visibly `swells`.

When a new sound is analyzed, the neuron that best matches it updates its internal representation and `swells` more than the others. Nearby neurons also update, but less strongly, forming a smooth sound space inside the network.
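The update rule described above (a winning neuron that changes most, with neighbors updating less strongly) matches a self-organizing map. A minimal one-dimensional sketch, where `learning_rate` and `neighborhood` stand in for the parameters the eye controller exposes (the names are illustrative, not the app's actual code):

```python
import math

def som_update(weights, sample, learning_rate, neighborhood):
    """One learning step: the best-matching neuron moves most toward the
    sample; neighbors move less, with Gaussian falloff over grid distance."""
    def dist(w):
        return sum((wi - si) ** 2 for wi, si in zip(w, sample))
    # Best-matching unit (BMU): the neuron closest to the analyzed sound.
    bmu = min(range(len(weights)), key=lambda i: dist(weights[i]))
    for i, w in enumerate(weights):
        # Influence decays with distance from the BMU on the grid.
        influence = math.exp(-((i - bmu) ** 2) / (2 * neighborhood ** 2))
        weights[i] = [wi + learning_rate * influence * (si - wi)
                      for wi, si in zip(w, sample)]
    return bmu
```

Repeated steps like this are what form the smooth sound space: nearby neurons end up storing similar waveforms.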

Two grids represent the two `hemispheres` of the neural network. Each cell corresponds to a neuron. Tapping a cell selects the neuron whose waveform will be used. An orange dot marks the selected neuron, and the curve above it shows the fragment of waveform stored by that neuron — the wavetable used to generate notes or modulate parameters.
The controller with the planets manages memories. Dragging it moves through time and lets you browse stored states. A double tap returns to the present moment. A long tap opens a menu with three actions marked by Russian letters: `С` for save (from sokhranit'), `У` for delete (from udalit'), and `З` for close (from zakryt').

The planet interface represents time symbolically. The crescent shadow shows the time of day, the moon’s position indicates the day of the month, and the satellite’s position represents the orbital period of an artificial body similar to the ISS.
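The mapping from clock time to the three orbital positions can be sketched as below. The 92-minute orbital period is an assumption based on the ISS comparison, and the function name is illustrative, not the app's actual code:

```python
def sky_angles(hour, day_of_month, minutes_since_midnight):
    """Convert calendar time to rotation angles (degrees) for the
    crescent shadow, the moon, and the ISS-like satellite."""
    shadow = (hour / 24.0) * 360.0                    # time of day
    moon = ((day_of_month - 1) / 31.0) * 360.0        # day of the month
    satellite = (minutes_since_midnight % 92) / 92.0 * 360.0  # ~92-min orbit
    return shadow, moon, satellite
```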

Below the neural grids are six synthesis sections. The first is marked with the Russian letter `Г`, from generator, and selects the oscillator waveform. Two sliders control the fundamental frequency and volume.
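A neuron's stored waveform fragment acts as a single-cycle wavetable that the oscillator reads at the chosen fundamental frequency. A minimal sketch of such playback with linear interpolation (the `render` helper is hypothetical, not the app's API):

```python
def render(wavetable, freq, sample_rate, n_samples, volume=1.0):
    """Read a single-cycle wavetable at the given fundamental frequency,
    linearly interpolating between stored points."""
    out = []
    phase = 0.0
    step = freq * len(wavetable) / sample_rate  # table positions per sample
    for _ in range(n_samples):
        i = int(phase)
        frac = phase - i
        a = wavetable[i % len(wavetable)]
        b = wavetable[(i + 1) % len(wavetable)]
        out.append(volume * (a + frac * (b - a)))
        phase = (phase + step) % len(wavetable)
    return out
```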

The remaining sections provide modulation. Sliders control depth and rate, while letters indicate the destination parameter: `П` for pitch (from podstroika), `П` for pan (from panorama), `Д` for drive, `Ч` for filter cutoff (from chastota), and `Р` for resonance (from rezonans).
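Each modulation section behaves like a low-frequency oscillator with a depth and a rate. A sketch of how such an LFO could sweep a destination parameter, here pitch; the names and the sinusoidal shape are illustrative assumptions:

```python
import math

def lfo_offset(t, depth, rate_hz):
    """Sinusoidal LFO value at time t (seconds): depth scales the swing,
    rate sets how fast the destination parameter is swept."""
    return depth * math.sin(2 * math.pi * rate_hz * t)

def modulated_pitch(base_hz, t, depth_semitones, rate_hz):
    # Convert the LFO's semitone offset to a frequency ratio (12-TET).
    return base_hz * 2 ** (lfo_offset(t, depth_semitones, rate_hz) / 12.0)
```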
The slider with stars controls the filter. Dragging horizontally selects the filter type, while dragging vertically adjusts drive. The area to the right controls filter frequency and resonance.
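One common structure that provides several selectable filter types from a single frequency/resonance pair is the Chamberlin state-variable filter, which yields lowpass, bandpass, and highpass outputs at once. A textbook sketch under that assumption (not the app's actual DSP):

```python
import math

class StateVariableFilter:
    """Chamberlin state-variable filter: one structure gives lowpass,
    bandpass, and highpass outputs simultaneously."""
    def __init__(self, sample_rate, cutoff_hz, resonance):
        self.f = 2.0 * math.sin(math.pi * cutoff_hz / sample_rate)
        self.q = 1.0 / max(resonance, 0.5)  # higher resonance -> less damping
        self.low = self.band = 0.0

    def process(self, x):
        high = x - self.low - self.q * self.band
        self.band += self.f * high
        self.low += self.f * self.band
        return self.low, self.band, high
```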

At the bottom of the interface is a keyboard of eight keys, each represented by a Lissajous figure — a curve created by combining two harmonic oscillations. Each figure corresponds to a note. Above the keys are two buttons: `У` for hold (from uderzhanie) and `П` for tuning (from podstroika).
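A Lissajous figure plots two harmonic oscillations at right angles. Each key's curve can be generated as below; the particular frequency ratios and phase per key are illustrative, not taken from the app:

```python
import math

def lissajous(a, b, delta, n_points=256):
    """Sample the curve x = sin(a*t + delta), y = sin(b*t)
    over one full period of the combined oscillation."""
    pts = []
    for k in range(n_points):
        t = 2.0 * math.pi * k / n_points
        pts.append((math.sin(a * t + delta), math.sin(b * t)))
    return pts
```

With a = b and delta = 0 the figure degenerates to a diagonal line; with a phase offset of 90 degrees it becomes a circle, and unequal integer ratios give the familiar woven loops.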
In version 0.1.1 the main critical issues have been fixed. Some layout work remains, and the parameter tree will be visible in hosts such as AUM. Neural network training is available only in the standalone version, while saved memories remain available in AUv3.