earGram is a software application for the interactive exploration of large databases of audio snippets, developed in Pure Data by Gilberto Bernardes (FEUP, SMC Group – INESC/TEC). earGram extends concatenative sound synthesis by embedding generative strategies as selection procedures, aiming to explore creative spaces with reduced user interaction.
The software offers four methods that automatically rearrange and explore a database of sound events (the corpus) by rules other than their original temporal order, producing musically coherent outputs. Notable are the system's data-mining capabilities, which reveal musical patterns and temporal organizations, as well as several visualization tools that serve as a vital decision-making aid during performance.
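To illustrate the general idea of rearranging a corpus by rules other than temporal order, the following is a minimal sketch (not earGram's actual code, which is a Pure Data patch): each sound unit is described by a feature vector, and a greedy selection chains together the units whose features are closest, regardless of where they occurred in the source recording. The corpus, features, and helper names are all hypothetical.

```python
import math

# Hypothetical corpus: each audio "unit" is described by a feature
# vector, e.g. (spectral centroid, loudness), both normalized to [0, 1].
corpus = {
    "a": (0.2, 0.5),
    "b": (0.9, 0.1),
    "c": (0.25, 0.55),
    "d": (0.8, 0.2),
}

def resequence(corpus, start, length):
    """Greedily chain units: each step picks the unused unit whose
    features are closest to the current one, ignoring the units'
    original temporal order in the source audio."""
    order = [start]
    remaining = set(corpus) - {start}
    while remaining and len(order) < length:
        current = corpus[order[-1]]
        nxt = min(remaining, key=lambda k: math.dist(corpus[k], current))
        order.append(nxt)
        remaining.discard(nxt)
    return order

print(resequence(corpus, "a", 4))  # → ['a', 'c', 'd', 'b']
```

Here the output sequence groups perceptually similar units ("a" with "c", "d" with "b") rather than preserving the alphabetical/temporal order; concatenative systems apply the same principle with richer feature sets and selection rules.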
To download earGram, click here.
Read more about earGram in the following publications:
Composing Music by Selection: Content-Based Algorithmic-Assisted Audio Composition, PhD thesis, Faculty of Engineering, University of Porto, 2014.
EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data, in Lecture Notes in Computer Science, From Sounds to Music and Emotions, Revised Selected Papers of the 9th International Symposium on Computer Music Modelling and Retrieval, volume 7900, 2013.
EarGram: An Application for Interactive Exploration of Large Databases of Audio Snippets for Creative Purposes, in Proceedings of the 9th International Symposium on Computer Music Modelling and Retrieval, 2012.
See also the main earGram homepage: here.