CMMR 2017

From 25 to 28 September 2017, the Sound and Music Computing Group at INESC TEC, together with the Orquestra Jazz de Matosinhos and the CNRS – Laboratoire de Mécanique et d’Acoustique (France), will organise CMMR 2017.

CMMR 2017 will be held at CARA (Centro de Alto Rendimento Artístico) in Matosinhos, Porto.

We invite submissions for papers, music, and demos aligned with the conference theme, with the following important dates:
  • 2nd April, 2017 – Paper / Music / Demo submission deadline
  • 9th June, 2017 – Notification of acceptance
  • 2nd July, 2017 – Camera-ready deadline
  • 9th July, 2017 – Author registration deadline
The full list of topics and further instructions on the preparation of submissions can be found here:
http://cmmr2017.inesctec.pt/call-for-contributions/call-for-papers/
http://cmmr2017.inesctec.pt/call-for-contributions/call-for-music/
http://cmmr2017.inesctec.pt/call-for-contributions/call-for-demos/

We greatly look forward to receiving your submissions, and welcoming you to Porto and Matosinhos in September!

RobotDance

Lego NXT Dancing to the Rhythm

A Lego-NXT-based humanoid robot dancing to the rhythm of The Prodigy. It reacts in real time to multimodal events, such as musical note onsets and dance floor colors, with user-defined dance motions. [paper]
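
As a rough sketch of this kind of event-to-motion mapping (event names, motion names and the robot interface below are illustrative assumptions, not the implementation from the paper):

    # Minimal sketch: map multimodal events to user-defined dance motions.
    # Event names, motion names and robot.play_motion() are hypothetical.
    DANCE_MOVES = {
        "note_onset": "arm_wave",      # a detected musical note onset
        "floor_red": "spin_left",      # a red dance-floor tile under the robot
        "floor_blue": "step_forward",  # a blue dance-floor tile under the robot
    }

    def on_event(robot, event):
        """Play the user-defined motion associated with an incoming event."""
        move = DANCE_MOVES.get(event)
        if move is not None:
            robot.play_motion(move)

The mapping itself is the user-editable part: changing the dictionary changes the choreography without touching the event detection.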

Synchronized Robot Dancing

RoboNova dancing to Blur. Each sequence of moves is synchronized on the fly to the musical beats retrieved from the real-time INESC Porto Beat Tracker (IBT). [paper1]; [paper2]
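
A minimal sketch of this kind of beat-synchronous triggering, under assumed interfaces (the beat-event queue and the robot API below are not the IBT or RoboNova APIs):

    import itertools

    # Minimal sketch: a hypothetical beat tracker pushes beat events into a
    # queue, and the robot advances one move per beat, looping the sequence.
    MOVE_SEQUENCE = ["lean_left", "lean_right", "raise_arms", "clap"]

    def dance_to_beats(beat_queue, robot):
        """Advance through the move sequence each time a beat event arrives."""
        for move in itertools.cycle(MOVE_SEQUENCE):
            beat_queue.get()            # blocks until the tracker reports a beat
            robot.play_motion(move)     # assumed robot API; executed on the beat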

Humanoid Robot Dancing Samba

Simulated humanoid robot NAO dancing samba. The dance movements were mapped from MoCap (motion capture) recordings of human samba performances. [paper1]; [paper2]; [paper3]
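
A rough sketch of one way to retarget MoCap joint trajectories to a robot's control rate and joint limits (sampling rates, units and limits here are assumptions, not the mapping described in the papers):

    import numpy as np

    # Minimal sketch: resample captured joint-angle trajectories to the robot's
    # control rate and clip them to (assumed) joint limits.
    def retarget(mocap_angles, mocap_fps=120, robot_fps=50, limits=(-2.0, 2.0)):
        """mocap_angles: array of shape (frames, joints), joint angles in radians."""
        t_mocap = np.arange(mocap_angles.shape[0]) / mocap_fps
        t_robot = np.arange(0.0, t_mocap[-1], 1.0 / robot_fps)
        resampled = np.stack(
            [np.interp(t_robot, t_mocap, mocap_angles[:, j])
             for j in range(mocap_angles.shape[1])], axis=1)
        return np.clip(resampled, *limits)   # keyframes to send to the robot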

Beat Tracking for Robot Audition and Live Beat-Synchronous Robot Dancing

Interactive robot dancing system that simultaneously processes and responds to the beat of continuous musical stimuli and to the speech of a human interactor in a real-world scenario. [paper1]; [paper2]; [paper3]
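
A minimal sketch of handling the two input streams at once (the thread layout, event formats and robot API are assumptions, not the architecture from the papers):

    import queue

    # Minimal sketch: beat events and recognised speech commands, each produced
    # by its own listener (typically on its own thread), are merged into a
    # single event queue that the dancing loop consumes.
    events = queue.Queue()

    def on_beat(beat_time):
        events.put(("beat", beat_time))

    def on_speech(utterance):
        events.put(("speech", utterance))

    def dance_loop(robot):
        while True:
            kind, payload = events.get()
            if kind == "beat":
                robot.step_dance()              # advance the dance on each beat
            elif kind == "speech":
                robot.handle_command(payload)   # e.g. "stop" or "dance faster"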

MIReS

  • Topic: Music Information Retrieval
  • Partners: UPF (Barcelona), BMAT (Barcelona), OFAI (Vienna), IRCAM (Paris), INESC Porto (Porto), Stromatolite (London), QMUL (London)
  • Project leader: Stromatolite
  • Funding: EU
  • Dates: 2011-2013
  • External link
Palco 3.0

  • Topic: Improving music recommendations in the Portuguese music social network Palco Principal:
    • social networks of musicians and listeners
    • free music data, with a focus on Portuguese underground music
  • Research on:
    • music audio signal classification (genres, tags, etc.)
    • audio-based music similarity (see the sketch after this list)
    • music similarity based on user behaviors
    • music similarity based on textual data
    • playlist generation
  • Partners: Palco Principal Lda, INESC Porto, the Faculty of Economics of the University of Porto, and the Faculty of Sciences of the University of Porto
  • Project leader: Palco Principal Lda
  • Funding: QREN
  • Dates: 2009-2011
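
One common way to compute audio-based music similarity (an illustrative sketch, not necessarily the approach developed in Palco 3.0) is to summarise each track with timbre features and compare the summaries:

    import numpy as np
    import librosa   # assumed available; any MFCC extractor would do

    def track_descriptor(path, n_mfcc=20):
        """Summarise a track as the mean and standard deviation of its MFCCs."""
        y, sr = librosa.load(path, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    def similarity(path_a, path_b):
        """Cosine similarity between two track descriptors (higher = more similar)."""
        a, b = track_descriptor(path_a), track_descriptor(path_b)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))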

COST IC0601 Action on Sonic Interaction Design

SID is the COST IC0601 Action on Sonic Interaction Design.

  • Topic: Sonic Interaction Design
  • Partners
  • Project Chair: Università IUAV di Venezia
  • Funding: COST
  • Dates: 2007-2011

In the SID COST Action, our group participated in a number of Short-Term Scientific Missions:

  1. “Soundscape analysis using music principles and technologies” – Ming Yang, from the University of Sheffield to our group; February-March 2011
  2. “IMPROVe Sound Object Instrument” – Richard Widerberg, from the Valand School of Fine Arts, University of Gothenburg, to our group; November 2010
  3. “Interactive Music 2.0: User-driven ad hoc dance clubs” – Yago De Quay, from our group to the University of Oslo, Dept. of Musicology and Informatics; August-September 2010
  4. “Mapping Human Dance Gestures onto Robots” – João Oliveira, from our group to IPEM, Ghent University; November 2009
  5. “Interactions between dance movements and music rhythmic structure – The case of groove in Samba” – Luiz Naveda, from IPEM, Ghent University, to our group; May 18 – June 1, 2009
  6. “UrbanSync: Gathering of urban signals” – Stephan Baumann, from the German Research Center for Artificial Intelligence, Kaiserslautern, to our group; 6/10/08 – 26/10/08
  7. “Exploring contact sound synthesis via haptic interactions” – Mathieu Lagrange, from IRCAM to our group; April 2008
  8. “Implementation of Auditory Scene Analysis cues in the free sound analysis framework MARSYAS” – Luis Gustavo Martins, from our group to CIRMMT, Montreal; June 2008

Improvasher: a real-time mashup system for live musical input

Matthew E. P. Davies (1), Adam M. Stark (2), Fabien Gouyon (1) and Masataka Goto (3)
(1) INESC TEC, Porto, Portugal
(2) Independent Researcher, London, United Kingdom
(3) National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan

This webpage contains video examples to accompany the paper accepted at the NIME 2014 conference.

Demo 1 – Live Guitar / Michael Jackson mashup


In this demo, beat slices from two Michael Jackson songs are used to make the real-time mashup accompaniment.
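
As a rough sketch of the kind of beat-slice accompaniment shown here (slice storage, features and playback are assumptions, and the matching step is heavily simplified; this is not the Improvasher implementation):

    import numpy as np

    # Minimal sketch: on each detected beat of the live input, play the stored
    # beat slice whose feature vector is closest to that of the live beat.
    class SliceAccompanist:
        def __init__(self, slices, features):
            self.slices = slices                   # audio buffers, one per stored beat
            self.features = np.asarray(features)   # one feature vector per stored beat

        def accompany(self, live_feature, play):
            """Pick the nearest stored slice to the live beat and play it."""
            dists = np.linalg.norm(self.features - live_feature, axis=1)
            play(self.slices[int(np.argmin(dists))])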


Demo 2 – Average White Band / Slap Bass mashup


In this demo, a recording of slap bass playing is used to create the real-time mashup accompaniment to the Average White Band.


Demo 3 – Live acoustic guitar / Bass mashup


In this demo, we use the bass to accompany the live acoustic guitar input.


Demo 4 – Steely Dan DJ style live mashup


In this demo, the user controls when the mashup is active, adding it over the top of the input in real time.

 

This research was partially funded by the FCT postdoctoral grant (SFRH/BPD/88722/2012); by the Media Arts and Technologies project (MAT), NORTE-07-0124-FEDER-000061, financed by the North Portugal Regional Operational Programme (ON.2 – O Novo Norte), under the National Strategic Reference Framework (NSRF), through the European Regional Development Fund (ERDF), and by national funds through the Portuguese funding agency, Fundação para a Ciência e a Tecnologia (FCT); and by OngaCrest, CREST, JST.

WEVE

This is a CouchApp for conducting automated listening tests online. CouchApps are web applications which can be served directly from CouchDB. This gives them the nice property of replicating just like any other data stored in CouchDB. They are also simple to write as they can use the built-in jQuery libraries and plugins that ship with CouchDB.
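
As a minimal illustration of the idea (the database name, server URL and page content below are hypothetical; this is not the WEVE app itself), a design document with an attached HTML page can be pushed to CouchDB and then served straight from the database:

    import base64, requests

    # Minimal sketch: publish a one-page app as a CouchDB design document.
    # Server URL, database name and page content are illustrative assumptions.
    COUCH = "http://localhost:5984"
    DB = "listening_tests"

    page = "<html><body><h1>Listening test</h1></body></html>"
    design_doc = {
        "_id": "_design/app",
        "_attachments": {
            "index.html": {
                "content_type": "text/html",
                "data": base64.b64encode(page.encode()).decode(),
            }
        },
    }

    requests.put(f"{COUCH}/{DB}")   # create the database (errors harmlessly if it exists)
    requests.put(f"{COUCH}/{DB}/_design/app", json=design_doc)
    # CouchDB now serves the page itself at:
    #   http://localhost:5984/listening_tests/_design/app/index.html

Because the app lives in a design document, replicating the database to another CouchDB instance carries the application along with the collected test data.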