PhD Defense in Digital Media: “Computing by going back in time: Composing video sequences through multimodal generative coordination”

Candidate:
Luís Henrique Pinto Arandas

Date, Time and Place:
June 03, 14:30, Sala de Atos FEUP

President of the Jury:
António Fernando Vasconcelos Cunha Castro Coelho, PhD, Associate Professor with Habilitation, Faculdade de Engenharia, Universidade do Porto.

Members:
Luísa Maria Lopes Ribas, PhD, Assistant Professor, Departamento de Design de Comunicação, Faculdade de Belas-Artes, Universidade de Lisboa;
David Fernandes Semedo, PhD, Assistant Professor, Departamento de Informática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa;
André Sier, PhD, Invited Assistant Professor, Departamento de Artes Visuais e Design, Universidade de Évora;
José Miguel Santos Araújo Carvalhais Fonseca, PhD, Full Professor, Faculdade de Belas Artes, Universidade do Porto (Supervisor);
Gilberto Bernardes de Almeida, PhD, Assistant Professor, Departamento de Engenharia Informática, Faculdade de Engenharia, Universidade do Porto.

The thesis was co-supervised by Mick Grierson, Professor in Computing and Research Leader at the Creative Computing Institute, University of the Arts London.

Abstract
This project proposes a set of methods, inspired by the human experience of vision and time, for developing video sequences using trained generative models. The methods serve the production of video sequences, drawing on patterns from trained models found in the literature on metacreation and in the art world. The project outlines possible futures in which now-pervasive generative models can be reused in computer simulations focused on the human experience of video and mental images; because such models are trained on archives and records representing both the human and the physical world, they can capture media themselves and represent specific moments in time.

The research outputs take the form of film and audiovisual installation, proposing that practice can further benefit from self-reference, using deep generative models as synthesisers of video, sound, and text. The methods produced take advantage of natural language guidance and deep generative models in ways that can be understood as sampling, sequencing, and translation, following the computing literature and AI design. Each result can be situated in a larger domain: 1) short films from text inputs, in the film Irreplaceable Biography; 2) discursive installations from video datasets, in the installation Time as meaning; and 3) short films from video inputs, in the film Man lost in the convergence of time and the collaboration all YIN no YANG. This research extends the use of generative practice, following a construct of language in the human mind, behaviour, and visual experience as inspiration for the experience of video. These projects further define a broader understanding of directionality and the representation of the past, using systems of memory that learn, are networked, and are produced following structures found in nature and human experience.

Keywords: Video composition; Deep generative models; Time-travelling; Human visual experience; Predictive representations.
