Rui Nóbrega

Rui Nóbrega is a former post-doc researcher at INESC TEC and a former Invited Auxiliary Professor at the Faculty of Engineering of the University of Porto (DEI-FEUP). He holds a Ph.D. in Computer Science (Doutoramento em Informática) and has been involved in several software projects in collaboration with private companies, public institutions and academia. He studied at the Faculty of Science and Technology of the NOVA University of Lisbon, obtaining degrees in Computer Science (Ph.D.) and Computer Software Engineering (M.Sc., B.Sc.). Before working at FEUP, he worked as a programmer at the IT company Novabase and at the CITI research center (NOVA). During his research at CITI he collaborated with several institutions (Fundação Gulbenkian, LNEC, Protecção Civil, Fundação Museu Berardo, FCSH-UNL, Hospital de S. João) and private companies (Duvideo, Inovagency, BIN, NextWay, Atelier Joana Vasconcelos), providing high-level consulting in multimedia, augmented reality, computer vision, and interactive and mobile interfaces. Rui Nóbrega has published several scientific articles at international conferences and has had several international experiences, including a mid-term research visit to the ICG lab at the Technical University of Graz, Austria. He was also a teaching assistant at his university. In general, he enjoys working with computer graphics, computer vision, augmented reality, multimedia, interfaces, intelligent algorithms and new interaction devices.

rui.nobrega [ _ at _]

ruinobrega [ __at__]

rui.s.nobrega [ _ at _ ]

Areas of Interest

  • Computer Graphics
  • Interaction
  • Computer Vision
  • Augmented Reality


Google Scholar 


  • [DOI] D. Pinto, J. Costa, R. Nóbrega, H. Da Silva, and A. Coelho, “Graphical Simulation of Clinical Scenarios for Medical Training,” in Proceedings of ICGI 2018: International Conference on Graphics and Interaction, 2019. doi: 10.1109/ITCGI.2018.8602866


  • T. Matos, R. Nóbrega, R. Rodrigues, and M. Pinheiro, “Dynamic annotations on an interactive web-based 360° video player,” in Proceedings of the 23rd International ACM Conference on 3D Web Technology, ACM, 2018, p. 22.
  • [DOI] R. Nóbrega, J. Jacob, A. Coelho, J. Ribeiro, J. Weber, and S. Ferreira, “Leveraging Pervasive Games for Tourism,” International Journal of Creative Interfaces and Computer Graphics, vol. 9, no. 1, pp. 1–14, 2018. doi: 10.4018/ijcicg.2018010101
    Abstract: Creating an augmented reality (AR) urban tourism application presents several interactivity challenges in conveying an engaging multimedia experience on-site. This article describes a methodology for fast prototyping of multimedia mobile applications dedicated to urban tourism storytelling, with special focus on AR techniques. Following the lessons learned in previous applications, the systematic creation of location-based augmented reality (LBAR) applications is explored. The goal is to create serious games for tourism that follow a main narrative but whose story can automatically adapt itself to the current location of the player, assimilate possible detours and allow posterior out-of-location playback. Adaptable stories can use dynamic information from map sources such as points of interest (POI), elevation or virtual buildings. The article discusses and presents solutions for media acquisition, interactive storytelling, game-design interfaces and multi-disciplinary coordination for mobile app development.


  • L. Santos, D. Pereira, P. Beça, R. Nóbrega, and A. Coelho, “Aplicação móvel para divulgação do património natural no turismo” [Mobile application for the promotion of natural heritage in tourism], in INVTUR 2017, 2017.
  • [DOI] J. Jacob, R. Nóbrega, A. Coelho, and R. Rodrigues, “Adaptivity and Safety in Location-Based Games,” in 2017 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), IEEE, 2017, pp. 173–174. doi: 10.1109/VS-GAMES.2017.8056592
  • [DOI] R. Nóbrega and N. Correia, “Interactive 3D content insertion in images for multimedia applications,” Multimedia Tools and Applications, vol. 76, no. 1, pp. 163–197, 2017. doi: 10.1007/s11042-015-3031-5
    Abstract: This article addresses the problem of creating interactive mixed reality applications where virtual objects interact in images of real-world scenarios. This is relevant for creating games and architectural or space planning applications that interact with visual elements in the images, such as walls, floors and empty spaces. These scenarios are intended to be captured by the users with regular cameras or using previously taken photographs. Introducing virtual objects in photographs presents several challenges, such as pose estimation and the creation of a visually correct interaction between virtual objects and the boundaries of the scene. The two main research questions addressed in this article are the feasibility of creating interactive augmented reality (AR) applications where virtual objects interact in a real-world scenario using high-level features detected in the image, and whether untrained users are capable and motivated enough to perform AR initialization steps. The proposed system detects the scene automatically from an image, with additional features obtained using basic annotations from the user. This operation is simple enough to accommodate the needs of non-expert users. The system analyzes one or more photos captured by the user and detects high-level features such as vanishing points, floor and scene orientation. Using these features it is possible to create mixed and augmented reality applications where the user interactively introduces virtual objects that blend with the picture in real time and respond to the physical environment. To validate the solution, several system tests are described and compared using available external image datasets.


  • [DOI] C. Carvalheiro, R. Nóbrega, H. da Silva, and R. Rodrigues, “User Redirection and Direct Haptics in Virtual Environments,” in Proceedings of the 2016 ACM on Multimedia Conference (MM ’16), 2016, pp. 1146–1155. doi: 10.1145/2964284.2964293
    Abstract: This paper proposes a haptic interaction system for Virtual Reality (VR) based on a combination of tracking devices for hands and objects and a real-to-virtual mapping system for user redirection. In our solution the user receives haptic stimuli by manipulating real objects mapped to virtual objects. This solution departs from systems that rely on haptic devices (e.g., haptic gloves) as interfaces for the user to interact with objects in the Virtual Environment (VE). As such, the proposed solution makes use of direct haptics (touching) and redirection techniques to guide the user through the virtual environment. Using the mapping framework, when the user touches a virtual object in the VE, he will simultaneously be physically touching the equivalent real object. A relevant feature of the framework is the possibility to define a warped mapping between the real and virtual worlds, such that the relation between the user and the virtual space can differ from the one between the user and the real space. This is particularly useful when the application requires the emulation of large virtual spaces but the physical space available is more confined. To achieve this, both the user’s hands and the objects are tracked. In the presented prototype we use a head-mounted depth sensor (i.e., Leap Motion) and a depth-sensing camera (i.e., Kinect). To assess the feasibility of this solution, a functional prototype and a room setup with core functionality were implemented. The test sessions with users evaluated the mapping accuracy, the user execution time and the awareness of the user regarding the warped space when performing tasks with redirection. The results gathered indicate that the solution can be used to provide direct haptic feedback in VR applications and for warping space perception within certain limits.
  • [DOI] J. Meira, J. Marques, J. Jacob, R. Nóbrega, R. Rodrigues, A. Coelho, and A. A. de Sousa, “Video annotation for immersive journalism using masking techniques,” in 2016 23rd Portuguese Meeting on Computer Graphics and Interaction (EPCGI), IEEE, 2016, pp. 1–7. doi: 10.1109/EPCGI.2016.7851189
    Abstract: This paper proposes an interactive annotation technique for 360° videos that allows the use of traditional video editing techniques to add content to immersive videos. Using the case study of immersive journalism, the main objective is to lower the entry barrier for annotating 360° video pieces by providing a different annotation paradigm and a set of tools for annotation. The spread of virtual reality systems and immersive content has been growing substantially due to technological progress and cost reductions in equipment and software. Of all the technologies employed in virtual reality systems, 360° video is one that currently presents unique conditions to be widely used by various industries, especially for communication purposes. Of the various areas that can benefit from the usage of virtual reality systems, the communication field is one that requires innovation in the way that narratives are built, especially in virtual reality systems. In the case of immersive journalism, 360° video technology is currently one of the media most used by several outlets. This kind of news content, whose innovative role should be highlighted, is still being studied in the field of journalism and needs a clearly defined set of rules and good practices. In order to improve the introduction of virtual elements in 360° videos, this paper proposes a set of annotation paradigms for 1) media information display and 2) narrative and attention focusing. We present a list of possible techniques that solve the problem of immersive annotation, as well as a description of a prototype that was developed to test these concepts. The prototype implements an annotation technique based on masked videos and the extension of standard subtitle file formats. Finally, a fast-track user study was conducted to evaluate the acceptance of the visualisation techniques and to refine the set of tools.
  • [DOI] R. Nóbrega, J. Jacob, R. Rodrigues, A. Coelho, and A. A. de Sousa, “Augmenting Physical Maps: An AR Platform for Geographical Information Visualization,” in EG 2016 – Posters, L. G. Magalhães and R. Mantiuk, Eds., The Eurographics Association, 2016. doi: 10.2312/egp.20161052


  • B. Guedes, R. Nóbrega, F. Pinho, and A. Coelho, “Graphic Interactive Mobile Land Demarcation,” in EPCGI 2015 – Posters, 2015.
  • [DOI] C. M. de Brito, J. T. Pinheiro Neto Jacob, R. Nóbrega, and A. M. Nogueira Santos, “Balance Assessment in Fall-Prevention Exergames,” in Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, 2015, pp. 439–440. doi: 10.1145/2700648.2811342
    Abstract: To assess the success of fall-prevention oriented exergames, two digital games were developed taking advantage of the Wii Balance Board (WBB) capabilities. The objective was to evaluate the potential of exergames to address elderly adults’ poor motivation to practice exercise regularly despite its benefits for fall prevention. The system uses the WBB to keep track of the player’s center of pressure and computes important balance assessment measures from it, eventually providing a means to monitor patients. The presented demo features the two exergames.
  • R. Nóbrega, D. Cabral, G. Jacucci, and A. Coelho, “NARI: Natural Augmented Reality Interface – Interaction Challenges for AR Applications,” in GRAPP 2015 – 10th International Conference on Computer Graphics Theory and Applications (VISIGRAPP Proceedings), SciTePress, 2015, pp. 504–510.
    Abstract: Following the proliferation of Augmented Reality technologies and applications on mobile devices, it is becoming clear that AR techniques have matured and are ready to be used by large audiences. This poses several new multimedia interaction and usability problems that need to be identified and studied. AR problems are no longer exclusively about rendering superimposed virtual geometry or finding ways of performing GPS or computer vision registration. It is important to understand how to keep users engaged with AR and on which occasions it is suitable to use it. Additionally, how should graphical user interfaces be designed so that the user can interact with AR elements while pointing a mobile device at a specific real-world area? Finally, what is limiting AR applications from reaching an even broader acceptance and usage level? This position paper identifies several interaction problems in today’s multimedia AR applications, raises several pressing issues and proposes several research directions.
  • [DOI] F. de Babo Martins, L. F. Teixeira, and R. Nóbrega, “Visual-Inertial Based Autonomous Navigation,” in Robot 2015: Second Iberian Robotics Conference, 2015, pp. 561–572. doi: 10.1007/978-3-319-27149-1_43
    Abstract: This paper presents an autonomous navigation and position estimation framework which enables an Unmanned Aerial Vehicle (UAV) to safely navigate in indoor environments. The system uses both the on-board Inertial Measurement Unit (IMU) and the front camera of an AR.Drone platform, with all data processed on a laptop computer. The system is composed of the following modules: navigation, door detection and position estimation. For navigation, the system relies on the detection of the vanishing point using the Hough transform for wall detection and avoidance. Door detection relies not only on the detection of the contours but also on the recesses of each door, using the latter as the main detector and the former as an additional validation for higher precision. For position estimation, the system relies on pre-coded information about the floor on which the drone is navigating and on the velocity of the drone provided by its IMU. Several flight experiments show that the drone is able to safely navigate in corridors while detecting evident doors and estimating its position. The developed navigation and door detection methods are reliable and enable a UAV to fly without the need for human intervention.