https://www.ambientmediaassociation.org/Journal/index.php/series/issue/feedInternational SERIES on Information Systems and Management in Creative eMedia (CreMedia)2018-02-01T12:53:00+00:00Artur Lugmayrartur.lugmayr@artur-lugmayr.comOpen Journal Systems<p><span style="text-decoration: underline;"><strong>ABOUT THE SERIES</strong></span></p> <p>The International Series on Information Systems and Management in Creative eMedia (CreMedia) advances knowledge of the creative industries from a cross-disciplinary viewpoint on media, connecting media technology, business and management research, media art, and digital content production. Covered contributions include emerging technologies, information systems, IT infrastructures, methods, algorithms, tools, business research, human-computer experience, media management, art, and digital content productions. The journal covers a wide range of media genres, such as television, emerging media, publishing, digital games, radio, ubiquitous/ambient media, advertising, social media, motion pictures, online video, eHealth, eLearning, and other eMedia-related industries.</p> <p><span style="text-decoration: underline;">The journal seeks contributions from the fields of:</span></p> <ul> <li>computer science, in particular entertainment computing, media technology, and multimedia</li> <li>human-computer interaction and user experience</li> <li>ubiquitous, pervasive, ambient, and semantic intelligent technologies</li> <li>media management, business, economics, and information systems research in media industries</li> <li>media art, content production, content systems, and services</li> <li>tools, software/hardware architectures, and their solutions</li> <li>methods, algorithms, and paradigms</li> </ul> <p><span style="text-decoration: underline;">Journal identifiers:</span></p> <p> ISSN 2341-5584 (Print)<br> ISSN 2341-5576 (PDF)<br> ISSN 2341-6165 (CD-ROM)</p> <p> </p> <p><span style="text-decoration: 
underline;"><strong>FOUNDERS AND MANAGING DIRECTORS</strong></span></p> <ul> <li class="show">A/Prof. Dr. Artur Lugmayr, AUSTRALIA</li> <li class="show">Bjoern Stockleben, HFF, GERMANY</li> <li class="show">Emilija Stojmenova, Univ. of Ljubljana, SLOVENIA</li> </ul> <p><span style="text-decoration: underline;"><strong>INTERNATIONAL EDITORIAL AND ADVISORY BOARD</strong></span></p> <ul> <li class="show">Estefania Serral Asensio, KU-Leuven, BELGIUM</li> <li class="show">Mark Billinghurst, University of Canterbury, NEW ZEALAND</li> <li class="show">Shu-Ching Chen, Florida International University, USA</li> <li class="show">Artur Lugmayr, Curtin University of Technology, AUSTRALIA</li> <li class="show">Bogdan Pogorelc, Univ. of</li> <li class="show">Thomas Risse, L3S, GERMANY</li> <li class="show">Heiko Schuldt, University of Basel, SWITZERLAND</li> <li class="show">Bjoern Stockleben, Rundfunk Berlin-Brandenburg (RBB), GERMANY</li> <li class="show">Emilija Stojmenova, Univ. of Ljubljana, SLOVENIA</li> <li class="show">Cinzia Dal Zotto, University of Neuchatel, SWITZERLAND</li> </ul> <p><span style="text-decoration: underline;"><strong>INDEXING &amp; RANKINGS</strong></span></p> <ul> <li class="show">The Semantic Ambient Media (SAME) workshop series is indexed in Scopus</li> <li class="show">The complete series is ranked at Level 1 in the Finnish national ranking system (Julkaisufoorumi)</li> </ul> <p><span style="text-decoration: underline;"><strong>REVIEW POLICY</strong></span></p> <ul> <li class="show">Each publication within the series follows a <strong>strict double-blind peer review process</strong> to guarantee the high quality of its publications</li> </ul> <p><span style="text-decoration: underline;"><strong>JOIN US ONLINE<br></strong></span></p> <ul> <li class="show"><strong>mailing list:</strong> <a href="http://ambientmediaassociation.org">http://ambientmediaassociation.org</a></li> <li class="show"><strong>facebook:</strong> <a 
href="https://www.facebook.com/groups/sameworkshop/">https://www.facebook.com/groups/sameworkshop/</a></li> <li class="show"><strong>homepage:</strong> <a href="http://www.ambientmediaassociation.org">http://www.ambientmediaassociation.org</a></li> </ul>https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/285 Artificial Intelligence Meets Virtual and Augmented Worlds (AIVR), in conjunction with SIGGRAPH Asia2018-02-01T12:50:30+00:00Artur Lugmayrartur.lugmayr@artur-lugmayr.comKening Zhukenju850915@gmail.comXiaojuan Mamxj@cse.ust.hk2018-02-01T12:49:28+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/270 Virtual Reality Techniques for Eliciting Empathy and Cultural Awareness: Affective Human-Virtual World Interaction2018-02-01T12:50:29+00:00Ivonne Chirino-Klevansivonne.chirino-klevans@faculty.ism.edu<p>On average, human beings have about 50,000 thoughts every day. If we consider that thoughts influence how we feel, there is little doubt that the way we perceive reality will strongly correlate with how we act upon that reality. Let’s contextualize this thinking process within the realm of global business, where interacting with individuals from other cultural backgrounds is the norm. Our own perceptions of and stereotypes towards those cultural groups will strongly influence how we interact with them in business situations. The problem is that stereotypes, being cognitive shortcuts, do not necessarily represent intentions accurately. Stereotypes provide us with a false sense of security, enabling us to believe that we “understand” the reasons behind certain actions and reactions. This false sense of security often results in conflict in global business situations. 
That is one of the reasons why becoming globally competent without falling into stereotyping will provide us with the tools to increase success in cross-cultural business interactions.</p> <p>This paper describes an approach to designing virtual reality (VR) scenarios aimed at developing abilities to work across cultures using the principles of empathy and perspective taking. The approach we take in this design innovation paper moves away from relying solely on the understanding of cultural dimensions in cultural competence skills development, as research shows that focusing on “preconceived” differences in cultures can reinforce stereotyping. Instead, our approach provides users with the opportunity to explore, in first person, the thought process of a character whose cultural background is different from that of the user. These scenarios provide opportunities for perspective taking, which is conducive to empathy across cultures.</p>2017-12-23T00:00:00+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/274 Deep Learning for Classification of Peak Emotions within Virtual Reality Systems2018-02-01T12:50:30+00:00Denise Quesneldquesnel@sfu.caSteve DiPaolasdipaola@sfu.caBernhard E. Rieckeber1@sfu.ca<p>Research has demonstrated well-being benefits from positive, ‘peak’ emotions such as awe and wonder, prompting the HCI community to utilize affective computing and AI modelling for the elicitation and measurement of those target emotional states. The immersive nature of virtual reality (VR) content and systems can lead to feelings of awe and wonder, especially with a responsive, personalized environment based on biosignals. However, an accurate model is required to differentiate between emotional states that have similar biosignal input, such as awe and fear. 
Deep learning may provide a solution, since the subtleties of these emotional states and affect may be recognized, with biosignal data viewed as a time series so that researchers and designers can understand which features of the system may have influenced target emotions. The deep learning fusion system proposed in this paper will use data collected from a corpus, created through the collection of physiological biosignals and ranked qualitative data, and will classify these multimodal signals into target outputs of affect. This model will run in real time for the evaluation of VR system features which influence awe/wonder, using a bio-responsive environment. Since biosignal data will be collected through wireless, wearable sensor technology, and modelled on the same computer powering the VR system, it can be used in field research and studios.</p>2018-01-11T11:34:10+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/275 Adaptive Tutoring on a Virtual Reality Driving Simulator2018-02-01T12:50:30+00:00Sandro Ropelatosandro.ropelato@gmail.comFabio Zundfabio.zund@inf.ethz.chStephane Magnenatstephane@magnenat.netMarino Menozzimmenozzi@ethz.chRobert W. Summerrobert.summer@inf.ethz.ch<p>We propose a system for a VR driving simulator including an intelligent tutoring system (ITS) to train the user's driving skills. The VR driving simulator comprises a detailed model of a city, VR traffic, and a physical driving engine, interacting with the driver. In a physical mockup of a car cockpit, the driver operates the vehicle through the virtual environment by controlling a steering wheel, pedals, and a gear lever. Using an HMD, the driver observes the scene from within the car. The realism of the simulation is enhanced by a 6-DOF motion platform, capable of simulating the forces experienced when accelerating, braking, or turning the car. 
Based on a pre-defined list of driving-related activities, the ITS continuously assesses the quality of driving during the simulation and suggests an optimal path through the city to the driver in order to improve their driving skills. A user study revealed that most drivers experience presence in the virtual world and are proficient in operating the car.</p>2018-01-11T11:37:04+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/276 Combining Intelligent Recommendation and Mixed Reality in Itineraries for Urban Exploration2018-02-01T12:50:30+00:00Giulio Jacuccigiulio.jacucci@aalto.fiSalvatore Andolinasalvatore.andolina@aalto.fiDenis Kalkhofendenis.kalkhofen@icg.tugraz.atDieter Schmalstiegdieter.schmalstieg@icg.tugraz.atAntti Nurminenantti.nurminen@aalto.fiAnna Spagnollianna.spagnolli@unipd.itLuciano Gamberiniluciano.gamberini@unipd.itTuukka Ruotsalotuukka.ruotsalo@helsinki.fi<p>Exploration of points of interest (POI) in urban environments is challenging due to the large number of items near or reachable by the user, and due to modality hindrances caused by reduced manual flexibility and competing visual attention. We propose to combine different modalities, VR, AR, and haptic-audio interfaces, with intelligent recommendation based on a computational method combining different data graph overlays: social, personal, and search-time user input. We integrate these features in flexible itineraries that aid different phases and aspects of exploration.</p>2018-01-11T11:39:06+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/277 Interacting with Intelligent Characters in AR2018-02-01T12:50:30+00:00Gokcen Cimengoekcen.cimen@inf.ethz.chYe Yuankhrylx@gmail.comRobert W. 
Sumnerbob.sumner@disneyresearch.comStelian Corosscoros@gmail.comMartin Guaymartin.guay@disneyresearch.com<p>In this paper, we explore interacting with virtual characters in AR within real-world environments. Our vision is that virtual characters will be able to understand the real-world environment and interact with it in an intelligent and realistic manner. For example, a character can walk around uneven stairs and slopes, or be pushed away by collisions with real-world objects like a ball. We describe how to automatically animate a new character, and imbue its motion with adaptation to environments and reactions to perturbations from the real world.</p>2018-01-11T00:00:00+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/278 Chalktalk VR/AR2018-02-01T12:50:30+00:00Ken Perlinken.perlin@gmail.comZhenyi Hezh719@nyu.eduFengyuan Zhuzhufyaxel@gmail.com<p>When people want to brainstorm ideas, they currently often draw their ideas on paper or on a whiteboard. But the result of those drawings is a static visual representation. Alternatively, people often use various tools to prepare animations and simulations to express their ideas. But those animations and simulations must be created beforehand, and therefore cannot be easily modified dynamically in the course of the brainstorming process. Chalktalk VR/AR is a paradigm for creating drawings in the context of a face-to-face brainstorming session that is happening with the support of VR or AR. Participants draw their ideas in the form of simple sketched simulation elements, which can appear to be floating in the air between participants. 
Those elements are then recognized by a simple AI recognition system, and can be interactively incorporated by participants into an emerging simulation that builds more complex simulations by linking together these simulation elements in the course of the discussion.</p>2018-01-11T11:44:58+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/283 Visualisation Methods of Hierarchical Biological Data: A Survey and Review2018-02-01T12:53:00+00:00Irina Kuznetsovairina.kuznetsova@student.tugraz.atArtur Lugmayrartur.lugmayr@artur-lugmayr.comAndreas Holzingera.holzinger@hci-kdd.org<p>The sheer amount of high-dimensional biomedical data requires machine learning and advanced data visualization techniques to make the data understandable for human experts. Most biomedical data today lies in arbitrarily high-dimensional spaces and is not directly accessible to the human expert for a visual and interactive analysis process. To cope with this challenge, the application of machine learning and knowledge extraction methods is indispensable throughout the entire data analysis workflow. Nevertheless, human experts need to understand and interpret the data and experimental results. Appropriate understanding is typically supported by visualizing the results adequately, which is not a simple task. Consequently, data visualization is one of the most crucial steps in conveying biomedical results. It can and should be considered a critical part of the analysis pipeline. Still, as of today, 2D representations dominate, and human understanding of the data is limited to this lower dimension. This makes the visualization of the results in an understandable and comprehensive manner a grand challenge.</p> <p>This paper reviews the current state of visualization methods in a biomedical context. 
It focuses on hierarchical biological data as a source for visualization, and gives a comprehensive overview.</p>2018-01-11T00:00:00+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/284 Children Road Safety Training with Augmented Reality (AR) [Demo]2018-02-01T12:50:30+00:00Artur Lugmayrartur.lugmayr@artur-lugmayr.comJoyce Tsangjoyce.tsang@student.curtin.edu.auToby Williamstoby.williams@student.curtin.edu.auCasey X Limcasey.x.lim@gmail.comYeet Yung Teoyeetyung.teo@student.curtin.edu.auMatthew Farmermatthew.farmer@student.curtin.edu.au<p>Many cases of children being killed or seriously injured in road accidents can be avoided through appropriate safety training. Through play and engagement, children learn to understand hazards at, e.g., railway stations, bus stops, crossings, school zones, footpaths, or while cycling. We developed a rapid prototype of an <em>Augmented Reality (AR)</em> safety training proof-of-concept demonstrator for a scaled real-world model of dangerous road hazards. Two scenarios were picked to give children the opportunity to acquire and apply knowledge of road safety: 1. handling emergency situations and informing authorities; 2. correct behavior at a bus stop on arrival/departure of a bus. 
In this paper, we discuss our design approach, outline the technical implementation of the system, and give a brief overview of our lessons learned.</p>2018-02-01T00:00:00+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/279 Data-Driven Approach to Human-Engaged Computing2018-02-01T12:50:30+00:00Xiaojuan Mamxj@cse.ust.hk<p>This paper presents an overview of the research landscape of data-driven human-engaged computing in the Human-Computer Interaction Initiative at the Hong Kong University of Science and Technology.</p>2018-01-11T00:00:00+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/280 DupRobo: An Interactive Robotic Platform for Physical Block-Based Autocompletion2018-02-01T12:50:30+00:00Taizhou Chentaizhchen2-c@my.cityu.edu.hkYi-Shiun Wuyi-shiun.wu@my.cityu.edu.hkFeng Han13500809160@163.comBaochuan Yuebaochuyue2-c@my.cityu.edu.hkArshad Nasserarshad.nasser@my.cityu.edu.hkKenning Zhukeninzhu@cityu.edu.hk<p>In this paper, we present DupRobo, an interactive robotic platform for tangible block-based design and construction. DupRobo supports user-customisable exemplars, repetition control, and tangible autocompletion through computer-vision and robotic techniques. With DupRobo, we aim to reduce users’ workload in repetitive block-based construction, yet preserve the direct manipulability and intuitiveness of tangible model design, such as product design and architectural design.</p>2018-01-11T00:00:00+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/281 Towards Emergent Play in Mixed Reality2018-02-01T12:50:30+00:00Patrick Mistelipat.misteli@gmail.comSteven Poulakossteven.poulakos@disneyresearch.comMubbasir Kapadiamubbasir.kapadia@rutgers.eduRobert W. 
Sumnersumner@disneyresearch.com<p>This paper presents a system to experience emergent play within a mixed reality environment. Real and virtual objects share a unified representation to allow joint interactions. These objects may optionally contain an internal mental model to act autonomously based on their beliefs about the world. The experience utilizes intuitive interaction patterns using voice, hand gestures, and real object manipulation. We author experiences by specifying dependency graphs and behavior models, which are extended to support player interactions. Feedback is provided to ensure the system and player share a common play experience, including awareness of obstacles and potential solutions. An author can mix features from game, story, and agent-based experiences. We demonstrate our system through an example adventure game using the Microsoft HoloLens.</p>2018-01-11T00:00:00+00:00##submission.copyrightStatement##https://www.ambientmediaassociation.org/Journal/index.php/series/article/view/282 AI, You're Fired! Artwork2018-02-01T12:50:30+00:00Aleksandra Vasovica.s.vasovic@gmail.com<p>This paper is a text-based artwork representing the initial conceptualization, or contemplative, phase of a media art and contemporary art performance and installation.<br>The objective of the long-term art project is to further examine the potential of engaging advanced technology in the context of artistic research and contemporary art practice, with the specific postulate that the potential product of the artwork is expected to be imperceptible.</p> <p>The artistic research refers to the philosophical and metaphysical idea that the alleged real reality cannot be perceived or defined via any concept. 
The question is whether, if this is so, art or the artist is capable of successfully illustrating the undetectable real reality, even with the most advanced technological instruments employed.</p> <p>The text-based contemporary artwork partly refers to another segment that can also be observed within the context of contemporary art: text-based computer adventure games. More specifically, the method implemented for establishing the artwork’s concept uses some aspects similar to those used in early text-based computer games.<br>There are several stages in which the long-term artwork will progress.</p> <p>The initial form is designed in such a manner as to confirm that this segment of the artwork not only serves as a foundation for the other parts to unfold, but is also autonomous and already complete in terms of contemporary art. This stance applies to all consecutive stages: each segment is both independent and contextual.<br>The following stages will include interactivity between the author and the art audience, but also with the devices applied for producing the artwork, such as advanced technological instruments, e.g. augmented reality (AR), virtual reality (VR), and mixed reality (MR) devices, interactive 3D technology, and artificial intelligence (AI), plus interactivity with the no-reality (reality in spiritual and philosophical contexts).</p>2018-01-11T00:00:00+00:00##submission.copyrightStatement##