Mixed Reality Lab Wiki


Report for HCII 2011 conference in Florida, USA

14th International Conference on Human-Computer Interaction


Date: 09-07-2011 to 14-07-2011

Venue: Hilton Orlando Bonnet Creek, Orlando, Florida, USA


The 14th International Conference on Human-Computer Interaction was held in Orlando, Florida, USA from 9 to 14 July 2011. It was held jointly with several affiliated conferences, including the 4th International Conference on Virtual and Mixed Reality, the 4th International Conference on Online Communities and Social Computing, the 6th International Conference on Augmented Cognition, the 3rd International Conference on Digital Human Modeling, the 2nd International Conference on Human Centered Design, and the 1st International Conference on Design, User Experience, and Usability. Over 1000 participants from both industry and academia in the field of HCI attended, and many parallel sessions were conducted for presentations and further discussion.


The conference began with two days of workshops, tutorials, and poster sessions, while the remaining days were filled with parallel sessions for paper presentations. Unfortunately, most of the workshops and tutorials required separate payment, but I managed to attend some of them and also referred to booklets from other attendees after the sessions. In addition to the poster exhibition, there was an exhibition by industry partners and researchers throughout the conference, showcasing their new HCI-related products and services. Most of these products were EEG measurement devices, eye-tracking devices, and related software.

I attended several sessions, as explained in the body of this report. Importantly, I met several pioneers in brain imaging during the conference and explained our idea of analyzing smell and taste sensations in the brain. Their feedback was really encouraging; most of them emphasized the importance of smell and taste interactions in the HCI arena, and some were surprised that we had taken on such a challenging research area and appreciated it. In particular, I had a chance to talk with Dr. Banu Onaral, an expert in biomedical signal processing and fNIR (functional near-infrared spectroscopy). She is a professor of Biomedical Engineering and Electrical Engineering at Drexel University, Philadelphia, USA. She discussed with me the complexity of the brain structures related to taste and smell sensations. Her group is also working on a device that combines near-infrared and EEG imaging technologies; one of her students presented the initial version (still very large equipment), as noted in the final section of this report. (For more information, please refer to their website: http://www.biomed.drexel.edu/fnir/CONQUER/Welcome.html)


In addition, a postdoctoral fellow from the Italian Institute of Technology (IIT), who is researching a multi-robot collaboration and communication platform, was very excited after hearing our idea of digital taste. He explained his dream of building a flexible robot to deliver enhanced sensory experiences to elderly people. I also described some of our lab’s work for the elderly, such as Age Invaders.

Moreover, I noticed the increased use of brain imaging in HCI, particularly in user evaluations. Many researchers combine such technologies with eye tracking and heart-rate monitoring to present their results. On the downside, there were very few papers on the multisensory aspects of HCI. One paper presented a cross-modal effect between vision and olfaction; this technique could also be incorporated to evaluate our approach to digital taste.


The keynote presentation was delivered by Professor Ben Shneiderman from the University of Maryland on “Technology-Mediated Social Participation: The Next 25 Years of HCI Challenges”. He emphasized using currently popular social networking platforms and tools to support national and social needs such as healthcare/wellness, disaster response, community safety, and energy sustainability. He further described the challenges that must be overcome to successfully accomplish this vision.

I presented our paper “Sharing Experiences through Physically Extended Social Networking” on Wednesday (13 July 2011) during the Kansei value creation session. There were around 40 people in the session, most of them from Japan, since the session was related to Japanese Kansei. A few people asked me questions, including Prof. Ohkura (the session chair). Many questions concerned the taste sensations, in particular whether the sensation is real or artificial. One important comment from a professor was to test the system when people are hungry, because hungry people are more sensitive to taste and smell sensations.


Overall, it was a fantastic experience discussing with researchers from all over the world, and I learned about a lot of new and ongoing research, especially related to brain imaging and human-computer interaction. The next conference, HCI International 2013, will be held from 21 to 26 July 2013 at the Mirage Hotel, Las Vegas, Nevada, USA.

The following sections present various interesting tutorials, workshops, posters, and papers from the conference.


Interesting tutorials and workshops


1. Brain-Computer Interface by Günter Edlinger and Christoph Guger

This tutorial was very helpful for me since it covered the brain-machine interface technologies currently available. The presenters started from the definition of a brain-machine interface and showed how the technology has advanced from applications for disabled people (e.g., wheelchair control) to the virtual reality and gaming industries (controlling virtual avatars, mind painting, controlling Twitter).


In the second part, they moved on to their latest research findings on brain monitoring and its applications. Very interestingly, they explained the limitations and drawbacks of current EEG technology, such as noise filtering, portability, conformity, speed, and accuracy. Furthermore, they did a live demonstration and controlled a virtual avatar with their system. The web page below contains several videos of their demonstrations.

http://www.gtec.at/Research/Videos


Interestingly, during the conference reception I had a discussion with Dr. Günter Edlinger. He is aware of the possibility of electrical stimulation of the tongue and asked many questions about our approach and the lab. He also introduced a new wireless EEG monitoring device with dry electrodes that his group has developed.


2. Designing Interaction for Ambient Intelligence Environments by Dr. Dr. Norbert A. Streitz

This session was conducted by Professor Norbert A. Streitz from Cologne, Germany. He holds two PhDs (one in physics and one in psychology) and is a Senior Scientist and Strategic Advisor in information and communication technology. He is the founder of the “Smart Future Initiative”, which was launched in January 2009. He works on people-oriented, empowering smartness and discussed the future research trends listed below:

  • Increased tangible interaction and its importance
  • Emotion processing (affective computing)
  • Social networks and collective intelligence
  • Smart spaces, AMI spaces
  • Crowd and swarm based interactions

For more information please refer to his “Smart Future Initiative” website:

http://www.smart-future.net/

http://www.smart-future.net/13.html

http://www.disappearing-computer.net/

(He also gave a keynote speech at ACE 2009 in Athens, Greece.)


3. Wireless-EEG Neurophysiological Monitoring Systems for Improving HCI Applications (Open Workshop) Chair: Chris Berka, CEO and Co-Founder of Advanced Brain Monitoring (ABM)

This session was conducted by Advanced Brain Monitoring, Inc. (ABM), which introduced and demonstrated several wireless brain-monitoring devices for EEG, ECG, EOG, and EMG. This was the first time I had seen wireless brain-monitoring devices, and I learned various things about their architecture. For example, the communication protocol is based on UDP, with several additional mechanisms to achieve robust communication. However, these devices still have a small number of channels (for example, 24), which is a limitation at the moment. A rough sketch of this kind of UDP streaming is given after the link below.

More information: http://www.b-alert.com/
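
Since the talk did not go into the exact packet format, the following is only a minimal sketch of how such a UDP-based EEG stream could be received. The packet layout (a uint32 sequence number followed by 24 float32 channel values), the port number, and the drop-detection logic are assumptions for illustration, not ABM's actual protocol.

import socket
import struct

NUM_CHANNELS = 24                        # e.g., a 24-channel headset (assumed)
PACKET_FMT = "<I" + "f" * NUM_CHANNELS   # assumed layout: uint32 sequence number + 24 float32 samples
PACKET_SIZE = struct.calcsize(PACKET_FMT)

def receive_samples(host="0.0.0.0", port=5005):
    """Yield one list of per-channel samples per received packet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    expected_seq = None
    while True:
        data, _addr = sock.recvfrom(PACKET_SIZE)
        if len(data) != PACKET_SIZE:
            continue                     # ignore malformed packets
        seq, *channels = struct.unpack(PACKET_FMT, data)
        # UDP gives no delivery or ordering guarantee, hence the sequence-number check.
        if expected_seq is not None and seq != expected_seq:
            print(f"lost or reordered packets around sequence {seq}")
        expected_seq = seq + 1
        yield channels

if __name__ == "__main__":
    for sample in receive_samples():
        pass                             # feed samples into filtering / analysis here

In practice, mechanisms such as retransmission requests or interpolation over small gaps would sit on top of a loop like this, which is presumably part of what the presenters meant by robust communication on top of UDP.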



Interesting poster presentations


1. Smart Clothes Are New Interactive Devices

This poster presents a digital garment that consists of both digital and ordinary yarns. The digital yarns are used as data communication media to enable several applications, such as video display, communication, and healthcare monitoring.


2. Virtual Bridge: AR-Based Mobile Interaction for Easy Multimedia Control of Remote Home Devices


This poster presents a mechanism for controlling the multimedia content of remote home devices from a mobile device using AR technology. Using a real-time object recognition method, home devices detected by the mobile device’s camera are displayed on the camera preview screen, with thumbnails of their multimedia content shown around the recognized positions. Interactions between recognized devices are also supported, such as pick-and-toss of screen content from one device to another and drag-and-drop.


3. User needs for the TECHNO KITCHEN (Transitions in kitchen living)


This group’s focus is on making kitchens easier and safer to use by applying digital technologies.

Some of the proposed features:

  • Adjustable sinks and driers
  • Detecting when appliances are left on
  • Reading out fridge contents


4. WARAI PRODUCT: Proposal to the Design Approach Designing the Product that Causes Laughter

This study proposed an approach to designing products that cause laughter. The authors’ aim is to reduce stress through such products.

More information: http://www.springerlink.com/content/h64785167582325j/


Interesting paper presentations


1. WristEye


This paper presents a novel interaction method based on subtle gestures. A camera is mounted on the user’s wrist, and through this camera it is possible to recognize both deictic and iconic gestures. The system mainly analyzes two kinds of information: the direction in which the arm is moving and the number of fingers visible in the image. Recognized input gestures are processed and sent to the feedback system for further handling. A rough sketch of the finger-counting part of such a pipeline is given below.
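
The paper’s own recognition pipeline was not detailed in the presentation, so the following is only a hypothetical sketch of the finger-counting step using OpenCV (skin-colour thresholding plus convexity defects). The HSV range, the defect-depth threshold, and the assumption that the largest skin-coloured blob is the hand are all illustrative guesses, not the authors’ method.

import cv2
import numpy as np

def count_fingers(frame_bgr):
    """Rough finger count from a single wrist-camera frame (OpenCV 4)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))        # very rough skin-colour range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)                   # assume the largest blob is the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 4:
        return 0
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each sufficiently deep convexity defect roughly corresponds to a gap between two fingers.
    gaps = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 20)
    return gaps + 1 if gaps > 0 else 0

The arm-movement direction, the other cue mentioned in the paper, could similarly be estimated by tracking frame-to-frame motion, but that part is omitted here.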


2. Scratchable devices


This is an attempt to use the graphical programming language “Scratch” to program household appliances. I found this research to be more advanced than the similar concept presented by Google (Google Home). The group introduces a cost-effective solution for controlling appliances such as lights, coffee makers, alarms, and other such devices. The most interesting part is the device programming module developed with Scratch, through which users can program interactions with appliances as well as interactions between appliances. The authors did a live demonstration during their presentation, controlling a light and an alarm clock together with a coffee maker.


3. Functional Near Infrared Imaging for Communication

The authors use a portable brain-imaging device based on fNIR technology to facilitate communication for people with very severe physical disabilities. They also propose a method of communicating by moving muscles, for example facial muscles.


The authors showed a video illustrating the use of this BCI technique to chat with a remote person through GTalk and a virtual keyboard. As they mentioned, performance, usability, and accuracy are the main problems in BCI.


4. Introducing Animatronics to HCI: Extending Reality-Based Interaction

This paper introduces a new paradigm for animatronics (lifelike robots) by combining it with HCI; animatronics technology has recently been considered for use as an interaction style. In this research, various interaction styles (a conventional GUI, AR, 3D graphics, and a newly introduced animatronic user interface) were used to instruct users on a 3D construction task that was kept constant across the styles.


5. A Method of Multiple Odors Detection and Recognition

Although single-odor electronic detection has already been accomplished, recognition of multiple simultaneous odors, which is the most common case in the real world, has not yet been developed. To recognize multiple odors, the proposed method includes odor data acquisition using a smell sensor array, odor detection using entropy, feature extraction using Principal Component Analysis (PCA), recognition candidate selection using tree search, and recognition using Euclidean distance. Initial results on a database of 132 odors showed an odor detection rate of approximately 95.83% and an odor recognition rate of approximately 88.97%. A rough sketch of this kind of pipeline is given below.
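
Since the paper’s implementation details were not available to me, the sketch below only illustrates the single-odor part of the described pipeline: entropy-based detection on a sensor array, PCA feature extraction, and Euclidean-distance matching. The entropy rule, the margin, and the PCA dimensionality are assumptions, and the tree-search candidate selection for odor mixtures is omitted.

import numpy as np
from sklearn.decomposition import PCA

def shannon_entropy(sensor_frame):
    """Shannon entropy of the normalized sensor-array response."""
    p = np.abs(np.asarray(sensor_frame, dtype=float)) + 1e-12
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p)))

def detect_odor(sensor_frame, baseline_entropy, margin=0.5):
    # Assumed rule: an odor makes some sensors respond much more strongly than others,
    # so the normalized response becomes peaky and its entropy drops below the
    # clean-air baseline by more than `margin`.
    return shannon_entropy(sensor_frame) < baseline_entropy - margin

def build_recognizer(reference_responses, labels, pca_dim=3):
    """Project known odor responses into a low-dimensional PCA space as templates."""
    pca = PCA(n_components=pca_dim).fit(reference_responses)
    return pca, pca.transform(reference_responses), np.asarray(labels)

def recognize(sensor_frame, baseline_entropy, pca, templates, labels):
    """Return the closest template label by Euclidean distance, or None if no odor."""
    if not detect_odor(sensor_frame, baseline_entropy):
        return None
    feature = pca.transform(np.asarray(sensor_frame, dtype=float).reshape(1, -1))
    distances = np.linalg.norm(templates - feature, axis=1)
    return labels[int(np.argmin(distances))]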


6. Meta Cookie+: An Illusion-Based Gustatory Display

This paper presents research on a “pseudo-gustation” method to change the perceived taste of food. The system is capable of overlaying visual and olfactory information onto a real cookie; interestingly, the authors were able to control its shape, smell, and color through AR. The experiment shows that the system can change the perceived taste through the cross-modal interaction of vision, olfaction, and gustation.


7. Applying Gestural Interfaces to Command-and-Control

The paper investigates the applicability of gesture-based user interfaces in notional Command-and-Control (C2) environments of the United States Army. The authors used a virtual environment for planning and a wearable device to deliver the information in the real world. Specifically, the effort targeted US Army C2 environments such as a notional fixed command center, a mobile command center, and the environment of the dismounted soldier on the battlefield.


Finally, several slides presented during the sessions contained important information on EEG and fNIR technologies.