

Conference Report by Roshan L Peiris
Conference: 20th International Conference on Artificial Reality and Telexistence (ICAT 2010)
Dates: 1st to 3rd December 2010
Venue: Adelaide, Australia


The ICAT 2010 conference was held for the 20th time in Adelaide, Australia, hosted by the University of South Australia. For this conference I submitted AmbiKraf as a poster presentation. The presentation included a 2-minute teaser session followed by a 90-minute poster session. In addition, I had the honor of chairing Session 3: Input/Output.


The conference featured many works from fields such as Artificial Reality and Augmented Reality. It was divided into a few main sessions, namely System Design, CG & Innovative Display, Input/Output, Wearable, Tracking, Spatial AR, and 3DUI. In total, the conference accepted 24 papers, 14 posters, 6 demos, and 4 late-breaking results. Most of the attendees were of Japanese origin, with a few from South Australia and a few from Europe. From Singapore, besides myself, my colleague Zhu Kenning presented a full paper on his Origami work in the 3DUI session.


Session Chair

The first activity I was involved in at the conference was chairing the third session, Input/Output. This session featured three papers:

  • Ubiquitous Character Input Device Using Multiple Acoustic Sensors on a Flat Surface: Akira Urashima and Tomoji Toriyama.
  • Aurally Presentation Technique of Virtual Acoustic Obstacle by Manipulation of Acoustic Transfer Function: Takahiro Miura, Junya Suzuki, Teruo Muraoka and Tohru Ifukube
  • Slow Motion Replay of Tactile Sensation: Yuki Hashimoto and Hiroyuki Kajimoto

My main tasks were to introduce the session, introduce the speakers before each of their presentations, keep the presentations on time, and moderate the Q&A after each talk.

The first two papers were more related to input technologies, while the last was more related to output. In the first paper, the authors presented a system that uses acoustic sensors to provide input on a flat surface. With four acoustic sensors placed on a flat surface, users can ‘scratch’ the surface with a pen or a finger to provide input. The four sensors pick up the sound generated by this interaction and compute the input. However, the current implementation was not accurate enough: the success rate of identifying input characters was approximately 67%. This particular work reminded me of Chris Harrison’s “Scratch Input” and another work presented recently at the Ambient Intelligence conference, titled “Subjective Difficulty Estimation for Interactive Learning by Sensing Vibration Sound on Desk Panel”.
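To give a rough idea of how a scratch position could be computed from four sensors, here is a minimal sketch based on time-difference-of-arrival and a grid search. The sensor layout, the propagation speed, and the method itself are my own assumptions for illustration, not necessarily what the authors implemented.

```python
# Illustrative sketch only: locating a scratch on a flat surface from four
# acoustic sensors using time-difference-of-arrival (TDOA) and a grid search.
# The sensor positions, propagation speed, and search grid are assumptions.
import numpy as np

SENSORS = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])  # corners of a 50 cm square (m)
SPEED = 500.0  # assumed propagation speed of the scratch sound in the surface (m/s)

def estimate_position(arrival_times, grid_step=0.005):
    """Return the grid point whose predicted arrival-time differences best match the measured ones."""
    best_point, best_error = None, np.inf
    grid = np.arange(0.0, 0.5 + grid_step, grid_step)
    for x in grid:
        for y in grid:
            dists = np.linalg.norm(SENSORS - np.array([x, y]), axis=1)
            predicted = dists / SPEED
            # Compare differences relative to the first sensor so the unknown emission time cancels.
            error = np.sum(((predicted - predicted[0]) - (arrival_times - arrival_times[0])) ** 2)
            if error < best_error:
                best_point, best_error = (x, y), error
    return best_point

# Example: synthesize arrival times for a scratch at (0.2, 0.3) and recover it.
true_point = np.array([0.2, 0.3])
times = np.linalg.norm(SENSORS - true_point, axis=1) / SPEED
print(estimate_position(times))  # ~ (0.2, 0.3)
```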


The next paper was more technical in nature: the authors worked on fine-tuning the acoustic transfer function to present virtual obstacles through audio. This was interesting in that the work aimed at presenting virtual obstacles to the blind through audio cues (much as they would sometimes detect an actual object in real life). The work is still at an early stage and currently shows promising results for presenting a single obstacle.

The third paper took the commonly used “slow motion” principle and applied it to tactile interaction. Here the authors first recorded some tactile sensations (items falling onto a plastic panel) and replayed them at various slowed-down speeds. The tactile actuator used is a speaker placed under the palm; when actuated, the speaker exerts suction or push forces on the palm as required. The authors used this “slow motion haptics” concept to evaluate users’ emotional cognition and their ability to distinguish the various items used in the experiment. However, the studies did not yield significant results.
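As a rough illustration of the slow-motion replay idea, the sketch below stretches a recorded vibration waveform in time before writing it back out for playback. The file name, playback factors, and the simple interpolation step are my assumptions, not the authors' implementation.

```python
# Illustrative sketch only: replaying a recorded (mono) vibration waveform at
# slower speeds, in the spirit of "slow motion haptics". File names and the
# stretch factors are assumptions for illustration.
import numpy as np
from scipy.io import wavfile

def slow_motion(signal, factor):
    """Stretch a 1-D vibration signal in time by linear interpolation (factor > 1 slows it down)."""
    old_index = np.arange(len(signal))
    new_index = np.linspace(0, len(signal) - 1, int(len(signal) * factor))
    return np.interp(new_index, old_index, signal.astype(float))

rate, recording = wavfile.read("tactile_recording.wav")  # hypothetical recording of an item hitting a panel
for factor in (1.0, 2.0, 4.0):  # replay at normal, half, and quarter speed
    stretched = slow_motion(recording, factor)
    wavfile.write(f"replay_x{factor}.wav", rate, stretched.astype(np.int16))
```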


Another work that I really liked was the “Touch Light” project from the University of Tokyo. I have met the author, Kunihiro Nishimura, on several occasions, at Laval Virtual 2009 and iTokyo 2009. Touch Light is an interesting concept that lets you touch the light falling through the leaves of a tree. The current implementation uses a light-detection system that detects light and shadows, and a vibration system that exerts the “touch” sensation on the user. This system interested me as another cross-modal interaction interface. Such a cross-modal interface may interest the blind community as a novel way to “feel” light and darkness. This work was demonstrated at SIGGRAPH Emerging Technologies 2010.
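As a rough illustration of the cross-modal mapping involved, the sketch below maps a brightness reading to a vibration strength. The threshold, scaling, and sample readings are assumptions for illustration, not details of the Touch Light system.

```python
# Illustrative sketch only: one simple way a "touch the light" interaction could
# map detected brightness to vibration strength. The threshold, linear scaling,
# and simulated readings are assumptions, not the Touch Light implementation.
def light_to_vibration(level, threshold=0.1):
    """Map a brightness reading in [0, 1] to a vibration strength in [0, 1].
    Readings below the threshold (shadow) produce no vibration; the rest scale linearly."""
    if level < threshold:
        return 0.0
    return (level - threshold) / (1.0 - threshold)

# Simulated sensor readings as a hand moves from shadow into dappled light.
for reading in (0.02, 0.15, 0.40, 0.85):
    print(f"light={reading:.2f} -> vibration={light_to_vibration(reading):.2f}")
```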


During my poster presentation, many interesting questions were raised by the participants, particularly on the technology and its future applications. Some focused on the wearable application, asking how the technology would affect the wearer since it uses heat for actuation. I explained our current progress on miniaturizing the technology to make it wearable, as well as the customization of its parameters to suit the wearer’s comfort. In addition, others asked about an application I described in which the wearer’s emotion or health state could be visualized through the clothes they wear in the future. To address this, I explained that the input could be anything, such as health status, emotion, or any other information; our focus is on the display technology that facilitates presenting this information. These questions led to many follow-up discussions, and several contacts were made after the presentation.


Demos

Some pictures from the demo session at ICAT 2010.