Summary of ISMAR Conference

ISMAR is the best-known Augmented Reality (AR) conference. Held yearly, it alternates between North America, Europe, and Asia. This year it was in Seoul, Korea, a busy mega-city. The organization and venue had a few startup problems, but things seemed to improve as the conference went on. The Asian location does enable more participation from this region, which has many strong AR research groups. There were two papers from NUS, including the one by Ken from the CUTE Lab. ISMAR has both oral and poster sessions, but the choice of where a paper falls seems rather arbitrary. There were about twice as many posters as oral papers, and the overall acceptance rate was about 25%.


ISMAR as a conference has a unique combination of papers, demonstrations, and industrial exhibits. The trend toward more applications on mobile devices continued here, with a number of companies showing iPhone and Android applications. Qualcomm, a large California-based company (and a major sponsor this year and last), announced the release of their toolkit for AR on Android phones. It is interesting to note that the Qualcomm AR toolkit was developed mostly in Austria, by people who have been associated with Prof. Schmalstieg in Graz (true globalization!).


Nokia was also present, and has attracted some well-known AR researchers to its labs (e.g. Ron Azuma), but it is behind in terms of smartphone applications. It does, however, have plans for creating yet another phone OS to supplant Symbian on Nokia phones and to compete with Android through a better, Linux-like development environment. It remains to be seen whether this will succeed, because Nokia has been so slow to enter the smartphone market. In my opinion, Nokia has good research, but its product development process seems too slow for today's rapidly changing world.


The demonstrations by various authors and research groups were interesting, and covered many different types of AR applications. A number of people were doing natural feature tracking, but not over a large-scale environment. The industrial exhibits this year were rather limited and disappointing: there was a local Korean company with some mobile applications, and not much else to see.


The academic AR research world is dominated by about a dozen labs, and most of them were represented in the presented papers. From a tracking point of view, most of the approaches are still similar to PTAM and are SLAM-based. While some vision people at the workshops talked about using Bundler for AR applications (similar to what we are doing), no system was demonstrated or presented that did so. My belief is that the SLAM methods are not general enough to be successful in the DSTA project because (a) they require that environment models be built with the same video camera that will be used during the augmentation, and (b) the models are not tolerant of lighting changes and do not work well outdoors. Nothing that I saw at this conference has changed my mind.


Below I describe in detail some of the papers that I think are most relevant to the DSTA project. There were other good papers (like Ken's) that were interesting but not relevant to the project.


Accurate real-time tracking using mutual information

-a much better way to track, which uses mutual information rather than direct intensity differences to compare images (a sketch of the measure follows this list)


-this allows successful matches between images that represent the same object but look very different (such as a GIS line drawing of an environment and a photograph)


-impressive results, but the optimization is rather complex, and not as useful to DSTA
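
To make the mutual-information idea concrete, here is a minimal Python sketch of the measure itself. This is my own illustration, not the authors' implementation; their contribution is optimizing this score over camera pose in real time, which the sketch omits entirely.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized grayscale images,
    computed from the joint histogram of their pixel intensities."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)    # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Because the score depends only on the statistical co-occurrence of intensities, not on the intensities themselves, a line drawing and a photograph of the same scene can still score highly, which is exactly what direct intensity-difference trackers cannot do.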


Point and Shoot for Ubiquitous Tagging on Mobile Phones

-uses a patch-based learning system for localization


-works well on cell phones, but is applicable only to flat objects


-shows what can be done with current feature matching technology


Positioning, Tracking, and Mapping for Outdoor Navigation

-the other NUS paper


-it does silhouette matching of the skyline of a building against a model to compute the camera pose for outdoor AR (a simplified sketch follows this list)


-works well, and has good integration of GPS, vision, and inertial sensors


-not a general solution to the tracking problem since it assumes there is always a visible skyline, which is not true in general
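
The following Python sketch is my own simplified illustration of the skyline idea, not the paper's algorithm: reduce the image to a one-dimensional skyline (the height of the first building pixel in each column), then score candidate orientations by how well a horizontally shifted model skyline (a crude stand-in for camera yaw) matches the observed one. The sky segmentation itself is assumed to come from elsewhere.

```python
import numpy as np

def extract_skyline(sky_mask):
    """sky_mask: HxW boolean array, True where a pixel is sky.
    Returns, for each column, the row of the first non-sky pixel."""
    h, _ = sky_mask.shape
    first_building = np.argmax(~sky_mask, axis=0).astype(float)
    first_building[sky_mask.all(axis=0)] = h   # column is all sky
    return first_building

def best_shift(observed, model, max_shift=50):
    """Horizontal shift of the model skyline that best explains the
    observed skyline (minimum mean absolute vertical difference)."""
    best, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        obs = observed[max(0, -s): len(observed) - max(0, s)]
        mod = model[max(0, s): len(model) - max(0, -s)]
        cost = float(np.mean(np.abs(obs - mod)))
        if cost < best_cost:
            best, best_cost = s, cost
    return best, best_cost
```

The real system fuses this with GPS, inertial sensors, and a 3D building model; the sketch only shows why a skyline, when one is visible, is such a distinctive signature.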


Towards real-time 3D tracking and reconstruction on the GPU

-uses a GPU to create an enhanced SLAM approach


-a sophisticated optimization algorithm implemented on the GPU using Monte Carlo randomization


-basically a faster and better PTAM

Augmented reality in large environments

-uses structure from motion and street maps to create a large feature map of a road network


-then uses vocabulary trees to match video frames as a person drives (a flat bag-of-words sketch of the matching idea follows this list)


-similar to our approach, but without many details, and oriented towards localization while driving rather than the motion of a person wearing an HMD


-also requires a prior 2D road map to build the models and to match against
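
As an illustration of the retrieval step, here is a flat (non-hierarchical) bag-of-visual-words sketch in Python; it is my own simplification, not the paper's system. A real vocabulary tree quantizes descriptors hierarchically for speed, but the scoring principle is the same. Feature extraction (e.g. SIFT descriptors) is assumed to happen elsewhere, and scikit-learn's KMeans stands in for the tree.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=1000):
    """Cluster local feature descriptors from the mapping run
    into k visual words."""
    return KMeans(n_clusters=k, n_init=4).fit(all_descriptors)

def bow_histogram(vocab, descriptors):
    """Quantize one image's descriptors into a normalized
    visual-word histogram."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def best_match(query_hist, database_hists):
    """Index of the most similar database image (cosine similarity,
    since the histograms are L2-normalized)."""
    return int(np.argmax(database_hists @ query_hist))
```

Localization then reduces to retrieving the best-matching mapped image for the current video frame and recovering the pose from its stored geometry.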


Camera Motion Tracking in a Dynamic Scene

-a system where an auxiliary camera is used to estimate the motion of a main camera


-interesting idea, and their calibration method is similar to what we will need to achieve the most accurate rendering from the point of view of the HMD cameras (see the sketch below)
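
The reason such a rig calibration matters is simple pose chaining. The sketch below is my own illustration with hypothetical names, not the paper's method: the tracked auxiliary-camera pose and a fixed auxiliary-to-main transform combine to give the pose we actually render from.

```python
import numpy as np

def main_camera_pose(T_world_aux, T_aux_main):
    """Chain the tracked auxiliary-camera pose (4x4 homogeneous
    matrix) with the fixed rig calibration to obtain the main
    (rendering) camera's pose in the world frame."""
    return T_world_aux @ T_aux_main
```

Any error in the fixed transform T_aux_main appears directly as misregistration in the rendered overlay, which is why the calibration step is the part most relevant to our HMD setup.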


One paper that was very interesting, but not that relevant to DSTA, was the following:


A practical tabletop autostereogram display


-used a plate with many small, randomly placed holes to create an autostereogram system suitable for multiple users


-a unique idea, though not the first time they have presented this work, and it also requires accurate head tracking


-nevertheless, high novelty, with an entertaining presentation by Henry Fuchs of UNC


Dr. Gerhard Roth