Abstract: Most interaction tasks relevant to a general three-dimensional
virtual environment can be supported by 6DOF control and grab/select input. A
very efficient method is direct manipulation with bare hands, as in the real
environment. This paper shows that non-trivial tasks can be performed using
only a few well-known hand gestures, so that almost no training is necessary
to interact with 3D software. For the gesture interaction, the user’s hand is
marked with just four fingertip thimbles made of material as inexpensive as
plain white paper. Within our virtual 3D modeling scenario, the recognized
hand gestures are used to select, create, manipulate, and deform 3D meshes in
a spontaneous and intuitive way. The user’s viewpoint on the modeling
procedure is approximated by tracking a pair of marked polarized glasses. All
modeling tasks are performed wirelessly, using camera-based vision tracking
for head and hand interaction.
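The abstract gives no implementation details for the marker tracking itself. As a loose illustration only, the sketch below shows one way bright white fingertip thimbles could be located in a single camera frame using OpenCV thresholding and contour centroids; the function name, thresholds, and overall structure are assumptions, not the paper's actual pipeline.

    # Hypothetical sketch: locating bright white fingertip markers in a camera
    # frame via simple thresholding. Thresholds and structure are assumptions,
    # not the paper's actual tracking method.
    import cv2
    import numpy as np

    def find_fingertip_markers(frame_bgr, max_markers=4):
        """Return up to `max_markers` (x, y) centroids of bright white blobs."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # White paper thimbles appear as near-saturated regions under even lighting.
        _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep the largest blobs, assumed to be the four thimbles.
        contours = sorted(contours, key=cv2.contourArea, reverse=True)[:max_markers]
        centroids = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids

A gesture recognizer would then interpret the relative positions of these centroids over time; that stage is not sketched here.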
Title: Qualitative Portrait Classification
Abstract: Due to recent advances in high-quality digital photography, taking a
large series of images is very inexpensive. This is especially advantageous in
portrait situations, because subjects often feel uncomfortable during
acquisition. Selecting from a larger set of images increases the chance of a
more satisfying outcome. However, the selection process is difficult and
time-consuming, as typically only a small number of images is considered
aesthetically pleasing. In this work, we propose a machine learning approach to
mimic the selection process of a human subject. After a short training period, a
large set of images can be classified instantly into two categories, good or
bad. With the proposed automatic pre-selection, the advantage of digital
photography for portrait images is brought to a new level.
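The abstract only states the high-level idea: learn a good/bad split from a short human-labelled training session, then classify the remaining images automatically. As a loose illustration under assumed features and labels, a minimal binary pre-selection could look like the sketch below; the actual features and learner used by the authors are not specified in the abstract.

    # Hypothetical sketch of the good/bad pre-selection idea: train a binary
    # classifier on a small set of human-labelled portraits, then classify the
    # rest. The feature choice (sharpness + brightness) is an assumption.
    import cv2
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def portrait_features(image_bgr):
        """Toy features: overall sharpness (Laplacian variance) and mean brightness."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        brightness = float(gray.mean())
        return [sharpness, brightness]

    def train_selector(labelled_images, labels):
        """labels: 1 = good, 0 = bad, chosen by the photographer during training."""
        X = np.array([portrait_features(img) for img in labelled_images])
        return LogisticRegression().fit(X, np.array(labels))

    def preselect(model, images):
        """Return only the images the model predicts as 'good'."""
        X = np.array([portrait_features(img) for img in images])
        return [img for img, keep in zip(images, model.predict(X)) if keep == 1]

In practice the training set would come from the short labelling session mentioned in the abstract, and the learned model would then pre-filter the full shoot before manual review.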