Computing Reviews
Finger-based multitouch interface for performing 3D CAD operations
Radhakrishnan S., Lin Y., Zeid I., Kamarthi S. International Journal of Human-Computer Studies 71(3): 261-275, 2013. Type: Article
Date Reviewed: Aug 14 2013

Interacting with computers by touching objects (or at least their representations) on a computer display is very appealing, but progress has been hindered by engineering and cost factors until recently. Smartphones, tablets, and similar devices with finger-based touch interaction have transformed the spread of handheld computational devices, which previously used pen-based interaction (for example, the PalmPilot). Pen-based interaction has been in use since the 1950s, when a light pen was used for the Massachusetts Institute of Technology (MIT) Whirlwind project, but has mostly occupied a niche where operations like drawing and selecting objects on the screen are important. Some initial drawbacks, like having to hold the pen up to a vertically oriented device, became less relevant with the use of pen-based interaction on mobile devices. Although pen-based interaction is still in use, it has been overshadowed by the huge popularity of finger-based touch interaction on mobile devices, with Apple’s iPhone and iPad devices being among the earliest and most influential.

The direct manipulation of objects on a display is very intuitive, convenient, and frequently enjoyable. On the other hand, it also has clear disadvantages. Using fingers as pointing devices works only for relatively large target areas, and the very act of touching something on the display leads to significant visual occlusion, where parts of the display are hidden by the fingers, hands, and sometimes arms of the user. Thus, in situations where it is important to carefully select or position small elements on a screen, finger-based touch interaction is problematic. For example, consider the act of making corrections to text typed via a soft keypad on an iOS device: the user needs to point to the location of the character to be modified, which is at least partially occluded by the finger performing the pointing action. The workaround in iOS is a pop-up magnifying glass that enlarges the target area. However, this makes the pointing action less direct, and it has other side effects, such as occluding additional screen areas and behaving awkwardly near the edges of the display.

The goals of the work presented in the paper are threefold:

(1) outline the key elements of the multitouch interface for [computer-aided design, CAD], (2) identify the factors affecting the performance of a multitouch enabled CAD modeling environment, and (3) lay a foundation for future research and highlight the directions for extending the multitouch interface for CAD and other engineering applications.

While the authors discuss relevant issues, my initial expectations were only met with respect to the second goal, and even there only partially.

One of the limiting assumptions is the choice of finger-based multitouch devices that rely on optical sensors to identify finger positions. Since most mobile devices use capacitance-based touch sensors, this assumption limits the available computational systems to installations where the touch-based interface is incorporated into a table or desk surface, often implemented by using cameras and projectors for input and output purposes. The original Microsoft Surface table (now called Microsoft PixelSense and also available as the Samsung SUR40 device) is an example of this technology, although the cameras have since been replaced by PixelSense technology.

Another limitation is the restriction of the experiments to a comparison between two sets of touch-based interaction methods--drag state finger touch (DSFT) and track state finger touch (TSFT) techniques--and their emulation of mouse-based interaction. In the first one, touching the screen selects an object and subsequent finger movement drags the object along. In the second one, touching the screen corresponds to hovering over an object with the mouse pointer; the object is only selected if an additional gesture (such as tapping the thumb) is executed. These two techniques are mutually exclusive, although explicitly switching between two different modes, or making them context dependent, may offer more flexibility at the cost of potential user confusion.
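To make the distinction between the two techniques concrete, the following sketch models them as a small state machine. This is my own illustration, not code from the paper; the class and method names (TouchModel, touch_down, confirm_gesture) are hypothetical, and the "thumb tap" confirm gesture stands in for whatever additional gesture the TSFT technique uses.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TouchModel:
    """Toy model contrasting the two touch-to-mouse mappings.

    mode="DSFT" (drag state finger touch): a touch-down immediately
    selects the object under the finger, so subsequent finger movement
    drags it -- touching acts like holding the mouse button down.

    mode="TSFT" (track state finger touch): a touch-down only hovers
    over the object, like moving the mouse pointer; selection happens
    only after an explicit confirm gesture (e.g., a thumb tap).
    """
    mode: str                        # "DSFT" or "TSFT"
    hovered: Optional[str] = None    # object currently under the finger
    selected: Optional[str] = None   # object currently selected/dragged

    def touch_down(self, obj: str) -> None:
        if self.mode == "DSFT":
            self.selected = obj      # touch selects (and drags) directly
        else:
            self.hovered = obj       # touch only tracks/hovers

    def confirm_gesture(self) -> None:
        # In TSFT, the extra gesture promotes the hover to a selection.
        if self.mode == "TSFT" and self.hovered is not None:
            self.selected = self.hovered

# DSFT: selection is immediate on touch.
dsft = TouchModel("DSFT")
dsft.touch_down("vertex_42")
assert dsft.selected == "vertex_42"

# TSFT: touch alone only hovers; the confirm gesture selects.
tsft = TouchModel("TSFT")
tsft.touch_down("vertex_42")
assert tsft.selected is None
tsft.confirm_gesture()
assert tsft.selected == "vertex_42"
```

The sketch also makes the mutual exclusivity visible: a single `mode` field governs the interpretation of every touch, so supporting both techniques at once would require an explicit mode switch or context-dependent dispatch, as noted above.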

Since it is fairly obvious that finger-based interaction will have difficulties with specific interactions essential to CAD, such as the selection of small objects or points, it would have been interesting to include pen-based interaction in the experiments.

The way the experiments were conducted also poses potentially serious limitations. The target group included 14 participants, all male and all regular computer users, four with CAD experience and two with multitouch experience. They performed a series of tasks grouped according to the different techniques, and always in the sequence DSFT, TSFT, and mouse based. Although these limitations are understandable from a practical perspective, there are multiple risks of bias due to the sample size, the properties of the subjects, and the lack of variation in the sequence of task groups.

The authors discuss the insights gained from the experiments, with particular emphasis on task completion times and selection errors. Not surprisingly, the mouse-based tasks result in shorter average task completion times and lower error rates than the other two. There is no significant difference in task completion times between the other two methods, and there are significant differences in error rates only for a small number of tasks. One measurable criterion that may favor finger-based interaction is the time required to switch between two interaction modes or devices, such as from the keyboard to the mouse, or vice versa. This was not addressed in the evaluation, although at least one of the tasks contained such a switch. The authors also acknowledge that there are other measures--“learnability, memorability, error rates, efficiency, and accuracy”--but no related information was collected.

Overall, the authors provided some useful information about user interaction and gestures for the CAD domain, but failed to convince me that their experiments yielded valuable insights into the use of finger-based touch techniques for the domain. The limitations in scope, in the way the experiments were conducted, and in the use of only two evaluation criteria are amplified by a fair number of issues that should have been addressed in the reviewing and editorial process. The text contains many minor grammatical issues, such as missing articles, as well as typographical errors that affect understanding. The acronyms DSFT and TSFT are introduced, but then another one, FSFT, is used twice. From the context, it appears that in one location it should be TSFT, whereas in the other one it should probably be DSFT.

Reviewer: Franz Kurfess. Review #: CR141462 (1310-0940)
Categories: Computer-Aided Design (CAD) (J.6); Portable Devices (C.5.3); User Interfaces (H.5.2)
