Gazing Toward the Future: Advances in Eye Movement Theory and Applications

John M. Franchak, in Psychology of Learning and Motivation, 2020

Eye tracking systems measure gaze—the location the observer is looking at on a screen or in the world—through video recording of eye position. A screen eye tracker (SET) uses a specialized eye camera to detect the eye movements of a stationary observer (Fig. 1A). Although some SET systems may require the head to be immobilized in a chin rest, others first detect the observer's head and then locate the eyes within it. Even in head-free eye tracking, head movement is limited because the participant cannot move beyond the fixed field of view of the eye tracking cameras. Modern SET systems boast superb spatial accuracy (0.02–0.5 degrees) and temporal sampling rates (500–2000 Hz). These specifications are made possible by constraining the observer's movement and restricting the trackable region to a screen in a static location.

URL: https://www.sciencedirect.com/science/article/pii/S0079742120300244

Affect Measurement: A Roadmap Through Approaches, Technologies, and Data Analysis

Javier Gonzalez-Sanchez, ... Winslow Burleson, in Emotions and Affect in Human Factors and Human-Computer Interaction, 2017

Eye Tracking

Eye-tracking systems measure eye position, eye movement, and pupil size to detect zones in which the user has a particular interest at a specific time. There are a number of methods for measuring eye movement. The most popular are optical methods, in which light, typically infrared, is reflected from the eye and sensed by a camera or some other specially designed optical sensor. The data are then analyzed to extract eye rotation from changes in the reflections. Optical methods are widely used for gaze tracking and are favored for being noninvasive and inexpensive. An example of a commercial optical eye-tracking system is the Tobii T60XL Eye Tracker. The Tobii Eye Tracker reports data at a sampling rate of 60 Hz, and the reported data include attention direction as a gaze point (x and y coordinates), duration of fixation, and pupil dilation.

Pupil diameter has been demonstrated to be an indicator of emotional arousal, as seen in Bradley et al. (2008), who found that pupillary changes were larger when viewing emotionally arousing pictures, regardless of whether these were pleasant or unpleasant. Pupillary changes during picture viewing covaried with skin conductance changes, supporting the interpretation that sympathetic nervous system activity modulates these changes.

A sample dataset from a Tobii T60XL Eye Tracker is shown in Table 11.4. Gaze point values (GPX and GPY columns) range from 0 to the size of the display; pupil (left and right) is the size of the pupil in millimeters; validity (left and right) is an integer ranging from 0 to 4 (0 if the eye is found and the tracking quality is good, 4 if the eye cannot be located by the eye tracker); and fixation zone is a sequential number corresponding to one of a set of predefined zones of special interest. Timestamps in the table confirm a sampling rate of 60 Hz (approximately one sample every 16–17 ms).

Table 11.4. Extract of Data Collected Using Tobii T60XL Eye Tracker

Timestamp        GPX  GPY  Pupil left  Validity L  Pupil right  Validity R  Fixation zone
141124162405582  636  199  2.759313    0           2.88406      0           48
141124162405599  641  207  2.684893    0           2.855817     0           48
141124162405615  659  211  2.624458    0           2.903861     0           48
141124162405632  644  201  2.636186    0           2.916132     0           48
141124162405649  644  213  2.690685    0           2.831013     0           48
141124162405666  628  194  2.651784    0           2.869714     0           48
141124162405682  614  177  2.829281    0           2.899828     0           48
141124162405699  701  249  2.780344    0           2.907665     0           49
141124162405716  906  341  2.853761    0           2.916398     0           49
141124162405732  947  398  2.829427    0           2.889944     0           49
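To make the record format concrete, the following sketch parses rows like those in Table 11.4 and verifies the sampling rate from the timestamps. The YYMMDDHHMMSSmmm timestamp layout is inferred from the sample values, and the file name is hypothetical; this is an illustration of the idea, not code from the chapter.

```python
from datetime import datetime

def parse_row(line):
    """Parse one whitespace-separated record like the rows in Table 11.4."""
    ts, gpx, gpy, pl, vl, pr, vr, zone = line.split()
    t = datetime.strptime(ts[:-3], "%y%m%d%H%M%S")   # date/time portion
    ms = int(ts[-3:])                                # millisecond portion
    return {
        "time_s": t.timestamp() + ms / 1000.0,
        "gaze": (int(gpx), int(gpy)),                # GPX, GPY in pixels
        "pupil_mm": (float(pl), float(pr)),
        "valid": int(vl) == 0 and int(vr) == 0,      # 0 = good tracking
        "zone": int(zone),
    }

with open("tobii_export.txt") as f:                  # hypothetical export file
    rows = [parse_row(line) for line in f]

dts = [b["time_s"] - a["time_s"] for a, b in zip(rows, rows[1:])]
print("mean inter-sample interval: %.1f ms" % (1000 * sum(dts) / len(dts)))
# ~16.7 ms, consistent with the nominal 60 Hz sampling rate
```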

Eye-tracking systems can be fixed (embedded in a display), mobile (able to be connected and mounted on diverse displays), or wearable (embedded in a pair of glasses). Regardless of the type of system, the setup process is fairly easy. The calibration process involves having the user follow an object around the display area with their eyes (for embedded and mobile systems) or having them stare at a particular point (for wearable glasses). Calibration for embedded and mobile systems requires time to ensure that the user's eyes are within the line of sight of the infrared and optical sensors and that nothing is producing glare for the camera, which could affect the reflection and thus the tracking of eye movements. The reliability of embedded and mobile systems is reduced by glare on the cameras, incorrect positioning of the user's face, and the presence of framed glasses or eye disorders such as strabismus. For glasses-based systems, important considerations are interference from other wireless devices and the distance the glasses can be from the host computer.

URL: https://www.sciencedirect.com/science/article/pii/B9780128018514000112

Eye Movement Research

Joseph H. Goldberg, Jack C. Schryver, in Studies in Visual Information Processing, 1995

Eye-Gaze Tracking Apparatus

An LC Technologies eye tracking system collected serial records at 30 Hz of eye-gaze screen location, pupil diameter, and cumulative time. The system camera was mounted beneath a Sun Sparcstation 2 monitor and pointed toward the user's right eye, which was at a distance of 50 cm. The eye-tracking output data were transferred, via a host computer, to the Sun workstation for subsequent analysis. The system provided accurate records of serial eye-gaze location, based upon prior reference calibration. Using a chin rest to stabilize the user's head, the average angular bias error was less than 0.5°, with about 4 cm of head motion tolerance in the horizontal and vertical frontal planes. The system worked equally well with eyeglasses or contact lenses.

URL: https://www.sciencedirect.com/science/article/pii/S0926907X05800418

Developmental Origins of the Face Inversion Effect

Cara H. Cashon, Nicholas A. Holt, in Advances in Child Development and Behavior, 2015

2.4 Effects of Inversion on Infants' Scanning of Faces

The increasing availability and use of eye-tracking systems have offered an avenue for investigating qualitative changes in infants' visual scanning behaviors that occur as infants develop greater expertise with upright over inverted faces across the first year of life and beyond. Eye tracking is an exceptionally useful tool for examining: (1) how infants visually explore stimuli and (2) which specific areas or features are most important to infants during the processing or encoding of visual information. To determine the regions of an image that are being scanned, eye-tracking studies typically utilize areas of interest (AOIs) that are constructed around predetermined regions. For example, with face stimuli, researchers may choose to create AOIs around each of the internal features of the face (both eyes, nose, and mouth), with one or more AOIs constructed around the external regions (hair, forehead, ears, chin, etc.). The number of fixations or the duration of looking time can then be measured for each AOI to determine whether any consistent patterns exist across infants. Via these measures, eye tracking can help us better understand the specific information that infants attend to and process while exploring different types of faces. To date, only a few studies have examined infants' scanning of upright versus inverted faces (Gallay, Baudouin, Durand, Lemoine, & Lécuyer, 2006; Kato & Konishi, 2013; Oakes & Ellis, 2013), but their results have uncovered some interesting differences in the ways that infants visually explore faces of different orientations across development.
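As a concrete illustration of this AOI logic, the sketch below tallies fixation counts and total dwell time per AOI. The AOI names and pixel coordinates are hypothetical; real studies draw AOIs around the measured positions of facial features rather than arbitrary boxes.

```python
# Hypothetical AOIs as (left, top, right, bottom) boxes in screen pixels.
AOIS = {
    "left_eye":  (220, 180, 320, 240),
    "right_eye": (360, 180, 460, 240),
    "nose":      (290, 250, 390, 330),
    "mouth":     (280, 350, 400, 420),
}

def aoi_of(x, y):
    """Return the AOI containing a fixation, or 'external' if none does."""
    for name, (l, t, r, b) in AOIS.items():
        if l <= x <= r and t <= y <= b:
            return name
    return "external"

def score(fixations):
    """fixations: list of (x, y, duration_ms); returns per-AOI tallies."""
    counts, dwell = {}, {}
    for x, y, dur in fixations:
        name = aoi_of(x, y)
        counts[name] = counts.get(name, 0) + 1
        dwell[name] = dwell.get(name, 0) + dur
    return counts, dwell

counts, dwell = score([(250, 200, 310), (300, 380, 250), (640, 90, 120)])
print(counts)  # {'left_eye': 1, 'mouth': 1, 'external': 1}
print(dwell)   # total looking time per AOI in ms
```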

The first study to use eye-tracking technology to address potential scanning differences for upright and inverted faces in infants was conducted by Gallay et al. (2006). Gallay et al. recorded the scanning behaviors of a group of 4-month-olds while they were habituated to a face. Each infant was tested in two counterbalanced sessions: one with upright faces and one with inverted faces. The total looking time and number of trials to habituate did not differ between the orientation conditions, indicating that infants attended and habituated similarly to both orientations. Although these measures were not affected by face inversion, orientation did play a role in how infants explored the faces. To examine infants' scanning, Gallay et al. constructed three AOIs: one box around the eyes, one box around the nose and mouth combined, and one U-shaped AOI that included both cheeks connected by a small portion underneath the mouth. Based on these AOIs, Gallay et al. calculated the total duration of looking and the percentage of overall looking time in each area. They found that 4-month-olds spent more time scanning the internal regions of upright faces than the internal regions of inverted faces. Moreover, infants spent more time looking at the nose/mouth region of upright faces than at the same region of inverted faces. Infants spent the majority of their time exploring the internal region of inverted faces by attending to the eyes; however, the amount of time fixated in the eye region did not differ between upright and inverted faces. These results clearly show that inversion affects the scanning of faces at 4 months. However, only one age group was tested, which does not allow us to place this pattern of results into a developmental context.

Two recent studies that have examined developmental changes in infants' scanning of upright and inverted faces were conducted by Kato and Konishi (2013) and Oakes and Ellis (2013). Kato and Konishi (2013) tested 6-, 8.5-, 11-, and 13.5-month-olds and adults with stimuli that were upright and inverted versions of a black-and-white schematic face (i.e., Fantz, 1961). They presented each image once, for 30 s to infants and 10 s to adults. They found effects of inversion on infants' scanning of faces and on their face preferences. The scanning finding is straightforward: infants scanned the internal features more while viewing the upright face than the inverted face. The effect of inversion on infants' face preferences differed by age. Infants' preference shifted from the upright face at 6 months to the inverted face at 13.5 months; infants in the middle two age groups, 8.5 and 11 months, showed no preference for either orientation. The developmental pattern that emerges here is similar to the pattern of changes in attention that is observed in infants during the process of habituation (Cohen, 2004; Hunter & Ames, 1988). Both follow a trajectory that can be described as going from a familiarity preference (i.e., upright faces), to showing no preference, to showing a preference for novelty (i.e., inverted faces).

One other study examined infants' scanning patterns developmentally with more realistic stimuli. Oakes and Ellis (2013) investigated developmental changes in scanning patterns for upright and inverted faces in 4.5-, 6.5-, 8-, and 12.5-month-olds. Importantly, their study used 48 photographs of real faces that differed in gender and race. Infants were randomly assigned to either an upright or inverted face condition and then viewed at least 16, and as many as 96, trials that lasted 3 s each. Oakes and Ellis constructed AOIs of equal size for the upper (eyes), middle (nose), and lower (mouth) internal regions of the faces. Median fixation durations were calculated for each AOI on each trial, and these medians were then averaged for each infant. On this measure, infants scanned the internal region significantly more than the external region regardless of age or orientation. A developmental shift was observed between 8 and 12.5 months in infants' scanning within the internal areas of upright faces, such that infants began to look less at the eyes and more at the mouth. This pattern was not found for the inverted orientation. Oakes and Ellis also measured gaze patterns using the proportion of scanning relative to the size of each AOI. Based on this measure, effects of orientation on scanning patterns differed across development. The 4.5- and 6.5-month-olds looked significantly longer at the eye regions of both upright and inverted faces. In contrast, the 8- and 12.5-month-olds scanned the eyes, nose, and mouth regions of upright faces roughly equally given their sizes, with 12.5-month-olds looking significantly more than would be expected only at the mouth. For inverted faces, these older infants scanned similarly to the younger infants, with greater looking toward the eyes and less looking toward the mouth than would be expected. Comparing the two younger and the two older age groups, there appears to be a developmental shift toward attending more to the mouth for upright faces. According to Oakes and Ellis (2013), this shift may be attributed to the increased significance of the mouth as a source of linguistic information and may suggest that in the latter part of the first year of life, infants expect upright faces to be meaningful sources of social or linguistic information (see also Cashon & Cohen, 2003, 2004; Rakover, 2013).

It is also worth noting that Kato and Konishi (2013) did not find the trend observed by Oakes and Ellis (2013) in which older infants demonstrated greater attention to the mouth and less attention to the eyes. This discrepancy may be explained by the fact that the duration infants were allowed to scan the faces differed dramatically between the two studies (3 s versus 30 s). It may also be explained by the fact that Oakes and Ellis used real photographs of faces whereas Kato and Konishi used black-and-white schematic face images, which likely convey less social significance. It is certainly possible that infants would produce different patterns of scanning for socially relevant realistic faces compared to unrealistic line-drawn face stimuli.

In sum, these results suggest that differences in the scanning of upright and inverted faces are already observable by 4 months of age, and new scanning patterns emerge for upright, but not inverted, faces in 8- to 12.5-month-olds. This shift may reflect that infants at these older ages attribute more meaning to upright faces than to inverted faces. Furthermore, it appears that one indicator of increased expertise for upright faces may be greater scanning of the inner features of upright compared to inverted faces over development in the first year. Importantly, as Oakes and Ellis note, the possibility remains open that younger infants may scan upright and inverted faces in the same manner, but still process them differently. Thus, in future research, eye-tracking measures need to be further utilized in conjunction with tasks that examine infants' processing abilities before any strong conclusions can be made in this regard.

In the next section, we explore how the brain responds to upright and inverted faces during infancy. Although only a handful of studies have been conducted on this topic, their findings provide evidence of specialization for upright faces in the brain by the end of the first year.

URL: https://www.sciencedirect.com/science/article/pii/S0065240714000366

Eye Movement Research

Dave M. Stampe, Eyal M. Reingold, in Studies in Visual Information Processing, 1995

General Method for Tasks

Experimental tasks were implemented using a prototype eye tracking system developed by SR Research Ltd. This system uses a headband-mounted video camera and a proprietary image processing card to measure the subject's pupil position 60 times per second. Resolution of the system is very good (0.005° or 15 seconds of arc), with extremely low noise. A second camera on the headband views LED targets attached to the display monitor to compensate for head position, correcting gaze position to within 0.5° of visual angle over ±20° of head rotation, and allows head motion within a 100 cm cube.

Task displays were presented in black on a white background on a 21" VGA monitor located 75 cm in front of the subject, with a field of view of 30° horizontally and 24° vertically. A second VGA monitor was used by the experimenter to perform calibrations and monitor subject gaze in real time during experiments. Gaze position accuracies of better than 0.5° on all parts of the screen were routinely obtained.
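These field-of-view figures follow from the standard visual-angle formula, angle = 2·atan(size / (2·distance)). A quick check, assuming a viewable area of roughly 40 × 30.5 cm for a 21" 4:3 monitor (an assumption, not a figure from the chapter):

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Full visual angle subtended by an object of the given size."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

print(round(visual_angle_deg(40.0, 75.0), 1))   # 29.9 -- matches ~30 deg horizontally
print(round(visual_angle_deg(30.5, 75.0), 1))   # 23.0 -- close to the reported 24 deg
```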

Twelve subjects (five male, seven female; average age 25 years) were run on all tasks in a single 60-minute session. Tasks were run in the same order for all subjects. All subjects had normal or corrected-to-normal vision, four with eyeglasses and three with contact lenses. A system calibration was performed before each task and repeated if needed to meet a 0.5° accuracy criterion (Stampe, 1993). The experimenter monitored gaze position during trials to ensure continuing accuracy.

URL: https://www.sciencedirect.com/science/article/pii/S0926907X0580039X

New Perspectives on Early Social-cognitive Development

Chi-hsin Chen, ... Chen Yu, in Progress in Brain Research, 2020

1.1 Head-mounted eye-tracking

Currently, several head-mounted eye-tracking systems are commercially available. Some systems have headgear readily available for infants, while others can be modified to attach to a custom-made cap or headband, as shown in Fig. 1 (for more details on the selection of eye-tracking systems, see Slone et al., 2018). A head-mounted eye-tracker is composed of two cameras: a scene camera facing outward to record the participant's first-person view and an eye camera facing inward to record the participant's eye movements (Fig. 1). Some eye-trackers are wired to a computer, while others can store data on a lightweight recording device (e.g., a smartphone) that can be placed on the child, such as in a pocket or a small backpack. Because head-mounted eye-trackers only capture the first-person view, additional third-person-view cameras are usually used to obtain a wider view of the environment or social interaction. These third-person-view cameras can be placed to the side or overhead. They help capture interactions or actions that fall outside the participants' first-person camera views, and the recordings can later be used for data coding (e.g., coding of participants' actions).

Fig. 1

Fig. 1. A head-mounted eye-tracker is composed of a scene camera, which records the participant's first-person view, and an eye camera, which records the eye movements.

Either before or after the experimental task(s), the experimenters must collect data for calibration. A common method with children is to draw the participant's attention to several specific locations using a small, attractive object or a laser pointer (for more details on the calibration procedure, see Slone et al., 2018). Specialized software (e.g., Yarbus from Positive Science, LLC) is then used to map the changing positions of the pupil and corneal reflection recorded by the eye camera to corresponding locations in the first-person-view scene. The calibrated videos are then used for data annotation and data analysis.
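The essence of that mapping can be sketched as a regression from the pupil-to-corneal-reflection vector in the eye image to a position in the scene image. The sketch below uses a second-order polynomial fit; this is a minimal illustration of the idea, not the algorithm implemented in Yarbus or any other commercial package.

```python
import numpy as np

def design(v):
    """Second-order polynomial terms of pupil-CR vectors (N x 2 array)."""
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def fit_calibration(eye_vecs, scene_pts):
    """Fit eye-image vectors to scene-image pixels by least squares."""
    A = design(np.asarray(eye_vecs, float))
    coefs, *_ = np.linalg.lstsq(A, np.asarray(scene_pts, float), rcond=None)
    return coefs                         # 6 x 2 coefficient matrix

def map_gaze(coefs, eye_vecs):
    """Map new eye samples into first-person-view coordinates."""
    return design(np.asarray(eye_vecs, float)) @ coefs
```

With at least six well-spread calibration points (the locations the child attended), every subsequent eye sample can be mapped into the first-person view for annotation.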

Head-mounted eye-trackers have been used with both typically developing infants and clinical populations, such as children with hearing loss (e.g., Chen et al., 2019a,b, 2020) or children with autism (Yurkovic et al., 2020). In terms of the eye-tracking system or technology, there is usually no specific requirement for clinical populations. However, for children with hearing loss who use cochlear implants or hearing aids, it is necessary to check whether the material or position of the cap or headband to which the eye-tracker is attached would interfere with the placement or transmission of their hearing device(s).

URL: https://www.sciencedirect.com/science/article/pii/S0079612320300856

Behavioral and Physiological Metrics

Tom Tullis, Bill Albert, in Measuring the User Experience (Second Edition), 2013

7.2.1 How Eye Tracking Works

Although a few different technologies are used, many eye-tracking systems, such as the one shown in Figure 7.2, use some combination of an infrared video camera and infrared light sources to track where the participant is looking. The infrared light sources create reflections on the surface of the participant's eye (called the corneal reflection), and the system compares the location of that reflection to the location of the participant's pupil. The location of the corneal reflection relative to the pupil changes as the participant moves his eyes.

Figure 7.2. An eye-tracking system from SMI (www.smivision.com). Infrared light sources and an infrared video camera are directly below the monitor. The system tracks the participant's eyes automatically in real time.

The first activity in any eye-tracking study is to calibrate the system by asking the participant to look at a series of known points; the system can then interpolate where he is looking based on the location of the corneal reflection (see Figure 7.3). Typically the researcher can check the quality of the calibration, usually expressed as the deviation, in degrees, along the X and Y visual planes. Deviations of less than one degree are generally considered acceptable, and less than one-half of a degree is very good. It is critical that the calibration is satisfactory; otherwise, the eye movement data should not be recorded or analyzed. Without a good calibration there will be a disconnect between what the participant is actually looking at and what you assume he is looking at. Following calibration, the moderator makes sure the eye movement data are being recorded. The biggest issue tends to be participants who move around in their seat. Occasionally the moderator must ask the participant to move back or forward, shift left or right, or raise or lower their seat to recapture the participant's eyes.
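For intuition about what those degree values mean on screen, the sketch below converts an on-screen gaze deviation from pixels to degrees of visual angle. The pixel pitch and viewing distance are hypothetical example values, not settings from any particular system.

```python
import math

def pixel_error_to_degrees(err_px, px_per_cm=38.0, distance_cm=65.0):
    """Convert an on-screen gaze error in pixels to degrees of visual angle."""
    err_cm = err_px / px_per_cm
    return math.degrees(math.atan(err_cm / distance_cm))

print(round(pixel_error_to_degrees(30), 2))  # ~0.7 deg: acceptable, but not very good
```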

Figure 7.3. An example of SMI software used to run an eye-tracking study and monitor eye movements in real time. The three windows contain study details (left), stimuli being tracked (top right), and eye being tracked (bottom right).

Information provided by an eye-tracking system can be remarkably useful in a usability test. Simply enabling observers to see where the participant is looking in real time is extremely valuable. Even if you do no further analyses of eye-tracking data, just this real-time display provides insight that would not be possible otherwise. For example, assume a participant is performing a task on a website and there's a link on the homepage that would take him directly to the page required to complete the task. The participant keeps exploring the website, going down dead ends, returning to the homepage, but never reaching the required page. In a situation like this, you would like to know whether the participant ever saw the appropriate link on the homepage or whether he saw the link but dismissed it as not what he wanted (e.g., because of its wording). Although you could subsequently ask participants that question, their memory may not be completely accurate. With an eye-tracking system you can tell whether the participant at least fixated on the link long enough to read it.

URL: https://www.sciencedirect.com/science/article/pii/B9780124157811000078

Detecting Concealed Knowledge From Ocular Responses

Matthias Gamer, Yoni Pertzov, in Detecting Concealed Information and Deception, 2018

Additional Ocular Measures

Besides providing information on eye movements and fixation positions, eye-tracking systems also record the occurrence of blinks as well as the pupil diameter. Both measures have been exploited by CIT studies. Regarding blink frequency, it was demonstrated that blinking rate is reduced following the presentation of crime-related details to knowledgeable examinees, and a discriminant analysis based solely on this measure allowed for correctly classifying 75% of guilty and 77% of innocent examinees (Leal & Vrij, 2010). While an early study found a rebound effect, with increases in blinking rate following the offset of critical CIT details (Fukuda, 2001), two more recent studies reported an opposite pattern, with decreased blinking rates even up to 5 s following stimulus offset (Peth et al., 2013, 2016). Interestingly, these delayed effects seemed to be more reliable than differences in blinking rates during stimulus presentation and yielded validity estimates around AUC = 0.72. Whereas eye-movement characteristics (number and duration of fixations) were moderately correlated with autonomic responses in the CIT, such correlations could not be observed for the number of blinks (Peth et al., 2013). This finding led to the hypothesis that different psychological processes underlie the pattern of ocular responses: whereas the fixation measures might be more related to an orienting response triggered by recognized CIT items, blinking rates might better reflect cognitive load induced by the recognition of crime-related details, or inhibitory processes aiming to monitor or control bodily responses in order to effectively conceal item recognition.
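A minimal sketch of how a blink count can be derived from eye-tracker output, assuming blinks appear as short runs of samples in which no pupil is detected. The duration thresholds and data format are illustrative assumptions, not parameters from the studies cited above.

```python
def count_blinks(valid, rate_hz=60, min_ms=50, max_ms=500):
    """Count blinks in a list of per-sample booleans (True = pupil detected).

    Runs of missing samples shorter than min_ms (noise) or longer than
    max_ms (track loss) are not counted as blinks; the thresholds are
    illustrative assumptions.
    """
    blinks, run = 0, 0
    for v in valid + [True]:            # sentinel closes any trailing run
        if not v:
            run += 1
        else:
            dur_ms = 1000 * run / rate_hz
            if run and min_ms <= dur_ms <= max_ms:
                blinks += 1
            run = 0
    return blinks

# Blink rate per epoch (e.g., during presentation vs. the 5 s after offset):
# rate = count_blinks(epoch_samples) / epoch_duration_in_minutes
```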

Changes in pupil diameter were also used to infer concealed knowledge in CIT examinations. The width of the pupil can be adjusted by modulatory influences of the autonomic nervous system. Whereas the parasympathetic system can induce a pupil constriction, opposite effects can be triggered by increases in sympathetic activation. In the CIT, it has been observed that the presentation of concealed details elicits a larger pupil dilation as compared to neutral alternatives in knowledgeable examinees (Bradley & Janisse, 1981; Janisse & Bradley, 1980; Lubow & Fein, 1996; Seymour, Baker, & Gaunt, 2013). Although this effect might be used to differentiate between guilty and innocent examinees, it is important to note that pupil diameter is strongly influenced by the amount of light that enters the eyes, thus making it necessary to carefully control visual stimulus material or to use auditory stimulation in the CIT. Furthermore, it has been demonstrated that pupil responses are tightly correlated with skin conductance data (Bradley, Miccoli, Escrig, & Lang, 2008). Therefore, it might be sufficient to record one of these measures in CIT examinations (for further details on autonomic measures in the CIT, see Chapter 1).

URL: https://www.sciencedirect.com/science/article/pii/B9780128127292000082

Gazing Toward the Future: Advances in Eye Movement Theory and Applications

Sarah Brown-Schmidt, ... Sun-Joo Cho, in Psychology of Learning and Motivation, 2020

2.1 Eye-tracking data

Modern eye-trackers produce data at different temporal resolutions, ranging from ~20 to 2000 Hz. At each time point, eye-tracking systems produce information about whether the eye(s) are blinking, moving, or fixating a location in space. This information can then be converted into time-series data indicating where, at each point in time, the person was fixating. This transformation requires some simplifying assumptions that are beyond the scope of inquiry for the current paper. Briefly, assumptions can include combining the time it takes to saccade to an object with the fixation time on that object (see McMurray, Klein-Packard, & Tomblin, 2019; McMurray, Tanenhaus, & Aslin, 2009). Similarly, blinks can be treated by adding the blink time to the fixation time of the subsequently fixated object (see Arnold, 2008). Lastly, if the researcher is interested in measuring fixations to particular objects, e.g., fixations to "the striped star," the researcher must make decisions about what counts as a fixation (e.g., how close the eyes need to be to the object before it is counted as a fixation).
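The sketch below illustrates these conventions on hypothetical sample labels: saccade and blink samples are folded into the fixation that follows them, yielding a continuous record of what was fixated at each time point. The sample format is an assumption for illustration.

```python
def to_timeseries(samples):
    """samples: list of (state, aoi) per time point, where state is
    'fix', 'sac', or 'blink', and aoi is None except during fixations."""
    out, pending = [], 0
    for state, aoi in samples:
        if state == "fix":
            out.extend([aoi] * pending)  # credit saccade/blink time to this fixation
            out.append(aoi)
            pending = 0
        else:
            pending += 1                 # transition sample awaiting the next fixation
    out.extend([None] * pending)         # trailing transition: leave uncoded
    return out

seq = [("fix", "target")] * 3 + [("sac", None)] * 2 + [("fix", "competitor")] * 4
print(to_timeseries(seq))  # 3 'target' samples followed by 6 'competitor' samples
```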

A visual illustration of this process for a single trial from a single person is provided in Fig. 2. In the left panel of the figure, individual fixations are illustrated as circles numbered 1–10, with the size of the circle corresponding to the length of the fixation, and grayscale coding corresponding to whether it was coded as a target fixation (black), a competitor fixation (dark gray), or a fixation to something else (light gray).

Fig. 2

Fig. 2. Left panel: Example display with hypothetical fixations superimposed. Fixations are numbered in order and the size corresponds to fixation duration. The grayscale coding indicates whether the fixation was coded as a target (black), competitor (dark gray), or unrelated fixation (light gray). Right panels: Illustration of how the fixation data are converted to a measure of what the participant was fixating over time, in binary form (top right) or multicategorical form (bottom right).

As the smallest analysis unit of data, the researcher has a single outcome measure from each time point t, trial l, participant j, and item i (denoted by y_tlji). In Cho et al. (2018), a binary outcome measure was used, indicating whether the participant was fixating the target (coded as y_tlji = 1) or not (coded as y_tlji = 0) at each time point. Given the display shown in Fig. 1, and the instruction to "Click on the striped star," if the participant was fixating the striped star, this would be coded as a target (y_tlji = 1) fixation; if they were fixating any other object, this would be coded as a non-target (y_tlji = 0) fixation. The top right panel of Fig. 2 illustrates this binary coding of target/non-target fixations for the time points on a single trial, with target fixations illustrated by a black line and non-target fixations illustrated by a gray line.

By contrast, in Cho et al. (2020a, 2020b), the gaze data were initially coded into three categories, representing whether the participant was fixating the target (y_tlji = 1), a competitor (y_tlji = 2), or an unrelated object (y_tlji = 3) at each time point. Given the same example, the competitor can be defined as the object that is temporarily consistent with the unfolding expression "The striped" (Eberhard et al., 1995). Thus fixations to the striped star would be coded as target fixations (y_tlji = 1), fixations to the striped heart would be coded as competitor fixations (y_tlji = 2), and fixations to anything else would be assigned to the third category (y_tlji = 3). The bottom right panel of Fig. 2 illustrates this three-way coding of fixations over time, with target fixations illustrated by a black line, competitor fixations by a dark gray line, and fixations to other areas illustrated by a light gray line. As will be explained in detail below, these multicategorical data (1, 2, or 3) are recoded into binary form in order to model multinomial processing.
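One simple version of such a recoding is the dummy-indicator scheme sketched below; the specific (e.g., nested) recoding used by Cho et al. may differ, so this is an illustration of the idea only.

```python
def to_binary(y):
    """Recode y in {1: target, 2: competitor, 3: unrelated} into
    one binary indicator series per category."""
    return {
        "target":     [1 if v == 1 else 0 for v in y],
        "competitor": [1 if v == 2 else 0 for v in y],
        "unrelated":  [1 if v == 3 else 0 for v in y],
    }

print(to_binary([1, 1, 2, 3, 1]))
# {'target': [1, 1, 0, 0, 1], 'competitor': [0, 0, 1, 0, 0],
#  'unrelated': [0, 0, 0, 1, 0]}
```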

URL: https://www.sciencedirect.com/science/article/pii/S0079742120300232

Enhancing Performance for Action and Perception

James N. Ingram, Daniel M. Wolpert, in Progress in Brain Research, 2011

Hand and arm movements during natural tasks

The naturalistic studies of eye movements reviewed in the previous section have been made possible by the development of wearable, self-contained eye-tracking systems (Hayhoe and Ballard, 2005; Land and Tatler, 2009; Pelz and Canosa, 2001). Two recent studies from our group have used wearable, self-contained systems to record hand (Ingram et al., 2008) and arm (Howard et al., 2009a) movements during natural everyday behavior. However, in contrast to studies of eye movements, which have invariably imposed specific tasks on the subject, we allowed our subjects to engage spontaneously in natural everyday behavior.

URL: https://www.sciencedirect.com/science/article/pii/B9780444537522000163