Eye Tracker—Data

With the help of Sam P. Leon, whom Juan Rosas kindly loaned to me for a week, I've collected some pilot data with the eye tracker to sort out the little procedural details that are bound to come up with any new method. First, I'll describe the experiment and the behavioral data obtained so far. Then, I'll share my observations on procedural details with the eye tracker. Finally, I'll show some of the tools I'm developing to handle the data, along with very preliminary results.

Method

The method is the "Learning Game" described elsewhere in this blog; see The Learning Game and how it works. (Video)

General Design

The experiment used a simple "latent inhibition" design. Both groups underwent "magazine training" with each of the four possible "USs." Then, Group LI received six trials with a Red sensor (the CS) positioned at the top-middle sensor position in the panel, while Group Control simply sat and observed the context for the same amount of time.

Both groups then received 10 conditioning trials in which the sensor was paired with the blue "Learian" spaceship. Following those trials, both groups received 5 extinction trials with the Red sensor.

Procedurally, participants came to the lab, went through the eye-tracking calibration routine that I built into the game, and then began the main gameplay.

Behavioral Results Overview

Behaviorally, we obtained what is very likely a robust latent-inhibition effect. The data below show participants' anticipatory responding (keypresses to charge the appropriate weapon) during the Red sensor (CS) prior to the arrival of the spaceship. As the graph shows, Group Control acquired that response significantly faster than Group LI.

The data below show the two groups' average responses per second during each second of the CS prior to the US, on each trial.

Conditioning Data
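For concreteness, the measure is just keypress counts binned into 1-second slices of the CS, averaged over participants. Here is a minimal C# sketch of that binning; representing each keypress as a time in seconds from CS onset is my own assumption, not the game's actual data format.

```csharp
// Sketch of the response-rate measure: count keypresses in each 1-s bin of
// the CS, giving responses per second for every second of the CS on a trial.
// The keypress representation (seconds from CS onset) is hypothetical.
using System;
using System.Collections.Generic;

public static class ResponseRate
{
    // keypressTimes: times of keypresses, in seconds from CS onset.
    // csDurationSeconds: number of 1-s bins in the CS (e.g., 5 or 20).
    public static double[] PerSecond(IEnumerable<double> keypressTimes, int csDurationSeconds)
    {
        var bins = new double[csDurationSeconds];
        foreach (var t in keypressTimes)
        {
            int bin = (int)Math.Floor(t);
            if (bin >= 0 && bin < csDurationSeconds)
                bins[bin]++;                 // count the response in its second
        }
        return bins;                         // responses per second of the CS
    }
}
```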

The extinction data are below, showing responding during each second of the CS on each trial. No US was present, so all 20 s of responding represent "anticipatory" responding.

ExtinctionData


These data show that the control group continued to respond more than the latent-inhibition group. Notice that responding peaks at about second 6, the first second after the US was expected to occur; that is a simple ramping up of responding. The drop after that second, however, indicates that participants had learned when the US occurred.

What is particularly interesting in these data is that the LI group learned that the US was coming more poorly than did the control group. Yet, to the extent that they learned it, they also learned when it was occurring. Learning "when" something was about to happen appeared to parallel learning "what" was about to happen (sorry Andy).

Observations on eye tracking

Now, what about the eye-tracking data? The machine produces 21 pieces of data per time sample (that's all I bother to save, anyway) and roughly 32,000 time samples over the course of the experiment. As you might guess, I'm still reducing the data, but I can report some observations regarding the procedure.

Overall, the SMI system is amazingly robust. It calibrates rapidly on 9 points and maintains that calibration even when the participant leaves their seat and returns. The accuracy is very good as well.

There are environmental issues that do matter. The room should be reasonably lit, not dim: the tracker had problems with very dark-eyed participants when the overall lighting was somewhat dim, presumably because, with a dilated pupil, it had difficulty discriminating the pupil boundaries. Lighting in the room should be adjustable across a range of illumination levels so it can be set just right for each participant.

Women's makeup, surprisingly, appears to matter. Some makeups seem to reflect infrared light much better than average skin or other makeups, which causes the tracker problems in finding the eyes; we had significant trouble getting one participant calibrated. Glasses can also affect accuracy. At least one participant had relatively thick glasses, and the tracker appeared to be slightly off with this participant, although the deviation appeared constant: regardless of where the participant looked, the tracker reported the gaze as slightly below and to the left.

The tracker comes with a limited version of SMI's analysis software (BeGaze), which has all the functionality of the full version but does not allow exporting of data transformations. I would recommend getting the full version for simple experiments, and perhaps even complex ones. For my work, however, I decided to write my own software to extract the necessary variables for analysis.

The eye tracker exports its data in a clean, tab-delimited format that is ridiculously easy to read and parse with C# (or any other .NET) code.
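To illustrate how straightforward the parsing is, here is a minimal C# sketch. The comment prefix, the assumption that the first non-comment row holds the column names, and the column names themselves ("Time", "L POR X [px]", "L POR Y [px]") are placeholders of mine; they should be checked against the actual export file.

```csharp
// Minimal sketch of reading a tab-delimited gaze export.
// Assumed (not taken from the SMI docs): comment lines start with "#",
// the first non-comment row holds the column names, and the named
// columns exist -- substitute the real names from your export.
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

public sealed class GazeSample
{
    public double Time;   // timestamp of the sample
    public double X;      // horizontal point of regard (pixels)
    public double Y;      // vertical point of regard (pixels)
}

public static class GazeReader
{
    public static List<GazeSample> Read(string path)
    {
        var samples = new List<GazeSample>();
        string[] header = null;
        int iTime = -1, iX = -1, iY = -1;

        foreach (var line in File.ReadLines(path))
        {
            if (line.Length == 0 || line.StartsWith("#"))
                continue;                                   // skip blank and comment lines

            var fields = line.Split('\t');

            if (header == null)
            {
                // First non-comment row is assumed to hold the column names.
                header = fields;
                iTime = Array.IndexOf(header, "Time");
                iX = Array.IndexOf(header, "L POR X [px]");
                iY = Array.IndexOf(header, "L POR Y [px]");
                if (iTime < 0 || iX < 0 || iY < 0)
                    throw new InvalidDataException("Expected columns not found; adjust the names.");
                continue;
            }

            // Skip message/event rows that lack the full set of columns.
            if (fields.Length != header.Length)
                continue;

            if (double.TryParse(fields[iTime], NumberStyles.Float, CultureInfo.InvariantCulture, out var t) &&
                double.TryParse(fields[iX], NumberStyles.Float, CultureInfo.InvariantCulture, out var x) &&
                double.TryParse(fields[iY], NumberStyles.Float, CultureInfo.InvariantCulture, out var y))
            {
                samples.Add(new GazeSample { Time = t, X = x, Y = y });
            }
        }
        return samples;
    }
}
```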

At present I am writing software to take in the gaze data and pupil diameters, assess where people direct their attention, and compute a statistic describing how well the gaze points follow a specified path over time. A screenshot is below.

The data shown below come from one participant in Group LI, selected at random.

These dots represent all the points at which eye gaze was recorded during the first 20-second exposure to the CS. The yellow area around the sensor is the "area of interest," defined by the sensor dimensions plus a measure of tracking error that I calculated from an event at the end of the game when the participant's fixation was known. The green circle represents the average deviation of the points from the center of the sensor.
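For readers curious what the area-of-interest test and the green circle amount to computationally, here is a rough sketch (reusing the GazeSample type from the parsing sketch above). The rectangle-plus-margin test and the mean-distance calculation are my own reconstruction of the idea, not the actual analysis code.

```csharp
// Sketch of the area-of-interest test and the average deviation from the
// sensor's center. The rectangle, the error margin added to it, and the
// method names are illustrative choices, not the experiment's real values.
using System;
using System.Collections.Generic;
using System.Linq;

public static class AoiStats
{
    // Fraction of gaze samples landing inside the sensor rectangle after
    // growing it by the tracker-error margin on every side.
    public static double ProportionInAoi(IEnumerable<GazeSample> samples,
                                         double left, double top,
                                         double width, double height,
                                         double errorMargin)
    {
        var pts = samples.ToList();
        int hits = pts.Count(p =>
            p.X >= left - errorMargin && p.X <= left + width + errorMargin &&
            p.Y >= top - errorMargin && p.Y <= top + height + errorMargin);
        return pts.Count == 0 ? 0.0 : (double)hits / pts.Count;
    }

    // Radius of the "green circle": mean Euclidean distance of the gaze
    // points from the center of the sensor.
    public static double MeanDeviation(IEnumerable<GazeSample> samples,
                                       double centerX, double centerY)
    {
        return samples.Average(p =>
            Math.Sqrt((p.X - centerX) * (p.X - centerX) +
                      (p.Y - centerY) * (p.Y - centerY)));
    }
}
```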

PE1

As the dots clearly show, the participant did look at the sensor during this first pre-exposure. Interestingly, they still looked at it during the sixth and final pre-exposure, shown below.

PE6

The next figure shows a smaller sample: the gaze points during the initial 5 seconds of the CS on the first conditioning trial.

C1

Again, this participant looked directly into the sensor, although at this point it had been seen 7 times without consequence.

The following image shows the same participant’s gaze during the initial five seconds of the last conditioning trial.

C10

Notice that the data are much more uniform. The participant did look into the center but maintained a tighter pattern, looking up into the contextual space and at the two guns at the top of the screen, with more looks at the gun on the right. That pattern is probably meaningful, because the US appeared from that corner of the screen. The pattern was even more evident in other subjects: it appears that people initially glance at the CS, then look at where the US should appear, and go back and forth.

It is that very interesting pattern that I need to quantify, and I have some ideas on how to do that effectively; one possible approach is sketched below. If the result replicates, it is very meaningful to me: it will allow measuring what is likely a pure conditioned response, one independent of instructions (which might affect how participants respond on the keyboard). When the CS and US are associated, the presence of the CS directs visual attention to the location where the US would arrive.
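One possible way to quantify the back-and-forth, purely my own guess at an approach rather than anything built into the tracker's software, is to classify each gaze sample into an area of interest and count direct alternations between the CS area and the US-arrival area; more alternations would indicate more anticipatory checking of where the US should appear. A sketch, again using the GazeSample type from above:

```csharp
// Sketch of one way to quantify the CS <-> US-location "back and forth":
// classify each sample into an AOI and count CS->US / US->CS transitions.
// The AOI rectangles and the transition-count statistic are illustrative.
using System.Collections.Generic;

public enum Aoi { Cs, UsLocation, Elsewhere }

public static class ScanpathStats
{
    static bool Inside(GazeSample p, (double L, double T, double W, double H) r) =>
        p.X >= r.L && p.X <= r.L + r.W && p.Y >= r.T && p.Y <= r.T + r.H;

    public static Aoi Classify(GazeSample p,
                               (double L, double T, double W, double H) csRect,
                               (double L, double T, double W, double H) usRect)
    {
        if (Inside(p, csRect)) return Aoi.Cs;
        if (Inside(p, usRect)) return Aoi.UsLocation;
        return Aoi.Elsewhere;
    }

    // Count direct alternations between the CS area and the US-arrival area,
    // ignoring samples that fall elsewhere on the screen.
    public static int CsUsTransitions(IEnumerable<GazeSample> samples,
                                      (double L, double T, double W, double H) csRect,
                                      (double L, double T, double W, double H) usRect)
    {
        int transitions = 0;
        Aoi? last = null;
        foreach (var p in samples)
        {
            var aoi = Classify(p, csRect, usRect);
            if (aoi == Aoi.Elsewhere) continue;          // only track the two regions of interest
            if (last.HasValue && aoi != last.Value) transitions++;
            last = aoi;
        }
        return transitions;
    }
}
```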

Below are the data from the first extinction trial, so they show a 20-s sample.

E1

Notice that the participant again observes the CS and attends to the area where the US would appear, as well as areas toward the upper center of the screen. As the pattern unfolds over time, he or she tends to look at the CS, then at the US area, back and forth, and then begins looking at the CS and other areas.

The final image shows the last trial of extinction.

E5

The participant still attends to the CS, but there is much less searching of the area where the US arrives. I'm fairly excited about these data. I have some minor potential confounds to sort out, but I am very optimistic that the tracker, in combination with the behavioral data, will provide important information on the relationships between learning "what," "when," and "where" in associative learning.

If the anticipatory US tracking replicates, then I can (though I'm not terribly interested in it...) distinguish the contributions of changes in attention, changes in learning rates, and learning of CS-Nothing associations to latent inhibition. How, exactly, shall remain top secret at this point!
