Who’s participating in research on Amazon’s Mechanical Turk?

[Methodological Psychology]

Mimicking 10 facial expressions gets you $1.60. Taking a 90-minute survey on your emotions is worth a buck. Throughout the social sciences, it's becoming increasingly common for researchers to employ Amazon's Mechanical Turk (an online marketplace where "workers" are paid to complete tasks offered by "requesters") in their empirical research. MTurk, as it's known, is efficient and inexpensive, making it an especially attractive research tool. But a big question remains: who are the workers, or in the case of research, the participants, who complete these tasks?

In a recent article published in Current Directions in Psychological Science, Gabriele Paolacci and Jesse Chandler review the latest research examining the use of MTurk as a participant pool. Demographically speaking, the MTurk workforce comprises more than 500,000 people from 190 countries, with about 75% of workers living in the United States or India. Paolacci and Chandler report that MTurk offers researchers a participant population that is more diverse than the typical college student sample, but still not representative of the population as a whole. According to the authors, "Workers tend to be younger (about 30 years old), overeducated, underemployed, less religious, and more liberal than the general population." Furthermore, within the US MTurk workforce, Asians are overrepresented while Blacks and Hispanics are underrepresented.

Paolacci and Chandler report that workers are motivated primarily by the size of the payout, but also by the intrinsic appeal of the tasks. Evidence also suggests that MTurk workers respond just as truthfully as, and are similarly attentive to, traditional participant samples. However, because obtaining future work on the site often depends on how workers completed previous tasks (accurately and on time), the authors highlight the possibility of demand characteristics within MTurk. Likewise, growing experience with research tasks, particularly economic games and problems, may produce practice effects that alter worker responses. The authors also caution that arbitrary factors in experimental design can affect participant selection, and they emphasize the need for researchers to take steps to understand and report the makeup of their participant population.

Ari says:

I am interested to see a similar study with the participant pool of CrowdFlower, a common alternative to Mechanical Turk. Are they a more representative group of "workers"?

