Over the past decade, research in psychology, sociology, and economics has increasingly incorporated online participant pools. These pools allow experimenters to increase the size and diversity of their samples, while also enabling experiments that would be nearly impossible to conduct in the lab, for example interactions among thousands of participants within social networks. Such experiments include virtual worlds in which networks of participants interact in ways that simulate complex social systems (Wisdom et al. 2013, Shirado et al. 2017, Jayles et al. 2017), cultural transmission (Centola et al. 2015, Carr et al. 2020), governance decisions in online communities (Salganik et al. 2006, Balietti et al. 2016), large-scale perception research (Lahdelma and Eerola 2020), and online techniques that combine human participants with machine-learning algorithms to explore high-dimensional perceptual representations in the human brain (Harrison et al. 2020). These new types of massive online experiments can help address the challenges of conducting basic and translational human behavior research during the COVID-19 pandemic while maintaining high standards of privacy and ethics, and they also provide novel ways to study some of the most critical issues of our time, ranging from the transmission of fake news to political instability within social networks (Lazer et al. 2018).
These new modes of experimentation require new tools. Existing online platforms such as jsPsych, PsychoPy, Gorilla, psiTurk, and OpenSesame are useful for creating single-participant studies, but provide very limited support for large-scale interactive experiments. Other existing platforms are limited in their built-in security and privacy mechanisms and in their automation of recruitment, payment, server setup, and data management (e.g., LIONESS Lab, nodeGame, WEXTOR, TurkServer), or in their ability to combine pre-existing materials such as pre-screening tasks, questionnaires, and stimulus types (e.g., Empirica, Dallinger, oTree, Breadboard). Moreover, some tools rely on subscriptions and proprietary code (e.g., Labvanced). To address these limitations, the Computational Auditory Perception Research Group at the Max Planck Institute for Empirical Aesthetics is developing a platform for running large-scale interactive studies involving hundreds or even thousands of human participants. Our platform provides built-in modular components that can easily be woven together to form complex experimental designs, such as adaptive algorithms that sample from subjective probability distributions (MCMCP, Sanborn and Griffiths 2008; GSP, Harrison et al. 2020), while also supporting a broad range of stimulus and response types, including audio, video, multiple choice, free text, sliders, and WebGL. In the coming year we plan to make these techniques accessible to the scientific community as a mature, fully documented, and user-friendly open-source software package that we call PsyNet. We have already developed a prototype of this technology and used it to design, deploy, and run massive interactive online experiments involving multiple interacting participants.
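To illustrate the kind of adaptive algorithm mentioned above, the following is a minimal sketch of Markov chain Monte Carlo with People (MCMCP; Sanborn and Griffiths 2008), in which the participant acts as the acceptance step of a Markov chain: on each trial they choose between the current stimulus and a random proposal, and their choices drive the chain toward their subjective distribution. Here a simulated participant with a hypothetical Gaussian subjective distribution stands in for a human; all function names are illustrative and do not reflect the PsyNet API.

```python
import math
import random

def subjective_prob(x):
    # Hypothetical stand-in for a participant's subjective probability
    # density over a one-dimensional stimulus (a Gaussian peaked at 3.0).
    return math.exp(-(x - 3.0) ** 2 / 2.0)

def simulated_choice(current, proposal):
    # In MCMCP the participant picks one of two stimuli. If choices follow
    # Luce's choice rule, this implements Barker acceptance, so the chain's
    # stationary distribution matches the subjective distribution.
    p_cur, p_prop = subjective_prob(current), subjective_prob(proposal)
    return proposal if random.random() < p_prop / (p_cur + p_prop) else current

def run_mcmcp(n_trials=5000, proposal_sd=1.0, seed=0):
    random.seed(seed)
    state, samples = 0.0, []
    for _ in range(n_trials):
        proposal = state + random.gauss(0.0, proposal_sd)  # propose a variant
        state = simulated_choice(state, proposal)          # "participant" picks
        samples.append(state)
    return samples

samples = run_mcmcp()
mean = sum(samples[1000:]) / len(samples[1000:])  # should approach the peak at 3.0
```

In a real experiment the `simulated_choice` function is replaced by a two-alternative forced-choice trial presented to a human participant; the sampled chain then estimates that participant's subjective distribution over the stimulus space.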
In its full form, PsyNet will provide nine core features that together transform the design of large-scale virtual-lab experiments: (1) efficiency and reusability; (2) automatic recruitment and compensation; (3) database services and efficient storage; (4) automatic replication of the entire experimental pipeline from source code; (5) security and privacy, specifically maintaining consistency with GDPR regulations and MPG ethical standards; (6) built-in support for diverse recruitment, thereby significantly diversifying the participant population (Henrich et al. 2010); (7) automatic data quality assurance; (8) an extensive real-time experiment monitoring dashboard; (9) open-ended integration with powerful cloud-based computation engines. The synergistic combination of these features in PsyNet enables the development and deployment of large-scale experiments that would otherwise be extremely difficult to perform.
Thus far, we have developed a prototype of PsyNet (see architecture in Figure 1 and dashboard in Figure 2) and have already used it for several experiments (Harrison et al., 2020; see also preliminary results in Figure 3). While our prototype has mainly relied on external recruiting services (Amazon Mechanical Turk), the final product will also allow researchers to deploy PsyNet on a stand-alone server with increased privacy and security.
PsyNet builds on the Dallinger framework (Dallinger Readthedocs), an open-source software project initially developed by Jordan Suchow (Suchow), Tom Morgan, and Tom Griffiths. We collaborate with the software consultancy Jazkarta (Jazkarta) to maintain Dallinger and to develop new Dallinger features. In 2019 we organized a three-day workshop to train researchers to use Dallinger in their own research (Dallinger Workshop).
Balietti, S., Goldstone, R. L. and D. Helbing (2016). Peer review and competition in the Art Exhibition Game. Proceedings of the National Academy of Sciences 113.30, 8414-19.
Carr, J. W., Smith, K., Culbertson, J. and S. Kirby (2020). Simplicity and informativeness in semantic category systems. Cognition 202, Article 104289.
Centola, D. and A. Baronchelli (2015). The spontaneous emergence of conventions: An experimental study of cultural evolution. Proceedings of the National Academy of Sciences 112.7, 1989-94.
Harrison, P. M. C., Marjieh, R., Adolfi, F., van Rijn, P., Anglada-Tort, M., Tchernichovski, O., Larrouy-Maestri, P. and N. Jacoby (2020). Gibbs sampling with people. Advances in Neural Information Processing Systems (oral presentation). arXiv preprint arXiv:2008.02595.
Henrich, J., Heine, S. J. and A. Norenzayan (2010). Most people are not WEIRD. Nature 466.7302, 29.
Jayles, B., Kim, H., Escobedo, R., Cezera, S., Blanchet, A., Kameda, T., Sire, C. and G. Theraulaz (2017). How social information can improve estimation accuracy in human groups. Proceedings of the National Academy of Sciences 114.47, 12620-25.
Lahdelma, I. and T. Eerola (2020). Cultural familiarity and musical expertise impact the pleasantness of consonance/dissonance but not its perceived tension. Scientific Reports 10.1, 1-11.
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F. and M. J. Metzger (2018). The science of fake news. Science 359.6380, 1094-96.
Salganik, M. J., Dodds, P. S. and D. J. Watts (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science 311.5762, 854-56.
Sanborn, A. and T. L. Griffiths (2008). Markov chain Monte Carlo with people. Advances in Neural Information Processing Systems, 1265-72.
Shirado, H. and N. A. Christakis (2017). Locally noisy autonomous agents improve global human coordination in network experiments. Nature 545.7654, 370-74.
Wisdom, T. N., Song, X. and R. L. Goldstone (2013). Social learning strategies in networked groups. Cognitive Science 37.8, 1383-1425.
Dr. Peter Harrison
Forschungsgruppe Computational Auditory Perception