BrainsCAN postdoctoral award

The lab is soliciting postdoctoral applicants to Western’s prestigious BrainsCAN postdoctoral fellowship program. Applicants should have a background in cognitive neuroscience, and expertise in programming and in the analysis of fMRI or EEG/MEG data. A background in computer science or engineering may also be suitable, provided that the applicant has experience with neuroimaging data analysis and a keen interest in understanding human brain function.

The successful applicant will play a key role in neuroimaging and computational investigations of visual cognition. They should enjoy working as part of an interdisciplinary team and be able to mentor members of the lab. Furthermore, they will join the Brain and Mind Institute (BMI), one of the leading centres in cognitive neuroscience in Canada. The BMI hosts a full range of ultra-high-field, research-dedicated MRI scanners and state-of-the-art research facilities for cognitive, behavioural, and neurophysiological testing.

Dr. Mur and the applicant will develop the proposal together, in conjunction with other labs at Western. Please contact Dr. Mur in the first instance about developing an application. Send a detailed CV and a cover letter explaining why the research in the lab interests you, and how your skills and abilities are suitable.

Review of applications will start by March 31, 2019. The deadline for submitting the final proposal is May 15, 2019. In brief, applicants must be no more than 6 years post-PhD, and the salaries are highly competitive (CAD 55–70K per year, with additional benefits). The proposed research should fit within the broad BrainsCAN remit, which covers research addressing fundamental questions about how we learn, think, move, and communicate.

Moving to Western University

I will soon be joining the Brain and Mind Institute (BMI) at Western University, London ON, Canada. I am excited to start the Visual Cognition Lab in January 2019. We will use psychophysics, functional magnetic resonance imaging, and computational modeling to investigate how the human brain makes sense of the outside visual world.

If you are interested in joining the lab, feel free to contact me. I am currently accepting applications for MSc / PhD students to join in fall 2019. Students can apply to the Psychology Graduate Program (deadline Jan 4) or the Neuroscience Graduate Program (deadline Jan 18).

Deep neural nets outperform oracle features but not oracle categories at explaining object similarity judgments

Check out our latest paper: explaining human behavior with deep nets and oracle models

We perceive and recognize objects with ease; however, the computational task performed by the brain is far from trivial. Until recently, computational models of object vision failed to even come close to human object-recognition performance. This changed with the advent of deep convolutional neural networks. Deep neural nets are loosely inspired by the human brain: they consist of units (“neurons”) organized in multiple layers (“brain regions”) and connected by weights (“synapses”) that can be modified through training.
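To make the units/layers/weights analogy concrete, here is a minimal, illustrative sketch of a feedforward net in NumPy. The layer sizes and random weights are placeholders, not the architecture used in the paper; a trained convolutional net would learn its weights from labeled images.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied at each unit ("neuron")
    return np.maximum(0.0, x)

# A stack of layers ("brain regions") connected by weight matrices
# ("synapses"). In a real deep net these weights are modified by
# training; here they are random, purely for illustration.
layer_sizes = [64, 32, 16, 10]  # input -> two hidden layers -> output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Propagate an input through the layers, one weighted sum +
    # nonlinearity per layer
    for w in weights:
        x = relu(x @ w)
    return x

image_features = rng.standard_normal(64)  # stand-in for an input image
output = forward(image_features)
print(output.shape)  # (10,)
```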

Despite the impressive performance of deep neural nets at object recognition, it is unclear whether they are good models of human perception and cognition. How well do deep neural nets capture human behavior on more complex cognitive tasks? And how does their performance compare to that of non-computational conceptual (“oracle”) models?

We address these questions using a well-established cognitive task – judging object similarity. Human observers performed similarity judgments for a set of 92 object images from a wide range of categories, including animate (animals, humans) and inanimate (fruits, tools) objects. We tested how well the judgments could be explained by internal representations of deep neural nets, as well as by feature labels (e.g. “eye”) and category labels (e.g. “animal”) generated by human observers (who serve as an “oracle” here).

We show that deep neural nets, despite not being trained to judge object similarity, can explain a significant amount of variance in the human object-similarity judgments. The deep nets outperform the oracle features in explaining the similarity judgments, suggesting that they capture the object properties underlying similarity judgments better than feature labels do. However, the deep nets are outperformed by the oracle categories, suggesting that they fail to fully capture the higher-level, more abstract categories that are most relevant to humans. By comparing object representations between deep nets and simpler oracle models, we gain insight into the aspects of deep nets that contribute to their explanatory power, and into those that are missing and need to be improved.