Vision isn’t a simple matter of the eyes capturing a video feed which is then somehow piped inside your head. In fact, the image projected onto your retina undergoes a huge amount of processing, first inside the eyes and then in various parts of the brain, in which information about the scene is extracted and combined with prior knowledge about the world to produce informed guesses about what is out there and how to respond to it appropriately. This process isn’t fully understood, even in humans, but we know that complex visual processing tasks, like recognizing objects, are built up from simpler components, such as edge detectors, which, as the name suggests, use local contrast to find the boundaries of objects.
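To make the idea concrete, here is a minimal sketch in Python of a contrast-based edge detector. This is a toy illustration of the principle, not a model of any real neural circuitry: it simply flags positions in a one-dimensional brightness profile where neighbouring values differ sharply.

```python
def detect_edges(brightness, threshold=0.5):
    """Return indices where the contrast between neighbouring samples
    exceeds a threshold -- i.e. candidate object edges."""
    edges = []
    for i in range(len(brightness) - 1):
        contrast = abs(brightness[i + 1] - brightness[i])
        if contrast > threshold:
            edges.append(i)
    return edges

# A 1-D "scene": dark background (0.1) with a bright object (0.9) in the middle.
scene = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.1, 0.1]
print(detect_edges(scene))  # prints [2, 5]: the two object boundaries
```

Real visual systems do something far more elaborate, in two dimensions and at many orientations and scales, but the core trick is the same: respond where contrast is high.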
Along with many of my colleagues, I’m interested in understanding how bee brains process visual information, and where their techniques resemble or differ from our own. One thing that is certain is that, as in humans, the processing chain starts with simple feature detectors that pass information on to more complex visual processing areas of the brain. So if we want to model bees’ visual processing accurately, we need to start with accurate answers to questions like: how good are bees at detecting changes in the orientation of object edges?
In this paper, we investigated, for the first time, how good bumblebees are at differentiating between angles. We set them a foraging task in which a food reward was found in artificial flowers displaying a black or magenta bar at a particular angle; the bees had to learn to visit the rewarding flowers while ignoring decoys that contained only disappointing water and whose bars were at a slightly different angle. We found that bumblebees could learn to identify bars that differed by just 7°, performing much better than two species of honeybee that had previously been tested in slightly different ways. It’s not yet clear whether they could do even better. When we tested a small number of bees on differences of just 5°, they never reached the threshold of 80% correct choices that we’d defined as showing they could solve the problem, but they did show evidence of learning: they gradually made more correct choices as they gained experience, which suggests they could perceive some difference between the bars.
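Why set the criterion at 80% rather than simply “better than 50%”? A quick back-of-the-envelope calculation shows how stringent it is. The trial numbers below are hypothetical (the real counts are in the paper), but for, say, 100 choices between rewarding and decoy flowers, a bee guessing at random would average 50% correct, and reaching 80% by luck alone is astronomically unlikely:

```python
from math import comb

def binomial_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the probability of doing at
    least this well purely by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers for illustration: 80 or more correct out of 100
# choices, when each guess has a 50% chance of being right.
print(binomial_tail(80, 100))  # roughly 6e-10 -- far less than one in a billion
```

So a bee that meets the criterion has almost certainly learned the discrimination; the harder interpretive problem, discussed below, is what to conclude about bees that fall short of it.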
This really highlights the difficulty of trying to deduce even basic sensory capabilities from an animal’s behaviour. Clearly, a bee cannot make a particular choice if it cannot perceive any difference between the options, but there are all sorts of reasons why it might not use the evidence of its senses in the ways we hope. For example, if two angles are difficult to tell apart, it might be quicker and easier to just try every flower than to do the hard work of figuring out which ones have food available. In this study we presented the bars in two different ways: printed onto paper and laminated, or displayed on a computer monitor. The bees learned to solve the task much more quickly with the paper flowers than with the monitor, a difference that other studies have also reported. Even when two experimental set-ups appear to us to be essentially equivalent, we don’t always know whether they seem that way to a bee.
I often talk about behavioural experiments as allowing us to ask animals questions about what they know about the world, or how they go about solving problems, but the truth is that we don’t always know what a bee thinks it is being asked to do in a given situation. There is no way to simply ask a bee what it sees; instead, we have to ask it how it uses what it sees to help it gather food, and it takes many strands of evidence before we can put together some sort of picture of what it’s like to see the world through a bee’s eyes. We still don’t know, and our best guesses may turn out to be mistaken, but we are getting closer all the time.