stranger shores
Thoughts on neuroscience and the mind
Sunday 9 January 2011
F1 data visualisation
Sports statistics giving you a permanent headache? How about some data visualisation to ease your pain...
I have been experimenting with how best to display a season of Formula 1 results. There are a few pieces of information that are important when evaluating a driver's performance. As well as the obvious race result, it's also important to show qualifying performance, and to contrast a driver's performance with that of their teammate (because of course a McLaren is not a Minardi!)
The grid layout displayed in the picture above is the best I have come up with so far. Each race is listed on the left hand side (Australia, Malaysia, etc.) while each driver is listed on the top row. Each driver's race result is represented by a coloured circle. The left semi-circle is qualifying performance, and the right semi-circle is the actual race result. Using this display technique it is easy to pick out which drivers fare better in qualifying or race conditions - just concentrate on the left or right side of the circles. The best performing drivers have the biggest circles (the Ferrari pair of Räikkönen and Massa in this example). As well as size, you will notice the circles are also coloured, showing the all-important teammate comparison - green indicates better and red worse performance than the teammate. The red/green contrast is quite strong in the screenshot shown, so focusing attention on colour (rather than size) gives a quick and easy-to-read indication of which of the two team drivers is doing better.
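For anyone who wants to tinker, here is a minimal matplotlib sketch of the split-circle idea. All of the results data and the exact size/colour rules below are invented for illustration, just like the screenshot:

    # A minimal sketch of the split-circle grid, built from matplotlib Wedges.
    # All of the results below are invented for illustration.
    import matplotlib.pyplot as plt
    from matplotlib.patches import Wedge

    races = ["Australia", "Malaysia", "Bahrain"]
    drivers = ["Raikkonen", "Massa", "Alonso", "Hamilton"]
    # (qualifying position, race position) for each race/driver pair - made up!
    results = {
        ("Australia", "Raikkonen"): (3, 1), ("Australia", "Massa"): (6, 5),
        ("Australia", "Alonso"): (2, 2), ("Australia", "Hamilton"): (4, 3),
        ("Malaysia", "Raikkonen"): (1, 3), ("Malaysia", "Massa"): (2, 5),
        ("Malaysia", "Alonso"): (4, 1), ("Malaysia", "Hamilton"): (5, 2),
        ("Bahrain", "Raikkonen"): (3, 3), ("Bahrain", "Massa"): (1, 1),
        ("Bahrain", "Alonso"): (4, 5), ("Bahrain", "Hamilton"): (2, 2),
    }
    teammate = {"Raikkonen": "Massa", "Massa": "Raikkonen",
                "Alonso": "Hamilton", "Hamilton": "Alonso"}

    def size(position, n_cars=22):
        # Bigger semi-circle for a better (lower) position.
        return 0.45 * (n_cars - position) / n_cars

    fig, ax = plt.subplots(figsize=(6, 4))
    for y, race in enumerate(races):
        for x, driver in enumerate(drivers):
            quali, finish = results[(race, driver)]
            tm_quali, tm_finish = results[(race, teammate[driver])]
            # Left semi-circle = qualifying, right semi-circle = race result;
            # green = beat the teammate, red = beaten by the teammate.
            q_colour = "green" if quali < tm_quali else "red"
            r_colour = "green" if finish < tm_finish else "red"
            ax.add_patch(Wedge((x, y), size(quali), 90, 270, color=q_colour))
            ax.add_patch(Wedge((x, y), size(finish), -90, 90, color=r_colour))
    ax.set_xticks(range(len(drivers)))
    ax.set_xticklabels(drivers)
    ax.set_yticks(range(len(races)))
    ax.set_yticklabels(races)
    ax.set_xlim(-0.5, len(drivers) - 0.5)
    ax.set_ylim(len(races) - 0.5, -0.5)  # first race at the top
    ax.set_aspect("equal")
    plt.show()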
This kind of display could be used for many other purposes aside from displaying F1 results.
By the way - I know the results are not completely accurate! This is for illustration purposes only.
Differences between colour codes
It is known that many cells throughout the visual system are sensitive to coloured input. From the retina, through the LGN, and up to cortical regions V1 and V4, many neurons show selectivity for specific colours. However, exactly what these cells are coding for is still uncertain. There are three main possibilities.
1) Colour biases
A neuron may show colour selectivity as a secondary characteristic, and this may be functionally irrelevant. For example, consider the case of orientation selective cells in V1, or motion selective cells in V5, whose primary job is to code for these properties. Just because a cell shows some colour sensitivity does not mean that this information is used in a useful way by later stages. These biases may result from 'random cone clustering' (Conway et al., 2008). Consistent with this proposal is the fact that these colour biases change when the luminance of the input changes. This should not happen with true hue selectivity.
2) Wavelength selectivity.
Although we perceive a unified colour at any one place and time, the input is actually decomposed by the three cone-type photoreceptors, which respond to different wavelengths. Cells near the bottom of the visual system (the retina and LGN) appear to code for colour in this way. For example, seeing yellow is generated by the correct ratio of 'red' and 'green' wavelength receptor activation, although there is no hint of red or green in the banana (see the sketch after this list)! This leads on to the third possibility.
3) Perceptual colour / hue selectivity.
It is logical to expect that at some stage there exist neurons whose activity correlates with perceived colour. At this stage a 'yellow' neuron would respond to our banana.
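To make point 2 concrete, here is a toy numerical sketch. The Gaussian sensitivity curves are rough stand-ins I have made up, not real cone fundamentals, but they show how a pure yellow light and a red + green mixture can drive the cones in the same ratio:

    # Toy illustration of wavelength coding: 'yellow' as a ratio of cone responses.
    # The Gaussian curves below are crude stand-ins, not real cone fundamentals.
    import numpy as np

    def cone(peak_nm, width_nm=60):
        # A made-up Gaussian sensitivity curve peaking at peak_nm.
        return lambda wl: np.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2)

    L_cone, M_cone = cone(565), cone(535)  # rough L and M peak sensitivities

    def lm_ratio(wavelengths, intensities):
        # L:M response ratio to a mixture of monochromatic lights.
        L = sum(i * L_cone(w) for w, i in zip(wavelengths, intensities))
        M = sum(i * M_cone(w) for w, i in zip(wavelengths, intensities))
        return L / M

    print(lm_ratio([580], [1.0]))             # monochromatic yellow: ~1.28
    print(lm_ratio([540, 620], [1.0, 1.95]))  # green + red mixture: also ~1.28

    # The two stimuli drive the cones in (almost) the same ratio, so the early
    # visual system cannot tell them apart - both are seen as yellow.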
A number of possibilities have been raised here. Evidence suggests there is a dissociation between wavelength and perceptual colour codes, with the former associated with early visual areas (retina, LGN, and V1) and the latter with V4 (refs).
Brouwer & Heeger (2009) have recently probed colour coding in different areas using multivariate pattern analysis (MVPA). This is an exciting new method for teasing apart spatially overlapping neural representations using conventional brain scanning measurements. Essentially the computer is given the results of a brain scan and asked to do its best to dissociate between different classes of activity. The computer is able to use some pretty fancy statistical techniques, so it does a far better job than an old-fashioned human eyeballing approach could hope for. They show that a classifier is able to correctly predict the stimulus colour based on activity patterns in V1, V2, V3, V4, VO1, and LO1. However, only in V4 and VO1 is a gradual change in perceptual hue mirrored by a gradual change in neural activity patterns. Thus these areas are most likely strongly involved in the perceptual coding of hue.
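For a flavour of how MVPA works, here is a toy sketch using scikit-learn on simulated 'voxel' data. This is not Brouwer & Heeger's actual pipeline; it just illustrates the decode-colour-from-activity-patterns idea:

    # Toy MVPA: decode stimulus colour from simulated 'voxel' patterns.
    # The data are simulated here, not real fMRI measurements.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_voxels, trials_per_colour = 50, 40
    colours = ["red", "green", "blue", "yellow"]

    # Each colour evokes a weak but consistent spatial pattern across voxels,
    # buried in much larger trial-to-trial noise.
    patterns = {c: rng.normal(0, 1, n_voxels) for c in colours}
    X = np.vstack([patterns[c] + rng.normal(0, 3, (trials_per_colour, n_voxels))
                   for c in colours])
    y = np.repeat(colours, trials_per_colour)

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")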
Tying these results into the three possible signal types described above, it would be very interesting to know whether the classifier is, in some cases, relying on the first type of signal - that is, colour bias signals, which in this case would be an artefact. Brouwer & Heeger (2009) report best performance using the activity patterns in V1, but could this be due to a relatively large proportion of colour-biased cells - that is, a colour signal that isn't useful for perception? To determine the perceptual relevance of colour-related activity in V1, the Brouwer & Heeger (2009) experiment could be rerun with each specific hue presented at a number of different luminance levels (a sketch of the stimulus conditions is below). This may greatly reduce colour-related activity in V1, because hue biases in V1 are often altered or abolished when stimuli are raised or lowered in luminance (Solomon & Lennie, 2007).
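As a rough sketch of what those stimulus conditions might look like (using HLS lightness as a crude proxy for luminance - a real experiment would calibrate the display properly):

    # Sketch of the proposed control: every hue shown at several luminance levels.
    # HLS lightness is only a crude proxy for luminance.
    import colorsys

    hues = [0.0, 0.17, 0.33, 0.5, 0.67, 0.83]  # six hue angles around the circle
    lightnesses = [0.3, 0.5, 0.7]              # three levels per hue

    for h in hues:
        for l in lightnesses:
            r, g, b = colorsys.hls_to_rgb(h, l, 1.0)
            print(f"hue={h:.2f} lightness={l:.1f} rgb=({r:.2f}, {g:.2f}, {b:.2f})")

    # A classifier trained at one luminance and tested at another should still
    # decode hue from V4/VO1 if those areas carry a true hue code; if V1
    # decoding collapses, the V1 signal was probably a colour bias artefact.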
refs coming soon...
Sunday 2 January 2011
Black bags and the binding problem
What is the binding problem? In neuroscience, what we mean by binding is tying together the activity of neurons that may be in distant parts of the brain. We know this is likely to be crucial because the brain represents things in a distributed fashion.
It may be the case that two properties (say colour and shape) belonging to a perceptually unified object (e.g. the percept of a motorbike) are coded in different parts of the brain. This raises some important issues that neuroscientists are still puzzling over.
I have been trying to think of an analogy that clearly illustrates why binding is a problem, and how potential solutions may work. This is the best I have come up with....
Think of two black bags - one full of cards with a different colour painted on each, the other full of cards with a different shape painted on each, such as cars, people, or animals (all drawn in black and white). If the idea of discrete neural assemblies is true, when the brain represents something it is as though one hand is placed inside each bag and a card selected. The left hand may select a red card from the colour bag, and the right may select a picture of a car from the picture bag. Remember though that we can't take the cards out of the bags; we can only peek in through the gap at the opening. So, having just one feature conjunction represented is no problem - we can peek in and see red in one bag, and a car in the other. What must be out there in the real world is a red car. However, consider the situation where we are trying to represent multiple objects. Say, as well as a red car, we are also looking at a green bike. In that situation we must select both a green card and a red card from the colour bag, and the pictures of both a car and a bike from the form bag. But how do we know which colour goes with which picture? This question lies at the heart of the binding problem. How is it that we can correctly represent a red car and a green bike, instead of a green car and a red bike? The latter situation is what is known as an illusory conjunction: pairing the wrong features and therefore generating an 'illusory' percept.
There are two alternative solutions currently considered plausible by brain researchers: 1) attention acts as a gating mechanism and selects the features of a single object for processing at any one instant, or 2) the temporal pattern of activity forms another component of the neural code, one that is used to carry relational information.
To illustrate the attentional hypothesis (1), let's go back to our analogy with bags and cards. Imagine that instead of being forced to keep our hands deep inside the bags, we are allowed to select a single card from each bag and remove it. Then we can put the two cards down on the table in front of us and stick them together. This, of course, makes it very easy to see which colour goes with which picture, but it comes with the rather large drawback that only a single perceptual object can be active at one time. Amongst psychologists this is known as the 'feature integration theory' of attention, proposed by Anne Treisman in the 80s.
The second possibility is newer (and more contentious). It has been proposed that synchronous oscillations are crucial for binding (von der Malsburg and Wolf Singer being the original proponents). Oscillatory patterns in neural firing have been reported for many years, but their functional relevance is still a matter of debate, and many research labs are attempting to answer this question. If neurons A, B, and C all fire at the same rate, their activity is difficult to distinguish. However, if A and B fire periodically at the same time points (in phase), whilst neuron C fires at different time points (e.g. in antiphase), this may be used as a type of binding information. Going back to the analogy, this is akin to somehow weaving a thread between a coloured card in one bag and a picture in the other bag. Many of these threads can be fastened at a single instant, so multiple objects can be represented simultaneously, and the threads can rapidly be tied and untied.
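To make the synchrony idea concrete, here is a toy simulation (entirely schematic - real cortical dynamics are far messier). All three model neurons fire at the same average rate, but spike-time correlations reveal which pair is 'bound':

    # Toy binding-by-synchrony demo: all three model neurons fire at the same
    # average rate, but spike-time correlations reveal which pair is 'bound'.
    import numpy as np

    rng = np.random.default_rng(1)
    dt, duration, freq = 0.001, 10.0, 40.0  # 1 ms steps, 10 s, 40 Hz oscillation
    t = np.arange(0, duration, dt)

    def spike_train(phase, rate=100.0):
        # Poisson-like spikes whose rate is modulated by an oscillation at `phase`.
        p = rate * (1 + np.cos(2 * np.pi * freq * t + phase)) * dt
        return rng.random(t.size) < p

    # A and B in phase (say the 'red' and 'car' assemblies); C in antiphase ('green').
    A, B, C = spike_train(0.0), spike_train(0.0), spike_train(np.pi)

    def binned(x, width=5):
        # Spike counts in 5 ms bins.
        return x.reshape(-1, width).sum(axis=1)

    print("mean rates (Hz):", A.sum() / duration, B.sum() / duration, C.sum() / duration)
    print("A-B correlation:", np.corrcoef(binned(A), binned(B))[0, 1])  # positive: bound
    print("A-C correlation:", np.corrcoef(binned(A), binned(C))[0, 1])  # negative: unbound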
I hope this analogy is helpful in understanding what neural binding is all about!
Saturday 18 December 2010
Where are coloured afterimages generated?
A fascinating study by Shimojo et al. (2001) has demonstrated something unexpected about how colour afterimages are generated in the brain. Chromatic afterimages are common and easy to produce. See for yourself - try staring at Figure 1 for 30 seconds and then shift your eyes to the blank region on the right. You may find it takes a couple of seconds to start, but you should find yourself experiencing a coloured afterimage. Can you see a face? Theories of colour vision suggest that these afterimages are generated through opponent processes in the retina and brain. The opponent colours come in three pairs: red-green, blue-yellow, and black-white. If you try the test again on Figure 2 you should experience a correctly coloured Union Jack as the afterimage. Notice that the range of colours in the original consists exclusively of the complementary opponents.
Figure 1.
Figure 2.
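As a cartoon of the opponent-process account, here is a toy sketch that predicts an afterimage by flipping the two chromatic opponent channels of each pixel while keeping luminance. The channel definitions are crude stand-ins of my own, not a calibrated colour model:

    # Cartoon of the opponent-process account: predict an afterimage by flipping
    # the two chromatic opponent channels while keeping luminance constant.
    import numpy as np

    def predicted_afterimage(rgb):
        # rgb: float array in [0, 1] with shape (..., 3).
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        lum = (r + g + b) / 3        # crude luminance (black-white channel)
        rg = r - g                   # red-green opponent channel
        by = b - (r + g) / 2         # blue-yellow opponent channel
        rg, by = -rg, -by            # adaptation pushes towards the opponent pole
        # Invert the (lum, rg, by) -> rgb transform:
        r2 = lum + rg / 2 - by / 3
        g2 = lum - rg / 2 - by / 3
        b2 = lum + 2 * by / 3
        return np.clip(np.stack([r2, g2, b2], axis=-1), 0, 1)

    # The cyan and yellow of the adapting flag come back as red and blue:
    patch = np.array([[[0, 1, 1], [1, 1, 0]]], dtype=float)
    print(predicted_afterimage(patch))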
There is also another factor involved when we see coloured afterimages. You will notice that the effect requires you to stare at the original for a certain length of time. This implies that stimulating the visual system with the same coloured pattern for a length of time causes a change to take place. This change is neural adaptation: constant excitation of a neuron causes it to become fatigued and fire less to a subsequent input.
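Here is a minimal sketch of that fatigue process, with invented time constants, showing why a good 30 seconds of staring is needed and why the effect fades over a few seconds:

    # Minimal fatigue model: a neuron's gain decays while it is driven and
    # recovers afterwards. Both time constants are invented for illustration.
    dt, tau_adapt, tau_recover = 0.1, 10.0, 5.0  # seconds
    gain, drive = 1.0, 1.0

    for _ in range(int(30 / dt)):      # stare at the stimulus for 30 seconds
        gain += dt * (-gain * drive / tau_adapt)
    print(f"gain after 30 s of staring: {gain:.2f}")   # ~0.05, well below 1.0

    for _ in range(int(5 / dt)):       # then look away
        gain += dt * ((1.0 - gain) / tau_recover)
    print(f"gain 5 s after looking away: {gain:.2f}")  # ~0.65, recovering

    # While the adapted channel's gain is low it under-responds relative to its
    # opponent, so a blank surface looks tinged with the complementary colour.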
It is commonly believed that the primary (if not only) site of the neural adaptation causing colour aftereffects is the retina, at the level of the photoreceptor. The photopigments are believed to become bleached, causing adaptation. With a simple but effective visual trick, Shimojo et al. have shown otherwise. By constructing a stimulus which generates the effect of "perceptual filling-in" - that is, a region where the brain believes there should be colour and fills in the gaps, even though no physical colour is being presented - they have shown that afterimages are also perceived for this "illusory" coloured region. That is, a perceived region of colour not presented to the eye can cause an afterimage. This means that adaptation must be taking place at a cortical site and not in the eye. A simple but effective experiment!
Try it for yourself by staring at the white dot underneath the red region in Figure 3 for 30 seconds. Then shift your gaze to the white dot on the right. You may see one of the patterns shown on the bottom row (B).
Figure 3.