One way to look at neuronal activity is to consider what a single neuron can do. (I mean here a quasi-biological “spiking” neuron.) It seems that this viewpoint is to some degree flawed — we should really think in terms of columns and populations — but I’ll go with it for today.
Most of this is a self-teaching partial summary of Silver 2010, Neuronal arithmetic (link). The article concerns itself mostly with modulatory inputs (coming from somewhere in the network) and how they can change the behavior of a neuron. And the behavior here is really the pattern of responses to different, stronger driving inputs. As Spratling 2014 (A single functional model of drivers and modulators in cortex) helpfully explains, a distinction is commonly made between synaptic connections capable of evoking a response (“drivers”) and those that can alter ongoing activity but not initiate it (“modulators”).
The most obvious operation is addition. A neuron receives input currents from its neighbors; these currents increase the membrane potential — the effects add up — and we get spikes. When the electrical input is sustained, these spikes repeat at some rate. So we can say that inputs are added, and the outcome is reflected in the firing rate.
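To make this concrete, here is a minimal leaky integrate-and-fire sketch. This is my own toy model, not from the article, and every parameter value is arbitrary (dimensionless units). Two driving currents simply sum on the membrane, and the sum shows up as a firing rate:

```python
def lif_rate(i_input, tau=0.02, v_thresh=1.0, dt=1e-4, t_sim=1.0):
    """Firing rate (spikes/s) of a leaky integrate-and-fire neuron
    under a constant input current (simple Euler integration)."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (i_input - v) / tau   # leak toward rest, integrate input
        if v >= v_thresh:               # threshold crossed: emit a spike
            spikes, v = spikes + 1, 0.0
    return spikes / t_sim

# each input alone is subthreshold; together their currents add up
rate_one  = lif_rate(0.8)        # 0.0 — the potential never reaches threshold
rate_both = lif_rate(0.8 + 0.8)  # sustained, repeated firing
```

Sustained drive above threshold yields periodic spiking, so the added currents are read out as a rate, exactly as in the paragraph above.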
To put it another way: any modulating excitatory current makes it easier for the neuron to reach its threshold by raising its potential. When, on top of that, enough driving inputs coincide, the action potential can actually be reached. We can thus see why a neuron is a coincidence detector for its presynaptic (inputting) neighbors (although strictly speaking this is not arithmetic; it only works because of addition).
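A toy illustration of that coincidence detection (again my own sketch with arbitrary parameters): each excitatory pulse alone is subthreshold, so the neuron fires only when the pulses arrive within roughly one membrane time constant of each other.

```python
def crosses_threshold(pulse_times, tau=0.01, v_thresh=1.5, amp=1.0,
                      dt=1e-4, t_sim=0.2):
    """Leaky integrator receiving brief EPSPs of height `amp`; returns
    True if the membrane potential ever reaches threshold."""
    pulse_steps = {round(t / dt) for t in pulse_times}
    v = 0.0
    for step in range(int(t_sim / dt)):
        v *= 1 - dt / tau        # passive decay between inputs
        if step in pulse_steps:
            v += amp             # instantaneous EPSP
        if v >= v_thresh:
            return True
    return False

print(crosses_threshold([0.02, 0.10]))   # False: the first EPSP decays away
print(crosses_threshold([0.02, 0.021]))  # True: the EPSPs summate
```

The detection works purely because potentials add before they leak away, which is the point of the parenthetical above.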
Subtraction only makes sense, I think, when we look at modulating inhibitory inputs. We cannot, by definition, get a spike by inhibiting/depressing the neuron: on the contrary, we are making it harder to reach the action potential. We are muting the cell.
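In a toy leaky integrate-and-fire model (my sketch, arbitrary values), this shows up as a sideways shift of the whole input-rate curve: a tonic inhibitory current subtracts the same amount at every drive level, rather than rescaling the response.

```python
def lif_rate(i_input, tau=0.02, v_thresh=1.0, dt=1e-4, t_sim=1.0):
    """Firing rate of a leaky integrate-and-fire neuron under constant drive."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (i_input - v) / tau
        if v >= v_thresh:
            spikes, v = spikes + 1, 0.0
    return spikes / t_sim

i_inhib = 0.5  # tonic inhibitory current, subtracted from every drive
for drive in (1.2, 1.6, 2.0):
    # the inhibited curve is the control curve displaced rightward by i_inhib
    print(drive, lif_rate(drive), lif_rate(drive - i_inhib))
```

Weak drives that used to fire are muted entirely, and strong ones fire less: a subtractive, not divisive, change.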
Let’s say that the neuron and its inputs do not have particularly well-crafted connections, and the inputs look more like stochastic noise, providing some excitation and some inhibition. This seems to be more realistic. Then by skewing the noise up (making it more probable to be excitatory in each tiny slice of time) we modulate the cell’s potential additively, and by skewing the noise down — subtractively.
And, most interestingly, by decreasing the overall noise we multiply the neuron’s response. Why is that? The output firing rate goes up, in proportion to the inputs, because it becomes less probable that at any given moment the inhibitory noise overwhelms the excitatory driving input. Thus we do not really excite the cell more; we just make it easier for any current to drive postsynaptic firing. We multiply the response. (Compare fig. 2c, bottom (output spike probability) with 1e (output rate) in Silver 2010.)
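A small closed-form sketch of this (mine, not from the article): if in a short time window the membrane potential is the drive plus zero-mean Gaussian noise, the spike probability is a sigmoid in the drive, and the slope of that sigmoid (the gain) scales as 1/sigma. Shrinking the noise therefore multiplies the response around threshold, while skewing the noise's mean merely adds to the drive, as in the previous paragraph.

```python
from math import erf, sqrt

def spike_prob(drive, sigma, theta=1.0):
    """P(drive + Gaussian(0, sigma) noise exceeds threshold theta)."""
    return 0.5 * (1 + erf((drive - theta) / (sigma * sqrt(2))))

def gain(drive, sigma, eps=1e-4):
    """Local slope of the response curve, i.e. the neuron's gain."""
    return (spike_prob(drive + eps, sigma)
            - spike_prob(drive - eps, sigma)) / (2 * eps)

# at threshold the gain works out to 1 / (sigma * sqrt(2 * pi)):
print(gain(1.0, sigma=0.5))  # shallow response under heavy noise
print(gain(1.0, sigma=0.1))  # ~5x steeper once the noise is reduced
```

Skewing the noise upward by some mean m is equivalent to `spike_prob(drive + m, sigma)`: a purely additive shift, with no change of slope.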
The article mentions that when multiplying we increase the neuron’s gain (the slope of its input-output curve, a notion related to amplification in electronics). This connection between neural gain and multiplication has further significance.
And finally, how do we divide (decrease proportionally, without subtracting) the response of a neuron? We simply provide a different, parallel way for the driving current to pass — a shunt. According to Ohm’s law, some of the current that would otherwise be “available” to the neuron goes the other way. Thus we get a proportional decrease.
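In the steady state this is just Ohm's law for two conductances in parallel (a sketch with made-up values): the driving current is divided by the total conductance, so opening a shunt scales the response down proportionally.

```python
def steady_depolarization(i_drive, g_leak=10e-9, g_shunt=0.0):
    """Steady-state depolarization from rest: V = I / g_total (Ohm's law),
    with the leak and the shunt acting as parallel conductances."""
    return i_drive / (g_leak + g_shunt)

i = 0.5e-9                                           # 0.5 nA driving current
v_control = steady_depolarization(i)                 # 50 mV of depolarization
v_shunted = steady_depolarization(i, g_shunt=10e-9)  # shunt doubles g_total
print(v_shunted / v_control)                         # 0.5: divided, not shifted
```

Doubling the total conductance halves the depolarization at every drive level, which is exactly what distinguishes division from the subtractive shift earlier.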