I find a lot of the common explanations of how the brain works in neuroscience unsatisfying… compared to molecular biology, it often feels like we don’t yet have a solid (falsifiable, etc.) grasp of what’s actually happening… too much handwaving (for instance, the lack of any specifics on where precisely some data is located… even “it’s stored in the connections” seems not quite true or falsifiable… though perhaps not exactly false, either). So I find discoveries like this exciting, as they start to peel back the curtain on a true understanding of the brain and neurons.
Anybody look at the original paper? I'm not convinced at all by their Fig 2 - that's more a blob than a precession.
Don't get distracted by the light while trying to find out how a neuron learns. Once distracted, you stay distracted, falling into a rabbit hole with a plethora of information while losing the initial motive.
The shape of the neurons is the memory. A fetal brain doesn't have ups and downs; it is fluid. As we learn, we develop ridges. This is a fact supporting the idea that the neurons (the neural fluid, neurons together) take shape as we learn. Once we establish a simple yet sound foundation, we can pile more of the missing pieces on top of it.
This is what my (mediocre) PhD thesis was about.
Rate coding vs temporal coding is literally a meme in neuroscience because the two camps seem to refuse to compromise.
Everyone else has realized that both happen depending on how that particular part of the nervous system works, or even what particular kind of information is flowing through it.
This title reads like it was written by a rate coder who woke up one day and was like "Woah, you mean ... there might be more to it than just averaging spike counts per unit time???"
edit: Hah, they are literally the first two headings in the Wikipedia article on neural coding.
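To make the distinction concrete (a toy sketch of my own; the spike trains and both decoders are deliberately simplistic, not from the article):

```python
# Two spike trains with the SAME spike count over a window can still
# differ in timing, so a pure rate decoder and a temporal decoder
# (here: first-spike latency) extract different information.

def rate_code(spike_times, window=1.0):
    """Decode by counting spikes per unit time."""
    return len(spike_times) / window

def latency_code(spike_times):
    """Decode by the time of the first spike."""
    return min(spike_times)

train_a = [0.10, 0.30, 0.50, 0.70]  # first spike arrives early
train_b = [0.40, 0.60, 0.80, 0.95]  # same count, first spike late

print(rate_code(train_a), rate_code(train_b))        # both 4.0
print(latency_code(train_a), latency_code(train_b))  # 0.1 vs 0.4
```

A rate decoder sees the two trains as identical; a latency decoder does not. Both "codes" coexist on the same spike train, which is the point the two camps keep talking past.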
how is this new? spiking thresholds serve a purpose
I'm merely a casual observer of neuroscience, but I feel like I already knew this. This isn't a huge leap if you accept that Spike-timing-dependent plasticity is happening.
I totally called this!
Watch for firing patterns encoded on mRNA for pattern matching, and to help predict what might come next in a sequence. That was the other part of my theory.
Neurophysiology and phase precession have a storied history, from the 1990s up to the Nobel Prize in 2014.
O'Keefe and Recce in 1993: https://onlinelibrary.wiley.com/doi/abs/10.1002/hipo.4500303...
> ... [F]iring consistently began at a particular phase as the rat entered the field but then shifted in a systematic way during traversal of the field, moving progressively forward on each theta cycle... The phase was highly correlated with spatial location... [B]y using the phase relationship as well as the firing rate, place cells can improve the accuracy of place coding.
Soo... this starts to sound like serial communication. When can we start reverse-engineering communication protocols? :D
This has been somewhere between highly suspected and well established for a few decades now, certainly not unexpected. These findings simply back up the present consensus.
How the fuck is that unexpected?
Preprint of the paper "Phase precession in the human hippocampus and entorhinal cortex": https://www.biorxiv.org/content/10.1101/2020.09.06.285320v1....
Isn't that the whole idea of Spiking Neural Networks?
why is this unexpected?
Reminds me a lot of stochastic computing: https://en.m.wikipedia.org/wiki/Stochastic_computing
> For decades, neuroscientists have treated the brain somewhat like a Geiger counter: The rate at which neurons fire is taken as a measure of activity, just as a Geiger counter’s click rate indicates the strength of radiation.
This hasn't been true for at least 20 years. There's a classic paper from 1993 (O'Keefe and Recce) showing evidence of this in rats, and it has been an active area of discussion both before and since. (That's not to say rate isn't important, but no one in the community has believed for many years that all the information is encoded in rate.)
There are lots of good explainers here that link to relevant research: http://www.scholarpedia.org/article/Encyclopedia:Computation...
Freeman was right after all.
That must be wrong. There is no time-measuring machinery and no notion of time.
Frequency, by the way, is not a synonym for time.
The headline seems to be somewhat at odds with the explanation of this "phase precession" in the body of the article:
"The phenomenon is called phase precession. It’s a relationship between the continuous rhythm of a brain wave — the overall ebb and flow of electrical signaling in an area of the brain — and the specific moments that neurons in that brain area activate. A theta brain wave, for instance, rises and falls in a consistent pattern over time, but neurons fire inconsistently, at different points on the wave’s trajectory. In this way, brain waves act like a clock, said one of the study’s coauthors, Salman Qasim, also of Columbia. They let neurons time their firings precisely so that they’ll land in range of other neurons’ firing — thereby forging connections between neurons."
My understanding of what they're saying is that it's related to "neurons that fire together wire together". Given different signal travel distances, it's necessary for neurons to fire at different times if they're to arrive at a given destination at the same time (in order to "wire together"). They achieve this timing by firing in synchrony with the theta brain waves that travel across regions.
So, with this understanding, I guess you could say the timing is encoding information, but in essence it's only the relative spatial position (within the cortex) of the firing neuron that's being "encoded". A more useful way to view it is just that firing times are synchronized to theta waves in order to achieve larger-scale coordination that compensates for signal travel distances.
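A toy simulation of the phase relationship described in the quote (my own sketch; the frequencies are made up for illustration): a cell that fires slightly faster than the ongoing theta rhythm lands a bit earlier in the theta cycle on each pass, which is exactly the precession effect.

```python
# A place cell firing slightly faster than the background theta
# oscillation drifts earlier in the theta cycle on every spike.

THETA_HZ = 8.0   # background theta rhythm (made-up value)
CELL_HZ = 8.8    # cell's firing rate inside its place field (made up)

# One spike per cell cycle during a ~1 s traversal (skip the t=0 spike)
spike_times = [k / CELL_HZ for k in range(1, 9)]

# Phase of each spike within the theta cycle, in degrees
phases = [(360.0 * THETA_HZ * t) % 360.0 for t in spike_times]

# Each successive spike precesses (arrives earlier in the cycle) by a
# fixed step: 360 * (1 - THETA_HZ / CELL_HZ) ~= 32.7 degrees.
for t, p in zip(spike_times, phases):
    print(f"t = {t:.3f} s  ->  theta phase {p:6.1f} deg")
```

Run it and the phase column decreases monotonically from ~327° down to ~98°: same firing rate throughout, yet the spike-vs-wave timing sweeps through the cycle, carrying information a rate code alone would miss.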
Motion-detecting neurons in the visual cortex need to use timing; it would be a little weird for evolution to just use that mechanism once and not try it again.
In my very different field (robotics and industrial automation), temporal coding is one of the most powerful ways to expand your IO. Nearly all PLCs, sensors, and robots make heavy use of digital IO. But this parallel interface is limited, especially with hard-wired signals, and even if you're using serial network protocols the typical fieldbus abstraction represents the network as fixed-size buffers of digital IO that update every few milliseconds. Analog signals are usually an expensive optional extra!
However, the controllers all include high-resolution timers. If you need to transmit an analog value, rather than bit-coding it over 12 discrete digital IO, a clever programmer might turn on a digital output for the desired number of milliseconds, or select between 10 or 16 or however many states you want to represent with your one wire using a predefined list of durations. You can convey far richer information this way!
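A minimal sketch of that duration-coding scheme (hypothetical values of my own; on real hardware you'd drive the output and time the pulse with the controller's high-resolution timer rather than simulate it):

```python
# One digital output, several states: encode a state as a pulse width
# chosen from a predefined duration table, and decode a measured pulse
# back to the nearest entry so a few ms of timing jitter is tolerated.

DURATIONS_MS = [10, 20, 40, 80, 160]  # 5 states on a single wire

def encode(state):
    """How long (ms) to hold the output high for a given state."""
    return DURATIONS_MS[state]

def decode(measured_ms):
    """Map a measured pulse width to the nearest defined state."""
    return min(range(len(DURATIONS_MS)),
               key=lambda i: abs(DURATIONS_MS[i] - measured_ms))

print(decode(encode(3)))  # 3: round-trips cleanly
print(decode(83))         # 3: 83 ms is nearest to 80 ms despite jitter
```

Spacing the durations apart (here roughly doubling) is what buys the jitter tolerance: the decoder only needs the measurement to land closer to the right entry than to its neighbors.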
Always interesting to see what researchers are discovering in the automated control system that is biology... sometimes we can discover techniques for use in industry with biomimicry, sometimes biology we didn't know about seems to imitate technology we developed independently.