It seems very odd that the article is measuring the information content of specific tasks that the brain is doing or specific objects that it is perceiving. But the brain is a general-purpose computer, not a speed-card computer, or English text computer, or binary digit computer, or Rubik's cube computer.
When you look at a Rubik's cube, you don't just pick out specific positions of colored squares relative to each other. You also pick up the fact that it's a Rubik's cube and not a bird or a series of binary digits or English text. If an orange cat lunged at your Rubik's cube while you were studying it, you wouldn't process it as "face 3 has 4 red squares on the first row, then an orange diagonal with sharp claws", you'd process it as "fast-moving, sharp-clawed orange cat attacking cube". Which implies that every time you look at the cube you also notice that it's still a cube and not any of the millions of other objects you can recognize, adding many more bits of information.
Similarly, when you're typing English text, you're not just encoding information from your brain into English text, you're also deciding that this is the most relevant activity to keep doing at the moment, instead of doing math or going out for a walk. Not to mention the precise mechanical control of your muscles to achieve the requisite movements, which we're having significant trouble programming into a robot.
Has anyone here even read more than the title? The article literally explains that you perceive at a rate of 10^9 bits per second. But after filtering and preprocessing in the outer brain you are only left with about 10 bits per second for conscious processing for things like motor functions. Yes, you can see a Rubik's cube and perceive all sorts of facts about it and the environment at the same time. But try solving it with your hands while someone shows you a bunch of objects and asks you visual comprehension questions at the same time. You might still perceive those other objects, but consciously classifying them verbally is gonna be really difficult. It's no surprise that the feature space that your deeper brain can actively work on is quite limited.
I did read the summary and see what you saw, but that still leads me to believe the headline of the article is clickbait and that the authors don't understand that action signalling doesn't require the same bandwidth as information processing, but even then it's way more than 10b/s.
Look at a backhoe. It has a few levers and a couple pedals. EXTREMELY simple interface. Each lever and pedal is basically 1 bit for engage/disengage, but the operator has to process orders of magnitude more sensory info to operate it properly. You could use an arduino to control a backhoe, but you'd need quite a powerful computer to know what to tell the arduino to do. This shouldn't surprise anyone. Knowing how to use the tool well is always far more complicated than simply knowing how the tool operates.
It's 1 bit (discriminate red vs. green) but I doubt many here can manage 100 ms, which is what it would take to implement 10 of those decisions per second.
If you extend your arm straight out to point for example, that's significantly more than ten commands sent at once to your arm muscles. It's just non-conscious. That test measures a bunch of things all together too. Visual processing, color recognition, and conscious decision to act. I got just under 240, and considering everything I had to do, that's pretty neat.
Can you point me to the book that says that "Booleans" are encoded in the human brain just like in a binary computer and they take exactly 1 bit to store? I mean, why not assume error correction, while we are at it, or some kind of compression algorithm that reduces the size further?
If that sounds ridiculous to you, you are beginning to get it. Every single brain cell works on electric potential (a continuous value). Not a single one of them can be said to be in the "zero" or "one" state (a discrete value).
"bits of information" doesn't mean 1's and 0's. "bits" is a measure of entropy.
If you send an analog signal over a wire and the receiver on the other end can distinguish between 1024 different voltages reliably, then you sent 10 bits of information. Even though you sent neither a 0 nor a 1, but an analog voltage.
It's about the "information" as an abstract concept, not about how that information is encoded as data sent over some medium. I can send you the same thing thousands of times. I would have sent you lots of data, but 1000 copies of the same thing still only contains as much information as a single copy does.
According to psychology research on human perception there is indeed something that you could call buffered IO in the human brain.
It's used in theories to explain why you can hear your name in a short sentence in a crowded room and then know what the sentence is. Including one or two words before your name. While if your name wasn't said, no attention is diverted to that sound and the "buffer" contents get dropped without retrieving the information.
Yes, and there's different buffers with different capacity for different types of perception according to these theories. Look up Baddeley & Hitch for details. I would guess there are similar buffers for other senses that didn't make it in the lectures in college.
> or can it process them with some amount of parallelism
I guess someone with two boxes handy could set them up next to each other and run two copies of this test to see if their reaction time holds up or if it lengthens?
EDIT: mine suffers greatly on dual wield: 225 -> 320 ms
I'll have a good think about whether we can process things in parallel or only in series the next time I decide to walk and chew gum while bouncing a ball, not tripping over things, and staying out of traffic.
This might be the worst possible way to measure the net throughput of conscious decision making I've ever seen. A wildly ecologically invalid, serialized discrimination task.
Now take someone navigating a social situation with three simultaneous participants, actively listening, coordinating body language, interjecting, responding to questions... and you have a human being operating at a "bitrate" of information that is so many orders of magnitude removed from this bullshit psychometry task as to prompt the question: "what the actual are they talking about".
Using a computer analogy to try to understand brain functions is ok as long as we remember that the brain is not actually a computer. There have been different technological analogies throughout history, including hydraulic systems and telegraph and telephone lines. All of them might seem comical today, but they were the best explanation at the time, and the same will happen in the future with ours. Comparing something like the brain (or rather the whole living being) to a mathematical processing device is more than short-sighted. We try to explain everything from the engineering point of view by isolating things, but that's not how life or the universe works.
But on the other hand, a "fast" debater from a high school debate team can process 260 words per minute while compressing that down to notes, while simultaneously queuing up meaningful responses, while evaluating which/what to go with for a particular audience/judge, while listening for mistakes from their opponent. If you distill bitrate down to serialized responses to a canned psychometric task, sure, but why do we think that's the total throughput? It isn't, trivially, if we inspect what people do every day.
> EDIT2: typing at 90 wpm is 1,5 wps, or <10 bps according to Shannon (~1 bit per letter, 5 letters per word)
People can type way faster than 90 WPM, and we're talking about the bitrate of the hardware here; it's the same even if you haven't practiced typing. And typing is still not what our consciousness is made to do; we have a way higher bitrate than that when doing more native tasks such as running in the woods. You can't run in the woods without consciously thinking about each step and deciding where to put your foot next so as not to hurt your ankle and not collide with trees, and that has a massive bitrate.
Running in the woods supports TFA: 180 steps per minute * 16 possible locations per step = 12 bits per second; maybe there are fewer or more possible footfall spots but no matter what it's more than 1 bit per second and less than 100. (and way less than 1e9)
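(The arithmetic, as a sketch, using the 16-plausible-footfalls guess above:)

    from math import log2

    steps_per_second = 180 / 60        # 180 steps per minute
    bits_per_step = log2(16)           # choose one of ~16 plausible footfall spots
    print(steps_per_second * bits_per_step)   # 12.0 bits/s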
Listen to a podcast at double speed. Assuming a normal talking speed of 150 words per minute, 300 words per minute of written word is not 10 bits per second.
The Hutter prize submissions can get compression factors >9 on English wiki text. And if you're listening to podcasts the entropy is probably even lower. The human brain is obviously a much better language model than anything we have today, so you can assume that the latent layer in your brain deals with much less than 60 bits per second.
Each second of listening we're perceiving the speaker's identity, what accent they are using, how fast they are talking, and what emotions they are showing. Those should count for the bit rate dealt with by the conscious brain.
Again: perception is not what we're talking about and the paper acknowledges that perceptive input is orders of magnitude larger. I challenge you to listen comprehensively to someone talking about a topic you don't know while identifying someone in a police lineup.
Consider normal text compression and you're left with a few bits at best for most of those "fast talkers/listeners." And the human brain is very good at compression.
Yes, but in order to measure its bitrate accurately you need to tell us whether that compression is gzip, zlib, zip or 7zip. They don't all produce the same results.
If we are going to be utterly ridiculous about this conversation, let's at least be complete.
How would that be relevant? The only relevant aspect for this discussion is that language is tightly compressible, which no one here has challenged yet. But I've seen no one come up with a true example where you'd be actively processing high-entropy data at a rate that disagrees fundamentally with the paper.
I feel like this is splitting hairs and moving goalposts. The pro side will always have some sort of explanation why it’s 10 bps or less without a way of actually proving it.
I can type at a rate faster than 10 bits/ second (about 2 characters / 16 bits! what a slow rate! I'm well above that, at least 24 bits/second!) and you aren't compressing that to less.
And that's while also moving my hands in extremely complex ways to perform the task, looking around my office, listening for threats / wife, observing the breeze from my fan, twiddling my toes on the balance board I don't use...
It's clickbait/ragebait. Well done to the title writer.
Another way to put it: try to perform a skill you have never practiced. For example, if you've never played the piano or tried to read sheet music, see how long it takes you to play a few notes correctly. It's complex enough that you'll very likely find yourself limited to around 10 bits per second. You shouldn't count the bits handled by visual processing, basic motor control, and other things you have practiced all your life. If you practice the skill, the skill moves out of your conscious processing and no longer counts toward the 10 bits per second.
> You shouldn't count the bits handled by visual processing, basic motor control, and other things you have practiced all your life.
Ok, but how do we count bits for your example with the piano? It has 80 keys or so, doesn't it? Should we take log2(80) = 6.32...? Or if you are working with only part of the keyboard, maybe we should take log2(7)? How many bits per second of processing does it take to keep my hand on the keyboard in the right way that was shown by a tutor? How do we measure it? Does my experience with a guitar make it easier? How many bits easier?
Your argument boils down to “exclude anything faster than 10 bits per second, so then you’re limited to things which are 10 bits per second or less”. It’s extremely unconvincing.
What conscious motor processing? My motor functions largely take care of themselves while I daydream when walking or consider where I want to steer while driving.
That's just motor reflexes that don't even enter higher cognitive processing. But daydreaming probably does process at the same rate as normal language, as was explained in the other comment. Try doing algebra in your head while running an obstacle course you've never seen and you'll be much slower at everything.
That still doesn't explain strange sentences in the article such as:
> Why does the brain need billions of neurons to process 10 bits/s?
Maybe the article is being intentionally disingenuous here? The brain is definitely not processing 10 bits/s, maybe a small part of it is.
I'd also say that you can make very complicated problems with only 10 bits of input (e.g. calculating busy-beaver of N), so "processing X bits" where X is a small value is no guarantee that this should be doable with a low amount of processing anyway.
You're missing my point. I'm saying that `try solving it with your hands while someone shows you a bunch of objects and asks you visual comprehension questions at the same time` is more than 10 bit/s of data being processed. I'm saying made up "tasks and outcomes" in this study are not a measure of the brain's throughput IN THE INNER LAYERS.
10 bits per second is effectively nothing. Not even a single cell could operate at 10 bits per second. Every organic system would collapse immediately.
Remember, this is an intermediate encoding of a hidden feature space between perception and planning. What you see at the start and the end of the neural network might be very different. Consider this: Typing at 60 words/minute, 5 characters/word and 8 bits/character gives a gross bit rate of 40 bits/second. With today's compression algorithms, you can easily get 4:1 reduction in data. That leaves you at approximately 10bits/second that are consciously processed in your brain. Probably even less since your brain might be much better at encoding language than even our best models. Even if some of those numbers are off by a certain factor, the number in the paper is certainly in the right ballpark when you consider orders of magnitude.
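The same arithmetic as a rough sketch; the ~1 bit/character figure at the end is Shannon's classic estimate for English, not a number from the paper:

    wpm = 60
    chars_per_word = 5
    chars_per_sec = wpm * chars_per_word / 60

    print(chars_per_sec * 8)      # 40.0 bits/s as naive 8-bit characters
    print(chars_per_sec * 1.0)    # ~5 bits/s using Shannon's ~1 bit/char for English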
So the argument is that compression is not processing? That's a very weird definition of processing. Also when we do this we can always argue that we get down to 10bit/s, just change the compression ratio.
Send a query to your favorite database. The database preprocesses hundreds of gigabytes of data, map-reduces it, and finally returns a single 32-bit integer, taking exactly 3.2 seconds to do so.
Nobody would say that the database can only process 10 bits per second. The query just happened to ask for a very simplified answer.
Everybody would say it outputs 10 bits per second. And when it comes to consciousness, simplified answers at 10 bits per second are the best you can get. This article asks why.
It's an interesting question indeed, why the mind produces so little output compared to input.
Still, I get the feeling that the apparent bit rate of the conscious output is not the right metric to be measuring. There are so many ways in which the bit rate you're measuring could have been bottle-necked by factors outside of the mind, such as the speed at which you can move your tongue to articulate language, or a cultural expectation that prefers compact answers over chain-of-thought babbles. The mind also leaks a lot of information, both consciously and unconsciously, through side channels such as facial expression, gestures, and even variations in the rate of output itself.
> It's an interesting question indeed, why the mind produces so little output compared to input.
My theory is that consciousness is super complicated and the brain has barely enough juice to crank out any of it.
Conscious field of vision is about 2% of total field of vision and we observe the world by constantly swiping it with this 2%. This way we reuse both neural circuitry of our brains and also training data that would otherwise necessarily be spread across larger size of neural network if our conscious field of vision was larger.
So in short, conscious output is so small because we are dumb and short-lived.
I also wouldn't say that there's a lot of information in the side channels. Properly compressed it's probably less than engaged conscious output.
On the other hand, the amount of actual entropic "information" that is processed when you identify a Rubik's cube as such may be nowhere near as much as you think it is, and most importantly, 10 bits may be nowhere near as little as you think it is.
If we use your example, which is that of identifying an object, we may simply ask the entropy of what the distribution of possible objects-to-be-identified is at t=0, prior to any analysis. Saying we can resolve 10 bits of this entropy per second is equivalent to saying that we can identify one object from a uniform distribution of 1024 per second. Let's suppose this is a low estimate by several orders of magnitude, and that it's really one from a billion objects instead that you can identify per second. Then this would still only be about 30 bits/sec.
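(That's just log2 of the candidate-set size:)

    from math import log2

    print(log2(1024))             # 10.0  -> one object out of 1024 per second
    print(log2(1_000_000_000))    # ~29.9 -> one object out of a billion per second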
None of this changes the main thesis of the paper, which is that this is much lower than the 10⁹ bits/sec our sensory systems transmit.
But you don't just perceive an object's category (like "cat"). We also perceive a high amount of detail about the object - colour, pattern, behaviour, we make comparisons to past behaviour, predictions of what's likely to happen next and so on.
Sure, some parts of the brain don't receive all that detail, but that's necessary for abstraction. If you pumped all the sensory data everywhere, the brain would get overwhelmed for no reason.
That 30 bits was not literally intended to represent only the object's noun category, but even if it did, none of the additional pieces of information you would like to add are going to change this picture much, because what one would think of as a "high amount of detail" is not actually all that high in terms of the logarithmic growth of the entropy.
Take color: suppose the average person has 16 baseline colors memorized, and then a few variations of each: each one can be bright or dark, saturated or pastel. That would be about 6 bits for color. If you have an eye for color or you're an artist you may have some additional degrees of freedom. Hell, a computer using RGB can only represent 24 bits worth of color, maximum. I am going to suggest this stuff gets cognized less than 10 bits worth for the average person; let's just say 10.
Now, of course, people can memorize more than one color. If colors are independently distributed uniformly at random, then processing N colors requires 10N bits. But of course they aren't, so the entropy is less. But again, let's just say they were. So how many color combinations can you process per second? I would say it's a bit of a challenge to memorize a set of 10 arbitrarily drawn colors shown for a second. Most people couldn't continuously do that at a rate of 10 colors per second. That would be 100 bits/sec of info.
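(The same numbers as a sketch, using the generous assumptions above:)

    from math import log2

    baseline_colors = 16
    variants = 2 * 2                          # bright/dark x saturated/pastel
    print(log2(baseline_colors * variants))   # 6.0 bits per color judgement

    bits_per_color = 10                       # rounding up generously
    colors_per_second = 10                    # already a stretch for most people
    print(bits_per_color * colors_per_second) # 100 bits/s as an upper bound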
The point is that you really don't perceive all that much. You show the average person a Rubik's cube, there is no way they're going to remember the exact pattern of colors that they saw, unless the cube were solved or something. They will perceive it as "multicolored" and that's about it.
Adding behavior, texture, etc doesn't change this picture. None of this stuff is even close to 10^9 bits of entropy, which would be 2^1,000,000,000 different equally likely possibilities.
> deciding that this is the most relevant activity to keep doing at the moment, instead of doing math or going out for a walk.
How many bits of actual decision is going on here, as compared to the period of time that decision applies to?
For example, if a person decided once per second whether or not to go for a walk, that could be 1 bit per second. But if that person is constantly transitioning back and forth between walking and not-walking, we could consider their behavior pathological. Instead, for most people, the information density of these decisions is quite low, i.e. the per-second decision bits are very compressible.
Personally, I only decide whether to go for a walk (or not) _at most_ once every few minutes. Even if we add in bits for "where" and "precisely when" and "how long", I think we're still at just a small fraction of 1 bit per second.
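To put a number on how compressible that decision stream is: treat "go for a walk?" as a binary signal that flips at most once every few minutes, and its entropy rate comes out tiny. A sketch, with a made-up flip rate:

    from math import log2

    p = 1 / 300   # assume the walk/no-walk decision flips about once per 5 minutes
    H = -p * log2(p) - (1 - p) * log2(1 - p)
    print(H)      # ~0.03 bits per second of genuinely new decision information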
Your conscious planner may be making 1 decision/second, but your senses, proprioception, balancing system, etc. are handing you about a gigabit/second of data, most of which never rises to your conscious attention.
When I'm reading, that's roughly 2000 bits/second, but I am engaging it with model-making systems that can completely enrapture me.
I/O is not the same as computation; conscious computation is not the same as all computation.
There is a whole "OS" in the background that is way more complex than all of our programmed systems. I may be hyper-focused on a complex program and really not thinking about anything else, but my body still constantly senses, processes, and reacts to signals. E.g., I'm not ragdoll-falling to the ground, which requires holding balance and tone, which is far from trivial. I'm also constantly listening for every possible danger, which would immediately override my current activity. I also react to internal signals like hunger or pain.
A bit dumb, but maybe a relevant comparison: why can an Apple Watch stay on for only a single day on a charge, while a Garmin can do two weeks to a month? Because one is a general-purpose computer, while the other is embedded software that can only ever do the few things it is preprogrammed to do.
I don't see your comparison as relevant. I'm a metre away from a nine-year-old Android 6 phone that can easily go a month on a single charge (while tracking its location!). Your Apple Watch is (a) constantly using its radio, (b) running a fairly high-end graphics chip, and (c) actively probing your body at intervals. Nothing to do with software.
I think you lost the actual comparison. It's not about Android vs. iOS.
It's about having a ton of mini, specific "computers" vs. the very generic consciousness.
For example, all the balancing, background hearing, hunger, thirst, and so on are handled by very specific zones in the brain, while our conscious mind is doing... well, whatever we like, in a very generic way, basically keeping the joy of living going.
> Personally, I only decide whether to go for a walk (or not) _at most_ once every few minutes.
And yet, if I walked into your office and shouted your name, you would have less than a second of reaction time, indicating that you are processing all sound around you and deciding whether to get up and talk to someone pretty much continuously.
The vast majority of the brain's processing power, including conceptual processing, is not conscious. Conscious thought is the framebuffer of the mind, subconscious is the RAM.
Your brain does. It's not _one_ computer. It's dozens, some interconnected and synchronized with others, some with entirely independent control. It's why we have seizures, why optical illusions exist, and why you flinch when something moves at you in your peripheral vision.
Yes, but reductions have implicit or explicit assumptions. When our reductions give nonsensical results (10 bits/sec), we should understand them as an artifact of our reductions rather than taking them seriously and giving them ontological existence.
That's debatable. To me declaring that the world was made in a simple way like this and just listen to the priests was reductionism of the wrong kind.
I know the kind you mean though: reducing external factors (like air) so you can isolate gravity as a force unrelated to density, by finding out that in a vacuum a feather and a metal ball fall at the same speed.
(As for the paper I have not yet made up my mind.)
I'm not sure why reductionism brings simplicity to anyone's mind. Yes, you consider one thing at a time, but there are billions of such single things to explain. And isolating one is a challenge in its own right.
Reductionism is setting everything but one thing aside and trying to figure out that one thing, bringing the second thing in only after you fail, and trying again.
Could the 10 bits be some sort of instruction-set address width?
Meaning that while you are solving the Rubik's cube, your brain has the capacity to switch into one of 2^10 = 1024 possible next states, which might fire nerve impulses for the body and also come with a new set of 1024 next states?
When focusing on the cube, about 20 or 30 of the next states could be related to the cube, while the rest is concerned with bodily sensations, other thoughts etc.
What if learning something well meant that less states need to be reserved for the task/cube at hand because the decision tree going forward has become more efficiently packed, so more states are open for something else.
The 10 bits is just very obviously made-up nonsense. The brain doesn't operate on anything like a binary system; sensory data is not digitally encoded - it is all continuous data. Even the abstract kind of answers its own question. It says sensory data is something like the order of 10^9 bits per second but the "information throughput" is 10 bits per second. It asks "why do we need billions of neurons to process 10 bits per second?" Well, we clearly don't, because that's clearly not what we do. We process 10^9 bits per second, and we do a bunch of things which aren't just processing sensory data as well. And on top of that we do things which aren't measured in the 10 bits per second also.
> sensory data is not digitally encoded - it is all continuous data
That might be correct, but as far as I know the brain has circuits that fire periodically with a certain frequency (which is what brain waves are about). If such frequencies steer some kind of gate, then you do have a kind of digital logic, with 1 being where the brain waves cross the zero line.
> If such frequencies steer some kind of gate, then you do have a kind of digital logic, with 1 being where the brain waves cross the zero line.
What makes you think that this is the case? If someone touches you from behind, do you immediately fall over dead from sensory overload? Do you react as if you were punched in the gut? Do you look over your shoulder to check who it is? Or do you fail to notice it until they break a bone? There is a significant amount of inputs where the reactions are not just zero or one but happen on a scale; your digital 1 or 0 is more likely to be a 1.0 or 0.0 of a numeric type with unknown bit depth. Let's go with 80 bits, since x87 is just the worst, and that completely blows the 10-bit claim.
Don't forget that typing in English is also the outcome of processing all the context.
Also not-so-conscious context like „I am writing a reply on HN, not on Reddit - making an obvious silly pun about cats is not going to earn upvotes - making a quasi-intellectual comment about subconscious processing should earn me some”.
> But the brain is a general-purpose computer, not a speed-card computer, or English text computer, or binary digit computer, or Rubik's cube computer.
Look, I like the sentence. I like it so much I might steal it. But, that said I think analogising what the brain does to "general purpose computer" is a bit awry because we can invent algorithms, and given a process we can implement algorithms but I still tend to think there is something ineffable about thought/cognition which is NOT general purpose computation. It's a heuristic. I dunno what I think.
The computer analogy is definitely off. For one thing, rationality is kind of bolted on to our linguistic ability. It's not deeply seated in our brain; it takes substantial energy to engage in logical reasoning and we're not that great at it; even very skilled mathematicians make trivial mistakes.
Deeper in our brain is reasoning about causality. But even that is more of an input to our reactions and emotions than it is as a consciously accessible function, and it too is often wrong.
As you said, it's a heuristic. We certainly can't solve the halting problem with brain magic.
Implementing algorithms doesn't need a computer, and it doesn't need something that's not a computer.
That said, with enough willpower why couldn't you implement a universal Turing machine in your brain? I think you would need external memory (a tape) but that's not a problem.
Otoh, I just spent a minute to even comprehend your idea. We are living in the information era; the era of dodging an orange cat has ended. And we suck at this new thing, really. We're like slow af emulators of logic programs and tend to fall back to animal mode when tired or clueless.
Do you really think that the fundamental human act of the current era is running logic programs? I don't dispute that digital logic and its uses deeply changed humanity, but this is like calling humans "weak af imitations of steam engines" during the late-1800s railroad boom.
Calling actions/thoughts that follow a concrete list of logical rules/algorithms "animal mode" is deeply anti-human.
I don't see how you could avoid logic and do literally anything useful that is not animalistic by nature. I can only imagine how <human-incomprehensible positive adjective> it would be to be able to remember more than 7 things, use more than one thread, and grasp complex ideas by simply looking at their schema for a second, all that on average. Instead we slowly follow trickles of data, constantly being carried away by nuances and distractions, getting overwhelmed and tired in no time. Our natural "logic" sees a god behind lightning and a witch behind an accident; please don't defend it.
deeply anti-human
What's wrong with that, just look at us going, globally. Any pro-human thoughts, really? Never understood this clinging to the horribly limited nature of humanity only because it has some relatively nice pockets (which usually correlate with deep thought and negatively with the lack thereof).
Well, and not only that: your brain is at the same time processing incoming sounds, which it spatializes based on minor phase/level differences and on reflections within the ears themselves. Your brain processes all feelings for all body parts, including a big chunk of skin, temperature perception, the sense of gravity to balance yourself, etc.
If you received 10 bits once a second describing all of that, good luck, you probably won't survive the next 15 minutes.
The PhD student writing this could be excused for being young and inexperienced, but their advisor, tagged second on the paper, should have cut this off at the pass.
> If the questions are properly designed, each will reveal 1 bit of information about the mystery thing. If the guesser wins routinely, this suggests that the thinker can access about 2^20 ≈ 1 million possible items in the few seconds allotted. So the speed of thinking – with no constraints imposed – corresponds to 20 bits of information over a few seconds: a rate of 10 bits per second or less.
This is an extrinsic definition of "information" which is task relative, and has little to do with any intrinsic processing rate (if such a thing can even be defined for the imagination).
The question of why does biological hardware capable of very high "intrinsic rates" deliver problem solving at "very low extrinsic rates" seems quite trivial. Its even a non-sequitur to compare them: properties of the parts are not properties of wholes. "Why does a gas move at 1 m/s, when its molecules move at 1000s m/s..."
All the 'intrinsic processing' of intelligence is concerned with deploying a very large array of cognitive skills (imagination, coordination, planning, etc.) that are fully general. Any given task requires all of those to be in operation, and so we expect a much slower rate of 'extrinsic information processing'.
Consider how foolish the paper is to compare the intrinsic processing of a wifi network with the extrinsic task-specific processing of a human: it is likewise the case that if we set a computer the challenge of coordinating the solution of a task (e.g., involving several LLMs) across a network, its task-specific performance would drop off a cliff -- having a much slower 'solution rate' than 10 bits/second.
These 'task-specific bits' represent a vast amount of processing work to solve a problem, and have at least as much to do with the problem as with the system solving it.
It seems to me all this paper does is define tasks in a highly abstract way that imposes a uniform cost to process '1 bit of task information'. Do the same for computers, and you'd likewise find tiny bitrates. The rate at which a problem is solved is 'one part of that problem per second' for a suitable definition of 'part'.
Another relevant point is the common anecdote about, eg, some master engineer who gets paid big bucks to come fix some problem that's been blocking up a factory for weeks. The engineer walks around, listens to a few of the machines, and then walks up to one of the machines and knocks it with his elbow Fonzi style and the factory starts running again. The factory boss is happy his factory is working, but annoyed that he paid so much for such an "easy" solution.
Ie, the amount of input and processing required to produce the "right" 10 bits might be far larger than 10 bits. Another obvious example is chess. The amount of bits conveyed by each move is small but, if you want to make the right move, you should probably put some deeper thought into it.
Humans are essentially organisms that collect and filter information, boil it down to a condensed soup of understanding, and emit a light sprinkle of carefully chosen bits intended to reshape the future towards their desires.
filtering and _acting_ on this condensed information.
For example: lightning strikes a tree and a fire starts. Man is scared, but that night is very cold and it is warmer near the tree. This happens a few times, and one day a fallen branch is collected and incidentally thrown on another pile of wood, which starts burning -> the idea of fire is formulated, and since then man keeps warm.
Or: man finds shiny things in a river bed and collects them; one day the whole shack burns down from lightning, and he discovers that the shiny things are now in a different shape -> metalworking is born.
Seems like this 10 number comes out of the kind of research where the objective isn’t to find the truth, but to come up with an answer that is headline grabbing. It’s the scientific equivalent of clickbait.
This type of comment is my least favorite on HN. "Seems quite trivial," "non-sequitur to compare them," "foolish." I am not able to read the paper as I do not have access, but the published perspective has 131 citations which seem to consider everything from task-specific human abilities, to cortical processing speeds, to perception and limb movements and eye movements, and so on.
I'm glad you thought about it too, but to assume that the authors are just silly and don't understand the problem space is really not a good contribution to conversation.
(Disclosure: I’m a former academic with more than a handful of papers to my name)
The parent comment is harshly criticizing (fairly, in my view) a paper, and not the authors. Smart people can write foolish things (ask me how I know). It’s good, actually, to call out foolishness, especially in a concrete way as the parent comment does. We do ourselves no favors by being unkind to each other. But we also do ourselves no favors by being unnecessarily kind to bad work. It’s important to keep perspective.
I realized that I do have institutional access and so I was able to read the paper, and I stand by my initial criticism of the above comment.
"It seems to me all this paper does is define tasks in a highly abstract way that imposes a uniform cost to process '1 bit of task information'."
The paper uses this number and acknowledges that it is not the only possible measure, and explains why they use this number and how it was derived. It is just the start of the paper, not "all this paper does." The paper primarily focuses on counterarguments to this number to then address the primary question of the relationship between the inner and outer brain.
A few questions it poses: does the superior colliculus contribute to a bottom-up "saliency map" to ultimately direct the attentional bottleneck in cognition? Why does the brain use the same neural circuitry for both rapid/parallel sensory processing and slow/serial cognition? This is not even how other parts of the body work (e.g., type I and II muscle fibers). Perhaps the associated routing machinery between input and output accounts for the billions of neurons? Maybe, like the visual cortex, the prefrontal cortex has a fine-grained organization of thousands of small modules each dedicated to a specific microtask?
We do ourselves the most favors by reading research with some skepticism, and asking questions. We do ourselves no favors by writing comments after only reading an abstract (please, tell me if I'm wrong). I only point out that discounting research so blithely does nothing for improving research. This was a perspective paper - an author asking questions to better understand a possible issue and guide research. And maybe the commenter is right, maybe this is the wrong focus, but I do not believe it was truly considered.
The question reduces to "how do the intrinsic capacities of intelligence, had by humans, give rise to the capacity to answer complex questions?" -- I see nothing which the framing in informational terms adds.
It's nothing more than saying: we know that wires have electrons, and are made of metal, and can support a transfer rate of 1 Gbp/s -- and we know that an LLM takes 1 min to answer "Yes" to a postgraduate physics question -- so how/why does the current in the wire, at 10^9 bits/s, support this 1 bit/min mechanism?
It's extremely wrong-headed. So much so that the paper even makes the absurd claim that Musk's Neuralink need not have any high-bandwidth capabilities because a "telephone" (to quote) would be sufficient.
This is like saying an internet-connected server, hosting an LLM, need not have a high bandwidth RAM, because it only needs to transmit 1bit/s to answer the "yes" question.
In my view there isn't much worthwhile to say under this framing of the problem -- it's a pseudoscientific framing --- as is quite a lot of 'research' that employs 'information' in this way, a red flag for the production of pseudoscience by computer scientists.
Their implied premise is: "computer science is the be-all and end-all of analysis, and of what one needs to know, and so reality must be as we conceive it". Thus they employ an abuse of abstraction to "prove" this fact: reduce everything down to its most abstract level, so that one speaks in "bits", and then equivocate in semantically weighty ways between these "bits", and pretend not to be doing so. This ends with Pythagorean levels of mysticism.
I appreciate that you are elaborating further on your issues with the paper. I, again, am not choosing to defend the paper itself, rather the reason for science - asking questions and finding answers, even ones that may not be "worthwhile." Because we do not always know what is worthwhile and often we ignore some important facts when we think, intuitively, something makes sense and there is no reason to study it.
But, I will counter your comparison regarding LLMs and the transfer rate of wires. We, humans, have wired up the LLM ourselves. Evolution wired our body/brain and we do not know all of the filters and connections that exist in the transfer and processing of data. There is so much about the body we do not know. With LLMs, we've created every part so it doesn't really compare.
And to say that fields of science should not consider the knowledge gleaned from other fields is preposterous. I read about a new discovery in Math almost every few months in which a person from a different field brought in different techniques and looked at a problem from a new angle. Maybe this framing of the problem is silly in the end, or maybe it is just what someone needs to read somewhere to spark an idea. It doesn't hurt to think about it.
Exactly. English text is thought to have about 10 bits per word of information content, yet you can read much more quickly than 1 word per second. That includes not just ingesting the word, but also comprehending the meaning the author is conveying and your own reflections on those words.
> Why can we only think about one thing at a time?
Maybe this is just a perception thing. Sure, you can only really keep up one stream of thought, visualization or inner dialogue (whatever you want to call it) at a time, but perhaps that's because we learn all our lives that direct communication is a one-channel, linear thing--speaking and listening focused on one topic at a time. Our brain does plenty of thinking in the background that leads to "a-ha!" moments even when the direct focus of our thoughts isn't on that topic. What if the mind could maintain multiple threads of thoughts at once, but our language coerces our thought patterns into being linear and non-concurrent?
As someone without an inner monologue, and someone that's spent a LOT of time meditating, it's not the language. It's the attention mechanisms themselves.
Buddhist scholars insist that while we can have multiple threads of attention in our awareness, like strings with pearls of experience/thoughts we can only actually hold one little pearl of information from that stream in our attention at a time, and that we flit between them quite rapidly.
Personally, I sort of agree, but I notice that there seems to be a time-compression thing happening where the pearl delivered to attention can contain a compressed summary of continuous perception. This seems to work for 2 things at once in awareness. When you start monitoring 3+ streams, there are gaps. And even maintaining the 2 streams continuously is exhausting so the mind tends to relax a little and leave gaps on a normal basis, but it seems like it can monitor dual feeds when its particularly important.
My understanding is that neuroscience largely seems to agree with the above.
(Actually, I'll note that the USUAL mode of being doesn't even monitor one stream continuously. A lot of the weird effects (and deeply interesting ones!) they talk about in meditative arts seem to pop up when you progress to being able to hold truly continuous attention.)
What you're describing here is software, not hardware—Cognitive Science is the relevant field, not Neuroscience.
That said, your understanding is largely supported by our current understanding of consciousness, attention, and perception. The attention mechanism doesn't handle parallel processing well—but can operate "multi-threaded", where it juggles several foci at once (with some obvious cost to switching between them). But I think its a mistake to assume that decision making has to be done within this attention context. While we may only be aware of a single thread at any given time, the brain is doing a lot of parallel processing. We can only focus our attention on a single cognitive task, but that doesn't mean we're not actively performing many others.
What you're describing here is dualism and Descartes, in response to a post that references Buddhist scholars, a philosophy famously focused on monism.
"Cognitive science" vs "neuroscience" as a concept is just how we decided to slice the problem up for academia.
we may only be aware of a single thread at any given time
We may be not a single mind, but a bunch of minds. It just happens that the mind that “you” are reads this and has written the above comment, cause it’s of that kind (just like “all biological beings in this thread happen to be humans” type of a filter). Other minds can live completely different lives, just inside the same skull. And share emotions and thoughts with you sometimes from their prison.
This “aware” part is pretty mysterious, because the physical mind could operate without it perfectly. But for some reason, the space containing a mind experiences this awareness thing.
I think this is clearly seen whenever we go for a walk or a drive, the mind drifts but you still somehow get to your destination and in hindsight wonder who was actually driving the car?
Sometimes I'll be deeply thinking about something while driving, and discover I'm at the last road to my house without remembering having driven the previous few blocks. It's quite disturbing. When I say deeply thinking I don't mean anything involving phones or external stimuli - really just thinking about a particular problem I'm working on. I also don't deliberately engage this deep mode of thought, I just sort of slide into it naturally.
Does anyone else have this happen? I don't think my driving is suffering, but it's hard to really honestly say?
Yes, it's a classic example of the power and skill of your "unconscious" mind - your consciousness is freed up to do novel work because the drive home is so routine that your unconscious mind can do almost all of the work. Should something change - a traffic jam, a detour, a pedestrian crossing the road - your conscious attention will be called back to the more urgent task which is making a decision about how to handle the driving situation.
It seems interesting to me that what we refer to as the conscious mind is unconscious a third of each day, while the part we call unconscious is active 24/7.
I'm out of my depth here, but a high-level response:
First, I don't think the "unconscious" part is a single process, but myriad processes, and I'd bet they wax and wane.
Second, the "conscious" part is the part that can reason about itself and think abstractly. I think it would be correct to say it's doing higher level computations. The important part is that this is more costly - it's not optimized because it has to be flexible, so it would make sense that it's resting as often as possible.
So, one high-performance, high-power, general-purpose processor to handle the foreground task, and a bunch of low-power processors for background tasks.
Looks like ARM got it right with its big.LITTLE architecture. :)
When I have a deeply engrossing unitary project at work (i.e., not one of five tasks but one task for months), I had better start commuting by train and cut out the driving. I have lost two cars by not doing that. Fortunately no one was hurt. One car I had towed to the work parking lot, and just never thought about it until some time after the project, when it turned out the office had it towed off as unknown junk. The project went well.
Oof, that's kind of scary. Sorry you experienced it, and glad nobody was hurt.
I had a workmate at a gamedev place I worked (so lots of deep technical challenges) who apparently regularly missed the motorway exit for work because he was thinking about what he was working on.
I guess the point is even if the distraction is 100% internal you should still do your best to pull yourself out and get your focus back on the road.
Also I do not agree with the premise that we can only think about one thing at a time.
We routinely communicate with multiple people at once and also communicate with the same persons in multiple threads of conversations.
Of course this means that we switch between those tasks and do not really do them in parallel. At most we listen to one person, answer a second via speech, and a third via text, while thinking about what to respond to a fourth.
This is the part that bothers me. I can definitely think of multiple things at a time. It really just depends on the complexity of the tasks. I can listen to and process an audiobook while driving to work every morning, for instance. I definitely can have multiple thoughts in parallel. I remember when I used to recite prayers, I would be reciting the memorized prayer while thinking about other things. Both things were happening at the same time. The memorized task takes less processing power, but it still requires some thought to execute.
I don't know what ever became of the line of research, but there was a very interesting book I read decades ago called Descartes' Error by Antonio Damasio that examined case studies of patients who had their corpus callosum severed, resulting in a split brain. You could show their left and right eyes different images and ask them what they saw, and they would write and speak different answers, because speech and writing are controlled by different brain hemispheres.
This seems to suggest that any bottleneck in conscious attention is not an inherent limitation of an animal brain but rather a consensus mechanism we've developed to keep our chain of experience coherent. If we get rid of the constraint that all of our external communication channels need to present the same narrative, we can seemingly process more information even when it requires being a conscious center of attention.
It's like UIs being single-threaded, because otherwise you would have chaos if several background threads are trying to update the UI at the same time.
We think about many things at a time. But those with malfunctioning brains that have an internal monologue going constantly mistake that monologue for their thoughts, and so it must be "one thing at a time". The language they experience their monologue in is, by its very nature, sequential; you can't speak, or even hear/understand, two parallel streams of speech.
>Our brain does plenty of thinking in the background that leads to "a-ha!" moments even
That's not "in the background". That's the real you, your real mind. That's the foreground. But, if your brain malfunctions as many do, then the monologue shows up and crowds out everything. Sometimes it is apparently loud enough that it even prevents those "a-ha!" moments.
>but our language coerces our thought patterns into being linear and non-concurrent?
The language should just be discarded. What you want is an internal silence.
I wouldn't say it's language so much as unnecessarily added language. Words and sentences can appear and be useful, but there is a lot of mental activity that is not essential, just added-on responses to things. I wouldn't say a component that generates comments is a broken brain, but believing the comments, or the beliefs embedded inside them, can break your contentedness.
Heck, I can type way faster than 10 bits per second, even after gzipping the output.
And when I consider the amount of sensory information that I consciously process (not that comes in, but that I conceptually analyze), it's got to be way higher.
> "Quick, think of a thing... Now I'll guess that thing by asking you yes/no questions." The game 'Twenty Questions' has been popular for centuries as a thinking challenge. If the questions are properly designed, each will reveal 1 bit of information about the mystery thing. If the guesser wins routinely, this suggests that the thinker can access about 2^20 ≈ 1 million possible items in the few seconds allotted. So the speed of thinking – with no constraints imposed – corresponds to 20 bits of information over a few seconds: a rate of 10 bits per second or less.
As the answerer, if you have a wide vocabulary or if you're a technical person then it's not too difficult to routinely choose words the other person simply does not know so that no amount of yes/no questions will get them there.
Obscure medical terms (phlebotomy), names of uncommonly-known stars (Fomalhaut), obscure data structures (cache-oblivious lookahead arrays), mathematical constants (Feigenbaum's constants)... The list goes on and on!
The point I'm trying to make is that most people who play Twenty Questions aren't trying to maximize the number of bits per second in their answer. They're actually trying to play semi-cooperatively. The fun part of Twenty Questions is when the other person guesses your word with as few questions remaining as possible. Having them get all the way to 20 and then you tell them "no you were way off to guess toothache, it was actually temporomandibular joint dysfunction" makes you look rather unsporting!
Thus, since I think we can expect people who play Twenty Questions to actually try to choose a word they know the other person can guess within the space allowed, we can reasonably conclude that using the game as a way to establish some sort of rough constraint on the speed of thinking (in bits per second) is way off. In fact, I know from my own experience playing the game that I will think of and discard many words in a short time as I try to find one that will be in the sweet spot of challenge for the other person to guess.
So, in the context of random word lookup with filter for things, we have a latency of a few seconds and a total selection of 20 bits.
Meanwhile the machinery in understanding that it is a game, processing the audio input of the question, producing the output of the answer is all taken for granted.
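For concreteness, the "20 bits of selection" part is just a binary search over a shared vocabulary. A toy sketch with a hypothetical million-item word list:

    from math import log2

    N = 1_000_000
    vocab = sorted(f"item{i:06d}" for i in range(N))   # stand-in for a shared vocabulary

    def play(secret):
        lo, hi, questions = 0, N, 0
        while hi - lo > 1:
            mid = (lo + hi) // 2
            questions += 1              # one yes/no question: "does it come before vocab[mid]?"
            if secret < vocab[mid]:
                hi = mid
            else:
                lo = mid
        return vocab[lo], questions

    print(play("item123456"))   # found in at most ceil(log2(N)) = 20 questions
    print(log2(N))              # ~19.93 bits of selection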
If the questions were pre-determined, which they're usually not. Reminds me of Huffman coding and the reason that compression challenges measure submissions looking at artifacts required to run them in addition to compressed size. I tend to agree with OP that this doesn't pass the smell test
English is about one bit per letter. If you type at a very fast 120WPM then you’re right at 10bps. Computers just don’t represent English very efficiently, even with gzip.
In short, you show someone an English text cut off at an arbitrary point and ask them to predict which letter comes next. Based on how successful they are, you can calculate the information content of the text. The result from this experiment was approximately one bit per letter.
Representing it is not the concern of the experiment. I don’t think anyone has a scheme that can do this. But it’s straightforward enough in theory. You create a compressor which contains a simulated human English speaker. At each point, ask the simulation to rank all the letters that might come next, in order. Emit the rank of the actual next letter into your compressed data. To decompress, run the same procedure, but apply the ranks you read from the data stream to the simulation’s predictions. If your simulation is deterministic, this will produce output matching the compressor’s input.
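The "simulated speaker plus ranks" idea is easy to demo with a toy, non-contextual model; here a fixed letter-frequency table stands in for the predictor, whereas a real one would condition on everything seen so far:

    FREQ_ORDER = " etaoinshrdlcumwfgypbvkjxqz"   # the model: most to least common

    def compress(text):
        # emit, for each character, its rank under the (deterministic) model
        return [FREQ_ORDER.index(ch) for ch in text.lower() if ch in FREQ_ORDER]

    def decompress(ranks):
        # apply the same ranks to the same model to recover the text
        return "".join(FREQ_ORDER[r] for r in ranks)

    msg = "the speed of thinking"
    assert decompress(compress(msg)) == msg
    print(compress(msg))   # mostly small numbers; an entropy coder over these ranks
                           # would beat a flat log2(27) ~ 4.75 bits per letter, and a
                           # good contextual model gets close to Shannon's ~1 bit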
Say that experiment is correct. Wouldn't that imply that the information content of a single letter varies based on the possible future permutations?
I.e., the string "I'v_" provides way more context than "con_", because you're much more likely to be typing "I've" than, say, "contraception".
That seems to disprove the idea that a letter is a bit.
Also the fact that there are more than two letters also indicate more than one bit, though I wouldn't want to even start to guess the encoding scheme of the brain
I don’t follow. Of course the probabilities change depending on context. 1 bit per letter is an average, not an exact measure for each individual letter. There are cases where the next letter is virtually guaranteed, and the information content of that letter is much less than one bit. There are cases where it could easily be many different possibilities and that’s more than one bit. On average it’s about one bit.
> Also the fact that there are more than two letters also indicate more than one bit
This seems to deny the possibility of data compression, which I hope you’d reconsider, given that this message has probably been compressed and decompressed several times before it gets to you.
Anyway, it should be easy to see that the number of bits per symbol isn’t tied to the number of symbols when there’s knowledge about the structure of the data. Start with the case where there are 256 symbols. That implies eight bits per symbol. Now take this comment, encode it as ASCII, and run it through gzip. The result is less than 8 bits per symbol.
For a contrived example, consider a case where a language has three symbols, A, B, and C. In this language, A appears with a frequency of 999,999,998 per billion. B and C each appear with a frequency of one in a billion. Now, take some text from this language and apply a basic run-length encoding to it. You’ll end up with something like 32 bits per billion letters on average (around 30 bits to encode a typical run length of approximately 1 billion, and 2 bits to encode which letter is in the run), which is way less than one bit per letter.
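(The entropy of that three-symbol source, for reference:)

    from math import log2

    p = [999_999_998 / 1e9, 1e-9, 1e-9]       # frequencies of A, B, C
    H = -sum(pi * log2(pi) for pi in p)
    print(H)   # ~6e-8 bits per letter: three symbols, yet far less than one bit each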
> I.e., the string "I'v_" provides way more context than "con_", because you're much more likely to be typing "I've" than, say, "contraception".
Yes the entropy of the next letter always depends on the context. One bit per letter is just an average for all kinds of contexts.
> Also the fact that there are more than two letters also indicate more than one bit
Our alphabet is simply not the most efficient way of encoding information. It takes about 5 bits to encode 26 letters, space, comma and period. Even simple algorithms like Huffman or LZ77 need only about 3 bits per letter. Current state-of-the-art algorithms compress the English Wikipedia using a mere 0.8 bits per character: https://www.mattmahoney.net/dc/text.html
>I don’t think anyone has a scheme that can do this
If you substitute "token" for "letter", what you have described is exactly what a modern LLM does, out of the box. llama.cpp even has a setting, "show logits", which emits the probability of each token (sadly, only of the text it outputs, not the text it ingests - oh well).
I don't think anyone actually uses this as a text compressor for reasons of practicality. But it's no longer a theoretical thought experiment - it's possible today, on a laptop. Certainly you can experimentally verify Shannon's result, if you believe that LLMs are a sufficiently high fidelity model of English (you should - it takes multiple sentences before it's possible to sniff that text is LLM generated, a piece of information worth a single bit).
Oh look, Fabrice Bellard (who else?) already did it: https://bellard.org/ts_zip/ and you may note that indeed, it achieves a compression ratio of just north of 1 bit per byte, using a very small language model.
In practice, it is even less. Current state-of-the-art algorithms compress the English Wikipedia using just 0.8 bits per character: https://www.mattmahoney.net/dc/text.html
These letters are jointly distributed, and the entropy of the joint distribution of a second of "plausible" English text is much lower than the naive sum of the marginal entropies of each letter. In fact, with LLMs that report the exact probability distribution of each token, it is now possible to get a pretty decent estimate of what the entropy of larger segments of English text actually is.
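In case it helps make that concrete, the estimate is just the sum of -log2 of the probability the model assigned to each token that actually occurred, divided by the text length. The numbers below are made up; the real ones would come from whatever model you query:

    from math import log2

    def bits_per_character(token_probs, n_chars):
        # token_probs: probability the model gave to each actual next token.
        total_bits = sum(-log2(p) for p in token_probs)
        return total_bits / n_chars

    # Hypothetical probabilities for the 10 tokens of a 40-character sentence.
    probs = [0.21, 0.6, 0.35, 0.8, 0.05, 0.9, 0.45, 0.7, 0.3, 0.55]
    print(bits_per_character(probs, 40))  # ~0.34 bits/char for these made-up numbers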
I think this gets into what you consider to be “information.” Random noise is high entropy and thus high information in one sense, and zero information in another.
Well, the information used in the article is classical Shannon information, so the former. Though I suspect that the entropy of what we can actually "randomly" type is not that high.
Even very fast typists are unable to do stenography without a machine specialized to the task. Speech, in turn, can usually be understood at two or even three times the rate at which it is ordinarily produced. Meanwhile, I can read several times faster than I can understand speech, even at the highest speedup which I find coherent.
Ergo, 10 bits per second just doesn't hold up. It's an interesting coincidence that a reasonably fast typing speed hits that rate, but humans routinely operate on language at multiples of it.
I don’t think a difference of this magnitude meaningfully changes what the paper is talking about. They already have other human behaviors in their table with bit rates up to 5 times higher. Even if you set it at 100bps it wouldn’t change much. They’re addressing a difference of eight orders of magnitude. Making it seven instead of eight isn’t all that important.
If someone types English for a minute at 120WPM then they’ll have produced about 600 bits of information.
Are you saying we should consider the rate in a smaller window of time? Or we should consider the rate when the typist is producing a series of unrelated English words that don’t form a coherent sentence?
Take for example a human typist working from a hand-written manuscript. An advanced typist produces 120 words per minute. If each word is taken as 5 characters, this typing speed corresponds to 10 keystrokes a second. How many bits of information does that represent? One is tempted to count the keys on the keyboard and take the logarithm to get the entropy per character, but that is a huge overestimate. Imagine that after reading the start of this paragraph you are asked what will be the next let…
English contains orderly internal structures that make the character stream highly predictable. In fact, the entropy of English is only ∼ 1 bit per character [1]. Expert typists rely on all this redundancy: if forced to type a random character sequence, their speed drops precipitously.
[1] Shannon CE. Prediction and Entropy of Printed English. Bell System Technical Journal. 1951;30(1):50-64.
You show a bunch of English speakers some text that’s cut off, and ask them to predict the next letter. Their success at prediction tells you the information content of the text. Shannon ran this experiment and got a result of about 1 bit per letter: https://archive.org/details/bstj30-1-50/page/n5/mode/1up
OK. When talking about language I find it's always good to be explicit about what level you're talking about, especially when you're using terms as overloaded as "information". I'm not really sure how to connect this finding to semantics.
If the text can be reproduced with one bit per letter, then the semantic information content is necessarily at most equal to N bits where N is the length of the text in letters. Presumably it will normally be much less, since there are things like synonyms and equivalent word ordering which don’t change the meaning, but this gives a solid upper bound.
It would be a lot less abrasive to say "It's linked elsewhere, but here it is: https://arxiv.org/abs/2408.10234" or some variation, instead of saying "it's here somewhere, go find it".
with all due respect, it was meant to be slightly abrasive. it's understandable (?) not finding something when the thread has hundreds of comments, not so much when the thread had like 15-20 comments.
I was iterating over the different citations for bitrate; at least some of them, like StarCraft and the Rubik's cube, are literally Guinness World Records entries that amount to a tiny blurb about APMs and a video of a guy solving the Rubik's cube.
Going from APM and/or image wiggling to "bits per second" is hilariously reductive, and I struggle to see this response as anything but woefully incomplete at convincing this reader.
And yeah, my immediate response to reading the title was "where the hell are they getting that number", so I have gone and looked and am unsatisfied.
It seems weird to me. They say 10 bits/sec of "behavioral throughput."
Have they not seen a football match? The brain controls 600 or so muscles in a rapid manner. That alone must be a lot of bits per second, certainly far more than computer-controlled robots manage.
Re:
> Why does the brain need billions of neurons to process 10 bits/s?
Tesla's FSD cars have a lot of processing power but still struggle not to drive into fire trucks. You probably need a lot.
I beg you, please read the paper before commenting. It's very interesting and it answers a lot of questions that might arise from just skimming the title.
Asking people to read the article before commenting? A commonsense suggestion that needs to be made makes me smirk inside, not the least because I am guilty of this, too, around here. (But not this time, thank you, kind Sir.)
As to being in a "gray area", have you read the posting guidelines? ;-)
I'm pretty sure it says we shouldn't say things like "read the article" or "you haven't read the article, have you?" in our comments.
Anyway, I'm laughing at this community (myself included) and the fact that your innocent and well-intentioned comment needs to be said here. And it did and does, my friend!
I am very very annoyed by many of the shallow "it's obviously wrong" comments on this story. And thank you to those rebutting more politely than I feel inclined to.
It's a fascinating paper and something that I have been interested in since before [0] and ties in to a strand of work in my PhD research. Also see for example [1].
[0] Stevens, M. Sensory Ecology, Behaviour, and Evolution, OUP Oxford, 2013, ISBN 9780199601783, LCCN 2012554461
[1] Coupé, Christophe and Oh, Yoon Mi and Dediu, Dan and Pellegrino, François Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche, American Association for the Advancement of Science (AAAS), 2019-09, Science Advances, volume 5, report/number 9, ISSN 2375-2548, doi:10.1126/sciadv.aaw2594
The article is not even wrong imo, it is nonsense. E.g. when we speak we convey much more information than just the words we say. We communicate "information" using intonation, changes in the rate of speech, body language etc. Statements like "10 bits per second" are ridiculous clickbait, and cognitive scientists should study cognition in more ecologically valid settings if they want to make any sense.
You appreciate that the contrast that is being highlighted in the paper is the missing 8 orders of magnitude? A little bit more than the OOM baseline claim of 10bps highlighted is neither here nor there.
Put a bunch of people in an empty room, lacking as many details as possible. Try to minimise any environmental stimulation. Have the people dress as plainly and boringly as possible. Think of Diddy's white parties, all white. Have people sing and dance and do stuff. Have a person just watch them. Do you think the person will find that this context is understimulating? Do you think that their "perception bandwidth" is going to drop, or that they will have the same feeling as when looking at an empty wall or some painting for several minutes? I don't think so, and if not then we have to think about where the information is coming from, who produces the processes that encode this information and how they produce them.
Only 10 beatings per second? This is a just universe, Sir!
On a serious note, enlightenment only cures us of our selfish curiosity, i.e. any action which causes harm to others. The Way requires us to harmonize with universal compassion, so there is take and give (especially with regard to our required sustenance), but we definitely lose our propensity to experiment with our power at the expense of others. No, we are to increase our curiosity in how we can better help others, help being the cornerstone of compassion.
I don't need to read the paper. The problem is that mechanical systems have inertia and are limited in their ability to change direction and thereby their ability to signal discrete information.
> In particular, our peripheral nervous system is capable of absorbing information from the environment at much higher rates, on the order of gigabits/s. This defines a paradox: The vast gulf between the tiny information throughput of human behavior, and the huge information inputs on which the behavior is based. This enormous ratio – about 100,000,000 – remains largely unexplained
The GPU is capable of performing billions of operations per second, yet Cyberpunk barely runs at 60 fps. And there is no paradox at all.
By the way, the brain seems to perform better than a GPU at tasks like image recognition. Probably because it does even more operations per second than the GPU.
There is also another comparison. Imagine your goal is to calculate an integral over a 100-dimensional space (or solve a quantum system) and answer whether the result is larger or smaller than zero. This will take an enormous amount of time but produces a single bit of information.
Is the brain better than a GPU at image recognition nowadays? Actually I’m not sure how that’s measured. Certainly a GPU could be tied to a database with a lot more things in it, like you can get some pretty impressive facial recognition demos where it’ll recognize a ton of random people.
But humans can see objects they’ve never seen before and sometimes guess what they might be used for, which is sort of like object recognition but better. (Or sometimes I see an object I’m technically familiar with, like an old tool of my grandpa’s, and remembering what he used it for feels more like imagining… maybe it is).
The only time 'bits' will ever be an appropriate measure of human processing is when we are processing or producing digital information artifacts, e.g. a typewritten document.
Our bodies' systems are biochemical wetware that will never be aptly described using a boolean basis. That is one of the primary problems of society's obsessions with classical notions of gender.
No one is male OR female. We are, every single one of us, a combination of male and female hormones. The more "male" a person is is the result of that balance favoring the male hormones; and vice versa. What humanity is now struggling with is that there are plenty of folks with lots of both or little of either and all kinds of combinations.
Of course, my not being a biochemist means my categorization of hormones into "male" and "female" is, itself, likely to be a poorly booleanized representation of their natures.
We are much more akin to Boltzmann's statistical mechanics description of reality, than to digital logic's boolean description.
Bits are a perfectly acceptable way to measure biological information processing. These are not the boolean logic digital bits like on a computer. They're the more abstract concept of a bit in information theory.
Take the number of distinct possible configurations a system can be in (accounting for statistical uncertainty/biases if needed), take the base 2 logarithm of that number, and you have the bits of information in the system. This can be applied to basically anything, biological or otherwise.
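A minimal sketch of that definition (the second function is the standard Shannon generalization for when the configurations aren't equally likely):

    from math import log2

    def bits_uniform(n_states):
        # Information needed to single out one of n equally likely configurations.
        return log2(n_states)

    def bits_weighted(probs):
        # Shannon entropy: the same idea when configurations have unequal probabilities.
        return -sum(p * log2(p) for p in probs if p > 0)

    print(bits_uniform(1024))                # 10.0
    print(bits_weighted([0.5, 0.25, 0.25]))  # 1.5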
It's more like statistical mechanics and the foundations of the second law of thermodynamics.
Unless entropy is a damned lie. Which I'm not saying it isn't, but claiming such a thing is a pretty strong claim. Possibly one of the strongest claims you can make in physics (which is why it's associated with cranks).
I'd expect some perpetual motion machines after overturning such a principle.
But I do agree you need to be careful defining the scope of microstates and macro states.
> Take the number of distinct possible configurations a system
Easy for an isolated system. The human body is 6000 billion cells, each of them with many possible configurations, most of them sharing and processing information. I respectfully doubt there’s much to do with bits outside of a tiny bit of flesh in a petri dish.
6 trillion * number of configurations of each cell * number of configurations of interactions between cells = (very very large) finite number. The thing about numbers is that they go on forever. There's not a cutoff for "ok, this number is too big for bits".
Take whatever that mind-bogglingly huge number is, take the logarithm base 2, there you go, that's how many bits you need.
Yes, there's no problem imagining or processing numbers, however big they might be.
I just don’t see how that makes practical sense in the context of the article. Doing a full "scan" of a tiny piece of biological material to study it, OK. Doing a full scan of "our body", as MrMcCall describes (and rejects)? I don’t think that’s possible in the near future. I totally agree with Asraelite in theory but don’t think it will apply in reality.
I think I see the disconnect. "Practical application" here does not mean "doing a scan". Don't let the word "bit" lure you into thinking we want to do something on a computer. This is using bits purely as a unit of measurement of information quantity. It applies practically in reality the same way any other unit of measurement applies. The same way we can ask how much something weighs or how fast it's going, we can ask how much information it has.
> That is one of the primary problems of society's obsessions with classical notions of gender.
What you go on to discuss is sex, and sexual dimorphism, which is a remarkably robust way of classification. The "classical" notion of gender (tbh, "classical" doesn't make much sense here) as sex-based is fairly reasonable all things considered. Consider the arguments presented in this essay [0]. That, however, doesn't really mean much for how we should treat people in public who desire to express their gender in different ways, which is, of course, to respect their dignity and desires, in most cases.
Yeah, what I mean by classical would boil down to just genitalia, which doesn't really hold up in how we must respect the person and how they feel and choose to express themselves. Yes, so long as their expressions are not harming others, then we must respect their human right to choose who they are.
I've got to give a huge hat tip to Suzi (Eddie) Izzard, who -- beyond their being just a brilliant comic and generally good human being -- taught me and my fam about how the spectrum of human configuration is way more complex than just male and female.
The use of "bits" here doesn't mean we are working in binary.
It is more like the way it is used in information theory. The number of bits is log2 of the number of states that can be represented, and it doesn't have to be an integer. For example, with 10 bits of information, we can distinguish between 1024 different states, it can be 1024 colors for instance, or 1024 genders if you wish, it doesn't matter, the important part is that there are 1024 boxes to put things in, no matter what they are. Of course, it doesn't mean that only 1024 colors exist in the universe, there are an infinity of them, but with 10 bits, you can only distinguish between 1024 of them. If you want more, you need more bits, if you can do with less, you need less.
By the article's results, it means your "inner brain" can process one color with 1024 nuances per second, or 2 independent colors with 32 nuances each per second. If the colors are not independent, it can process more, because if, say, you know that the two colors are highly contrasting, you don't have to allocate "boxes" for non-contrasting combinations, which may free up some boxes for more nuances; so you may, for instance, process two contrasting colors with 100 nuances each with these 10 bits.
A bit is the fundamental unit of information theory, and has nothing to do with digital logic in this context. No one is saying "ok use one bit to encode male or female". No one is saying "ok 101 means serotonin, and 110 is dopamine". What they are saying is that the information content produced by a human being can be compressed down to about 10 bits per second, but this is a statistical description.
"Compression" here is nontechnical, and I was using it by analogy as an aid to intuition. I didn't want to throw out the E word (entropy) unnecessarily.
Boolean logic extends just fine to handle complexity. Instead it’s the intuitive classification people come up with that are often a poor fit for reality.
Is someone’s DNA consistent throughout their body? Y/N
Does someone have any chromosomal anomalies? Y/N etc
Similarly it’s very possible for a girl to suffer from abnormally low testosterone levels, which doesn’t fit with how the public thinks of it as a gendered hormone. During puberty it normally spikes in both girls and boys. From a range of (2.5 - 10) in prepubescents, the typical range in puberty is much higher for boys (100 - 970) vs (15 - 38) for girls, but that doesn’t make it a male hormone, just a pathway used differently.
In biology, or really most sciences (math being an exception), the more closely you examine a delineated this or that categorization, the more you realize it's a scale, a range, or something fuzzy.
Like even things we talk about regularly, like touch and space, are vague in the details. Is it still touching if the repulsive force between electrons is keeping the nuclei apart? Where does empty space begin and an atom end? Is it after the electron shell? Outside of its repulsive force? Some hybrid value?
I can always be awestruckdumb by the understanding that we are all mostly space inhabited by fields, our littlest bits vibrating at mindblowing speeds.
What you're saying is interesting but I think the causality is backwards here and I can provide some examples to show why.
(By male hormone I'm assuming you mean testosterone, and by female hormone I assume you mean oestrogen.)
If being "more male" came from having more testosterone (and vice versa), then logically when children go through puberty and develop into adults, they would become "more" male or "more" female.
As adults become elderly and naturally produce less sex-associated hormones, they would become "less" male or female.
(Fetuses do not all begin in the womb as female, that's a common misunderstanding. We start off physically undifferentiated, and develop along a genetically predetermined pathway as we grow. Some animals use temperature or other environmental triggers to pick, humans use genes.)
Would that mean a male bodybuilder who injects testosterone is more male than a man that doesn't? His phenotype may become visibly more masculine, but that doesn't change his sex at all. Same for a female bodybuilder that injects testosterone - she may develop stereotypically male physical characteristics like large muscles and a deeper voice, but her sex is unaffected.
The causality is the other way: being male - or - female results in a physiology (adult testicles/ovaries) that produces sex associated hormones in larger or lesser degrees depending on the person (and in some cases very low amounts or not at all).
This makes sense if sex is a binary (with rare differences of sex development - detailed here https://www.theparadoxinstitute.com/read/sex-development-cha... ) that results in different levels of sex hormones in the body and resulting phenotype. So yes, everyone is male or female.
(I'm not referring to gender here - I'm talking only about sex)
If there's a spectrum then some men could be biologically "more male" than others and vice versa for women. I've not seen any evidence of this myself, but I'm happy to be proven wrong!
it is a categorization, like all things in biology. One of the most robust and significant ones for all of life is sexual versus asexual reproduction. It is intentionally blurring understanding to say that it is not a binary. This is not a gaussian situation, and not fitting into this categorization is exceedingly rare due to defect/mutation which largely does not proliferate genetically.
I think you've mixed up a few mostly unrelated things together to make a point. You're correct in that the larger point to be made is that analog and digital computing are paradigmatically distinct and analogies are hard to draw across that divide.
However, "bits" is just a quantity of information in a certain base. We could discuss it in "nits" if you prefer. The point is that information per se remains real even if the specific representation is based on some assumption of digital computing.
The rest of your comment is unfortunately out of scope of this article although it deserves some discussion on its own merit.
Definitely a neat article insofar as it summarizes many observations about task related “bandwidth” of human output.
Summarizing other comments - calculating bandwidth in a task-specific way neglects metacognition: the Rubik’s cube solver can rapidly recover if someone shoves them when their blindfold is on; the memory champion can notice a pie being aimed at their face and duck. I think that there’s still something interesting around our inability to multitask conscious behaviors, but the overarching conclusion is flawed.
I think the authors are using information theory to inappropriately flatten the complexity of the problem. On one hand we have “bits” of pre-processed sensory measurement data, then on the other hand we have “bits” of post-processed symbolic data: in many cases directly so via human language, but that would also include “the Terran unit moved a short distance” as a compact summary of a bunch of pixels updating in StarCraft. This even extends to the animal examples: the 10 bits/s figure applies to higher-level cognition. The crucial difference is that the sensory bits can be interpreted via the same “algorithm” in a context-independent way, whereas the higher-level cognition bits need their algorithms chosen very carefully (perhaps being modified at runtime).
So I am just not sure why 10 bits/s of symbolic data processing is especially slow in the first place. We don’t have a relevant technological comparison because none of our technology actually processes data in that fashion.
The authors of the article are smuggling in the assumption that 10bits/s is slow.
It's slow when compared to the general-purpose computing systems we've implemented in a silicon substrate.
But this assumption doesn't translate directly to the brain's throughput or to the perception of existence.
In my opinion the hypothesis is meaningless.
That is not to say the article is meaningless. Actually being able to measure brain information throughput is amazing. It's only that slowness isn't absolute.
> thinker can access about 2^20 ≈ 1 million possible items in the few seconds allotted
Huh, no? No one is able to think about a million items in a few seconds.
The 20q thinking process involves bringing an incomplete set of abstract categories and asking a question that divides these categories into two halves (binary search).
You don't even start from scratch, using previous experience (cache) to reuse whatever worked best the last time.
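For what it's worth, the paper's 2^20 figure is just the binary-search arithmetic. A toy sketch of the "halve the candidate set with each question" view, where a list of a million integers stands in for whatever categories the player actually uses:

    from math import ceil, log2

    def questions_needed(n_items):
        # Each well-chosen yes/no question halves the candidate set.
        return ceil(log2(n_items))

    def locate(items, target):
        # Binary search over a sorted list, counting the yes/no "questions" asked.
        lo, hi, asked = 0, len(items) - 1, 0
        while lo < hi:
            mid = (lo + hi) // 2
            asked += 1
            if target <= items[mid]:
                hi = mid
            else:
                lo = mid + 1
        return asked

    print(questions_needed(1_000_000))              # 20
    print(locate(list(range(1_000_000)), 123_456))  # at most 20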
The optic nerve has an information rate of around 10 Mbit/s (ref https://pmc.ncbi.nlm.nih.gov/articles/PMC1564115/). Concentrating only on the symbolic thinking speed seems unnecessarily restrictive.
Really fun paper. I especially enjoyed this section:
> Based on the research reviewed here regarding the rate of human cognition, we predict that Musk’s brain will communicate with the computer at about 10 bits/s. Instead of the bundle of Neuralink electrodes, Musk could just use a telephone, whose data rate has been designed to match human language, which in turn is matched to the speed of perception and cognition
It might be, though, that even if our processing rate is limited to 10 bits per second, shortening the communication loop between the helper AI and the human would let the human switch subjects more productively by getting faster feedback. The human would be in an executive approval role, like the lead character in Accelerando with their agents, assuming they trusted their agents enough to delegate to them.
We process information at the rate we do because that's what works for adaptation to our environment. That's how all biological things develop. Some things are remnants of old adaptations, but generally speaking, we are the way we are because that's what enables our survival. If it seems "slow", it's because it doesn't need to be faster.
Better questions would be, why are we so weak? Why are we defenseless? Why are we hairless? Why do we move so slow? Most other mammals our size are much better equipped for survival. Obviously other beings "perform better" in many ways. Yet our fleshy, weak, slow, gangly, shivery bodies are suited for thinking, for adapting, for communicating and collaborating. However unoptimal or "slow" these authors think our brains are, they are obviously perfectly capable for what they need to do, seeing as we dominate the entire planet.
In any system design, every decision tends to be a tradeoff. You can gain CPU power, but it will cost you energy, heat, and probably stability or longevity. You often do not know what a "feature" will do until you add it. So it's wiser to only add the features you need to accomplish your immediate goals.
If at some point in the future, our survival is existentially threatened by our lack of "processing power", our brains will adapt. Until then, leave well enough alone.
I tried to read the article, but celldotcom has a presumably very high bit rate robot that promptly questioned my humanity, so I did the dishes and ate lunch, but that didn't get through somehow as proof (of humanity). And so my recourse is to read the comments here, to try and get the gist of the argument, but even well fed, doing 11 of maybe 12 bits per second, there does not seem to be any point in quibbling with reality. Maybe after a bit of shock-o-late icecream (B and G chockolate therapy with esspresso beans added).
What would it really even mean for our brain to be fast or slow? Relative to what?
Inherent speed does not matter and is ill defined, it only matters relative to the environmental processes we have to react to. We’re already orders of magnitude faster than processes like geology and weather, so it is really only other animals where speed matters, and since we’re all using the same basic hardware, we’re on a relatively level playing field there (except for house flies it seems, lol). Time as we understand it may be as much as a cultural construct as anything else (see, for example Timeless physics). Some neurodivergent people, and people from very different cultures don’t experience anything like what most people in our culture refer to as time.
As for thinking about one thing at a time - I am absolutely certain this is false. Our subconscious operates massively in parallel, and we only have one conscious thought at a time. But it has an interrupt system that lets it either act instantly itself in an emergency, or bring things to consciousness when they are important. I’m sure everyone has had the experience of reacting quickly to an emergency with no conscious thought, or suddenly knowing the solution to a problem you hadn’t even been consciously thinking about.
> Why can we only think about one thing at a time?
This sort of reasoning seems to be a symptom of inadequate communication/jargon/diction for describing mental faculties. Many times during serious thought there's no discrete "number of thoughts" occurring at all: there's just a hazy mental process that resolves to some result, and often many results. This reminds me of the "80% of people have no inner monologue!!!" bullshit that went around recently.
I think the discussion on serial vs parallel processing is incomplete in the linked paper, and is one reason I think the 10 bits/s bottleneck is an incomplete or misinterpreted result. Here’s a review with sources on serial processing, https://journalofcognition.org/articles/10.5334/joc.185
> Cognitive psychology has mainly focused on structural and functional limitations of cognitive processes when facing multitasking requirements. Structural limitations assume strict serial processing for at least one processing stage, while functional limitations assume flexible, parallel processing only limited by the number of available resources. Human movement science, on the other hand, emphasizes the plasticity of cognition and training possibilities. As both approaches have provided ample empirical evidence for their views but have predominantly worked in isolation, this example clearly illustrates the need for a more integrative approach to multitasking. A challenge for the contemporary research on multitasking is to bring together the issues of structure, flexibility, and plasticity in human multitasking, offering a new integrative theoretical framework that accounts for this fundamental aspect of human behaviour.
From one of the papers cited by the above reference (Hommel 2020),
> A closer look reveals that the questions being asked in dual-task research are not particularly interesting or realistic, and the answers being given lack mechanistic detail. In fact, present theorizing can be considered mere empirical generalization, which has led to merely labeling processing bottlenecks rather than describing how they operate and how they actually produce the bottleneck.
So, while I applaud the authors on generating buzz and discussion, I think their promising work will benefit from more serious consideration of the underlying neurophysiology.
> More generally, the information throughput of human behavior is about 10 bits/s.
I'm sorry, I just can't take this article seriously. They make a fundamental mistake of encoding and assume that information is discretized into word-sized or action-sized chunks.
A good example is a seemingly discrete activity such as playing a musical instrument, like a guitar. A guitar has frets and strings, a seemingly small number of finite notes it can play. So it would seem a perfect candidate for discretization along the lines of the musical scale. But any guitar player or listener knows that a guitar is not a keyboard or midi synth:
1. The attack velocity and angle of the pick convey aggression and emotion, not just along a few prescribed lines like "angry or sad or loud or quiet".
2. Timing idiosyncrasies, like being slightly before or after a beat, speeding up or slowing down, or even playing arhythmically; the entire expression of a piece of music is changed by subtleties in phrasing.
3. Microbends. The analog nature of strings cannot be hidden entirely behind frets. Differences in the amount of pressure, how close to the fret the fingers are, and slight bending of the strings, intentional or unintentional, static or dynamic, change the pitch of the note.
4. Non-striking sounds like the amount of palming, pick scraping, tapping, and sympathetic vibrations.
Of course there are lots of other things. All of these things make the difference between a master guitar player, say Hendrix, and someone just playing the same notes.
And yes of course we can consider the encoding of the audio coming out of the guitar to be information--at a much higher bitrate, but what about the facial expressions, body language, etc? There are tons of channels coming off a musician, particularly live performances.
This entire article just misses these in picking a quantized encoding of information that of course has a low bitrate. In short, they are missing bazillions of channels, not the least of which is expression and timing.
Why not? Processing an input of 10^9 bits/s, making sense of all of that, and contrasting it against all your existing knowledge, with an output speed of 10 bits/s? It is not so bad. At least if we were really processing all that information in the same way.
It had to be enough to let us survive, in the context of the challenges we faced through most of our evolution. We took a lot of shortcuts and trims there, that is why we have a system 1 and system 2 in place, with a lot of builtin cognitive biases because of that.
The answer is that we defined the second after the time period it takes for us to think about something; if our thinking were faster, the second would also be shorter, and we would still feel like we only think about one thing per second.
> Plausible explanations exist for the large neuron numbers in the outer brain, but not for the inner brain, and we propose new research directions to remedy this.
Going out on a limb here, but perhaps we shouldn’t modify the biological composition of the human brain.
It seems the authors conflate a problem being easy to state with little processing power being needed to solve it. This obviously isn't true; very complex mathematical problems can be stated in very few bits. Human interactions are often extremely complex, even though they are relatively slow.
Reading a text isn't about matching symbols to words. It is about taking these words and putting them into a social context, potentially even doubting their content or imagining the inner world of the author. Obviously that is what the "inner" brain (whose existence seems very dubious to me) has to do.
I see there are some people in the thread that doubt the low bandwidth between conscious thought and the rest of the central nervous system.
Do you also doubt that you're actually living half a second in the past, with the brain compensating for this lag between initial perception and conscious reception of the indirect effects of it?
> In contrast, central processing appears to be strictly serial…
and then they proceed to give misinterpreted evidence of serialization, because they’re making assumptions about lower-level biochemical behavior based on higher-level tissue performance. In fact, that tissue-level behavior isn’t correctly described either.
"In contrast, central processing appears to be strictly serial: When faced with two tasks in competition, individuals consistently encounter a “psychological refractory period” before being able to perform the second task broadbent_perception_1958, pashler_dual-task_1994. Even in tasks that do not require any motor output, such as thinking, we can pursue only one strand at a time."
Clearly they're not talking about "neuronal transmission", but tasks, and further more, they cite their sources.
I wasn’t being “dishonest”, I couldn’t copy/paste the entire text on my phone.
I addressed the rest of that statement in my comment by noting that you can’t make the same assumptions about biochemical reactions and emergent behaviors of tissues.
Secondly, even from a neurophysiology perspective, their cited evidence is misinterpreted. Any basic dual N-back task proves their central thesis incorrect.
"It is by will alone I set my mind in motion. It is by the juice of Sapho that thoughts acquire speed, the lips acquire stains, the stains become a warning. It is by will alone I set my mind in motion."
The back of the envelope computations are shockingly shallow and meaningless.
(20 Questions, from the intro) Trying to think of a thing for the game is not a search over a set of known things. Just saying the possibility set has size 2^N doesn't mean that choosing something in the set consists of processing the set. But even if that were the case, and if you do consider each of 2^N options, the consideration process itself is not trivial and probably varies wildly.
(English typing) Touch typists do not (only) simply convert an existing/known string to a sequence of hand actions by mapping character to action. There are whole words and sequences that become units/tokens from the standpoint of muscle memory and processing (this will be relevant to the Rubik's cube topic as well). When I type, there's a sort of planning and queueing of actions, but there's also monitoring of those actions that allows fast error correction: pressing delete a number of times, or holding it and determining when the error point has been reached, and resuming afterward. Of course the process likely varies from person to person, but there are so many other things going on that should count as part of the information processed in this simple behavior that the example and numbers used in the paper for it are utterly useless even as estimates.
(Rubik's cube blind speed solving)
Again we see reference to the entire possibility space (from the perspective of possible configurations of the puzzle). But solvers do not identify the configuration they encounter with reference to the space, nor do they search the space for it. They look for patterns and ignore what they cannot use for the strategy they have practiced. The cuber often does not commit to memory the whole configuration, but will often convert it to a custom and bespoke mnemonic. It's just utter nonsense to refer to the number of possible configurations, it has nothing directly to do with what the human is doing.
If I memorize a 30 word passage, i have not "processed the set of possible 30 word passages".
I’m interested in the phenomenon that no matter how messed up (or absent) my sensory input becomes it doesn’t seem to impact my “inner observer”. The only thing impacting it is sleep or loss of consciousness, yet even then there can be moments during these states that this observer can become lucid.
It’s like the science of the soul, and one notable implication is dementia: when people become unable to function and recognize things, I wonder if that “inner observer” is still just as intact as it is on a huge dose of LSD?
I skimmed over the paper so sorry if I didn't see it, but isn't the most obvious answer that (at least conscious) human decision making happens at a high level of abstraction? Information output or processing in terms of bits, say the numbers of words in a sentence, isn't equivalent to the meaning or effect that has in the world, which is what matters.
If a general orders an army to go to war that's a few bits of text on a piece of paper, but obviously that bears no relation to the consequence of what that puts into motion in the real world. It's not like we spend most of our days speedcubing or writing and reading the dictionary. We aren't low level data processors even though that might happen unconsciously somewhere in the sensory system, but the conscious mind is a reasoning system.
"For each move, AlphaZero searches only a small fraction of the positions considered by traditional chess engines. In Chess, for example, it searches only 60 thousand positions per second in chess, compared to roughly 60 million for Stockfish."
Not to draw too many parallels between the human brain and these systems, but they do obviously share the similarity that higher-order conceptual decision making, compared to just data processing, will result in lower rates of decision making at, at least, the top level. That's for me what you'd expect to happen, not a paradox.
Bits are a unit of measurement we use in relation to computers. Humans are not computers. Do not use bits to measure anything in relation to humans. Stop thinking of humans as computers. It's dehumanizing, unhealthy, and a very narrow way to think of people. Using a computer model for humans is useless at best and misleading at worst.
These types of articles are so fundamentally flawed... it beggars belief. Why not ask the opposite question: if bandwidth works the way they describe, why can't H100 GPUs (3TB/s of bandwidth, roughly 2.4 trillion times 10 bits/s) perform sensorimotor tasks trillions of times faster than a human? (Spoiler alert: they cannot.)
<s> Could it be... there is a bit of a straw man argument here? About how much information it actually takes to input and output a complete sensorimotor task? I dare say! </s>
Just for playing any sport the accuracy to instruct 100s of muscles to work in a certain way is certainly above that 10bits.
Pointing out positions in a 10cm x 10cm x 10cm cubic volume seems to be possible significantly faster than 1/s.
The slower examples listed in the table all have some externalities, like the overhead of a motor/object-manipulation feedback loop (speed cubing) and/or redundancy, and are not optimized for pure information density, so I have no idea why they settled on that average and not the optimum.
Object Recognition and Reading are already at ~50 bits.
> Just for playing any sport the accuracy to instruct 100s of muscles to work in a certain way is certainly above that 10bits
But significant portions of that process are not done by the conscious brain, and some aren't done by the brain at all (reflex and peripheral nervous system). We don't consciously think about each of the 100 muscles we're switching on and off at rapid speed.
This is such a bad paper. Almost all calculations and equations look like back-of-the-envelope calculations.
A decent researcher would have provided some tests to their hypotheses.
The numbers cited and used in calculations are supported by citations. The purpose of this paper is not to test a hypothesis, or to gather new data, but to think about existing data and new directions of research. This is spelled out in the paper's abstract, which is kind of summary of the whole paper, useful to get a very quick idea about the paper's purpose -- expanded further in the paper's introduction and re-visited again in the paper's conclusion.
Thank you for explaining what an abstract is... The fact that those numbers come from a citation doesn't make them true.
This is a badly written paper that a decent researcher wouldn't have written (and I know that the author has many papers, I am speaking about this one) and a decent reviewer would have rejected. A paragraph about Elon Musk? Guesstimates on information rates?
As a blog post would have been okay-ish, as a scientific paper is quite bad.
There are no measurements here, I can guess the weight of an apple based on some prior (which my brain stores as some continuous distribution, not bits), but I am not measuring it.
It's incredibly tiring that bad science is sold as good science only because it comes from some fancy university. This paper is crap and should be treated as such.
My thoughts exactly. It makes no sense to me that what I'm thinking and perceiving in real-time is the equivalent of 10 bit/s of data.
Here's a simple reaction test: https://humanbenchmark.com/tests/reactiontime
This measures latency not processing power. You can have 10 b/s or 1 Gb/s of processing power with the same latency.
yes, but the greater the latency, the greater the number of bits per period required to get the same throughput.
I come in around 250 ms on the latency, which means that to get up to 10 bps I'd need to actually be able to discriminate between 6 choices each time.
To get up to 100 bps?
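To spell out the arithmetic implicit in this reaction-time discussion (a naive serial model of one decision per reaction, not a claim about how the brain actually works):

    from math import log2

    def bits_per_second(n_choices, latency_s):
        # One decision among n_choices resolved every latency_s seconds.
        return log2(n_choices) / latency_s

    print(bits_per_second(2, 0.25))      # 4 bps for a binary choice at 250 ms
    print(bits_per_second(6, 0.25))      # ~10 bps, as noted above
    print(bits_per_second(2**25, 0.25))  # 100 bps would need ~33 million alternatives per decision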
Can you point me to the book that says that "Booleans" are encoded in the human brain just like in a binary computer and they take exactly 1 bit to store? I mean, why not assume error correction, while we are at it, or some kind of compression algorithm that reduces the size further?
If that sounds ridiculous to you, you are beginning to get it. Every single brain cell works on electric potential (a continuous value). Not a single one of them can be said to be in the "zero" or "one" state (a discrete value).
"bits of information" doesn't mean 1's and 0's. "bits" is a measure of entropy.
If you send an analog signal over a wire and the receiver on the other end can distinguish between 1024 different voltages reliably, then you sent 10 bits of information. Even though you sent neither a 0 nor a 1, but an analog voltage.
It's about the "information" as an abstract concept, not about how that information is encoded as data sent over some medium. I can send you the same thing thousands of times. I would have sent you lots of data, but 1000 copies of the same thing still only contains as much information as a single copy does.
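A quick way to see that last point, using an off-the-shelf compressor as a crude stand-in for "information content" (exact sizes will vary):

    import gzip

    msg = b"the same thing, sent over and over again. "
    print(len(gzip.compress(msg)))         # one copy
    print(len(gzip.compress(msg * 1000)))  # 1000 copies: nowhere near 1000x the size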
Can the combination of mind and body only process reactive tasks serially, or can it process them with some amount of parallelism?
And is 100ms (I struggled to get below 180ms, myself) only the time needed to process, or does that include IO latency?
If it includes IO latency, then is there a buffer?
According to psychology research on human perception there is indeed something that you could call buffered IO in the human brain.
It's used in theories to explain why you can hear your name in a short sentence in a crowded room and then know what the sentence is. Including one or two words before your name. While if your name wasn't said, no attention is diverted to that sound and the "buffer" contents get dropped without retrieving the information.
Yes, and there's different buffers with different capacity for different types of perception according to these theories. Look up Baddeley & Hitch for details. I would guess there are similar buffers for other senses that didn't make it in the lectures in college.
> or can it process them with some amount of parallelism
I guess someone with two boxes handy could set them up next to each other and run two copies of this test to see if their reaction time holds up or if it lengthens?
EDIT: mine suffers greatly on dual wield: 225 -> 320 ms
see also https://news.ycombinator.com/item?id=42453704
I'll have a good think about whether we can process things in parallel or only in series the next time I decide to walk and chew gum while bouncing a ball, not tripping over things, and staying out of traffic.
none of those activities need involve conscious decisions, however
Having a think is not a conscious activity?
that, yes, sorry.
I meant walking, chewing gum, dribbling a basketball, and avoiding traffic.
Depends on expertise. Consider a toddler learning to walk, while eating a snack, carrying a toy and avoiding the cat.
Driving stick is a good example, as shifting gears requires conscious thought at first and becomes mostly automated with experience.
This might be the worst possible way to measure the net throughput of conscious decision making I've ever seen. A wildly ecologically invalid, serialized discrimination task.
Now take someone navigating a social situation with three simultaneous participants, actively listening, coordinating body language, interjecting, responding to questions... and you have a human being operating at a "bitrate" of information that is so many orders of magnitude removed from this bullshit psychometry task as to prompt the question: "what the actual are they talking about".
Using a computer analogy to try to understand the brain functions is ok as long as we remember that the brain is not actually one of them. There have been different technological analogies throughout history including hydraulic systems, telegraphic and telephonic lines, all of them might seem comical today but they were the best explanation at the time, same will be in the future with ours, comparing something like the brain (or rather the whole living being) to a mathematical processing device is more than short sighted. We try to explain everything from the engineering point of view by isolating things but that’s not how life or the universe works.
But on the other hand, a "fast" debater from a high school debate team can process 260 words per second while compressing that down to notes, while simultaneously queuing up meaningful responses, while evaluating which/what to go with for a particular audience/judge, while listening for mistakes from their opponent. If you distill bitrate down to serialized responses to a canned psychometric task, sure, but why do we think that's the total throughput? It isn't, trivially, if we inspect what people do every day.
260 words per SECOND? Are you sure about that? I can only assume you mean for skim/speed reading.
that's a funny typo, I apologize.
No need to apologize! We're all human!
There is a difference between latency and throughput.
True, maybe you can find a test which asks, instead of a single button click, to click one of 4 possibilities for a reaction?
EDIT: see appendix A.3 Perception
You mean like typing? When typing we click buttons accurately in sequence, and people can type way faster than 10 bits per second.
Then consider something more complex like drawing, you draw way way way more than 10 bits a second.
not like typing, which is largely automatic; conscious decisions per second.
EDIT: not the expanded bit rate, the optimally compressed bit rate, if that makes sense?
EDIT2: typing at 90 wpm is 1,5 wps, or <10 bps according to Shannon (~1 bit per letter, 5 letters per word)
> EDIT2: typing at 90 wpm is 1,5 wps, or <10 bps according to Shannon (~1 bit per letter, 5 letters per word)
People can type way faster than 90 WPM; we're talking about the bitrate of the hardware here, and it's the same even if you haven't practiced typing. And typing is still not what our consciousness is made to do; we have a way higher bitrate when doing more native tasks such as running in the woods. You can't run in the woods without consciously thinking about each step and deciding where to put your foot next so as not to hurt your ankle and not collide with trees, and that has a massive bitrate.
Running in the woods supports TFA: 180 steps per minute * 16 possible locations per step = 12 bits per second; maybe there are fewer or more possible footfall spots but no matter what it's more than 1 bit per second and less than 100. (and way less than 1e9)
> not like typing, which is largely automatic; conscious decisions per second.
That’s kind of a cop out - at what point do very quick conscious decisions become “automatic”?
Listen to a podcast at double speed. Assuming a normal talking speed of 150 words per minute, 300 words per minute of written word is not 10 bits per second.
(Shannon estimates 11.82 bits per word, so 300 WPM is 59.1 bits per second)
The Hutter Prize submissions can get compression factors >9 on English wiki text. And if you're listening to podcasts the entropy is probably even lower. The human brain is obviously a much better language model than anything we have today, so you can assume that the latent layer in your brain deals with much less than 60 bits per second.
Each second of listening we're perceiving the speaker's identity, what accent they are using, how fast they are talking, and what emotions they are showing. Those should count for the bit rate dealt with by the conscious brain.
Again: perception is not what we're talking about and the paper acknowledges that perceptive input is orders of magnitude larger. I challenge you to listen comprehensively to someone talking about a topic you don't know while identifying someone in a police lineup.
Consider normal text compression and you're left with a few bits at best for most of those "fast talkers/listeners." And the human brain is very good at compression.
> And the human brain is very good at compression
Yes, but in order to measure its bitrate accurately you need to tell us whether that compression is gzip, zlib, zip or 7zip. They don't all produce the same results.
If we are going to be utterly ridiculous about this conversation, let's at least be complete.
How would that be relevant? The only relevant aspect for this discussion is that language is tightly compressible, which no one here has challenged yet. But I've seen no one come up with a true example where you'd be actively processing high entropy data at a rate that disagrees fundamentally with the paper.
And were those measurements made with vsync on by any chance?
I feel like this is splitting hairs and moving goalposts. The pro side will always have some sort of explanation why it’s 10 bps or less without a way of actually proving it.
This is a frustrating article.
>I feel like this is splitting hairs and moving goalposts.
How? The argument remains exactly the same, and we've just been discussing counterexamples to the statements of people who obviously don't get it.
I can type at a rate faster than 10 bits/second (about 2 characters / 16 bits! what a slow rate! I'm well above that, at least 24 bits/second!) and you aren't compressing that to less.
And that's while also moving my hands in extremely complex ways to perform the task, looking around my office, listening for threats / wife, observing the breeze from my fan, twiddling my toes on the balance board I don't use...
It's clickbait/ragebait. Well done to the title writer.
Another way to put it: try to perform a skill you have never practiced. For example, if you've never played the piano or tried to read sheet music, see how long it takes you to play a few notes correctly. It's complex enough that you'll very likely find yourself limited to around 10 bits per second. You shouldn't count the bits handled by visual processing, basic motor control, and other things you have practiced all your life. If you practice the skill, the skill moves out of your conscious processing and no longer counts toward the 10 bits per second.
> You shouldn't count the bits handled by visual processing, basic motor control, and other things you have practiced all your life.
Ok, but how do we count bits for your example with the piano? It has 80 keys or so, doesn't it? Should we take log2(80)=6.32...? Or if you are working with only part of the keyboard, maybe we should take log2(7)? How many bits per second of processing does it take to keep my hand on the keyboard in the right way, as shown by a tutor? How do we measure it? Does my experience with a guitar make it easier? How many bits easier?
Your argument boils down to “exclude anything faster than 10 bits per second, so then you’re limited to things which are 10 bits per second or less”. It’s extremely unconvincing.
What conscious motor processing? My motor functions largely take care of themselves while I daydream when walking or consider where I want to steer while driving.
That's just motor reflexes that don't even enter higher cognitive processing. But daydreaming probably does process at the same rate as normal language, as was explained in the other comment. Try doing algebra in your head while running an obstacle course you've never seen and you'll be much slower at everything.
There is something wrong with that calculation: our speech output alone is 40 bits per second.
That still doesn't explain strange sentences in the article such as:
> Why does the brain need billions of neurons to process 10 bits/s?
Maybe the article is being intentionally disingenuous here? The brain is definitely not processing 10 bits/s, maybe a small part of it is.
I'd also say that you can make very complicated problems with only 10 bits of input (e.g. calculating busy-beaver of N), so "processing X bits" where X is a small value is no guarantee that this should be doable with a low amount of processing anyway.
> Has anyone here even read more than the title?
Since it costs $35.95 to read the article, probably not. Seriously, paywalling of scientific research is obviously wrong.
>Seriously, paywalling of scientific research is obviously wrong.
Agreed!
Here you go: https://arxiv.org/abs/2408.10234
You're missing my point. I'm saying that `try solving it with your hands while someone shows you a bunch of objects and asks you visual comprehension questions at the same time` is more than 10 bit/s of data being processed. I'm saying made up "tasks and outcomes" in this study are not a measure of the brain's throughput IN THE INNER LAYERS.
10 bits per second is effectively nothing. Not even a single cell could operate at 10 bits per second. Every organic system would collapse immediately.
Remember, this is an intermediate encoding of a hidden feature space between perception and planning. What you see at the start and the end of the neural network might be very different. Consider this: Typing at 60 words/minute, 5 characters/word and 8 bits/character gives a gross bit rate of 40 bits/second. With today's compression algorithms, you can easily get 4:1 reduction in data. That leaves you at approximately 10bits/second that are consciously processed in your brain. Probably even less since your brain might be much better at encoding language than even our best models. Even if some of those numbers are off by a certain factor, the number in the paper is certainly in the right ballpark when you consider orders of magnitude.
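The arithmetic in that comment, written out as a tiny sketch (the 4:1 compression ratio is the comment's assumption):

    # 60 words/minute, 5 characters/word, 8 bits/character, then an assumed
    # 4:1 compression ratio for English text.
    gross_bps = (60 / 60) * 5 * 8        # 40 bits/s at the keyboard
    compressed_bps = gross_bps / 4       # ~10 bits/s after compression

    print(gross_bps, compressed_bps)     # -> 40.0 10.0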
So the argument is that compression is not processing? That's a very weird definition of processing. Also, when we do this we can always argue that we get down to 10 bits/s; just change the compression ratio.
Send a query to your favorite database. The database preprocesses hundreds of gigabytes of data, map-reduces it, and finally returns a single 32-bit integer, taking exactly 3.2 seconds to do so.
Nobody would say that the database can only process 10 bits per second. The query just happened to ask for a very simplified answer.
Everybody would say it outputs 10 bits per second. And when it comes to consciousness simplified answers at 10 bits per second is the best you can get. This article asks why.
It's an interesting question indeed, why the mind produces so little output compared to input.
Still, I get the feeling that the apparent bit rate of the conscious output is not the right metric to be measuring. There are so many ways in which the bit rate you're measuring could have been bottle-necked by factors outside of the mind, such as the speed at which you can move your tongue to articulate language, or a cultural expectation that prefers compact answers over chain-of-thought babbles. The mind also leaks a lot of information, both consciously and unconsciously, through side channels such as facial expression, gestures, and even variations in the rate of output itself.
> It's an interesting question indeed, why the mind produces so little output compared to input.
My theory is that consciousness is super complicated and the brain has barely enough juice to crank out any of it.
Conscious field of vision is about 2% of the total field of vision, and we observe the world by constantly sweeping it with this 2%. This way we reuse both the neural circuitry of our brains and the training data that would otherwise necessarily be spread across a larger neural network if our conscious field of vision were larger.
So in short, conscious output is so small because we are dumb and short-lived.
I also wouldn't say that there's a lot of information in the side channels. Properly compressed it's probably less than engaged conscious output.
On the other hand, the amount of actual entropic "information" that is processed when you identify a Rubik's cube as such may be nowhere near as much as you think it is, and most importantly, 10 bits may be nowhere near as little as you think it is.
If we use your example, which is that of identifying an object, we may simply ask the entropy of what the distribution of possible objects-to-be-identified is at t=0, prior to any analysis. Saying we can resolve 10 bits of this entropy per second is equivalent to saying that we can identify one object from a uniform distribution of 1024 per second. Let's suppose this is a low estimate by several orders of magnitude, and that it's really one from a billion objects instead that you can identify per second. Then this would still only be about 30 bits/sec.
None of this changes the main thesis of the paper, which is that this is much lower than the 10⁹ bits/sec our sensory systems transmit.
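As a quick check of the numbers above (assuming, generously, a uniform distribution over the candidate objects):

    import math

    # Bits needed to single out one object from an equally likely pool.
    print(math.log2(1024))            # 10.0 bits -> one object out of ~a thousand
    print(math.log2(1_000_000_000))   # ~29.9 bits -> one object out of a billion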
But we don't just perceive an object's category (like "cat"). We also perceive a great amount of detail about the object - colour, pattern, behaviour - and we make comparisons to past behaviour, predictions of what's likely to happen next, and so on.
Sure, some parts of the brain don't receive all that detail, but that's necessary for abstraction. If you pumped all the sensory data everywhere, the brain would get overwhelmed for no reason.
That 30 bits was not literally intended to represent only the object's noun category, but even if it did, none of the additional pieces of information you would like to add are going to change this picture much, because what one would think of as a "high amount of detail" is not actually all that high in terms of the logarithmic growth of the entropy.
Take color: suppose the average person has 16 baseline colors memorized, and then a few variations of each: each one can be bright or dark, saturated or pastel. That would be about 6 bits for color. If you have an eye for color or you're an artist you may have some additional degrees of freedom. Hell, a computer using RGB can only represent 24 bits worth of color, maximum. I am going to suggest this stuff gets cognized less than 10 bits worth for the average person; let's just say 10.
Now, of course, people can memorize more than one color. If colors are independently distributed uniformly at random, then processing N colors requires 10N bits. But of course they aren't, so the entropy is less. But again, let's just say they were. So how many color combinations can you process per second? I would say it's a bit of a challenge to memorize a set of 10 arbitrarily drawn colors shown for a second. Most people couldn't continuously do that at a rate of 10 colors per second. That would be 100 bits/sec of info.
The point is that you really don't perceive all that much. You show the average person a Rubik's cube, there is no way they're going to remember the exact pattern of colors that they saw, unless the cube were solved or something. They will perceive it as "multicolored" and that's about it.
Adding behavior, texture, etc doesn't change this picture. None of this stuff is even close to 10^9 bits of entropy, which would be 2^1,000,000,000 different equally likely possibilities.
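For what it's worth, the color arithmetic above as a sketch (the 64-color model, the 10-bit padding, and the 10-colors-per-second rate are all the comment's assumptions):

    import math

    # 16 base hues, each bright or dark, saturated or pastel -> 64 colors.
    bits_per_color = math.log2(16 * 2 * 2)    # 6.0 bits
    generous_bits_per_color = 10              # padded for a trained eye
    colors_per_second = 10                    # an optimistic memorization rate

    print(bits_per_color)                                 # 6.0
    print(generous_bits_per_color * colors_per_second)    # 100 bits/s upper bound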
> deciding that this is the most relevant activity to keep doing at the moment, instead of doing math or going out for a walk.
How many bits of actual decision is going on here, as compared to the period of time that decision applies to?
For example, if a person decided once per second whether or not to go for a walk, that could be 1 bit per second. But if that person is constantly transitioning back and forth between walking and not-walking, we could consider their behavior pathological. Instead, for most people, the information density of these decisions is quite low, i.e. the per-second decision bits are very compressible.
Personally, I only decide whether to go for a walk (or not) _at most_ once every few minutes. Even if we add in bits for "where" and "precisely when" and "how long", I think we're still at just a small fraction of 1 bit per second.
Your conscious planner may be making 1 decision/second, but your senses, proprioception, balancing system, etc. are handing you about a gigabit/second of data, most of which never rises to your conscious attention.
When I'm reading, that's roughly 2000 bits/second, but I am engaging it with model-making systems that can completely enrapture me.
I/O is not the same as computation; conscious computation is not the same as all computation.
Yeah, that's what the paper is talking about. A whole lot of input vs. a small amount of conscious focus.
There is a whole "OS" in the background that is way more complex than all of our programmed systems. I may be hyper-focused on a complex program and really not thinking about anything else, but my body still constantly senses, processes, and reacts to signals. E.g. I'm not ragdoll-falling to the ground, which requires holding balance and muscle tone, which is far from trivial. I'm also constantly listening for every possible danger, which would immediately override my current activity. I also react to internal signals like hunger or pain.
A bit dumb, but a maybe relevant comparison: why can an Apple Watch only stay on for a single day on a charge, while a Garmin can do two weeks to a month? Because one is a general-purpose computer, while the other is embedded software that can only ever do the few things it is preprogrammed to do.
I don't see your comparison as relevant. I'm a metre away from a nine-year-old Android 6 phone that can easily go a month on a single charge (while tracking its location!). Your Apple Watch is (a) constantly using its radio, (b) running a fairly high-end graphics chip, and (c) actively probing your body at intervals. Nothing to do with software.
I think you lost the actual comparison. It's not about Android vs. iOS.
It's about having a ton of mini special-purpose "computers" vs. the very generic conscious mind.
For example, all the balancing, background hearing, hunger, thirst, and so on are handled by very specific zones in the brain, while our conscious mind is doing... well, whatever we like, in a very generic way, basically keeping the joy of living going.
I mentioned two watches with similarly sized batteries.
The phone battery is only about twice the volume of a smartwatch battery.
"Personally, I only decide whether to go for a walk (or not) _at most_ once every few minutes. "
Consciously. Subconsciously much more is going on.
> Personally, I only decide whether to go for a walk (or not) _at most_ once every few minutes.
And yet, if I walked into your office and shouted your name, you would have less than a second of reaction time, indicating that you are processing all sound around you and deciding whether to get up and talk to someone pretty much continuously.
The vast majority of the brain's processing power, including conceptual processing, is not conscious. Conscious thought is the framebuffer of the mind, subconscious is the RAM.
Such reductionist analyses are reliably blind to the complexity of being embodied in physical reality.
Should they analyse everything all at once? Your comment seems reductionist to the realities of writing a paper on a specific subject.
They just shouldn't claim the 10 bits/sec nonsense.
Your brain does. It's not _one_ computer. It's dozens: some interconnected and synchronized with others, some with entirely independent control. It's why we have seizures, why optical illusions exist, and why you flinch when something moves at you in your peripheral vision.
Reductionism is the only way humanity ever progressed on anything.
Yes, but reductions have implicit or explicit assumptions. When our reductions give nonsensical results (10 bits/sec), we should not take those results seriously; we should understand them as an artifact of our reductions, rather than taking them seriously and giving them ontological existence.
That's debatable. To me, declaring that the world was made in some simple way and that you should just listen to the priests was reductionism of the wrong kind.
I know the kind you mean though: reducing external factors (like air) so you can isolate gravity as a force unrelated to density, by finding out that in a vacuum a feather and a metal ball fall at the same speed.
(As for the paper I have not yet made up my mind.)
I'm not sure why reductionism brings simplicity to anyone's mind. Yes, you consider one thing at a time, but there are billions of such single things to explain. And isolating one is a challenge in its own right.
Right, just like when we used to believe that "unexplained thing happened" = "god". Can't really get more reductive than that.
How's that reductive? It's the epitome of holism.
Reductionism is setting everything but one thing aside and trying to figure out that one thing. Bringing in the second thing only after you fail, and trying again.
Could the 10 bits be some sort of instruction-set address width?
Meaning that while you are solving the Rubik's cube, your brain has the capacity of switching into one of 2^10 = 1024 possible next states, which might fire nerve impulses for the body and also come with a new set of 1024 next states?
When focusing on the cube, about 20 or 30 of the next states could be related to the cube, while the rest is concerned with bodily sensations, other thoughts, etc.
What if learning something well meant that fewer states need to be reserved for the task/cube at hand, because the decision tree going forward has become more efficiently packed, so more states are open for something else?
The 10 bits is just very obviously made-up nonsense. The brain doesn't operate on anything like a binary system; sensory data is not digitally encoded - it is all continuous data. Even the abstract kind of answers its own question. It says sensory data is on the order of 10^9 bits per second but the "information throughput" is 10 bits per second. It asks "why do we need billions of neurons to process 10 bits per second?" Well, we clearly don't, because that's clearly not what we do. We process 10^9 bits per second, and we do a bunch of things which aren't just processing sensory data as well. And on top of that we do things which aren't measured in the 10 bits per second also.
> sensory data is not digitally encoded - it is all continuous data
That might be correct, but as far as I know the brain has circuits that fire periodically with a certain frequency (what brain waves are about). If such frequencies steer some kind of gate, then you do have a kind of digital logic, with 1 being where the brain waves cross the zero line.
> If such frequencies steer some kind of gate, then you do have a kind of digital logic, with 1 being where the brain waves cross the zero line.
What makes you think that this is the case? If someone touches you from behind, do you immediately fall over dead from sensory overload? Do you react as if you were punched in the gut? Do you look over your shoulder to check who it is? Or do you fail to notice it until they break a bone? There is a significant number of inputs where the reactions are not just zero or one but happen on a scale; your digital 1 or 0 is more likely to be a 1.0 or 0.0 of a numeric type with unknown bit depth. Let's go with 80 bits, since x87 is just the worst, and that completely blows the 10-bit claim.
Don't forget that typing in English is also the outcome of processing all the context.
Also not-so-conscious context like "I am writing a reply on HN, not on Reddit - making an obvious silly pun about cats is not going to get upvotes - making a quasi-intellectual comment about subconscious processing should earn me some".
> But the brain is a general-purpose computer, not a speed-card computer, or English text computer, or binary digit computer, or Rubik's cube computer.
Look, I like the sentence. I like it so much I might steal it. But, that said, I think analogising what the brain does to a "general purpose computer" is a bit awry, because we can invent algorithms, and given a process we can implement algorithms, but I still tend to think there is something ineffable about thought/cognition which is NOT general purpose computation. It's a heuristic. I dunno what I think.
The computer analogy is definitely off. For one thing, rationality is kind of bolted on to our linguistic ability. It's not deeply seated in our brain; it takes substantial energy to engage in logical reasoning and we're not that great at it; even very skilled mathematicians make trivial mistakes.
Deeper in our brain is reasoning about causality. But even that is more of an input to our reactions and emotions than it is as a consciously accessible function, and it too is often wrong.
As you said, it's a heuristic. We certainly can't solve the halting problem with brain magic.
Implementing algorithms doesn't need a computer, and it doesn't need something that's not a computer.
That said, with enough willpower why couldn't you implement a universal Turing machine in your brain? I think you would need external memory (a tape) but that's not a problem.
OTOH, I just spent a minute just to comprehend your idea. We are living in the information era; the era of dodging an orange cat has ended. And we suck at this new thing, really. We're like slow af emulators of logic programs and tend to fall back to animal mode when tired or clueless.
Do you really think that the fundamental human act of the current era is running logic programs? I don't dispute that digital logic and its uses deeply changed humanity, but this is like calling humans "weak af imitations of steam engines" during the late-1800s railroad boom.
Calling actions/thoughts that follow a concrete list of logical rules/algorithms "animal mode" is deeply anti-human.
I don't see how you could avoid logic and do literally anything useful that is not animalistic by nature. I can only imagine how <human-incomprehensible positive adjective> it would be to be able to remember more than 7 things, use more than one thread, and grasp complex ideas by simply looking at their schema for a second, all that on average. Instead we slowly follow trickles of data, constantly being carried away by nuances and distractions, getting overwhelmed and tired in no time. Our natural "logic" sees a god behind lightning and a witch behind an accident; please don't defend it.
> deeply anti-human
What's wrong with that, just look at us going, globally. Any pro-human thoughts, really? Never understood this clinging to the horribly limited nature of humanity only because it has some relatively nice pockets (which usually correlate with deep thought and negatively with the lack thereof).
I’m not any good at Rubik’s Cube, and to me it looks like a bunch of colored squares.
But stuff I am good at? I don’t see it at all. A terminal? I never have any truly tangible conscious recollection of serious coding.
It might be the same for people good at Rubik’s Cube.
Well, and not only that: your brain is at the same time processing incoming sounds, which it spatializes based on minor phase/level differences and on reflections within the ears themselves. Your brain processes all sensations for all body parts, including a big chunk of skin, temperature perception, the sense of gravity to balance yourself, etc.
If you received 10 bits once a second describing all of that, good luck: you probably wouldn't survive the next 15 minutes.
The PhD student writing this could be excused for being young and inexperienced, but their advisor, tagged second on the paper, should have cut this off at the pass.
> If the questions are properly designed, each will reveal 1 bit of information about the mystery thing. If the guesser wins routinely, this suggests that the thinker can access about 2^20 ≈ 1 million possible items in the few seconds allotted. So the speed of thinking – with no constraints imposed – corresponds to 20 bits of information over a few seconds: a rate of 10 bits per second or less.
This is an extrinsic definition of "information" which is task relative, and has little to do with any intrinsic processing rate (if such a thing can even be defined for the imagination).
The question of why does biological hardware capable of very high "intrinsic rates" deliver problem solving at "very low extrinsic rates" seems quite trivial. Its even a non-sequitur to compare them: properties of the parts are not properties of wholes. "Why does a gas move at 1 m/s, when its molecules move at 1000s m/s..."
All the 'intrinsic processing' of intelligence is concerned with deploying a very large array of cognitive skills (imagination, coordination, planning, etc.) that are fully general. Any given task requires all of those to be in operation, and so we expect a much slower rate of 'extrinsic information processing'.
Consider how foolish the paper is to compare the intrinsic processing of a wifi network with the extrinsic task-specific processing of a human: it is likewise the case that if we set a computer the challenge of coordinating the solution of a task (e.g. involving several LLMs) across a network, its task-specific performance would drop off a cliff -- having a much slower 'solution rate' than 10 bits/second.
These 'task-specific bits' represent a vast amount of processing work to solve a problem. And they have at least as much to do with the problem as with the system solving it.
It seems to me all this paper does is define tasks in a highly abstract way that imposes a uniform cost to process '1 bit of task information'. Do the same for computers, and you'd likewise find tiny bitrates. The rate at which a problem is solved is 'one part of that problem per second' for a suitable definition of 'part'.
Another relevant point is the common anecdote about, eg, some master engineer who gets paid big bucks to come fix some problem that's been blocking up a factory for weeks. The engineer walks around, listens to a few of the machines, and then walks up to one of the machines and knocks it with his elbow Fonzi style and the factory starts running again. The factory boss is happy his factory is working, but annoyed that he paid so much for such an "easy" solution.
Ie, the amount of input and processing required to produce the "right" 10 bits might be far larger than 10 bits. Another obvious example is chess. The amount of bits conveyed by each move is small but, if you want to make the right move, you should probably put some deeper thought into it.
Humans are essentially organisms that collect and filter information, boil it down to a condensed soup of understanding, and emit a light sprinkle of carefully chosen bits intended to reshape the future towards their desires.
Humans are nature's best designed filters.
Or another way of saying it is, the answer was right there all along, the hard part was filtering all the non-answer out.
filtering and _acting_ on this condensed information.
For example: lightning strikes a tree, a fire starts. Man is scared, but that night is very cold and near the tree it is warmer. This happens a few times, and one day a fallen branch is collected and, incidentally, thrown on another pile of wood, which starts burning -> the idea of fire is formulated, and from then on man keeps warm.
Or: man finds shiny things in a river bed and collects them; one day the whole shack burns down from lightning, and he discovers that the shiny things are now in a different shape -> metalworking is born.
I was about to say this but you beat me to it.
Seems like this 10 bits/s number comes out of the kind of research where the objective isn't to find the truth, but to come up with an answer that is headline-grabbing. It's the scientific equivalent of clickbait.
Too bad people fall for it.
This type of comment is my least favorite on HN. "Seems quite trivial," "non-sequitur to compare them," "foolish." I am not able to read the paper as I do not have access, but the published perspective has 131 citations which seem to consider everything from task-specific human abilities, to cortical processing speeds, to perception and limb movements and eye movements, and so on.
I'm glad you thought about it too, but to assume that the authors are just silly and don't understand the problem space is really not a good contribution to conversation.
(Disclosure: I’m a former academic with more than a handful of papers to my name)
The parent comment is harshly criticizing (fairly, in my view) a paper, and not the authors. Smart people can write foolish things (ask me how I know). It’s good, actually, to call out foolishness, especially in a concrete way as the parent comment does. We do ourselves no favors by being unkind to each other. But we also do ourselves no favors by being unnecessarily kind to bad work. It’s important to keep perspective.
I realized that I do have institutional access and so I was able to read the paper, and I stand by my initial criticism of the above comment.
"It seems to me all this paper does is define tasks in a highly abstract way that imposes a uniform cost to process '1 bit of task information'."
The paper uses this number and acknowledges that it is not the only possible measure, and explains why they use this number and how it was derived. It is just the start of the paper, not "all this paper does." The paper primarily focuses on counterarguments to this number to then address the primary question of the relationship between the inner and outer brain.
A few questions it poses: does the superior colliculus contribute to a bottom-up "saliency map" to ultimately direct the attentional bottleneck in cognition? Why does the brain use the same neural circuitry for both rapid/parallel sensory processing and slow/serial cognition? This is not even how other parts of the body work (e.g., type I and II muscle fibers). Perhaps the associated routing machinery between input and output accounts for the billions of neurons? Maybe, like the visual cortex, the prefrontal cortex has a fine-grained organization of thousands of small modules each dedicated to a specific microtask?
We do ourselves the most favors by reading research with some skepticism, and asking questions. We do ourselves no favors by writing comments after only reading an abstract (please, tell me if I'm wrong). I only point out that discounting research so blithely does nothing for improving research. This was a perspective paper - an author asking questions to better understand a possible issue and guide research. And maybe the commenter is right, maybe this is the wrong focus, but I do not believe it was truly considered.
The question reduces to "how do the intrinsic capacities of intelligence, as had by humans, give rise to the capacity to answer complex questions?" -- I see nothing that the framing in informational terms adds.
It's nothing more than saying: we know that wires have electrons, and are made of metal, and can support a transfer rate of 1 Gbit/s -- and we know that an LLM takes 1 min to answer "Yes" to a postgraduate physics question -- so how/why does the current in the wire, at 10^9 bits/s, support this 1 bit/min mechanism?
It's extremely wrong-headed. So much so that the paper even makes the absurd claim that Musk's Neuralink need not have any high-bandwidth capabilities because a "telephone" (to quote) would be sufficient.
This is like saying an internet-connected server, hosting an LLM, need not have a high bandwidth RAM, because it only needs to transmit 1bit/s to answer the "yes" question.
In my view there isn't much worthwhile to say under this framing of the problem -- it's a pseudoscientific framing --- as is quite a lot of 'research' that employs 'information' in this way, a red flag for the production of pseudoscience by computer scientists.
Their implied premise is: "computer science is the be-all and end-all of analysis, and of what one needs to know, and so reality must be as we conceive it". Thus they employ an abuse of abstraction to "prove" this fact: reduce everything down to its most abstract level, so that one speaks in "bits", and then equivocate in semantically weighty ways between these "bits", and pretend not to be doing so. This ends with Pythagorean levels of mysticism.
I appreciate that you are elaborating further on your issues with the paper. I, again, am not choosing to defend the paper itself, rather the reason for science - asking questions and finding answers, even ones that may not be "worthwhile." Because we do not always know what is worthwhile and often we ignore some important facts when we think, intuitively, something makes sense and there is no reason to study it.
But, I will counter your comparison regarding LLMs and the transfer rate of wires. We, humans, have wired up the LLM ourselves. Evolution wired our body/brain and we do not know all of the filters and connections that exist in the transfer and processing of data. There is so much about the body we do not know. With LLMs, we've created every part so it doesn't really compare.
And to say that fields of science should not consider the knowledge gleaned from other fields is preposterous. I read about a new discovery in Math almost every few months in which a person from a different field brought in different techniques and looked at a problem from a new angle. Maybe this framing of the problem is silly in the end, or maybe it is just what someone needs to read somewhere to spark an idea. It doesn't hurt to think about it.
Exactly. English text is thought to have about 10 bits per word of information content, yet you can read much more quickly than 1 word per second. That includes not just ingesting the word, but also comprehending the meaning the author is conveying and your own reflections on those words.
> Why can we only think about one thing at a time?
Maybe this is just a perception thing. Sure, you can only really keep up one stream of thought, visualization or inner dialogue (whatever you want to call it) at a time, but perhaps that's because we learn all our lives that direct communication is a one-channel, linear thing--speaking and listening focused on one topic at a time. Our brain does plenty of thinking in the background that leads to "a-ha!" moments even when the direct focus of our thoughts isn't on that topic. What if the mind could maintain multiple threads of thoughts at once, but our language coerces our thought patterns into being linear and non-concurrent?
As someone without an inner monologue, and someone that's spent a LOT of time meditating, it's not the language. It's the attention mechanisms themselves.
Buddhist scholars insist that while we can have multiple threads of attention in our awareness, like strings with pearls of experience/thoughts, we can only actually hold one little pearl of information from those streams in our attention at a time, and that we flit between them quite rapidly.
Personally, I sort of agree, but I notice that there seems to be a time-compression thing happening where the pearl delivered to attention can contain a compressed summary of continuous perception. This seems to work for 2 things at once in awareness. When you start monitoring 3+ streams, there are gaps. And even maintaining the 2 streams continuously is exhausting so the mind tends to relax a little and leave gaps on a normal basis, but it seems like it can monitor dual feeds when its particularly important.
My understanding is that neuroscience largely seems to agree with the above.
(Actually, I'll note that the USUAL mode of being doesn't even monitor one stream continuously. A lot of the weird effects (and deeply interesting ones!) they talk about in meditative arts seem to pop up when you progress to being able to hold truly continuous attention.)
What you're describing here is software, not hardware—Cognitive Science is the relevant field, not Neuroscience.
That said, your understanding is largely supported by our current understanding of consciousness, attention, and perception. The attention mechanism doesn't handle parallel processing well, but it can operate "multi-threaded", where it juggles several foci at once (with some obvious cost to switching between them). But I think it's a mistake to assume that decision making has to be done within this attention context. While we may only be aware of a single thread at any given time, the brain is doing a lot of parallel processing. We can only focus our attention on a single cognitive task, but that doesn't mean we're not actively performing many others.
What you're describing here is dualism and Descartes, in response to a post that references Buddhist scholars, a philosophy famously focused on monism.
"Cognitive science" vs "neuroscience" as a concept is just how we decided to slice the problem up for academia.
Next time, maybe cut the first paragraph ;)
As far as I know, our wetware builds the hardware as the software runs. Like some kind of self-agglomerating and pruning FPGA
"... neurons that fire together wire together" (essentially)
> we may only be aware of a single thread at any given time
We may be not a single mind, but a bunch of minds. It just happens that the mind that “you” are reads this and has written the above comment, cause it’s of that kind (just like “all biological beings in this thread happen to be humans” type of a filter). Other minds can live completely different lives, just inside the same skull. And share emotions and thoughts with you sometimes from their prison.
This “aware” part is pretty mysterious, because the physical mind could operate without it perfectly. But for some reason, the space containing a mind experiences this awareness thing.
I think this is clearly seen whenever we go for a walk or a drive, the mind drifts but you still somehow get to your destination and in hindsight wonder who was actually driving the car?
Sometimes I'll be deeply thinking about something while driving, and discover I'm at the last road to my house without remembering having driven the previous few blocks. It's quite disturbing. When I say deeply thinking I don't mean anything involving phones or external stimuli - really just thinking about a particular problem I'm working on. I also don't deliberately engage this deep mode of thought, I just sort of slide into it naturally.
Does anyone else have this happen? I don't think my driving is suffering, but it's hard to really honestly say?
Yes, it's a classic example of the power and skill of your "unconscious" mind - your consciousness is freed up to do novel work because the drive home is so routine that your unconscious mind can do almost all of the work. Should something change - a traffic jam, a detour, a pedestrian crossing the road - your conscious attention will be called back to the more urgent task which is making a decision about how to handle the driving situation.
It seems interesting to me that what we refer to as the conscious mind is unconscious a third of each day, while the part we call unconscious is active 24/7.
I'm out of my depth here, but a high-level response:
First, I don't think the "unconscious" part is a single process, but myriad processes, and I'd bet they wax and wane.
Second, the "conscious" part is the part that can reason about itself and think abstractly. I think it would be correct to say it's doing higher level computations. The important part is that this is more costly - it's not optimized because it has to be flexible, so it would make sense that it's resting as often as possible.
So, one high-performance, high-power, general-purpose processor to handle the foreground task, and a bunch of low-power processors for background tasks.
Looks like ARM got it right with its big.LITTLE architecture. :)
The low-power processors are not general purpose; they're FPGAs: they get trained and then are highly optimized at their task.
And they stay in the brain even when not used. You can ride a bike or play a piano years after stopping.
When I have a deeply engrossing unitary project at work (i.e. not one of five tasks but one task for months), I had better start commuting by train and cut out the driving. I have lost two cars by not doing that. Fortunately no one was hurt. One car I had towed to the work parking lot, and then just never thought about it until some time after the project, when it turned out the office had it towed off as unknown junk. The project went well.
Oof, that's kind of scary. Sorry you experienced it, and glad nobody was hurt.
I had a workmate at a gamedev place I worked (so lots of deep technical challenges) who apparently regularly missed the motorway exit for work because he was thinking about what he was working on.
I guess the point is even if the distraction is 100% internal you should still do your best to pull yourself out and get your focus back on the road.
The way most people refer to this is "driving on autopilot."
Also I do not agree with the premise that we can only think about one thing at a time.
We routinely communicate with multiple people at once and also communicate with the same persons in multiple threads of conversations.
Of course this means that we switch between those tasks and do not really do them in parallel. At most we listen to one person, answer a second via speech, and a third via text, while thinking about what to respond to a fourth.
We just switch our focus of attention quite fast
This is the part that bothers me. I can definitely think of multiple things at a time. It really just depends on the complexity of the tasks. I can listen to and process an audiobook while driving to work every morning, for instance. I definitely can have multiple thoughts in parallel. I remember when I used to recite prayers, I would be reciting the memorized prayer while thinking about other things. Both things were happening at the same time. The memorized task takes less processing power, but it still requires some thought to execute.
I don't know what ever became of the line of research, but there was a very interesting book I read decades ago called Descartes' Error by Antonio Damasio that examined case studies of patients who had their corpus callosum severed, resulting in a split brain. You could show their left and right eyes different images and ask them what they saw, and they would write and speak different answers, because speech and writing are controlled by different brain hemispheres.
This seems to suggest that any bottleneck in conscious attention is not an inherent limitation of an animal brain but rather a consensus mechanism we've developed to keep our chain of experience coherent. If we get rid of the constraint that all of our external communication channels need to present the same narrative, we can seemingly process more information even when it requires being a conscious center of attention.
It's like UIs being single-threaded, because otherwise you would have chaos if several background threads are trying to update the UI at the same time.
I wonder if some people with dissociative identity disorder, or who at least identify as plural, experience overlapping simultaneous trains of thought
Heh if there are two yous occurring at the same time, one you would never know about it. Only third party observation would be able to tell you
That assumes clean swaps between personalities, I'd wager that it gets messier than that for some.
I am not qualified to judge whether you're right or wrong but I love that concept!
We think about many things at a time. But those with malfunctioning brains that have the internal monologue going constantly mistake that monologue for their thoughts, and so it must be "one thing at a time". The language they experience their monologue in is, by its very nature, sequential; you can't speak or even hear/understand two parallel streams of speech.
>Our brain does plenty of thinking in the background that leads to "a-ha!" moments even
That's not "in the background". That's the real you, your real mind. That's the foreground. But, if your brain malfunctions as many do, then the monologue shows up and crowds out everything. Sometimes it is apparently loud enough that it even prevents those "a-ha!" moments.
>but our language coerces our thought patterns into being linear and non-concurrent?
The language should just be discarded. What you want is an internal silence.
I wouldn't say it's language so much as unnecessarily added language. Words and sentences can appear and be useful, but there is a lot of mental activity that is not essential, just added-on responses to things. I wouldn't say a component that generates comments is a broken brain, but believing the comments, or the beliefs embedded inside them, can break your contentedness.
Where do they get 10 bits/second?
Heck, I can type way faster than 10 bits per second, even after gzipping the output.
And when I consider the amount of sensory information that I consciously process (not that comes in, but that I conceptually analyze), it's got to be way higher.
10 bits/s doesn't pass the smell test.
From the paper:
> Quick, think of a thing... Now I’ll guess that thing by asking you yes/no questions.” The game ‘Twenty Questions’ has been popular for centuries as a thinking challenge. If the questions are properly designed, each will reveal 1 bit of information about the mystery thing. If the guesser wins routinely, this suggests that the thinker can access about 2^20 ≈ 1 million possible items in the few seconds allotted. So the speed of thinking – with no constraints imposed – corresponds to 20 bits of information over a few seconds: a rate of 10 bits per second or less.
As the answerer, if you have a wide vocabulary or if you're a technical person then it's not too difficult to routinely choose words the other person simply does not know so that no amount of yes/no questions will get them there.
Obscure medical terms (phlebotomy), names of uncommonly-known stars (Fomalhaut), obscure data structures (cache-oblivious lookahead arrays), mathematical constants (Feigenbaum's constants)... The list goes on and on!
The point I'm trying to make is that most people who play Twenty Questions aren't trying to maximize the number of bits per second in their answer. They're actually trying to play semi-cooperatively. The fun part of Twenty Questions is when the other person guesses your word with as few questions remaining as possible. Having them get all the way to 20 and then you tell them "no you were way off to guess toothache, it was actually temporomandibular joint dysfunction" makes you look rather unsporting!
Thus, since I think we can expect people who play Twenty Questions to actually try to choose a word they know the other person can guess within the space allowed, we can reasonably conclude that using the game as a way to establish some sort of rough constraint on the speed of thinking (in bits per second) is way off. In fact, I know from my own experience playing the game that I will think of and discard many words in a short time as I try to find one that will be in the sweet spot of challenge for the other person to guess.
And so there's a vast amount of social cognition taking place which is unaccounted for in the 10 bits/sec.
So, in the context of random word lookup with filter for things, we have a latency of a few seconds and a total selection of 20 bits.
Meanwhile the machinery in understanding that it is a game, processing the audio input of the question, producing the output of the answer is all taken for granted.
It's nice when authors let you know you can safely ignore them so succinctly!
Yeah, no reason to keep reading
What a truly bizarre method. There are so many things wrong with it I don't even know where to begin.
No wonder they came up with such an obviously nonsensical answer in the end.
If the questions were pre-determined, which they're usually not. Reminds me of Huffman coding and the reason that compression challenges measure submissions looking at artifacts required to run them in addition to compressed size. I tend to agree with OP that this doesn't pass the smell test
> Quick, think of a thing... Now I’ll guess that thing by asking you yes/no questions.”
Every time I play this game, I can only think of one thing: https://t3.ftcdn.net/jpg/02/07/37/42/500_F_207374213_kNgoMel...
So I guess that means I can only think at 1 bit per second.
If there is only one answer, it is zero bits.
English is about one bit per letter. If you type at a very fast 120 WPM then you're right at 10 bps. Computers just don't represent English very efficiently, even with gzip.
> English is about one bit per letter
Where did you get that number from? How would you represent a letter using 1 bit?
It’s an experimental result by Shannon: https://archive.org/details/bstj30-1-50/page/n5/mode/1up
In short, you show someone an English text cut off at an arbitrary point and ask them to predict which letter comes next. Based on how successful they are, you can calculate the information content of the text. The result from this experiment was approximately one bit per letter.
Representing it is not the concern of the experiment. I don’t think anyone has a scheme that can do this. But it’s straightforward enough in theory. You create a compressor which contains a simulated human English speaker. At each point, ask the simulation to rank all the letters that might come next, in order. Emit the rank of the actual next letter into your compressed data. To decompress, run the same procedure, but apply the ranks you read from the data stream to the simulation’s predictions. If your simulation is deterministic, this will produce output matching the compressor’s input.
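A minimal sketch of that rank-based scheme, with a letter-frequency counter standing in for the "simulated English speaker" (a real implementation would re-rank after every letter using context, and then entropy-code the ranks; everything here is illustrative, not anyone's actual method):

    from collections import Counter

    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def ranking(model_counts):
        # Letters ordered from most to least likely according to the model.
        return sorted(ALPHABET, key=lambda c: (-model_counts[c], c))

    def compress(text, model_counts):
        order = ranking(model_counts)
        return [order.index(c) for c in text]    # emit the rank of each actual letter

    def decompress(ranks, model_counts):
        order = ranking(model_counts)
        return "".join(order[r] for r in ranks)  # apply ranks to the same predictions

    # The "predictor" is just letter frequencies from a sample -- a deterministic
    # stand-in for the simulated speaker described above.
    model = Counter("the quick brown fox jumps over the lazy dog")

    ranks = compress("hello world", model)
    assert decompress(ranks, model) == "hello world"
    print(ranks)   # good predictions -> mostly small ranks, which are cheap to encode

With a better predictor the ranks cluster near zero, and an entropy coder over them gets you toward the roughly one bit per letter that the experiment suggests.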
Say that experiment is correct. Wouldn't that imply that the information content of a single letter varies based on the possible future permutations?
I.e., the string "I'v_" provides way more context than "con_" because you're much more likely to be typing "I've" than, say, "contraception".
That seems to disprove the idea that a letter is a bit.
Also the fact that there are more than two letters indicates more than one bit, though I wouldn't want to even start to guess the encoding scheme of the brain.
I don’t follow. Of course the probabilities change depending on context. 1 bit per letter is an average, not an exact measure for each individual letter. There are cases where the next letter is virtually guaranteed, and the information content of that letter is much less than one bit. There are cases where it could easily be many different possibilities and that’s more than one bit. On average it’s about one bit.
> Also the fact that there are more than two letters also indicate more than one bit
This seems to deny the possibility of data compression, which I hope you’d reconsider, given that this message has probably been compressed and decompressed several times before it gets to you.
Anyway, it should be easy to see that the number of bits per symbol isn’t tied to the number of symbols when there’s knowledge about the structure of the data. Start with the case where there are 256 symbols. That implies eight bits per symbol. Now take this comment, encode it as ASCII, and run it through gzip. The result is less than 8 bits per symbol.
For a contrived example, consider a case where a language has three symbols, A, B, and C. In this language, A appears with a frequency of 999,999,998 per billion. B and C each appear with a frequency of one in a billion. Now, take some text from this language and apply a basic run-length encoding to it. You’ll end up with something like 32 bits per billion letters on average (around 30 bits to encode a typical run length of approximately 1 billion, and 2 bits to encode which letter is in the run), which is way less than one bit per letter.
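For what it's worth, the Shannon entropy of that contrived distribution lands in the same ballpark as the run-length estimate:

    import math

    # A appears 999,999,998 times per billion letters; B and C once each.
    p = {"A": 999_999_998e-9, "B": 1e-9, "C": 1e-9}

    entropy = -sum(q * math.log2(q) for q in p.values())
    print(entropy)                   # ~6.3e-8 bits per letter
    print(entropy * 1_000_000_000)   # ~63 bits per billion letters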
> I.e., The string "I'v_" provides way more context than "con_" because you're much more likely to get I'm typing "I've" instead of "contraception"
Yes the entropy of the next letter always depends on the context. One bit per letter is just an average for all kinds of contexts.
> Also the fact that there are more than two letters also indicate more than one bit
Our alphabet is simply not the most efficient way of encoding information. It takes about 5 bits to encode 26 letters, space, comma and period. Even simple algorithms like Huffman or LZ77 only require just 3 bits per letter. Current state-of-the-art algorithms compress the English Wikipedia using a mere 0.8 bits per character: https://www.mattmahoney.net/dc/text.html
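You can see part of this directly with the general-purpose compressor in the Python standard library; specialised text models do far better, as the linked benchmark shows. (The repeated sample below is an easy case, so the number comes out lower than it would for ordinary prose, where DEFLATE typically lands somewhere around 2-3 bits per character.)

    import zlib

    text = ("English contains orderly internal structures that make the "
            "character stream highly predictable. ") * 200

    compressed = zlib.compress(text.encode("ascii"), 9)
    print(8 * len(compressed) / len(text))   # bits per character, well under 8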
>I don’t think anyone has a scheme that can do this
If you substitute "token", for "letter", what you have described is exactly what a modern LLM does, out of the box. llama.cpp even has a setting, "show logits", which emits the probability of each token (sadly, only of the text it outputs, not the text it ingests - oh well).
I don't think anyone actually uses this as a text compressor for reasons of practicality. But it's no longer a theoretical thought experiment - it's possible today, on a laptop. Certainly you can experimentally verify Shannon's result, if you believe that LLMs are a sufficiently high fidelity model of English (you should - it takes multiple sentences before it's possible to sniff that text is LLM generated, a piece of information worth a single bit).
Oh look, Fabrice Bellard (who else?) already did it: https://bellard.org/ts_zip/ and you may note that indeed, it achieves a compression ratio of just north of 1 bit per byte, using a very small language model.
In practice, it is even less. Current state-of-the-art algorithms compress the English Wikipedia using just 0.8 bits per character: https://www.mattmahoney.net/dc/text.html
These letters are jointly distributed, and the entropy of the joint distribution of a second of "plausible" English text is much lower than the naive sum of the marginal entropies of each letter. In fact, with LLMs that report the exact probability distribution of each token, it is now possible to get a pretty decent estimate of what the entropy of larger segments of English text actually is.
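A sketch of that estimate, assuming the Hugging Face transformers package and the small GPT-2 checkpoint are installed (any model that exposes token log-probabilities would do; the exact numbers are illustrative):

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    text = "The quick brown fox jumps over the lazy dog and keeps on running."
    inputs = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])

    # out.loss is the average cross-entropy in nats over the predicted tokens
    # (one fewer than the number of input tokens, because of the shift).
    n_predicted = inputs["input_ids"].shape[1] - 1
    bits_total = out.loss.item() * n_predicted / math.log(2)
    print(bits_total / len(text))   # on the order of 1-2 bits per character of English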
What if you are typing not an English text, but a series of random letters? This gets you to 5-6 bits per letter.
I think this gets into what you consider to be “information.” Random noise is high entropy and thus high information in one sense, and zero information in another.
Well the information used in the article is the classical shannon's information, so the former. Though I suspect that the entropy of what we can actually "randomly" type is not that high.
Even very fast typists are unable to do stenography without a machine specialized to the task. Speech, in turn, can usually be understood at two or even three times the rate at which it is ordinarily produced. Meanwhile, I can read several times faster than I can understand speech, even at the highest speedup which I find coherent.
Ergo, 10 bits per second just doesn't hold up. It's an interesting coincidence that a reasonably fast typing speed hits that rate, but humans routinely operate on language at multiples of it.
I don’t think a difference of this magnitude meaningfully changes what the paper is talking about. They already have other human behaviors in their table with bit rates up to 5 times higher. Even if you set it at 100bps it wouldn’t change much. They’re addressing a difference of eight orders of magnitude. Making it seven instead of eight isn’t all that important.
No, but 10 bits/sec makes for a better clickbait title. Science, my ass.
> English is about one bit per letter.*
* when whole sentences or paragraphs are considered.
I’d say that is implied by “English.”
Entropy is a measure of the source, not output.
What else would we consider?
The symbols aka words of the language itself?
I’m afraid I don’t understand your point.
If someone types English for a minute at 120WPM then they’ll have produced about 600 bits of information.
Are you saying we should consider the rate in a smaller window of time? Or we should consider the rate when the typist is producing a series of unrelated English words that don’t form a coherent sentence?
From the paper:
> Take for example a human typist working from a hand-written manuscript. An advanced typist produces 120 words per minute. If each word is taken as 5 characters, this typing speed corresponds to 10 keystrokes a second. How many bits of information does that represent? One is tempted to count the keys on the keyboard and take the logarithm to get the entropy per character, but that is a huge overestimate. Imagine that after reading the start of this paragraph you are asked what will be the next let…
> English contains orderly internal structures that make the character stream highly predictable. In fact, the entropy of English is only ∼ 1 bit per character [1]. Expert typists rely on all this redundancy: if forced to type a random character sequence, their speed drops precipitously.
[1] Shannon CE. Prediction and Entropy of Printed English. Bell System Technical Journal. 1951;30(1):50-64.
How do you measure information density of English text?
You show a bunch of English speakers some text that’s cut off, and ask them to predict the next letter. Their success at prediction tells you the information content of the text. Shannon ran this experiment and got a result of about 1 bit per letter: https://archive.org/details/bstj30-1-50/page/n5/mode/1up
OK. When talking about language I find it's always good to be explicit about what level you're talking about, especially when you're using terms as overloaded as "information". I'm not really sure how to connect this finding to semantics.
If the text can be reproduced with one bit per letter, then the semantic information content is necessarily at most equal to N bits where N is the length of the text in letters. Presumably it will normally be much less, since there are things like synonyms and equivalent word ordering which don’t change the meaning, but this gives a solid upper bound.
The response to the question of "where do they get 10 bits/second" can be found in the paper, in great detail if I might add.
I don't have access. Nor do most of us here probably. Can you share for us then?
this thread has 20 comments at the time of writing my comment. About two of them contain a link to the full paper, please take a look.
It would be a lot less abrasive to say "It's linked elsewhere, but here it is: https://arxiv.org/abs/2408.10234" or some variation, instead of saying "it's here somewhere, go find it".
Thanks for this. I scrolled for ages hoping for something like this ^
with all due respect, it was meant to be slightly abrasive. it's understandable (?) not finding something when the thread has hundreds of comments, not so much when the thread had like 15-20 comments.
I was iterating over the different citations for bitrate; at least some of them, like StarCraft and the Rubik's cube, are literally Guinness Book of Records entries: a tiny blurb about APMs and a video of a guy solving the Rubik's cube.
Going from APM and/or image wiggling to "bits per second" is hilariously reductive, and I find this response woefully incomplete at convincing this reader.
And yeah, my immediate response to reading the title was "where the hell are they getting that number", so I have gone and looked and am unsatisfied.
It seems weird to me. They say 10 bits/sec of "behavioral throughput."
Have they not seen a football match? The brain controls 600 or so muscles in a rapid manner. That alone must be a lot of bits per second, certainly far better than computer controlled robots.
Re
>Why does the brain need billions of neurons to process 10 bits/s?
Tesla's FSD cars have a lot of processing power but still struggle not to drive into fire trucks. You probably need a lot.
I beg you, please read the paper before commenting. It's very interesting and it answers a lot of questions that might arise from just skimming the title.
That might be the funniest comment I've ever seen on HN!
A plea to reason, that is probably not outside the posting guidelines, but is certainly in a gray area :-)
I honestly don't understand why it would be funny or in a gray area to recommend that people actually read the paper?
Asking people to read the article before commenting? That such a commonsense suggestion even needs to be made makes me smirk inside, not least because I am guilty of this, too, around here. (But not this time, thank you, kind Sir.)
As to being in a "gray area", have you read the posting guidelines? ;-)
I'm pretty sure it says we shouldn't say things like "read the article" or "you haven't read the article, have you?" in our comments.
Anyway, I'm laughing at this community (myself included) and the fact that your innocent and well-intentioned comment needs to be said here. And it did and does, my friend!
I am very very annoyed by many of the shallow "it's obviously wrong" comments on this story. And thank you to those rebutting more politely than I feel inclined to.
It's a fascinating paper and something that I have been interested in since before [0] and ties in to a strand of work in my PhD research. Also see for example [1].
[0] Stevens, M. Sensory Ecology, Behaviour, and Evolution. Oxford University Press, 2013. ISBN 9780199601783.
[1] Coupé, C., Oh, Y. M., Dediu, D., Pellegrino, F. "Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche." Science Advances, 5(9), September 2019. doi:10.1126/sciadv.aaw2594
> "it's obviously wrong" comments
The article is not even wrong, imo; it is nonsense. E.g., when we speak we convey much more information than just the words we say: we communicate "information" through intonation, changes in speech rate, body language, etc. Statements like "10 bits per second" are ridiculous clickbait, and cognitive scientists should study cognition in more ecologically valid settings if they want to make any sense.
You appreciate that the contrast being highlighted in the paper is the missing 8 orders of magnitude? A little more than the order-of-magnitude baseline claim of 10 bps is neither here nor there.
Put a bunch of people in an empty room, lacking as many details as possible. Try to minimise environmental stimulation. Have the people dress as plainly and boringly as possible; think of Diddy's white parties, all white. Have them sing and dance and do stuff, and have one person just watch them. Do you think that person will find this context understimulating? Do you think their "perception bandwidth" is going to drop, or that they will feel the same as someone staring at an empty wall or some painting for several minutes? I don't think so, and if not, then we have to think about where the information is coming from, who produces the processes that encode it, and how.
Buddy, I followed the link and they want $35.95 to read the paper.
This is... not a recipe for a successful discussion between people who have read the paper.
https://news.ycombinator.com/item?id=42449845
Here's the link that link links to, to save people time: https://arxiv.org/abs/2408.10234
I conclude that if you perform horrific experiments on animals then our intelligent universe reduces the rate at which you can continue to 10bps.
This is why enlightenment cures you of your curiosity.
Only 10 beatings per second? This is a just universe, Sir!
On a serious note, enlightenment only cures us of our selfish curiosity, i.e. any action which causes harm to others. The Way requires us to harmonize with universal compassion, so there is take and give (especially with regard to our required sustenance), but we definitely lose our propensity to experiment with our power at the expense of others. No, we are to increase our curiosity in how we can better help others, help being the cornerstone of compassion.
I don't need to read the paper. The problem is that mechanical systems have inertia and are limited in their ability to change direction and thereby their ability to signal discrete information.
> In particular, our peripheral nervous system is capable of absorbing information from the environment at much higher rates, on the order of gigabits/s. This defines a paradox: The vast gulf between the tiny information throughput of human behavior, and the huge information inputs on which the behavior is based. This enormous ratio – about 100,000,000 – remains largely unexplained
The GPU is capable of performing billions of operations per second, yet Cyberpunk barely runs at 60 fps. And there is no paradox at all.
By the way, the brain seems to perform better than a GPU at tasks like image recognition. Probably because it does even more operations per second than the GPU.
There is also another comparison. Imagine your goal is to calculate an integral over a 100-dimensional space (or to solve a quantum system) and answer whether the result is greater or less than zero. This will take an enormous amount of time but produces a single bit of information.
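A toy illustration of that point, in case it helps: the sketch below spends millions of floating-point operations to produce exactly one bit (the sign of a Monte Carlo estimate of a high-dimensional integral). The integrand is made up purely for illustration.

    import math
    import random

    def sign_of_integral(dim=100, samples=100_000):
        # Monte Carlo estimate of the integral of f over the unit cube [0,1]^dim,
        # reduced to a single bit of output: is it positive or not?
        def f(x):
            return math.sin(sum(x)) - 0.5   # arbitrary illustrative integrand
        total = sum(f([random.random() for _ in range(dim)]) for _ in range(samples))
        return 1 if total > 0 else 0

    print(sign_of_integral())   # one bit, after ~10^7 random draws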
Is the brain better than a GPU at image recognition nowadays? Actually I’m not sure how that’s measured. Certainly a GPU could be tied to a database with a lot more things in it, like you can get some pretty impressive facial recognition demos where it’ll recognize a ton of random people.
But humans can see objects they’ve never seen before and sometimes guess what they might be used for, which is sort of like object recognition but better. (Or sometimes I see an object I’m technically familiar with, like an old tool of my grandpa’s, and remembering what he used it for feels more like imagining… maybe it is).
The only time 'bits' will ever be an appropriate measure of human processing is when we are processing or producing digital information artifacts, e.g. a typewritten document.
Our bodies' systems are biochemical wetware that will never be aptly described using a boolean basis. That is one of the primary problems of society's obsessions with classical notions of gender.
No one is male OR female. We are, every single one of us, a combination of male and female hormones. How "male" a person is results from that balance favoring the male hormones, and vice versa. What humanity is now struggling with is that there are plenty of folks with lots of both, or little of either, and all kinds of combinations.
Of course, my not being a biochemist means my categorization of hormones into "male" and "female" is, itself, likely to be a poorly booleanized representation of their natures.
We are much more akin to Boltzmann's statistical mechanics description of reality, than to digital logic's boolean description.
Bits are a perfectly acceptable way to measure biological information processing. These are not the boolean logic digital bits like on a computer. They're the more abstract concept of a bit in information theory.
Take the number of distinct possible configurations a system can be in (accounting for statistical uncertainty/biases if needed), take the base 2 logarithm of that number, and you have the bits of information in the system. This can be applied to basically anything, biological or otherwise.
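In code, that definition is nearly a one-liner. A sketch, not tied to any particular system:

    import math

    def bits_of_information(num_distinct_states):
        # Information content = log2 of the number of distinguishable configurations.
        return math.log2(num_distinct_states)

    bits_of_information(2)     # 1.0  (a coin flip)
    bits_of_information(26)    # ~4.7 (one of 26 letters, if all were equally likely)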
But if your measurements are unreliable or downright flawed, then it's just garbage-in-garbage-out.
Sounds like the statistics in the papers from the social "sciences".
"There's lies, damned lies, and statistics." --Unknown
I don't think you're going to be able to count the "number of distinct possible configurations" of an even moderately complex living system.
It's more like statistical mechanics and the foundations of the second law of thermodynamics.
Unless entropy is a damned lie. Which I'm not saying it isn't, but claiming such a thing is a pretty strong claim. Possibly one of the strongest claims you can make in physics (which is why it's associated with cranks).
I'd expect some perpetual motion machines after overturning such a principle.
But I do agree you need to be careful defining the scope of microstates and macro states.
> Take the number of distinct possible configurations a system
Easy for an isolated system. The human body is some 6,000 billion cells, each with many possible configurations, most of them sharing and processing information. I respectfully doubt there's much to be done with bits outside of a tiny bit of flesh in a petri dish.
6 trillion * number of configurations of each cell * number of configurations of interactions between cells = (very very large) finite number. The thing about numbers is that they go on forever. There's not a cutoff for "ok, this number is too big for bits".
Take whatever that mind-bogglingly huge number is, take the logarithm base 2, there you go, that's how many bits you need.
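And in practice you never have to materialize that mind-bogglingly huge number; logarithms turn the product into a sum. The cell count below comes from the comment above; the states-per-cell figure is a made-up placeholder, not biology:

    import math

    num_cells = 6_000_000_000_000     # "6 trillion", per the comment above
    states_per_cell = 10**6           # placeholder guess
    # bits = log2(states_per_cell ** num_cells), computed without ever
    # forming that astronomically large number:
    total_bits = num_cells * math.log2(states_per_cell)
    print(total_bits)                 # ~1.2e14 bits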
Yes, no problem imagining or processing numbers, however big they might be.
I just don't see how that makes practical sense in the context of the article. Doing a full "scan" of a tiny piece of biological material to study it, ok. Doing a full scan of "our body", as MrMcCall describes (and rejects)? I don't think that's possible in the near future. I totally agree with Asraelite in theory, but I don't think it can be applied in reality.
I think I see the disconnect. "Practical application" here does not mean "doing a scan". Don't let the word "bit" lure you into thinking we want to do something on a computer. This is using bits purely as a unit of measurement of information quantity. It applies practically in reality the same way any other unit of measurement applies. The same way we can ask how much something weighs or how fast it's going, we can ask how much information it has.
> That is one of the primary problems of society's obsessions with classical notions of gender.
What you go on to discuss is sex, and sexual dimorphism, which is a remarkably robust basis for classification. The "classical" notion of gender (tbh, "classical" doesn't make much sense here) as sex-based is fairly reasonable, all things considered. Consider the arguments presented in this essay [0]. That, however, doesn't really mean much for how we should treat people in public who desire to express their gender in different ways, which is, of course, with respect for their dignity and desires, in most cases.
[0]: https://philosophersmag.com/unexceptional-sex/
Well said.
Yeah, what I mean by classical would boil down to just genitalia, which doesn't really hold up in how we must respect the person and how they feel and choose to express themselves. Yes, so long as their expressions are not harming others, then we must respect their human right to choose who they are.
I've got to give a huge hat tip to Suzi (Eddie) Izzard, who -- beyond their being just a brilliant comic and generally good human being -- taught me and my fam about how the spectrum of human configuration is way more complex than just male and female.
Cheers, friend.
The use of "bits" here doesn't mean we are working in binary.
It is more like the way it is used in information theory. The number of bits is log2 of the number of states that can be represented, and it doesn't have to be an integer. For example, with 10 bits of information, we can distinguish between 1024 different states, it can be 1024 colors for instance, or 1024 genders if you wish, it doesn't matter, the important part is that there are 1024 boxes to put things in, no matter what they are. Of course, it doesn't mean that only 1024 colors exist in the universe, there are an infinity of them, but with 10 bits, you can only distinguish between 1024 of them. If you want more, you need more bits, if you can do with less, you need less.
By the article's results, it means your "inner brain" can process one color with 1024 nuances per second, or two independent colors with 32 nuances each per second. If the colors are not independent, it can do better: if, say, you know that the two colors are highly contrasting, you don't have to allocate "boxes" for non-contrasting pairs, which frees up boxes for more nuances, so you might, for instance, process two contrasting colors with 100 nuances each within those 10 bits.
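A small sketch of that box-counting argument, with an arbitrary stand-in for the "contrast" constraint:

    import math

    levels = 32
    # Two independent colors: every pair is possible.
    print(math.log2(levels * levels))   # 10.0 bits

    # Add a constraint, e.g. "the two colors must differ by at least 10 levels":
    allowed_pairs = sum(1 for a in range(levels) for b in range(levels)
                        if abs(a - b) >= 10)
    print(math.log2(allowed_pairs))     # ~9.0 bits: fewer boxes needed for the pair,
                                        # leaving spare capacity for finer nuances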
A bit is the fundamental unit of information theory, and has nothing to do with digital logic in this context. No one is saying "ok use one bit to encode male or female". No one is saying "ok 101 means serotonin, and 110 is dopamine". What they are saying is that the information content produced by a human being can be compressed down to about 10 bits per second, but this is a statistical description.
You said both of those things, and... Sounds like digital compression from where I sit, friend. Are you using an information theory that is based upon something different from Shannon's?
"Compression" here is nontechnical, and I was using it by analogy as an aid to intuition. I didn't want to throw out the E word (entropy) unnecessarily.
Are you using "bit" in a sense different to Wikipedia's definition, as linked from Claude Shannon's page?
Boolean logic extends just fine to handle complexity. Instead, it's the intuitive classifications people come up with that are often a poor fit for reality.
Is someone’s DNA consistent throughout their body? Y/N Does someone have any chromosomal anomalies? Y/N etc
Similarly, it's very possible for a girl to suffer from abnormally low testosterone levels, which doesn't fit with how the public thinks of it as a gendered hormone. During puberty it normally spikes in both girls and boys: from a range of roughly 2.5-10 in prepubescents, the typical pubertal range is much higher in boys (100-970) than in girls (15-38), but that doesn't make it a male hormone, just a pathway used differently.
>What humanity is now struggling with is that there are plenty of folks with lots of both or little of either and all kinds of combinations.
Even that is a very smooth view of humanity, as if it were all going through more or less the same mindset.
Rest assured that most of humanity don’t conceive their life experience according to a scientific measure of information units.
In biology, or really most sciences (math being an exception), the more closely you examine a delineated this-or-that categorization, the more you realize it's a scale, a range, or something fuzzy.
Even things we talk about regularly, like touch and space, are vague in the details. Is it still touching if electron-to-electron repulsion is keeping the nuclei apart? Where does empty space begin and an atom end? After the electron shell? Outside its repulsive force? Some hybrid value?
Surely you will enjoy https://en.wikipedia.org/wiki/Fuzzy_mathematics
Also remember that putting a topic under mathematical form or mere layman prose is also a spectral arbitrary categorization.
To address your "empty space" question, you must first define, specifically, what you mean by this phrase.
I hope you're not asking me those questions ;-)
Yeah, those are great questions, for sure.
I can always be awestruck dumb by the understanding that we are all mostly space inhabited by fields, our littlest bits vibrating at mindblowing speeds.
What you're saying is interesting but I think the causality is backwards here and I can provide some examples to show why.
(By male hormone I'm assuming you mean testosterone, and by female hormone I assume you mean oestrogen.) If being "more male" came from having more testosterone (and vice versa), then logically, when children go through puberty and develop into adults, they would become "more" male or "more" female.
As adults become elderly and naturally produce less sex-associated hormones, they would become "less" male or female.
(Fetuses do not all begin in the womb as female, that's a common misunderstanding. We start off physically undifferentiated, and develop along a genetically predetermined pathway as we grow. Some animals use temperature or other environmental triggers to pick, humans use genes.)
Would that mean a male bodybuilder who injects testosterone is more male than a man that doesn't? His phenotype may become visibly more masculine, but that doesn't change his sex at all. Same for a female bodybuilder that injects testosterone - she may develop stereotypically male physical characteristics like large muscles and a deeper voice, but her sex is unaffected.
The causality is the other way: being male - or - female results in a physiology (adult testicles/ovaries) that produces sex associated hormones in larger or lesser degrees depending on the person (and in some cases very low amounts or not at all).
This makes sense if sex is a binary (with rare differences of sex development - detailed here https://www.theparadoxinstitute.com/read/sex-development-cha... ) that results in different levels of sex hormones in the body and resulting phenotype. So yes, everyone is male or female.
(I'm not referring to gender here - I'm talking only about sex)
If there's a spectrum then some men could be biologically "more male" than others and vice versa for women. I've not seen any evidence of this myself, but I'm happy to be proven wrong!
> Our bodies' systems are biochemical wetware that will never be aptly described using a boolean basis.
All physical systems can be described in base-2 terms using bits, i.e. Shannon entropy.
It is a categorization, like all things in biology. One of the most robust and significant ones for all of life is sexual versus asexual reproduction. Saying that it is not a binary intentionally blurs understanding. This is not a Gaussian situation, and not fitting into this categorization is exceedingly rare, arising from defects/mutations that largely do not proliferate genetically.
What about male and female, and the _N_ genetic syndromes, in terms of XX or XY chromosomes?
I think you've mixed up a few mostly unrelated things together to make a point. You're correct in that the larger point to be made is that analog and digital computing are paradigmatically distinct and analogies are hard to draw across that divide.
However, "bits" is just a quantity of information in a certain base. We could discuss it in "nits" if you prefer. The point is that information per se remains real even if the specific representation is based on some assumption of digital computing.
The rest of your comment is unfortunately out of scope of this article although it deserves some discussion on its own merit.
Definitely a neat article insofar as it summarizes many observations about task related “bandwidth” of human output.
Summarizing other comments - calculating bandwidth in a task-specific way neglects metacognition: the Rubik’s cube solver can rapidly recover if someone shoves them when their blindfold is on; the memory champion can notice a pie being aimed at their face and duck. I think that there’s still something interesting around our inability to multitask conscious behaviors, but the overarching conclusion is flawed.
Humans can transfer up to 39 bits/s during normal speech, so I highly doubt it's accurate to describe human "throughput" as being only 10 bits/s.
https://www.science.org/content/article/human-speech-may-hav...
How long can you continuously generate that ~40 bps? If you can only do it a quarter of the time, that averages out to 10 bps.
I think the authors are using information theory to inappropriately flatten the complexity of the problem. On one hand we have “bits” of pre-processed sensory measurement data, then on the other hand we have “bits” of post-processed symbolic data: in many cases directly so via human language, but that would also include “the Terran unit moved a short distance” as a compact summary of a bunch of pixels updating in StarCraft. This even extends to the animal examples: the 10 bits/s figure applies to higher-level cognition. The crucial difference is that the sensory bits can be interpreted via the same “algorithm” in a context-independent way, whereas the higher-level cognition bits need their algorithms chosen very carefully (perhaps being modified at runtime).
So I am just not sure why 10 bits/s of symbolic data processing is especially slow in the first place. We don’t have a relevant technological comparison because none of our technology actually processes data in that fashion.
I'm running a chat LLM on my local pc. It spits out text just slightly faster than I can type, but it is using much of my CPU and redlining my GPU.
Is it processing at a dozen bits per second, or hundreds of millions?
If the text the LLM generates is "that is true", can I consider that one bit of information?
I agree, they're artificially simplifying the framing of the question to generate a lower number than is sensible.
When compared directly to the 10^9 bits/s for sensory information, which uses the same type of information, it is slow.
It's more like quantum information theory isn't it?
https://en.wikipedia.org/wiki/Quantum_information
The authors of the article are smuggling in the assumption that 10bits/s is slow.
It's slow when compared to the general-purpose computing systems we have implemented in a silicon substrate.
But this assumption doesn't translate linearly to the brain throughput and the perception of existence.
In my opinion the hypothesis is meaningless.
That is not to say the article is meaningless. Actually being able to measure brain information throughput is amazing. It's only that slowness isn't absolute.
> thinker can access about 2^20 ≈ 1 million possible items in the few seconds allotted
Huh, no? No one is able to think about million items in a few seconds.
The 20-questions thinking process involves bringing up an incomplete set of abstract categories and asking a question that divides those categories roughly in half (binary search). You don't even start from scratch: you use previous experience (a cache) to reuse whatever worked best the last time.
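A minimal sketch of why ~20 questions are enough for roughly a million candidates (the candidate count is just the figure quoted above):

    import math

    candidates = 1_000_000
    print(math.ceil(math.log2(candidates)))   # 20 questions suffice

    # Each well-chosen yes/no question halves what's left:
    remaining, questions = candidates, 0
    while remaining > 1:
        remaining = math.ceil(remaining / 2)
        questions += 1
    print(questions)                          # 20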
The optic nerve carries information at a rate of around 10 Mbit/s (ref https://pmc.ncbi.nlm.nih.gov/articles/PMC1564115/). Concentrating only on the symbolic thinking speed seems unnecessarily restrictive.
An additional citation substantiating that 10 Mbit/s figure: the 3rd sentence of TFA.
The only limit is the max title size Hacker News allows. :)
arxiv: https://arxiv.org/abs/2408.10234
Really fun paper. I especially enjoyed this section:
> Based on the research reviewed here regarding the rate of human cognition, we predict that Musk’s brain will communicate with the computer at about 10 bits/s. Instead of the bundle of Neuralink electrodes, Musk could just use a telephone, whose data rate has been designed to match human language, which in turn is matched to the speed of perception and cognition
It might be though that even though our processing rate is limited to 10 bits per second, shortening the communication loop between the helper AI and the human might allow the human to switch subjects more productively by getting faster feedback. The human would be in an executive approval role like the lead character in Accelerando with their agents, assuming they trusted their agents to be delegated to.
The title is a reference to "The unbearable lightness of being" by Milan Kundera for those unaware.
There's interesting discussion with the authors here: https://www.alphaxiv.org/abs/2408.10234
We process information at the rate we do because that's what works for adaptation to our environment. That's how all biological things develop. Some things are remnants of old adaptations, but generally speaking, we are the way we are because that's what enables our survival. If it seems "slow", it's because it doesn't need to be faster.
Better questions would be, why are we so weak? Why are we defenseless? Why are we hairless? Why do we move so slow? Most other mammals our size are much better equipped for survival. Obviously other beings "perform better" in many ways. Yet our fleshy, weak, slow, gangly, shivery bodies are suited for thinking, for adapting, for communicating and collaborating. However unoptimal or "slow" these authors think our brains are, they are obviously perfectly capable for what they need to do, seeing as we dominate the entire planet.
In any system design, every decision tends to be a tradeoff. You can gain CPU power, but it will cost you energy, heat, and probably stability or longevity. You often do not know what a "feature" will do until you add it. So it's wiser to only add the features you need to accomplish your immediate goals.
If at some point in the future, our survival is existentially threatened by our lack of "processing power", our brains will adapt. Until then, leave well enough alone.
FWIW the title of this article is a play on the title of Milan Kundera's famous book "The Unbearable Lightness of Being".
I tried to read the article, but celldotcom has a presumably very high bit rate robot that promptly questioned my humanity, so I did the dishes and ate lunch, but that somehow didn't get through as proof (of humanity). And so my recourse is to read the comments here to try and get the gist of the argument, but even well fed, doing 11 or maybe 12 bits per second, there does not seem to be any point in quibbling with reality. Maybe after a bit of shock-o-late ice cream (B and G chocolate therapy with espresso beans added).
What would it really even mean for our brain to be fast or slow? Relative to what?
Inherent speed does not matter and is ill-defined; it only matters relative to the environmental processes we have to react to. We're already orders of magnitude faster than processes like geology and weather, so it is really only other animals where speed matters, and since we're all using the same basic hardware, we're on a relatively level playing field there (except for house flies, it seems, lol). Time as we understand it may be as much a cultural construct as anything else (see, for example, timeless physics). Some neurodivergent people, and people from very different cultures, don't experience anything like what most people in our culture refer to as time.
As for thinking about one thing at a time: I am absolutely certain this is false. Our subconscious operates massively parallel, and we only have one conscious thought at a time. But it has an interrupt system that lets it either act instantly itself in an emergency, or bring things to consciousness when they are important. I'm sure everyone has had the experience of reacting quickly to an emergency with no conscious thought, or suddenly knowing the solution to a problem you hadn't even been consciously thinking about.
> Why can we only think about one thing at a time?
This sort of reasoning seems to be a symptom of inadequate communication/jargon/diction describing mental faculties. Many times during serious thought there's no discrete "number of thoughts" occurring at all: there's just a hazy mental process that resolves to some result, and often many results. This reminds me of the "80% of people have no inner monologue!!!" bullshit that went around recently.
I think the discussion on serial vs parallel processing is incomplete in the linked paper, and is one reason I think the 10 bits/s bottleneck is an incomplete or misinterpreted result. Here’s a review with sources on serial processing, https://journalofcognition.org/articles/10.5334/joc.185
> Cognitive psychology has mainly focused on structural and functional limitations of cognitive processes when facing multitasking requirements. Structural limitations assume strict serial processing for at least one processing stage, while functional limitations assume flexible, parallel processing only limited by the number of available resources. Human movement science, on the other hand, emphasizes the plasticity of cognition and training possibilities. As both approaches have provided ample empirical evidence for their views but have predominantly worked in isolation, this example clearly illustrates the need for a more integrative approach to multitasking. A challenge for the contemporary research on multitasking is to bring together the issues of structure, flexibility, and plasticity in human multitasking, offering a new integrative theoretical framework that accounts for this fundamental aspect of human behaviour.
From one of the papers cited by the above reference (Hommel 2020),
> A closer look reveals that the questions being asked in dual-task research are not particularly interesting or realistic, and the answers being given lack mechanistic detail. In fact, present theorizing can be considered mere empirical generalization, which has led to merely labeling processing bottlenecks rather than describing how they operate and how they actually produce the bottleneck.
So, while I applaud the authors on generating buzz and discussion, I think their promising work will benefit from more serious consideration of the underlying neurophysiology.
> More generally, the information throughput of human behavior is about 10 bits/s.
I'm sorry, I just can't take this article seriously. They make a fundamental mistake of encoding and assume that information is discretized into word-sized or action-sized chunks.
A good example is a seemingly discrete activity such as playing a musical instrument, like a guitar. A guitar has frets and strings, and a seemingly small, finite number of notes it can play. So it would seem a perfect candidate for discretization along the lines of the musical scale. But any guitar player or listener knows that a guitar is not a keyboard or midi synth:
1. The attack velocity and angle of the pick convey aggression and emotion, not just along a few prescribed lines like "angry or sad or loud or quiet".
2. Timing idiosyncrasies, like playing slightly before or after the beat, speeding up or slowing down, or even going arrhythmic; the entire expression of a piece of music is changed by subtleties in phrasing.
3. Microbends. The analog nature of strings cannot be hidden entirely behind frets. Differences in the amount of pressure, how close to the fret the fingers are, and slight bending of the strings, intentional or unintentional, static or dynamic, change the pitch of the note.
4. Non-striking sounds like the amount of palming, pick scraping, tapping, and sympathetic vibrations.
Of course there are lots of other things. All of these things make the difference between a master guitar player, say Hendrix, and someone just playing the same notes.
And yes of course we can consider the encoding of the audio coming out of the guitar to be information--at a much higher bitrate, but what about the facial expressions, body language, etc? There are tons of channels coming off a musician, particularly live performances.
This entire article just misses these by picking a quantized encoding of information that of course has a low bitrate. In short, they are missing bazillions of channels, not the least of which are expression and timing.
Free access link https://arxiv.org/pdf/2408.10234
You can compress 10Gbit/s down to 10bit/s but then you can't do the reverse operation... Yeah, we know.
To me it's like saying "I've summarised this book to 5 words, so why would you even need a few days to read it entirely?".
https://arxiv.org/abs/2408.10234
Why not? Processing an input of 10^9 bits, making sense of all of it, and contrasting it against all your existing knowledge, with an output speed of 10 bits/s? That's not so bad. At least if we were really processing all that information in the same way.
It had to be enough to let us survive, in the context of the challenges we faced through most of our evolution. We took a lot of shortcuts and trims there; that is why we have a System 1 and System 2 in place, with a lot of built-in cognitive biases as a result.
The answer is that we defined the second after the time period it takes us to think about something; if our thinking were faster, the second would also be shorter, and we would still feel like we think about only one thing per second.
Seems like consciousness is the bottleneck. It has to integrate over all the perceptions. Of course that will be slower!
"Why can we only think about one thing at a time?" - this is not true? There is a difference between thinking and conscious thinking.
> Plausible explanations exist for the large neuron numbers in the outer brain, but not for the inner brain, and we propose new research directions to remedy this.
Going on a limb here, but perhaps we shouldn’t modify biological composition of the human brain.
They're not talking about changing the brain. They're talking about remedying the lack of a plausible explanation.
I think the parent commenter was making a joke.
It seems the authors conflate a problem being easy to state with little processing power being needed to solve it. This obviously isn't true: very complex mathematical problems can be stated in very few bits. Human interactions can often be extremely complex, even though they are relatively slow.
Reading a text isn't about matching symbols to words. It is about taking these words and putting them into a social context, potentially even doubting their content or imagining the inner world of the author. Obviously that is what the "inner" brain (whose existence seems very dubious to me) has to do.
I see absolutely no paradox at all.
I prefer to think of the inherent slowness of my organic brain as a leisurely stroll through time. I’m taking the scenic route though life.
So the human brain is a great operating system that is desperately missing a good text editor...
Related question: what's the physical/biological difference between conscious and unconscious?
I see there are some people in the thread that doubt the low bandwidth between conscious thought and the rest of the central nervous system.
Do you also doubt that you're actually living half a second in the past, with the brain compensating for this lag between initial perception and conscious reception of the indirect effects of it?
Interesting summary, where can I get the full text?
Oops - my bad - here’s the arxiv link: https://arxiv.org/abs/2408.10234
Found the pre print if you don't have access
https://arxiv.org/abs/2408.10234
there's probably lots of local computation
and I think it's a mistake to simplify it all down to just one substrate or receptor
Thinking is emulated -- therefore much slower.
Ask stupid questions, receive stupid answers.
On average, people can read at least 200 WPM, and much faster at the top end. This is orders of magnitude higher than 10 bps.
No it's not
It seems this analysis is incorrectly assuming a serial communication mode for neuronal transmission, which isn’t what happens.
I have read the paper, and your thesis, that the analysis assumes a serial communication mode for neuronal transmission, is incorrect.
They say this directly in section 7.2,
> In contrast, central processing appears to be strictly serial…
and then they proceed to give misinterpreted evidence of serialization, because they're making assumptions about lower-level biochemical behavior based on higher-level tissue performance. In fact, that tissue-level behavior isn't correctly described either.
Be honest.
The whole paragraph is:
"In contrast, central processing appears to be strictly serial: When faced with two tasks in competition, individuals consistently encounter a “psychological refractory period” before being able to perform the second task broadbent_perception_1958, pashler_dual-task_1994. Even in tasks that do not require any motor output, such as thinking, we can pursue only one strand at a time."
Clearly they're not talking about "neuronal transmission", but tasks, and further more, they cite their sources.
I wasn’t being “dishonest”, I couldn’t copy/paste the entire text on my phone.
I addressed the rest of that statement in my comment by noting that you can’t make the same assumptions about biochemical reactions and emergent behaviors of tissues.
Secondly, even from a neurophysiology perspective, their cited evidence is misinterpreted. Any basic dual N-back task proves their central thesis incorrect.
"It is by will alone I set my mind in motion. It is by the juice of Sapho that thoughts acquire speed, the lips acquire stains, the stains become a warning. It is by will alone I set my mind in motion."
perhaps the 10 bits/s is throttled at the simulation level ;-)
The back-of-the-envelope computations are shockingly shallow and meaningless.
(20 Questions, from the intro) Trying to think of a thing for the game is not a search over a set of known things. Just saying the possibility set has size 2^N doesn't mean that choosing something in the set consists of processing the set. But even if that were the case, and if you do consider each of 2^N options, the consideration process itself is not trivial and probably varies wildly.
(English typing) Touch typists do not (only) simply convert an existing/known string into a sequence of hand actions by mapping character to action. There are whole words and sequences that become units/tokens from the standpoint of muscle memory and processing (this will be relevant to the Rubik's cube topic as well). When I type, there's a sort of planning and queueing of actions, but also monitoring of those actions that allows fast error correction: pressing delete a number of times, or holding it down and (at some cost) judging when the error has been reached, then resuming afterward. Of course the process likely varies from person to person, but there's such a host of other things going on that should count as part of the information processed in this simple behavior that the example and numbers used in the paper for it are utterly useless even as estimates.
(Rubik's cube blind speed solving) Again we see reference to the entire possibility space (from the perspective of possible configurations of the puzzle). But solvers do not identify the configuration they encounter with reference to that space, nor do they search the space for it. They look for patterns and ignore what they cannot use for the strategy they have practiced. The cuber often does not commit the whole configuration to memory, but converts it to a custom, bespoke mnemonic. It's just utter nonsense to refer to the number of possible configurations; it has nothing directly to do with what the human is doing.
If I memorize a 30 word passage, i have not "processed the set of possible 30 word passages".
10 baud, perhaps. Definitely not 10 bits/s!
I'm interested in the phenomenon that no matter how messed up (or absent) my sensory input becomes, it doesn't seem to impact my "inner observer". The only things impacting it are sleep or loss of consciousness, yet even then there can be moments during these states in which this observer becomes lucid.
It's like the science of the soul. One notable implication is dementia: when people become unable to function and recognize things, I wonder if that "inner observer" is still just as intact as it is on a huge dose of LSD?
I skimmed over the paper, so sorry if I missed it, but isn't the most obvious answer that (at least conscious) human decision making happens at a high level of abstraction? Information output or processing in terms of bits, say the number of words in a sentence, isn't equivalent to the meaning or effect that has in the world, which is what matters.
If a general orders an army to go to war that's a few bits of text on a piece of paper, but obviously that bears no relation to the consequence of what that puts into motion in the real world. It's not like we spend most of our days speedcubing or writing and reading the dictionary. We aren't low level data processors even though that might happen unconsciously somewhere in the sensory system, but the conscious mind is a reasoning system.
This is visible even in artificial systems; from DeepMind on AlphaZero (https://deepmind.google/discover/blog/alphazero-shedding-new...):
"For each move, AlphaZero searches only a small fraction of the positions considered by traditional chess engines. In Chess, for example, it searches only 60 thousand positions per second in chess, compared to roughly 60 million for Stockfish."
Not to draw too many parallels between the human brain and these systems, but they do obviously share the similarity that higher-order conceptual decision making, compared to raw data processing, will result in lower decision rates, at least at the top level. That's what I'd expect to happen, not a paradox.
Bits are a unit of measurement we use in relation to computers. Humans are not computers. Do not use bits to measure anything in relation to humans. Stop thinking of humans as computers. It's dehumanizing, unhealthy, and a very narrow way to think of people. Using a computer model for humans is useless at best and misleading at worst.
This paper isn't proposing that humans can be understood using a "computer model".
Our awareness of time seems arbitrary. It feels like time went by fast or slow; how does that work, anyway?
These types of articles are so fundamentally flawed... it beggars belief. Why not ask the opposite question: if bandwidth works the way they describe, why can't H100 GPUs (3TB/s bandwidth) perform sensorimotor tasks 24 trillion times faster than a human? (Spoiler alert: they can not).
<s> Could it be... there is a bit of a straw man argument here? About how much information it actually takes to input and output a complete sensorimotor task? I dare say! </s>
It's 10^9 bits/s. Your title is wrong.
> The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at ∼10^9 bits/s.
The title appears to be accurate?
Just for playing any sport, the accuracy needed to instruct hundreds of muscles to work in a certain way is certainly above 10 bits.
Pointing out positions in a 10cm x 10cm x 10cm cubic volume seems to be possible at significantly faster than one per second.
The slower examples listed in the table all have some externalities, like motor/object-manipulation feedback-loop overhead (speedcubing) and/or redundancy, and are not optimized for pure information density, so I have no idea why they settled on that average and not the optimum.
Object recognition and reading are already at ~50 bits/s.
https://arxiv.org/html/2408.10234v2#S3
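As a rough, back-of-the-envelope version of that pointing claim (the resolution and pointing rate below are guesses, not measurements):

    import math

    cube_side_cm = 10
    resolution_cm = 1                               # assume ~1 cm pointing accuracy
    cells = (cube_side_cm // resolution_cm) ** 3    # 1000 distinguishable spots
    bits_per_point = math.log2(cells)               # ~10 bits per pointed-at spot
    points_per_second = 2                           # guessed rate
    print(points_per_second * bits_per_point)       # ~20 bits/s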
> Just for playing any sport, the accuracy needed to instruct hundreds of muscles to work in a certain way is certainly above 10 bits
But significant portions of that process are not done by the conscious brain, and some aren't done by the brain at all (reflex and peripheral nervous system). We don't consciously think about each of the 100 muscles we're switching on and off at rapid speed.
I read somewhere that the eye transmits 10 Million bits per second to the brain. I think all of this is a matter of perspective.
This is such a bad paper. Almost all the calculations and equations look like back-of-the-envelope estimates. A decent researcher would have provided some tests of their hypotheses.
The numbers cited and used in calculations are supported by citations. The purpose of this paper is not to test a hypothesis, or to gather new data, but to think about existing data and new directions of research. This is spelled out in the paper's abstract, which is a kind of summary of the whole paper, useful for getting a very quick idea of the paper's purpose -- expanded further in the introduction and revisited in the conclusion.
Thank you for explaining what an abstract is... The fact that those numbers come from a citation doesn't make them true. This is a badly written paper that a decent researcher wouldn't have written (and I know that the author has many papers; I am speaking about this one) and that a decent reviewer would have rejected. A paragraph about Elon Musk? Guesstimates of information rates? As a blog post it would have been okay-ish; as a scientific paper it is quite bad.
> The fact that those numbers come from a citation doesn't make them true
It does make them the cited paper's problem, though.
The guesstimates are explained as guesstimates, and used as illustrations of possible upper limits.
The problem is that Caltech's PR machine then spits out articles like https://medicalxpress.com/news/2024-12-brain-paradox-quantif... or https://www.discovermagazine.com/mind/the-information-enteri... or https://www.scientificamerican.com/article/the-human-brain-o... with words like "measure", "quantifies", and "fill that quantitative gap".
There are no measurements here. I can guess the weight of an apple based on some prior (which my brain stores as some continuous distribution, not bits), but I am not measuring it.
It's incredibly tiring that bad science is sold as good science only because it comes from some fancy university. This paper is crap and should be treated as such.
You seem quite upset. Can you explain exactly which quantities don't make sense?
The 10 bits/s processing rate doesn't explain why a human talks better than an LLM that consumed terabytes of data during training.