Can you describe the perception-concept gap?
In short: contemporary AI systems are not able to detect and conceptualize an unknown object, even when information about it is present in the sensory data. If Tesla's Autopilot has not been trained to recognize a rhinoceros, then when it encounters one, it simply "does not see" it.
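To make that failure mode concrete, here is a minimal sketch of a closed-set classifier head. The labels, weights, and feature vector are all made up for illustration; the point is the interface. Because the output layer only covers the classes it was trained on, it is structurally forced to map an unknown object onto a known label:

```python
import numpy as np

# Toy stand-in for a trained closed-set classifier head. The label set
# and weights are invented for this sketch; a real detector would have
# learned them, but the interface is the same: the head can only ever
# answer with one of its training labels.
KNOWN_LABELS = ["car", "pedestrian", "cyclist", "traffic_cone"]
rng = np.random.default_rng(0)
W = rng.normal(size=(len(KNOWN_LABELS), 16))  # pretend these were learned

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(features):
    """Closed-set prediction: argmax over known classes, no 'unknown' option."""
    probs = softmax(W @ features)
    i = int(np.argmax(probs))
    return KNOWN_LABELS[i], float(probs[i])

# A feature vector from an object the network was never trained on:
# our stand-in for the rhinoceros. The head cannot say "I don't know";
# it must pick one of the four labels.
rhino_features = rng.normal(size=16)
label, confidence = classify(rhino_features)
print(f"unknown object reported as: {label} (p={confidence:.2f})")
```

A detector built this way either mislabels the rhinoceros as one of its known classes or, if the score falls below its detection threshold, suppresses it entirely, which is exactly the "does not see it" behavior.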
My take on your "perception-concept gap" idea is the gap between how the human brain forms concepts and how our AI algorithms form concepts. For example, a person driving a car would have no problem "seeing" a rhinoceros in the middle of the road blocking their path even though they had never seen one before. They would be confused by what they were looking at, but they wouldn't make the mistake of trying to run it over because they "didn't see it" as a poorly written self driving car object detector might do.
If we fail to duplicate what the brain does, our code has a gap between what we expect, using our own brains, and what our code actually does.
So the question becomes: what is the brain doing, and why has it been so hard to duplicate? Are there simple algorithms that would allow us to duplicate what the brain does and reduce the gap to zero, or is the brain a highly complex system, evolved over billions of years, that will be extremely time-consuming and difficult to reproduce?
I have always believed there are simple algorithms that will duplicate what our brain is doing. Maybe that is just because I want to believe it, but there is evidence to suggest it is true.
Check out: Vladislav D. Veksler, Blaine E. Hoffman, and Norbou Buchler, "Symbolic Deep Networks: A Psychologically Inspired Lightweight and Efficient Approach to Deep Learning," Topics in Cognitive Science, 14(4):702-717, Oct 2022. doi: 10.1111/tops.12571. Epub 2021 Oct 5.
Thanks for the link. The described approach is very interesting, but the tasks being solved are, as with other neural networks, the detection of already-known objects. A training phase with reward/penalty is presented, which implies that the compiler of the training dataset must know all potentially detectable situations in advance (i.e. knowledge transfer, not an ability to detect unknown things == knowledge creation).
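For contrast, here is a minimal sketch of the kind of behavior that would count as a first step toward knowledge creation: flagging an input as novel instead of forcing it into a trained category. To be clear, this is a plain distance-to-stored-exemplars novelty check, not the method from the cited paper, and every name, value, and threshold in it is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stored exemplars of the classes the system *was* trained on.
# These are random stand-ins; a real system would store learned features.
known_exemplars = {
    "car": rng.normal(loc=0.0, size=(20, 16)),
    "pedestrian": rng.normal(loc=3.0, size=(20, 16)),
}

def nearest_known_distance(x):
    """Distance from x to the closest stored exemplar of any known class."""
    return min(
        float(np.linalg.norm(ex - x, axis=1).min())
        for ex in known_exemplars.values()
    )

# In practice this would be calibrated on held-out data from the known
# classes; the value here is made up for the sketch.
NOVELTY_THRESHOLD = 8.0

def perceive(x):
    if nearest_known_distance(x) > NOVELTY_THRESHOLD:
        # The system at least *notices* the unknown instead of silently
        # mapping it onto a known label; this is the hook where a new
        # concept could be formed.
        return "UNKNOWN: candidate for a new concept"
    return "known object"

rhino = rng.normal(loc=12.0, size=16)  # far from every stored exemplar
print(perceive(rhino))
```

Detecting that "something unknown is here" is of course only half the problem; conceptualizing it would mean clustering such flagged observations into a new category without a human having pre-defined it in the training set.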