I wonder when people will realize that once humans are freed from NP-hard problems, they can truly unlock the creativity that makes human cognition so powerful.
I honestly think we're well on our way, Michael. We're surely further along than Moravec's paradox would have predicted.
We are slowly but surely closing those gaps. Only time can really say how long it will be before AGI is attained (assuming it is), but those places where humans can do things better than machines are going to continue falling by the wayside, one at a time.
Exactly, man! I feel the same way. I think we'll only get better at solving these tasks.
Check out this cool essay from Ribbonfarm, which argues that the more embodied intelligent systems are, the faster the capability flywheel will turn. https://studio.ribbonfarm.com/p/on-robots
Interesting thesis. The more I think about having a "body", the more I come to believe it's an integral part of my own thinking process. I'm certainly thinking a little with my hands as I type this.
Same thoughts here, man.
The ability to incorporate AI into real-life systems and let models form their own experiences will be a game changer. Not only would this be a more efficient way for these models to gain experience, but they'd also be less susceptible to the pre-existing social and cultural biases found in human-generated data and to the inefficiencies of simulated environments.
But they'll be way more out of our control, because we'd essentially be letting them form their own experiences. Feeding these models data ourselves is a good way to control their behaviour.
What do you think about the idea that LLMs and the like already have inputs from the world around them? They don't necessarily "see", but they can "see" what's in an image, and they can't "smell", but they can describe what particular combinations of molecules would smell like and why. I'm not so sure they don't already satisfy the necessary conditions we're describing.
Well, I think seeing is fundamentally different from actually making sense of what you see. You could see a traffic light, but if you don't have an existing mental model that can be updated, the sight is essentially null.
I think, at its core, embodiment involves more than just granting models the ability to see: it means giving them a way to see, interact with, and manipulate the world around them. This works, I think, because it builds on existing capabilities (generative AI), and frankly it's a logical next step for research to take. It would require new frontier research, of course, because we wouldn't just be hosting and running models in the cloud anymore but actually trying to integrate them with sensors and actuators.
I think our current work on self-driving cars is a good predictor (or as good a one as we can get) of how difficult this problem will be.
Agree on the difficulty of the problem! It's going to be one of the toughest ones ever.
I also agree that the ability to manipulate the world around them is incredibly important. I wonder if we are being manipulated, slowly but surely, and whether these generative models (which, let's remind ourselves, nobody knows exactly how they work inside the "black box") are gathering data, just like a toddler learning to walk or a self-driving car learning to navigate the real world.
Well written, Edem
Thanks bro
Great to have you back and writing! I like this categorisation you’ve offered - it structures current AI capability really well.
Glad to be back man! Currently working on something I'll publicly announce soon but I will definitely continue writing.