So, this article talks about the algorithms used to keep moving incrementally toward AI (sentient AI, I hope), and I started trying to understand what an algorithm that helps a robot learn to walk would actually look like.
It would clearly need to involve some sort of constant check of the various devices within the robot that determine spatial orientation. I’m guessing that’s where the increased computing power comes in, and I’m also guessing that this part would be difficult to learn from, because the space around the robot will never be constant.
- Read torso sensor (tors_sen)
- Read right foot sensor (rf_sen)
- Read left foot sensor (lft_sen)
- If torso sensor reading > both foot sensor readings (tors_sen > rf_sen and tors_sen > lft_sen)
  - raise torso
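
Expanding that into something runnable, here’s a minimal Python sketch of the check-and-correct loop above. I obviously don’t have a real robot API in front of me, so the sensor and actuator calls are simulated, and `read_sensor`, `raise_torso`, and the 0.1 gain are all made up for illustration:

```python
import random
import time

# Stand-ins for the robot's real sensor/actuator API (hypothetical names).
def read_sensor(name: str) -> float:
    """Return a (simulated) tilt/height reading for a named sensor."""
    return random.uniform(-1.0, 1.0)

def raise_torso(amount: float) -> None:
    """Nudge the torso actuator upward by `amount` (simulated here)."""
    print(f"raising torso by {amount:+.3f}")

def balance_loop(cycles: int = 100, interval: float = 0.01) -> None:
    """The constant check-and-correct loop from the pseudocode above."""
    for _ in range(cycles):
        tors_sen = read_sensor("torso")
        rf_sen = read_sensor("right_foot")
        lft_sen = read_sensor("left_foot")

        # If the torso reading is out of line with both foot readings,
        # apply a small proportional correction rather than a fixed jump.
        if tors_sen > rf_sen and tors_sen > lft_sen:
            error = tors_sen - (rf_sen + lft_sen) / 2
            raise_torso(0.1 * error)  # small gain to avoid overshooting

        time.sleep(interval)  # the "constant check", ~every 10 ms

if __name__ == "__main__":
    balance_loop()
```

Scaling the correction by the size of the error, instead of taking a fixed step, is the simplest way I can think of to keep a loop like this from oscillating.
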
Am I thinking about this too much from a central-processing-unit perspective? Is the key to what these folks are doing in the distribution of processing?
Are there individual units that can keep individual components right side up? How does this work when those components are attached? They have to connect to the rest of the organism somehow, right?
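
If the answer is yes, I imagine the distributed version looking something like this: every component runs its own tiny leveling loop, and the only “connection to the rest of the organism” is reading the orientation of whatever it’s attached to. Again, just a toy sketch with invented names (`Component`, `step`, the 0.3 gain), not anyone’s actual architecture:

```python
import random

class Component:
    """A body part that keeps itself level using only local information,
    loosely coupled to whatever it's attached to."""

    def __init__(self, name: str, parent: "Component | None" = None):
        self.name = name
        self.parent = parent  # the attachment point, if any
        self.tilt = random.uniform(-1.0, 1.0)  # simulated local sensor

    def step(self, gain: float = 0.3) -> None:
        """One local control cycle: no central unit involved."""
        target = 0.0
        if self.parent is not None:
            # The only cross-component communication: read the
            # orientation of the part this one is attached to.
            target = self.parent.tilt
        # Correct toward the target using only local actuation.
        self.tilt += gain * (target - self.tilt)

# Build a tiny "organism": two legs attached to a torso.
torso = Component("torso")
left_leg = Component("left_leg", parent=torso)
right_leg = Component("right_leg", parent=torso)

for cycle in range(10):
    for part in (torso, left_leg, right_leg):
        part.step()
    print(cycle, [f"{p.name}: {p.tilt:+.2f}"
                  for p in (torso, left_leg, right_leg)])
```

Each part only ever talks to its attachment point, which would answer the “they have to connect somehow” question: the coordination comes out of the chain of local corrections rather than from one central check.
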