AI from a Different Computer Universe
This isn't related to the rest of my (and Ted's) different computer universe per se, but thinking about new, dynamic media has led me to worry not only about the ways we may think, but also about the ways artificial intelligences may be designed to think. It is clear to me, and to many others, that general AI does not arise from simple linear algebra (which neural networks essentially are). Currently we buttress linear algebra with the heuristic systems of yore to generalize AI to narrow but still "complex" tasks like navigating streets or sidewalks. What could be the form of a future approach? The following diagram (cc: The Lagrangian) is a quick attempt to synthesize a few things I've thought across multiple domains into a workable map for myself.
Its form is (very) loosely modeled after the Wilberian AQAL, which is why it is skewed towards the upper left, where the interior of the mind resides. I don't put stock in AQAL except as a sloppy road-map, and since I'm not a very meticulous cartographer it will work here as well. Inside is everything truly inside someone (I've tried to capture which systems are closest to the surface, though), and Outside is everything else. The Inside is within the Outside (of course?).
Overlaps represent both shared sub-systems and a shorthand for "feeds back into", a recursion or feedback loop. Goals influence emotions and plans, plans effect action and new goals, senses sense the actions and their interactions with the goals, etc.
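To make the loop concrete, here is a purely illustrative toy in which each subsystem feeds into the next and the result feeds back: goal drives plan, plan effects action, senses observe the result, and emotion reports on goal satisfaction. All of the names and the arithmetic are invented for this sketch; nothing here is a claim about how a real system would implement the diagram.

```python
# Toy feedback loop: Goal -> Plan -> Action -> Sense -> Emotion -> Goal.
# Everything here is a placeholder standing in for a far richer subsystem.

def run_loop(goal, world, steps=5):
    """Run a few turns of the loop; return (plan, sensed, emotion) per turn."""
    history = []
    for _ in range(steps):
        plan = goal - world            # plan: close the gap between goal and world
        world += plan * 0.5            # action: the plan effects a partial change
        sensed = world                 # sense: observe the action's result
        emotion = abs(goal - sensed)   # emotion: feedback on goal satisfaction
        history.append((plan, sensed, emotion))
        if emotion < 0.1:              # a satisfied goal spawns a new one
            goal += 1.0
    return history
```

Even in this cartoon, the "overlap" shows up: emotion is computed from the same signal the senses deliver, which is the synesthesia-like confusion discussed below.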
I've also included some jargon from my previous post "Goal Plane", which explains a little bit about the adaptive theory of emotion and the consequences that spin out from it. Mainly, goal (and emotion) provides a clue as to how humans turn information into action. Zeitgeist refers to the collective miasma of everyone else's goals. Emotions are the feedback mechanism of goals, and informationally resemble senses.
"Patterned" and "patterns" are inspired by David Chapman's work at Meaningness, which describes the world as both nebulous and patterned - helpful ways to think of the chaos of the world, but also our ability to move through it. It should be noted that David Chapman has been an AI researcher and has thought about similar things.
There are some qualifications and caveats I would make to the diagram.
- I think Senses must be information rich.
- I think Goals must be mediated by an explicit emotional interface, not accessed directly.
- Interpretation is the realm of isomorphism and heuristics - where classifiers, abstractions and rule systems interact.
- Any Substrate must have representational tools that can accurately capture complexity and provide a reasonable way to transform data into knowledge.
- I think systems "overlapping" involves their confusion - a sort of synesthesia internal to the mind. Emotions feel like a sense, a goal, and part of the cognitive process, for example.
- Absent from the diagram is Attention, which I believe is an important sense-making instrument that coheres the layers and determines what layers are operating and what inputs and outputs are "important".
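One way to picture Attention's role is as a gate that scores inputs against the current goal and decides which are "important" enough to pass through to the other layers. The sketch below is hypothetical - the scoring, the names, and the top-k cutoff are all invented for illustration, not a proposal for how attention actually works.

```python
# Hypothetical sketch of Attention as a gate over subsystem inputs:
# score each input's relevance to the current goal, pass only the
# most "important" ones on to the rest of the system.

def attend(inputs, goal_keywords, top_k=2):
    """inputs: list of (label, signal) pairs. Returns the top_k pairs
    whose labels overlap most with the goal's keywords."""
    def relevance(item):
        label, _ = item
        return sum(word in label for word in goal_keywords)
    # Stable sort: equally relevant inputs keep their original order.
    ranked = sorted(inputs, key=relevance, reverse=True)
    return ranked[:top_k]
```

The interesting part isn't the scoring, which is trivially crude here, but the placement: a gate like this sits between every pair of overlapping layers in the diagram, which is why Attention is hard to draw as a single region.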
Finally, some of the areas already have workable implementations:
- Senses - modern sensors are incredibly high resolution, often exceeding human capacity. (caveat: emotions, which are a definite unsolved problem)
- Interpretation - this is where I believe almost all AI research is currently focused. Classifiers work at the object-level and heuristics bind them together with a human "what does this mean".
- Action - physical systems are getting better all the time, with great examples from Boston Dynamics, Intuitive Surgical (the da Vinci system) and other robotics laboratories across the country. That's not to say that physical interaction is the only type of action. Action in the digital space has gotten faster and better as well.
- Substrate of memory and patterns - I think zzStructure is an interesting candidate here, though I'm sure other memory systems are just fine. I like zzStructure's dimensionality and linearizability.
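For a feel of what I mean by zzStructure's dimensionality and linearizability, here is a minimal sketch under my reading of Nelson's ZigZag: each cell links to at most one neighbor in each direction along any number of named dimensions, and any dimension can be walked linearly. The class and method names are my own shorthand, not any canonical API.

```python
# Minimal zzStructure-like store: cells linked along named dimensions,
# at most one positive and one negative neighbor per dimension.

class ZzCell:
    def __init__(self, value):
        self.value = value
        self.links = {}   # dimension -> {"+": cell, "-": cell}

    def connect(self, other, dim):
        """Link self -> other along the positive side of `dim`."""
        self.links.setdefault(dim, {})["+"] = other
        other.links.setdefault(dim, {})["-"] = self

    def rank(self, dim):
        """Linearize: walk the positive direction of `dim` from here."""
        cell, out = self, []
        while cell is not None:
            out.append(cell.value)
            cell = cell.links.get(dim, {}).get("+")
        return out
```

The same cell can sit in many dimensions at once - a memory can be on a "time" rank and a "topic" rank simultaneously - which is the property that makes it attractive as a substrate for patterns.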
I think setting Goals, crafting Emotions, enabling Planning, tuning Attention, and providing a great self-editor in the sky with Reflective Cognition are where we are, and where we will continue to be stalled. Nobody knows what a general robot goal looks like, since no one can accurately assess what their own goals are. Nobody has any idea how to make silicon feel. Except by coarsely applying their own plans as heuristics, no one knows how to turn information into meaning and then into meaningful action. And no one, no one knows what it looks like when the machines start self-reflecting towards their own ends.