"Human consciousness is inherently a quantitative mechanism. It grasps reality--i.e., the attributes of entitites and their causal relationships to one another--only through grasping quantitative data. In this sense, quantity has epistemological primacy over quality." - David Harriman
What is Induction?
Induction is the process by which one forms generalizations from particular instances. How does one know whether a generalization is true?
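A toy sketch may make the process concrete. The instances and predicate below are hypothetical illustrations (not from Harriman's text): the program observes particulars and, finding no counterexample, forms an enumerative generalization.

```python
# Enumerative induction, sketched: generalize from particular
# instances unless a counterexample is observed.

def induce(instances, predicate):
    """Return a generalization if every observed instance satisfies
    the predicate; otherwise report the counterexample."""
    for x in instances:
        if not predicate(x):
            return f"counterexample: {x}"
    return "all observed instances satisfy the predicate"

# Particular observations: dropped objects fall.
observations = ["stone falls", "apple falls", "leaf falls"]
print(induce(observations, lambda s: s.endswith("falls")))
```

Of course, this only restates the problem the post is about: the sketch checks instances, but it cannot tell us whether the generalization is *true* beyond them.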
The answer, according to David Harriman in his book “The Logical Leap”, is:
Let us now sum up in regard to the axioms of induction. When a first-level inducer identifies his concrete experience of cause and effect in terms of words, his perceptual grasp of the causal relationship becomes thereby a conceptual grasp of it, i.e., a generalization. And since the application of first-level concepts is automatic and self-evident, the two aspects of a first-level generalization—the perceptual and the conceptual—are each, to a human mind, self-evident.
What does this mean for the future of Artificial General Intelligence? If you take the answer seriously, then the way one builds a system that is capable of forming true generalizations is by building embodied systems that can observe the world, observe themselves, and conceptualize.
Perception and language tasks (in the context of AI) have largely been solved.
Induction can’t be deduced. It happens when the perception of causation becomes language (or, less metaphorically, when a mind refers to it conceptually).
Transformers already map between contexts in embedding spaces. Imagine an embedding space for an agent consisting of sensorimotor controls; a capacity for self-referential, internally developed, and externally perceived language; and an environment that affords tasks requiring high-level concept formation for their completion.
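A minimal sketch of what "mapping contexts in an embedding space" means, with invented vectors standing in for a learned model (no actual transformer here): contexts become vectors, and semantically similar contexts land near one another under cosine similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings; a real model would learn these.
embeddings = {
    "the cat sat":  [0.90, 0.10, 0.00],
    "a cat rested": [0.85, 0.20, 0.05],
    "stock prices": [0.00, 0.10, 0.95],
}

query = embeddings["the cat sat"]
nearest = max(
    (k for k in embeddings if k != "the cat sat"),
    key=lambda k: cosine(query, embeddings[k]),
)
print(nearest)  # → a cat rested
```

In this picture, the agent's sensorimotor signals, self-referential language, and environment states would all be points in one such shared space.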
One might, however, develop an agent with high-level competency at solving tasks that is nonetheless unable to generalize, because it solves the problems before ever needing to develop abstractions.
This raises an interesting question: are there tasks that require abstraction, and that cannot be solved with perception and a high-dimensional latent space alone? Perhaps, though, the answers are always found in that high-dimensional space, and one uses words to consciously (as opposed to sub-consciously) grasp the solution.