Neural-Symbolic Integration
Neural network models have achieved impressive results in many areas of Artificial Intelligence, including image processing, natural language understanding, and reinforcement learning. However, many tasks amenable to rigorous symbolic methods, such as sequential decision-making, representation of conceptual knowledge, and modeling of reasoning, are solved unreliably or not at all by connectionist models. In addition, the symbol grounding problem, identified by Harnad back in 1990, remains a central open question in Artificial Intelligence research.

This research topic is intended to showcase the state of the art and new ideas in the field of neuro-symbolic integration, in order to identify promising directions and notable advances. Another goal is to place the developed methods and algorithms in the general context of research on cognitive systems, models, and cognitive architectures, so as to clarify the role and proper place of integrative approaches.
The project's primary goal is to develop hybrid methods of learning and planning that increase the autonomy of intelligent agents operating in human-oriented environments. We aim to address the symbol grounding problem by developing a method for integrating symbolic and sub-symbolic representations that allows effective use of sensory information when generating a sequence of an agent's actions at the conceptual level. We also aim to improve the previously proposed concept of neuro-symbolic integration based on vector-semiotic architectures, and to develop the approach of object-oriented reinforcement learning and the concept of simultaneous learning and planning based on the SLAP agent architecture.
Neural-Symbolic Integration in Learning and Planning
We are open to cooperation. Send us an e-mail if you would like to work with us or join our team.
Team lead
Aleksandr Panov
© 2021
Moscow, Russia
Nizhny Susalny Lane 5, bld. 19