Moving Beyond Content‐Specific Computation in Artificial Neural Networks

Shea, Nicholas (2021) Moving Beyond Content‐Specific Computation in Artificial Neural Networks. Mind & Language. ISSN 1468-0017

A new wave of deep neural networks (DNNs) has performed astonishingly well on a range
of real‐world tasks. A basic DNN is trained to exhibit, in parallel, a large collection of different
input‐output dispositions. While this is a good model of the way humans perform some tasks
automatically and without deliberative reasoning, more is needed to approach the goal of
human‐like artificial intelligence. Indeed, DNN models are increasingly being supplemented
to overcome the limitations inherent in dispositional‐style computation. Examining these
developments, and earlier theoretical arguments, reveals a deep distinction between two
fundamentally different styles of computation, defined here for the first time: content‐
specific computation and non‐content‐specific computation. Deep episodic RL networks, for
example, combine content‐specific computations in a DNN with non‐content‐specific
computations involving explicit memories. Human concepts are also involved in processes of
both kinds. This suggests that the remarkable success of recent AI systems and the special
power of human conceptual thinking are both due, in part, to the ability to mediate between
content‐specific and non‐content‐specific computations. Hybrid systems take advantage of
the complementary costs and benefits of each. Combining content‐specific and non‐content‐
specific computations both has practical benefits and provides a better model of human
cognitive competence.



Shea_Moving beyond CS comptn_Preprint.pdf
Accepted Version
Creative Commons: Attribution-No Derivative Works 4.0
