Moving Beyond Content‐Specific Computation in Artificial Neural Networks
A new wave of deep neural networks (DNNs) has performed astonishingly well on a range
of real‐world tasks. A basic DNN is trained to exhibit, in parallel, a large collection of different
input‐output dispositions. While this is a good model of the way humans perform some tasks
automatically and without deliberative reasoning, more is needed to approach the goal of
human‐like artificial intelligence. Indeed, DNN models are increasingly being supplemented
to overcome the limitations inherent in dispositional‐style computation. Examining these
developments, and earlier theoretical arguments, reveals a deep distinction between two
fundamentally different styles of computation, defined here for the first time: content‐
specific computation and non‐content‐specific computation. Deep episodic RL networks, for
example, combine content‐specific computations in a DNN with non‐content‐specific
computations involving explicit memories. Human concepts are also involved in processes of
both kinds. This suggests that the remarkable success of recent AI systems and the special
power of human conceptual thinking are both due, in part, to the ability to mediate between
content‐specific and non‐content‐specific computations. Hybrid systems take advantage of
the complementary costs and benefits of each. Combining content‐specific and non‐content‐
specific computations both has practical benefits and provides a better model of human
cognitive competence.
| Item Type | Article |
|---|---|
| Keywords | computation, deep neural networks, distributed representation, content‐specific, explicit memory, concepts |
| Subjects | Philosophy |
| Divisions | Institute of Philosophy |
| Date Deposited | 16 Sep 2021 09:55 |
| Last Modified | 06 Aug 2024 16:14 |