Finite and Confident Teaching in Expectation: Sampling from Infinite Concept Classes
Chapter (published version)
Date
2020
Original version
Frontiers in Artificial Intelligence and Applications. 2020, 325 (ECAI 2020), 1182-1189. http://dx.doi.org/10.3233/FAIA200217

Abstract
We investigate the teaching of infinite concept classes through the effect of two distributions: the learning prior, which the learner uses to derive posteriors that prefer some concepts over others and the teacher uses to devise the teaching examples, and the sampling prior, which determines how concepts are sampled from the class. We analyse two important classes: Turing machines and finite-state machines. We derive bounds on the teaching dimension when the learning prior is based on a complexity measure (Kolmogorov complexity and the minimal number of states, respectively) and analyse the sampling distributions that lead to a finite expected teaching dimension. The learning prior goes beyond a complexity or preference choice when we use it to increase the confidence of identification, expressed as a posterior that grows as more examples are given. We highlight the trade-off between three elements: the bound on the teaching dimension, the representativeness of the sample, and the certainty of the identification. This has implications for understanding what teaching from rich concept classes to machines (and humans) entails.
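The convergence condition behind "teaching in expectation" can be made concrete. If concepts are grouped by complexity level n (say, the minimal number of states of a finite-state machine), the expected teaching dimension is E[TD] = sum_n q(n) * TD(n), where q is the sampling prior over levels and TD(n) is a worst-case teaching-dimension bound at level n. The sketch below is purely illustrative: the geometric prior and the exponential bound 2^n are assumptions chosen to exhibit the threshold behaviour, not the specific bounds derived in the paper.

```python
# Illustrative sketch: expected teaching dimension under a sampling prior.
# The bound td_bound and the geometric prior are hypothetical stand-ins,
# not the forms derived in the paper.

def td_bound(n: int) -> float:
    """Assumed worst-case teaching-dimension bound for concepts of
    complexity level n (e.g., minimal number of states)."""
    return 2.0 ** n  # assumption: exponential growth in complexity

def geometric_prior(alpha: float, n: int) -> float:
    """Sampling prior that draws complexity level n with
    probability (1 - alpha) * alpha**n."""
    return (1.0 - alpha) * alpha ** n

def expected_td(alpha: float, levels: int = 60) -> float:
    """Truncated estimate of E[TD] = sum_n q(n) * TD(n)."""
    return sum(geometric_prior(alpha, n) * td_bound(n) for n in range(levels))

# The series q(n) * 2**n is geometric with ratio 2 * alpha, so the
# expectation is finite when alpha < 1/2 and diverges when alpha >= 1/2.
for alpha in (0.3, 0.45, 0.6):
    print(alpha, expected_td(alpha))
```

Under these assumptions the expectation is finite exactly when alpha < 1/2: the sampling prior must decay faster than the teaching-dimension bound grows for the expected teaching dimension to be finite, which is the trade-off between the bound, the representativeness of the sample, and the certainty of identification that the abstract describes.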