A few machine learning algorithms worth developing

Type: Lecture
Date:
Room: 107
Attachments:

The lecture will be given in Polish.

Despite the great popularity of deep learning methods, a comprehensive theory presenting a unified perspective on various machine learning methods is still missing. Large component-based data mining packages now contain hundreds of learning methods, input transformations, and pre- and post-processing components that may be combined in more than 10 million ways. Although there is "no free lunch", several methods that are close to optimal may be found through meta-learning based on heuristic search in the space of all possible learning models. Model spaces that form a good basis for meta-learning include heterogeneous neural, fuzzy, prototype, or similarity-based support feature spaces.
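As a toy illustration of this idea, the sketch below treats meta-learning as a search over combinations of standard preprocessing, feature transformation, and classification components, scored by cross-validation. The scikit-learn components, the exhaustive search, and the tiny model space are illustrative assumptions, not the heuristic search or the model space used by the systems discussed in the talk.

    # Minimal sketch: meta-learning as search over a small space of composed
    # transformations and learners, each candidate scored by cross-validation.
    from itertools import product
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler, MinMaxScaler
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)

    scalers     = [None, StandardScaler(), MinMaxScaler()]
    transforms  = [None, PCA(n_components=2)]
    classifiers = [KNeighborsClassifier(), SVC(), GaussianNB()]

    best_score, best_pipe = -1.0, None
    for scaler, transform, clf in product(scalers, transforms, classifiers):
        steps = [s for s in (scaler, transform, clf) if s is not None]
        pipe = make_pipeline(*steps)
        score = cross_val_score(pipe, X, y, cv=5).mean()
        if score > best_score:
            best_score, best_pipe = score, pipe

    print(f"best model: {best_pipe}  (CV accuracy {best_score:.3f})")

In a realistic setting the candidate space is far too large to enumerate, which is why heuristic search guided by model complexity and expected quality is needed.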

Instead of pre-defined hierarchical processing layers, a general implementation of meta-learning is possible within the transformation-based learning paradigm, which unifies most computational intelligence methods and shows how to solve the "crisis of the richness" by selecting optimal transformations that minimize complexity and maximize the quality of the resulting data models. Meta-learning systems learn the simplest data models that many sophisticated methods miss, generate multi-resolution models whenever needed, and solve difficult, highly non-separable problems that are beyond the capabilities of current state-of-the-art algorithms, including neural networks and support vector machines. In contrast to backpropagation, which tries to achieve linear separability in one shot, additional criteria are defined after each transformation to create appropriate internal representations. Support Feature Machines build feature spaces that facilitate finding simple solutions.
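The sketch below is a simplified reading of the support-feature idea: enlarge the input space with Gaussian-kernel similarities to a few prototype examples and then fit an ordinary linear classifier in the enlarged space. The dataset, the random prototype selection, and the kernel width are illustrative assumptions, not the construction used by Support Feature Machines themselves.

    # Minimal sketch: extend the original inputs with prototype-similarity
    # features, then let a plain linear model find a simple solution there.
    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics.pairwise import rbf_kernel

    X, y = make_circles(n_samples=400, factor=0.4, noise=0.08, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # pick a handful of training points as prototypes (here: random choice)
    rng = np.random.default_rng(0)
    prototypes = X_tr[rng.choice(len(X_tr), size=10, replace=False)]

    def support_features(X):
        """Original inputs plus Gaussian-kernel similarities to prototypes."""
        return np.hstack([X, rbf_kernel(X, prototypes, gamma=2.0)])

    linear = LogisticRegression(max_iter=1000).fit(support_features(X_tr), y_tr)
    print("accuracy with support features:",
          linear.score(support_features(X_te), y_te))

The concentric-circles data are not linearly separable in the original space, but a linear model in the extended space solves the problem easily, which is the kind of simple solution such feature spaces are meant to expose.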

Visualization of learning dynamics in transformation-based systems shows how to set simpler goals for learning, for example k-separability instead of linear separability. This approach cannot miss simple solutions and will always identify trivial data, in contrast to deep learning and ensemble-based approaches.
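The notion of k-separability can be illustrated on parity data: n-bit parity is not linearly separable, yet a single projection onto the diagonal direction splits it into n+1 pure intervals. The check below counts those intervals for the 3-bit case; the helper function and the choice of example are illustrative.

    # Minimal sketch: count the intervals of constant class along a 1-D
    # projection -- the k in k-separability -- for 3-bit parity.
    import numpy as np
    from itertools import product

    def count_intervals(projection, labels):
        """Sort points by projection value and count maximal runs of one class."""
        order = np.argsort(projection, kind="stable")
        sorted_labels = labels[order]
        # a new interval starts wherever the class label changes
        return 1 + int(np.sum(sorted_labels[1:] != sorted_labels[:-1]))

    X = np.array(list(product([0, 1], repeat=3)), dtype=float)
    y = X.sum(axis=1).astype(int) % 2          # parity labels

    w = np.ones(3)                             # project onto the diagonal
    k = count_intervals(X @ w, y)
    print(f"parity-3 is {k}-separable along w = (1,1,1)")   # prints 4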

References:

  • Angelov, Gu, Kangin & Principe, Empirical Data Analysis: A New Tool for Data Analytics, 2016. http://eprints.lancs.ac.uk/80044/1/1008.pdf
  • Jankowski N, Duch W, Grąbczewski K, Meta-learning in Computational Intelligence. Studies in Computational Intelligence, Vol. 358, Springer, 2011.
  • Duch W, Towards comprehensive foundations of computational intelligence. In: W. Duch and J. Mandziuk (eds.), Challenges for Computational Intelligence, Studies in Computational Intelligence, Vol. 63, Springer, 261-316, 2007.
  • Duch W, Similarity-based methods: a general framework for classification, approximation and association. Control and Cybernetics 29(4), 937-968, 2000.

For papers and links on this topic see:
