
Interpolation-assisted Learning Classifier Systems


Project start: 01.07.2014
Project sponsor: Universität Augsburg
Local project lead: Anthony Stein, M.Sc.
Publications: Link to publication list

Summary

This research project is concerned with revealing the potential of incorporating scattered data interpolation into the algorithmic structure of Learning Classifier Systems (LCS) in order to improve their learning efficiency by deriving novel knowledge pieces (i.e., classifiers) from already gained knowledge.

Description

In my PhD studies, I investigate the research question of whether it is beneficial to use scattered data interpolation techniques (e.g., Shepard's method, neighborhood-based techniques, radial basis functions) to enhance the way novel knowledge (usually termed `rules' or `classifiers') is created within Learning Classifier Systems (LCS), more precisely within their most prominent representative -- the Extended Classifier System (XCS). To this end, I changed the way new rules are initialized: instead of setting classifier attributes to fixed values predefined at design time, these initial values are now interpolated from the values of adjacent existing rules. The creation of offspring classifiers within the employed Genetic Algorithm (GA) was modified in a similar way. Furthermore, I proposed the use of Radial Basis Function (RBF) interpolation for predicting the expected reward (in a reinforcement learning setting) or, alternatively, the value of a function to be approximated (i.e., in a regression scenario).
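To make the idea of interpolating a new rule's initial values from adjacent rules concrete, the following is a minimal sketch of Shepard's method (inverse distance weighting), one of the techniques named above. The function name, the choice of power parameter, and the usage as classifier-prediction initializer are illustrative, not the project's actual implementation.

```python
import numpy as np

def shepard_interpolate(x, points, values, p=2.0, eps=1e-12):
    """Inverse distance weighting (Shepard's method).

    Estimates a value at query point x from known sampling points.
    In the hypothetical usage below, the sampling points are the
    situations covered by adjacent classifiers and the values are
    their predictions.
    """
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(points - x, axis=1)
    # Exact hit on a sampling point: return its stored value directly.
    if np.any(d < eps):
        return float(values[np.argmin(d)])
    w = 1.0 / d ** p          # closer neighbors get larger weights
    return float(np.sum(w * values) / np.sum(w))

# Initialize a new rule's prediction from three neighboring rules.
neighbors = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
predictions = [10.0, 20.0, 30.0]
# The query point is equidistant from all three neighbors,
# so the result is their plain average.
print(shepard_interpolate(np.array([0.5, 0.5]), neighbors, predictions))
```

Note that the weights diverge as a query approaches a sampling point, which is why an exact hit is handled separately.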

To incorporate interpolation into XCS' algorithmic structure, a novel component was designed that extends the architecture of the system. This so-called Interpolation Component (IC) can be integrated in two different ways: (1) loosely coupled via a well-defined Machine Learner Interface (MLI) -- the IC approach, or (2) tightly integrated, where the existing (evolved) rules themselves serve as sampling points for the interpolation -- the CIC approach.
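The loosely coupled variant can be pictured as a self-contained component that collects its own sampling points and answers interpolation queries. The sketch below is an assumption about what such a component could look like (here backed by inverse distance weighting); all names are illustrative and do not reflect the project's actual MLI.

```python
import numpy as np

class InterpolationComponent:
    """Hypothetical sketch of a loosely coupled Interpolation Component.

    The component maintains its own set of sampling points. In the
    tightly integrated CIC variant, the evolved rules themselves would
    serve as sampling points instead of a separate store.
    """

    def __init__(self, p=2.0):
        self.p = p
        self.points = []   # situations observed so far
        self.values = []   # associated target values (e.g., rewards)

    def add_sample(self, x, y):
        """Record a (situation, value) pair as a sampling point."""
        self.points.append(np.asarray(x, dtype=float))
        self.values.append(float(y))

    def interpolate(self, x):
        """Inverse-distance-weighted estimate over stored samples."""
        x = np.asarray(x, dtype=float)
        d = np.array([np.linalg.norm(p - x) for p in self.points])
        if np.any(d < 1e-12):
            return self.values[int(np.argmin(d))]
        w = 1.0 / d ** self.p
        return float(np.dot(w, self.values) / np.sum(w))

ic = InterpolationComponent()
ic.add_sample([0.0], 0.0)
ic.add_sample([1.0], 1.0)
print(ic.interpolate([0.25]))   # estimate between the two samples
```

Keeping the sampling-point store behind such an interface is what allows the underlying interpolant to be swapped without touching the rest of the system.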

In various experiments whose results have been published over the past years, it turned out that the use of interpolation can indeed increase the learning efficiency of XCS in classification as well as regression tasks. Especially at the beginning of a run, when the knowledge base is still empty and novel rules have to be created frequently, interpolation leads to higher predictive accuracy as well as a lower average total number of rules during the learning phase.

Current aspects of R&D

  • Fine-tuning of interpolation-based techniques developed so far
  • Investigation of and comparison with further interpolation methods (e.g., Gaussian Process Regression/Kriging)
  • Interpolation of further scalar values maintained within individual classifiers
  • Impact of noise and covariate drift on the performance gains
  • Differentiation between inter- and extrapolation and investigation of their individual effects on the learning progress
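RBF interpolation, mentioned above as a means of predicting expected rewards or function values, fits exact interpolants through the sampling points by solving a small linear system. The following is a generic Gaussian-RBF sketch under assumed names and a fixed shape parameter, not the project's implementation and not the Kriging variant under investigation.

```python
import numpy as np

def rbf_interpolator(points, values, epsilon=1.0):
    """Build a Gaussian RBF interpolant through the given samples.

    Solves Phi @ w = values, where Phi holds the pairwise kernel
    evaluations exp(-(epsilon * distance)^2) between sampling points.
    Returns a callable that evaluates the interpolant at a query point.
    """
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    # Pairwise distances between all sampling points.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    phi = np.exp(-(epsilon * d) ** 2)
    w = np.linalg.solve(phi, values)

    def predict(x):
        dx = np.linalg.norm(points - np.asarray(x, dtype=float), axis=1)
        return float(np.exp(-(epsilon * dx) ** 2) @ w)

    return predict

# Interpolate f(x) = x^2 from three samples; by construction the
# interpolant reproduces the sampling points exactly.
f = rbf_interpolator([[0.0], [1.0], [2.0]], [0.0, 1.0, 4.0])
print(f([1.0]))
```

Gaussian Process Regression, listed above as a method under comparison, has the same algebraic core but additionally yields a predictive variance, which is one reason it is a natural candidate to compare against.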

Potential topics for student theses

Due to a currently very high student supervision load, I unfortunately cannot offer further project modules or bachelor's or master's theses at the moment. As soon as capacities become available again, I will post new topics on this page.