\title{iCaRL: Incremental Classifier and Representation Learning}
\author{\large Sylvestre-Alvise Rebuffi$^{\dag,*}$, Alexander Kolesnikov$^{*}$, Christoph H. Lampert$^{*}$}
\institute{\vskip-.5\baselineskip\large$^{\dag}$ CentraleSup\'elec\qquad$^{*}$ IST Austria}
%\title{Computer Vision and Machine Learning}
%\author{}
%\institute{\vskip-.5\baselineskip\large Institute of Science and Technology (IST) Austria, 3400 Klosterneuburg, Austria}
%\institute{~}%Christoph Lampert} %\textsuperscript{1} ENS Rennes (Ecole Normale Sup\'{e}rieure de Rennes), Rennes, France \textsuperscript{2} IST Austria (Institute of Science and Technology Austria), Klosterneuburg, Austria}
%\date[]{}
...
...
\vspace*{-1.5cm}
\ \ \begin{block}{\Large Abstract}
%\large
We introduce \bblue{iCaRL}, a method for simultaneously learning classifiers
and a feature representation from training data in which classes
occur incrementally.
iCaRL uses a \blue{nearest-mean-of-exemplars} classifier, \blue{herding for
adaptive exemplar selection} and \blue{distillation for representation learning
without catastrophic forgetting}.
%
Experiments on CIFAR-100 and ILSVRC\,2012 show that iCaRL can learn classes incrementally over a long period of time, where other methods quickly fail.
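%
As a sketch of the classification step: with $\varphi$ denoting the learned feature extractor and $P_y$ the stored exemplar set of class $y$, the nearest-mean-of-exemplars rule assigns a test input $x$ to the class whose exemplar mean lies closest in feature space,
\[
y^{*} \;=\; \operatorname*{argmin}_{y=1,\dots,t}\, \bigl\| \varphi(x) - \mu_y \bigr\|,
\qquad
\mu_y \;=\; \frac{1}{|P_y|} \sum_{p \in P_y} \varphi(p).
\]
Because the class means $\mu_y$ are recomputed from the exemplars whenever the representation $\varphi$ changes, the classifier automatically tracks the evolving feature space.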