Spikes as Regularizers

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Spikes as Regularizers. / Søgaard, Anders.

ESANN 2017 - Proceedings: 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN, 2017. p. 371-376 (arXiv.org).


Harvard

Søgaard, A 2017, Spikes as Regularizers. in ESANN 2017 - Proceedings: 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN, arXiv.org, pp. 371-376, 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 26/04/2017. <https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2017-32.pdf>

APA

Søgaard, A. (2017). Spikes as Regularizers. In ESANN 2017 - Proceedings: 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (pp. 371-376). ESANN. arXiv.org https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2017-32.pdf

Vancouver

Søgaard A. Spikes as Regularizers. In ESANN 2017 - Proceedings: 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN. 2017. p. 371-376. (arXiv.org).

Author

Søgaard, Anders. / Spikes as Regularizers. ESANN 2017 - Proceedings: 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN, 2017. pp. 371-376 (arXiv.org).

Bibtex

@inproceedings{a991e356aebb4c8798f07cee529b3d59,
title = "Spikes as Regularizers",
abstract = "We present a confidence-based single-layer feed-forward learning algorithm SPIRAL (Spike Regularized Adaptive Learning) relying on an encoding of activation spikes. We adaptively update a weight vector relying on confidence estimates and activation offsets relative to previous activity. We regularize updates proportionally to item-level confidence and weight-specific support, loosely inspired by the observation from neurophysiology that high spike rates are sometimes accompanied by low temporal precision. Our experiments suggest that the new learning algorithm SPIRAL is more robust and less prone to overfitting than both the averaged perceptron and AROW.",
author = "Anders S{\o}gaard",
year = "2017",
language = "English",
isbn = "978-287587039-1",
series = "arXiv.org",
pages = "371--376",
booktitle = "ESANN 2017 - Proceedings",
publisher = "ESANN",
note = "25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2017; Conference date: 26-04-2017 Through 28-04-2017",

}

RIS

TY - GEN

T1 - Spikes as Regularizers

AU - Søgaard, Anders

PY - 2017

Y1 - 2017

N2 - We present a confidence-based single-layer feed-forward learning algorithm SPIRAL (Spike Regularized Adaptive Learning) relying on an encoding of activation spikes. We adaptively update a weight vector relying on confidence estimates and activation offsets relative to previous activity. We regularize updates proportionally to item-level confidence and weight-specific support, loosely inspired by the observation from neurophysiology that high spike rates are sometimes accompanied by low temporal precision. Our experiments suggest that the new learning algorithm SPIRAL is more robust and less prone to overfitting than both the averaged perceptron and AROW.

AB - We present a confidence-based single-layer feed-forward learning algorithm SPIRAL (Spike Regularized Adaptive Learning) relying on an encoding of activation spikes. We adaptively update a weight vector relying on confidence estimates and activation offsets relative to previous activity. We regularize updates proportionally to item-level confidence and weight-specific support, loosely inspired by the observation from neurophysiology that high spike rates are sometimes accompanied by low temporal precision. Our experiments suggest that the new learning algorithm SPIRAL is more robust and less prone to overfitting than both the averaged perceptron and AROW.

M3 - Article in proceedings

SN - 978-287587039-1

T3 - arXiv.org

SP - 371

EP - 376

BT - ESANN 2017 - Proceedings

PB - ESANN

T2 - 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning

Y2 - 26 April 2017 through 28 April 2017

ER -

ID: 195004442
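
Note: the abstract describes confidence- and support-regularized online updates for a single-layer learner. The snippet below is a minimal, illustrative sketch of that general idea only; it is not the paper's SPIRAL algorithm, and the class name, confidence estimate, and update rule are assumptions chosen for clarity.

```python
import numpy as np

class ConfidenceRegularizedPerceptron:
    """Illustrative confidence-weighted online learner (not the paper's SPIRAL).

    Updates are damped by (a) an item-level confidence estimate derived from the
    current margin and (b) per-weight "support", i.e. how much evidence each
    weight has already accumulated -- loosely mirroring the abstract's idea of
    regularizing updates by confidence and weight-specific support.
    """

    def __init__(self, n_features, lr=1.0):
        self.w = np.zeros(n_features)
        self.support = np.zeros(n_features)  # per-weight accumulated evidence
        self.lr = lr

    def predict(self, x):
        return 1 if self.w @ x >= 0.0 else -1

    def update(self, x, y):
        margin = y * (self.w @ x)
        # Item-level confidence: larger margin -> more confident -> smaller update.
        confidence = 1.0 / (1.0 + np.exp(-margin))
        if margin <= 0.0:  # update only on mistakes or zero margin
            # Per-weight step size shrinks as support accumulates.
            step = self.lr * (1.0 - confidence) / (1.0 + self.support)
            self.w += step * y * x
            self.support += np.abs(x)  # frequently active features gain support


# Toy usage on a linearly separable stream (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
clf = ConfidenceRegularizedPerceptron(n_features=5)
for xi, yi in zip(X, y):
    clf.update(xi, yi)
acc = np.mean([clf.predict(xi) == yi for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```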