Conference Paper, Year: 1998

Training linear neural network with early stopped learning and ridge estimation

Abstract

A prominent feature of modern Artificial Neural Network (ANN) classifiers is the nonlinear aspect of neural computation. So why bother with linear networks? Nonlinear computations are obviously crucial, but by focusing on them alone we miss subtle aspects of the dynamics, structure, and organization that arise in a network during training. Furthermore, general results in the nonlinear case are rare or impossible to derive analytically. One often forgets, for instance, that when learning starts with small random initial weights the network operates in its linear regime. Finally, the study of linear networks leads to interesting questions and paradigms that could not have been guessed in advance, and to new ways of seeing certain classical statistical techniques. An objective of this paper is to demonstrate that, under some conditions, a multi-layered neural network at the beginning of its training behaves like the classical Ordinary Least Squares (OLS) regressor.
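
The link suggested by the title between early-stopped training of a linear network and ridge estimation can be illustrated numerically. The sketch below is not taken from the paper: it uses synthetic data, a hypothetical ridge penalty lam, and plain batch gradient descent, and simply tracks how the weight vector at several stopping times compares with the closed-form ridge and OLS solutions.

import numpy as np

# Illustrative sketch only (not the paper's code): gradient descent on a
# single linear unit, examined at several stopping times and compared with
# closed-form OLS and ridge estimates on synthetic data.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true + 0.5 * rng.normal(size=n)

# Closed-form estimators.
ols = np.linalg.solve(X.T @ X, X.T @ y)
lam = 200.0                                  # hypothetical ridge penalty
ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Linear "network": one weight vector trained by gradient descent on the
# mean squared error, starting from small random weights.
w = 0.01 * rng.normal(size=p)
lr = 0.05
for t in range(1, 2001):
    w -= lr * X.T @ (X @ w - y) / n          # batch gradient step
    if t in (20, 200, 2000):
        print(f"iter {t:5d}  ||w - ridge|| = {np.linalg.norm(w - ridge):.3f}"
              f"  ||w - ols|| = {np.linalg.norm(w - ols):.3f}")
# Stopped early (small t), the weights stay shrunken and close to a ridge
# estimate with a large penalty; trained to convergence, they approach OLS.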

Dates and versions

hal-00258890, version 1 (25-02-2008)

Identifiers

  • HAL Id: hal-00258890, version 1

Cite

Vincent Vigneron, Claude Barret. Training linear neural network with early stopped learning and ridge estimation. 5th Brazilian Symposium on Neural Networks (SBRN 98), Dec 1998, Belo Horizonte, Brazil. pp.00. ⟨hal-00258890⟩