**Abstract**: We study a model of interacting neurons. The neural system is composed of two layers of neurons such that the neurons of the first layer send their spikes to the neurons of the second one: if $N$ is the number of neurons of the first layer, then at each spiking time of the first layer, every neuron of both layers receives an amount of potential of the form $U/\sqrt{N},$ where $U$ is a centered random variable. This two-layer structure can model part of the visual cortex: the first layer represents the primary visual cortex V1 and the second one the visual area V2. In this model, we study the "averaged effect" of the neurons of the first layer on a single neuron of the second layer. The theoretical model consists of two stochastic processes, one modelling the membrane potential of the neurons of the first layer, and the other the membrane potential of the particular neuron of the second layer. We prove the convergence of these processes as the number of neurons~$N$ goes to infinity and obtain a rate of convergence. The proofs rely on arguments similar to those used in [Erny, L\"ocherbach, Loukianova (2022)]: the convergence speed of the semigroups of the processes is obtained from the convergence speed of their infinitesimal generators, using a Trotter-Kato formula, and from the regularity of the limit semigroup. Contrary to the situation in [Erny, L\"ocherbach, Loukianova (2022)], the stochastic flow of the limit process is not continuous, so we use a Girsanov-type theorem for jump processes to recover the regularity of the limit semigroup from the regularity of the stochastic flow of an auxiliary process.