Skewing the Perceptron: Modeling the Importance of System-Level Manipulation When Investigating Long-Term Potentiation

Abstract

Manipulation of learning processes in the brain has proven experimentally challenging. Various studies have focused on specific components of Long-Term Potentiation (LTP), attempting to systematically regulate the learning process through manipulation of its proposed elements. Manipulating such elements is thought to produce a proportional change in the efficacy of the whole system. Based on the inadequate results of this approach, we hypothesized that changes applied to only a fraction of the synapses in a network represent an incomplete investigation of LTP function. We further hypothesized that, in order to successfully manipulate learning output, the whole system must be manipulated, whether one is working with a single synapse or a neural network. We chose a relatively simple, easily applied model, the perceptron, to simulate the effect of strengthening synaptic connections between neurons, as occurs in learning. We expected that, following the learning phase, systematic changes to a fraction of the synaptic weights would compromise the perceptron's ability to recall the learned patterns. Indeed, fractional changes to the weight distribution caused more errors in pattern retrieval. However, when the entire distribution was altered, error in pattern retrieval did not increase, supporting the view that learning output can be meaningfully regulated only through system-level manipulation.
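
The experiment the abstract describes can be reproduced in miniature. The sketch below (Python with NumPy; not the authors' code) trains a perceptron on random patterns and then compares two manipulations: scaling a random fraction of the learned weights versus scaling the entire weight distribution, bias included. The pattern count, the 25% fraction, and the uniform scale factor are all illustrative assumptions; the paper's exact "skewing" procedure is not given in the abstract.

```python
# Minimal sketch of the abstract's experiment (all parameters are
# illustrative assumptions, not the authors' settings).
import numpy as np

rng = np.random.default_rng(0)
N_INPUTS, N_PATTERNS, SCALE = 100, 50, 3.0

# Random +/-1 input patterns with random +/-1 target labels.
X = rng.choice([-1.0, 1.0], size=(N_PATTERNS, N_INPUTS))
y = rng.choice([-1.0, 1.0], size=N_PATTERNS)

def train(X, y, epochs=100, lr=0.1):
    """Classic perceptron learning rule; returns weights and bias."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if np.sign(xi @ w + b) != yi:  # misclassified: update
                w += lr * yi * xi
                b += lr * yi
    return w, b

def recall_error(w, b):
    """Fraction of learned patterns no longer classified correctly."""
    return np.mean(np.sign(X @ w + b) != y)

w, b = train(X, y)

# Fractional manipulation: scale a random 25% of the weights only.
w_frac = w.copy()
idx = rng.choice(N_INPUTS, size=N_INPUTS // 4, replace=False)
w_frac[idx] *= SCALE

# System-level manipulation: scale the entire distribution, bias included.
w_full, b_full = w * SCALE, b * SCALE

print("baseline error:      ", recall_error(w, b))
print("fractional-change err:", recall_error(w_frac, b))
print("whole-system err:     ", recall_error(w_full, b_full))
```

Because sign(c(x·w + b)) = sign(x·w + b) for any c > 0, scaling the whole system provably leaves recall intact, whereas scaling only a subset of weights distorts the decision boundary and typically introduces retrieval errors, mirroring the contrast the abstract reports.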
