Wednesday, May 21, 2008

Integrate and Fire neuron model

For information about the Integrate and Fire neuron model, please visit
http://icwww.epfl.ch/~gerstner//SPNM/node26.html
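As a quick illustration of the idea, here is a minimal Python sketch of a leaky integrate-and-fire neuron. The parameter values (membrane time constant, threshold, reset) are illustrative assumptions of mine, not values taken from the page linked above.

# Minimal leaky integrate-and-fire sketch; parameter values are illustrative assumptions.
def simulate_lif(I_ext, dt=0.1, tau_m=10.0, R=1.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Forward-Euler integration of tau_m * dv/dt = -(v - v_rest) + R * I(t).
    A spike time is recorded whenever v crosses v_thresh, after which v is reset."""
    v = v_rest
    spike_times = []
    for step, I in enumerate(I_ext):
        v += dt * (-(v - v_rest) + R * I) / tau_m
        if v >= v_thresh:                 # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset                   # reset after the spike
    return spike_times

# A constant supra-threshold current produces a regular spike train.
print(simulate_lif([1.5] * 1000))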

Hodgkin-Huxley neuron model

The standard Hodgkin-Huxley model of an excitable neuron consists of the equation for the total membrane current, IM:

IM = CM dV/dt + IK + INa + IL

where V denotes the membrane voltage, CM is the membrane capacitance, IK is the potassium current, INa is the sodium current and IL is the leakage current carried by other ions that move passively through the membrane. This equation is derived by modeling the potassium, sodium and leakage currents using a simple electrical circuit model of the membrane, with the ionic currents obtained from Ohm's law. We think of a gate in the membrane as having an intrinsic resistance and the cell membrane itself as having an intrinsic capacitance, as shown in Figure 2.1:

Figure 2.1: The Membrane and Gate Circuit Model

Here we show an idealized cell with a small portion of the membrane blown up into an idealized circuit. We see a small piece of the lipid membrane with an inserted gate. We think of the gate as having some intrinsic resistance and capacitance. For our simple Hodgkin-Huxley model, we want to model a sodium and a potassium gate as well as the cell capacitance. So we will have a resistance for both the sodium and the potassium. In addition, we know that other ions move across the membrane due to pumps, other gates and so forth. We will temporarily model this additional ion current as a leakage current with its own resistance. We also know that each ion has its own equilibrium potential, which is determined by applying the Nernst equation. The driving electromotive force, or driving emf, is the difference between the ion equilibrium potential and the voltage across the membrane itself. Hence, if Ec is the equilibrium potential due to ion c and Vm is the membrane potential, the driving force is Ec - Vm. In Figure 2.2, we see an electric schematic that summarizes what we have just said. We model the membrane as a parallel circuit with a branch each for the sodium and potassium ions, a branch for the leakage current and a branch for the membrane capacitance.

Figure 2.2: The Simple Hodgkin-Huxley Membrane Circuit Model
From circuit theory, we know that the charge q across a capacitor is q = C E, where C is the capacitance and E is the voltage across the capacitor. Hence, if the capacitance C is a constant, we see that the current through the capacitor is given by the time rate of change of the charge:

i = C dE/dt
If the voltage E were also space dependent, then we would write E(z,t) to indicate its dependence on both a space variable z and the time t. Then the capacitive current would be

i = C ∂E(z,t)/∂t
From Ohm's law, we know that voltage is current times resistance; hence for each ion c, we can write

Vc = Ic Rc

where we label the voltage, current and resistance due to this ion with the subscript c. This implies

Ic = gc Vc

where gc = 1/Rc is the reciprocal resistance, or conductance, of ion c. Hence, we can model all of our ionic currents using a conductance equation of the form above. Of course, the potassium and sodium conductances are nonlinear functions of the membrane voltage V and time t. This reflects the fact that the amount of current that flows through the membrane for these ions depends on the voltage differential across the membrane, which in turn is also time dependent. The general functional form for an ion c is thus

Ic(V,t) = gc(V,t) (V - Ec)
where, as we mentioned previously, the driving force, V - Ec, is the difference between the voltage across the membrane and the equilibrium value for the ion in question, Ec. Note that the ion battery voltage Ec itself might also change in time (for example, the extracellular potassium concentration changes over time). Hence, the driving force is time dependent. The conductance is modeled as the product of an activation term, m, and an inactivation term, h, that are essentially sigmoidal nonlinearities. The activation and inactivation are functions of V and t as well. The conductance is assumed to have the form

gc(V,t) = gc^max m^p(V,t) h^q(V,t)
where gc^max is the maximal conductance for the ion and appropriate powers p and q are found to match known data for a given ion conductance. We model the leakage current, IL, as

IL = gL (V - EL)
where the leakage battery voltage, EL, and the conductance, gL, are constants that are data driven. Hence, our full model would be

IM = CM dV/dt + gK(V,t)(V - EK) + gNa(V,t)(V - ENa) + gL(V - EL)
Activation and Inactivation Variables: We assume that the voltage dependence of our activation and inactivation has been fitted from data. Hodgkin and Huxley modeled the time dependence of these variables using first-order kinetics. They assumed a typical variable of this type, say m, satisfies, for each value of the voltage V,

τm(V) dm/dt = m∞(V) - m

where m∞(V) is the steady-state value of m and τm(V) is its time constant.
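To tie the pieces above together, here is a rough Python sketch of the membrane equation with conductances of the form gc^max m^p h^q and first-order gating kinetics. The sigmoidal steady-state curves and the time constants below are illustrative placeholders, not Hodgkin and Huxley's fitted rate functions; the maximal conductances and battery voltages are the commonly quoted textbook values.

# Sketch of the membrane equation with conductances g_max * m^p * h^q and
# first-order gating kinetics tau * dx/dt = x_inf(V) - x.
# The x_inf curves and time constants are illustrative placeholders only.
import numpy as np

def gate_inf(V, V_half, k):
    """Generic sigmoidal steady-state activation/inactivation curve."""
    return 1.0 / (1.0 + np.exp(-(V - V_half) / k))

def simulate(I_ext=10.0, dt=0.01, T=50.0):
    C = 1.0                                   # membrane capacitance (uF/cm^2)
    gNa_max, gK_max, gL = 120.0, 36.0, 0.3    # maximal conductances (mS/cm^2)
    ENa, EK, EL = 50.0, -77.0, -54.4          # ion battery voltages (mV)
    V, m, h, n = -65.0, 0.05, 0.6, 0.3        # initial state
    trace = []
    for _ in range(int(T / dt)):
        gNa = gNa_max * m**3 * h              # sodium: activation^3 * inactivation
        gK = gK_max * n**4                    # potassium: activation^4
        # membrane equation: C dV/dt = I_ext - sum of conductance * driving force
        dV = (I_ext - gNa * (V - ENa) - gK * (V - EK) - gL * (V - EL)) / C
        # first-order kinetics with fixed time constants (ms) for brevity
        dm = (gate_inf(V, -40.0, 9.0) - m) / 0.5
        dh = (gate_inf(V, -60.0, -7.0) - h) / 5.0
        dn = (gate_inf(V, -55.0, 15.0) - n) / 3.0
        V, m, h, n = V + dt * dV, m + dt * dm, h + dt * dh, n + dt * dn
        trace.append(V)
    return trace

voltage = simulate()
print(max(voltage), min(voltage))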





Tuesday, May 20, 2008

Artificial Neural Model

Artificial Neuron Model
As mentioned in the previous section, the transmission of a signal from one neuron to another through synapses is a complex chemical process in which specific transmitter substances are released from the sending side of the junction. The effect is to raise or lower the electrical potential inside the body of the receiving cell. If this graded potential reaches a threshold, the neuron fires. It is this characteristic that the artificial neuron model proposed by McCulloch and Pitts attempts to reproduce. The neuron model shown in Figure 6 is the one that is widely used in artificial neural networks, with some minor modifications.
Figure 6. Artificial Neuron
The artificial neuron given in this figure has N inputs, denoted u1, u2, ..., uN. Each line connecting these inputs to the neuron is assigned a weight, denoted w1, w2, ..., wN respectively. Weights in the artificial model correspond to the synaptic connections in biological neurons. The threshold in an artificial neuron is usually represented by θ, and the activation corresponding to the graded potential is given by the formula:

a = w1 u1 + w2 u2 + ... + wN uN + θ

The inputs and the weights are real values. A negative value for a weight indicates an inhibitory connection, while a positive value indicates an excitatory one. Although in biological neurons θ has a negative value, it may be assigned a positive value in artificial neuron models. If θ is positive, it is usually referred to as a bias. For mathematical convenience we use a (+) sign in the activation formula. Sometimes, the threshold is combined into the summation part for simplicity, by assuming an imaginary input u0 = +1 and a connection weight w0 = θ. Hence the activation formula becomes:

a = w0 u0 + w1 u1 + ... + wN uN

The output value of the neuron is a function of its activation in an analogy to the firing frequency of the biological neurons:
x = f(a)

Furthermore, the vector notation

a = w^T u + θ
is useful for expressing the activation of a neuron. Here, the jth element of the input vector u is uj and the jth element of the weight vector w is wj. Both of these vectors are of size N. Notice that w^T u is the inner product of the vectors w and u, resulting in a scalar value. The inner product is an operation defined on vectors of equal size. In the case where these vectors have unit length, the inner product is a measure of the similarity of the two vectors.
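A short Python sketch may make the activation computation concrete; the input and weight values below are made up purely for illustration.

import numpy as np

def activation(u, w, theta):
    """a = w^T u + theta : the inner product of weights and inputs plus the bias."""
    return np.dot(w, u) + theta

def activation_folded(u, w, theta):
    """Same activation with the threshold folded in as u0 = +1, w0 = theta."""
    return np.dot(np.concatenate(([theta], w)), np.concatenate(([1.0], u)))

u = np.array([0.5, -1.0, 2.0])    # example inputs (made-up values)
w = np.array([1.0, 0.5, -0.25])   # example weights (made-up values)
print(activation(u, w, 0.1), activation_folded(u, w, 0.1))   # both give the same scalar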
Originally, the neuron output function f(a) in the McCulloch-Pitts model was proposed as a threshold function; however, linear, ramp and sigmoid functions (Figure 7) are also widely used output functions:





Figure 7. Some neuron output functions
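For reference, here is a rough Python sketch of the four output functions named above; the exact slopes and saturation levels are assumptions of mine, since the figure is not reproduced here.

import numpy as np

def threshold(a):                  # McCulloch-Pitts style hard limiter
    return 1.0 if a >= 0.0 else 0.0

def linear(a):                     # identity output
    return a

def ramp(a, lo=0.0, hi=1.0):       # linear between two saturation levels
    return min(max(a, lo), hi)

def sigmoid(a):                    # smooth, differentiable squashing function
    return 1.0 / (1.0 + np.exp(-a))

a = 0.7
print(threshold(a), linear(a), ramp(a), sigmoid(a))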
Despite its simple structure, the McCulloch-Pitts neuron is a powerful computational device. McCulloch and Pitts proved that a synchronous assembly of such neurons is capable, in principle, of performing any computation that an ordinary digital computer can, though not necessarily as rapidly or conveniently.


Biological Neuron Model and Artificial Neuron Model

Biological Neuron Model
It is claimed that the human central nervous system comprises about 1.3×10^10 neurons and that about 1×10^10 of them are located in the brain. At any time, some of these neurons are firing, and the power dissipation due to this electrical activity is estimated to be on the order of 10 watts. Monitoring the activity in the brain has shown that, even when asleep, 5×10^7 nerve impulses per second are being relayed back and forth between the brain and other parts of the body. This rate increases significantly when awake. A neuron has a roughly spherical cell body called the soma (Figure 1). The signals generated in the soma are transmitted to other neurons through an extension on the cell body called the axon, or nerve fibre. Another kind of extension around the cell body, like a bushy tree, is the dendrites, which are responsible for receiving the incoming signals generated by other neurons.
Figure 1. Typical neuron
An axon (Figure 2), having a length varying from a fraction of a millimeter to a meter in the human body, extends from the cell body at a point called the axon hillock. At the other end, the axon separates into several branches, at the very end of which the axon enlarges and forms terminal buttons. Terminal buttons are placed in special structures called synapses, which are the junctions transmitting signals from one neuron to another (Figure 3). A neuron typically drives 10^3 to 10^4 synaptic junctions.

Figure 2. Axon
The synaptic vesicles, holding several thousand molecules of chemical transmitters, are located in the terminal buttons. When a nerve impulse arrives at the synapse, some of these chemical transmitters are discharged into the synaptic cleft, which is the narrow gap between the terminal button of the neuron transmitting the signal and the membrane of the neuron receiving it. In general, synapses occur between an axon branch of one neuron and a dendrite of another. Although it is not very common, synapses may also occur between two axons or two dendrites of different cells, or between an axon and a cell body.

Figure 3. The synapse
Neurons are covered with a semi-permeable membrane only about 5 nanometers thick. The membrane is able to selectively absorb and reject ions in the intracellular fluid. The membrane basically acts as an ion pump to maintain a different ion concentration between the intracellular and extracellular fluids. While sodium ions are continually removed from the intracellular fluid to the extracellular fluid, potassium ions are absorbed from the extracellular fluid in order to maintain an equilibrium condition. Due to the difference in the ion concentrations inside and outside, the cell membrane becomes polarized. In equilibrium, the interior of the cell is observed to be 70 millivolts negative with respect to the outside of the cell. This potential is called the resting potential.
A neuron receives inputs from a large number of neurons via its synaptic connections. Nerve signals arriving at the presynaptic cell membrane cause chemical transmitters to be released into the synaptic cleft. These chemical transmitters diffuse across the gap and bind to receptor sites on the postsynaptic membrane. The membrane of the postsynaptic cell gathers the chemical transmitters. This causes either a decrease or an increase in the soma potential, called the graded potential, depending on the type of chemicals released into the synaptic cleft. Synapses encouraging depolarization are called excitatory, and those discouraging it are called inhibitory synapses. If the decrease in polarization is adequate to exceed a threshold, then the post-synaptic neuron fires.
The arrival of impulses at excitatory synapses adds to the depolarization of the soma, while inhibitory effects tend to cancel out the depolarizing effect of excitatory impulses. In general, although the depolarization due to a single synapse is not enough to fire the neuron, if some other areas of the membrane are depolarized at the same time by the arrival of nerve impulses through other synapses, it may be adequate to exceed the threshold and fire.
At the axon hillock, the excitatory effects result in the interruption of the regular ion transport through the cell membrane, so that the ionic concentrations immediately begin to equalize as ions diffuse through the membrane. If the depolarization is large enough, the membrane potential eventually collapses, and for a short period of time the internal potential becomes positive. This brief reversal in the potential is called the action potential, and it results in an electric current flowing from the region at the action potential to an adjacent region of the axon at the resting potential. This current causes the potential of the next resting region to change, so the effect propagates in this manner along the axon membrane.


Figure 4. The action potential on axon
Once an action potential has passed a given point, that point is incapable of being re-excited for a time called the refractory period. Because the depolarized parts of the neuron are in a state of recovery and cannot immediately become active again, the pulse of electrical activity always propagates in only the forward direction. The previously triggered region on the axon then rapidly recovers to the polarized resting state due to the action of the sodium-potassium pumps. The refractory period is about 1 millisecond, and this limits nerve pulse transmission so that a neuron can typically fire and generate nerve pulses at a rate of up to 1000 pulses per second. The number of impulses and the speed at which they arrive at the synaptic junctions of a particular neuron determine whether the total excitatory depolarization is sufficient to cause the neuron to fire and so to send a nerve impulse down its axon. The depolarization effect can propagate along the soma membrane, but these effects can be dissipated before reaching the axon hillock.
However, once the nerve impulse reaches the axon hillock, it will propagate until it reaches the synapses, where the depolarization effect will cause the release of chemical transmitters into the synaptic cleft. Axons are generally enclosed by a myelin sheath made of many layers of Schwann cells promoting the growth of the axon. The speed of propagation down the axon depends on the thickness of the myelin sheath, which insulates the axon from the extracellular fluid and prevents the transmission of ions across the membrane. The myelin sheath is interrupted at regular intervals by narrow gaps called nodes of Ranvier, where the extracellular fluid makes contact with the membrane and the transfer of ions occurs. Since the axons themselves are poor conductors, the action potential is transmitted as depolarizations occur at the nodes of Ranvier. This happens in a sequential manner, so that the depolarization of one node triggers the depolarization of the next. The nerve impulse effectively jumps from node to node along the axon, each node acting rather like a regenerative amplifier to compensate for losses. Once an action potential is created at the axon hillock, it is transmitted through the axon to other neurons.
It is tempting to conclude that signal transmission in the nervous system is digital in nature, with a neuron assumed to be either fully active or inactive. However, this conclusion is not quite correct, because the intensity of a neuron signal is coded in the frequency of pulses. A better conclusion would be to interpret biological neural systems as using a form of pulse frequency modulation to transmit information. The nerve pulses passing along the axon of a particular neuron are of approximately constant amplitude, but the number of pulses generated and their time spacing are controlled by the statistics associated with the arrival of sufficient excitatory inputs at the neuron's many synaptic junctions.
The representation of biophysical neuron output behavior is shown schematically in Figure 5. At time t=0 a neuron is excited; at time T, which may typically be on the order of 50 milliseconds, the neuron fires a train of impulses along its axon. Each of these impulses is of practically identical amplitude. Some time later, say around t=T+τ, the neuron may fire another train of impulses as a result of the same excitation, though the second train will usually contain a smaller number of impulses. Even when the neuron is not excited, it may send out impulses at random, though much less frequently than when it is excited.


Figure 5. Representation of biophysical neuron output signal after excitation at time t=0

A considerable amount of research has been performed aiming to explain the electrochemical structure and operation of a neuron; however, several questions remain to be answered in the future.

Biological Neuron



The brain is a collection of about 10 billion interconnected neurons. Each neuron is a cell that uses biochemical reactions to receive, process and transmit information.
A neuron's dendritic tree is connected to a thousand neighbouring neurons. When one of those neurons fire, a positive or negative charge is received by one of the dendrites. The strengths of all the received charges are added together through the processes of spatial and temporal summation. Spatial summation occurs when several weak signals are converted into a single large one, while temporal summation converts a rapid series of weak pulses from one source into one large signal. The aggregate input is then passed to the soma (cell body). The soma and the enclosed nucleus don't play a significant role in the processing of incoming and outgoing data. Their primary function is to perform the continuous maintenance required to keep the neuron functional. The part of the soma that does concern itself with the signal is the axon hillock. If the aggregate input is greater than the axon hillock's threshold value, then the neuron fires, and an output signal is transmitted down the axon. The strength of the output is constant, regardless of whether the input was just above the threshold, or a hundred times as great. The output strength is unaffected by the many divisions in the axon; it reaches each terminal button with the same intensity it had at the axon hillock. This uniformity is critical in an analogue device such as a brain where small errors can snowball, and where error correction is more difficult than in a digital system.
Each terminal button is connected to other neurons across a small gap called a synapse. The physical and neurochemical characteristics of each synapse determine the strength and polarity of the new input signal. This is where the brain is the most flexible, and the most vulnerable. Changing the constitution of various neurotransmitter chemicals can increase or decrease the amount of stimulation that the firing axon imparts on the neighbouring dendrite. Altering the neurotransmitters can also change whether the stimulation is excitatory or inhibitory. Many drugs such as alcohol and LSD have dramatic effects on the production or destruction of these critical chemicals. The infamous nerve gas sarin can kill because it neutralizes a chemical (acetylcholinesterase) that is normally responsible for the destruction of a neurotransmitter (acetylcholine). This means that once a neuron fires, it keeps on triggering all the neurons in the vicinity. One no longer has control over muscles, and suffocation ensues.
A Flash animation about the neuron can be found at:

Organisation of brain


The human brain controls the central nervous system (CNS) and, by way of the cranial nerves and spinal cord, the peripheral nervous system (PNS); it regulates virtually all human activity. Involuntary, or "lower," actions, such as heart rate, respiration, and digestion, are unconsciously governed by the brain, specifically through the autonomic nervous system. Complex, or "higher," mental activity, such as thought, reason, and abstraction, is consciously controlled.
Anatomically, the brain can be divided into three parts: the forebrain, midbrain, and hindbrain; the forebrain includes the several lobes of the cerebral cortex that control higher functions, while the mid- and hindbrain are more involved with unconscious, autonomic functions. During encephalization, human brain mass increased beyond that of other species relative to body mass. This process was especially pronounced in the neocortex, a section of the brain involved with language and consciousness. The neocortex accounts for about 76% of the mass of the human brain; with a neocortex much larger than that of other animals, humans enjoy unique mental capacities despite having a neuroarchitecture similar to that of more primitive species. Basic systems that alert humans to stimuli, sense events in the environment, and maintain homeostasis are similar to those of basic vertebrates. Human consciousness is founded upon the extended capacity of the modern neocortex, as well as the greatly developed structures of the brain stem.
For a simulation of the brain, visit:

Humans and Computers

Man Vs Machine
Generally Speaking

Many of us think that computers are many times faster, more powerful and more capable than our brains simply because they can perform calculations thousands of times faster, work out logical computations without error and store memory at incredible speeds with flawless accuracy. But is the computer really superior to the human brain in terms of ability, processing power and adaptability? We now give you the real comparison.

Processing Power and Speed

The human brain - We can only estimate the processing power of the average human brain, as there is no way to measure it quantitatively as of yet. If the assumption that processing power is proportional to nerve volume holds, then we may have a reasonable estimate of the human brain's processing power.
It is fortunate that we understand the neural assemblies in the retina of the vertebrate eye quite well (structurally and functionally), because this helps to give us an idea of the human brain's capability.
The retina is a nerve tissue at the back of the eyeball which detects light and sends images to the brain. A human retina is about a square centimeter in area, half a millimeter thick, and made up of about 100 million neurons. Scientists say that the retina sends the brain particular patches of images indicating light intensity differences, which are transported via the optic nerve, a million-fiber cable that reaches deep into the brain.
Overall, the retina seems to process about ten one-million-point images per second.
Because the 1,500 cubic centimeter human brain is about 100,000 times as large as the retina, by simple calculation we can estimate the processing power of an average brain to be about 100 million MIPS (million instructions per second). In case you're wondering how much speed that is, let us give you an idea.
1999's fastest PC processor chip on the market was a 700 MHz Pentium that did 4,200 MIPS. By simple calculation, we can see that we would need at least 24,000 of these processors in a system to match the total speed of the brain (which means the brain is like a 16,800,000 MHz Pentium computer). But even so, other factors like memory and the complexity of the system needed to handle so many processors mean this would not be a simple task. Because of these factors, the figures we so crudely calculated will most probably be a serious underestimate.
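The back-of-the-envelope arithmetic above can be reproduced in a few lines of Python (all figures are the ones quoted in the text):

# Reproducing the rough processing-power arithmetic quoted above.
brain_mips = 100_000_000            # estimated brain processing power, in MIPS
pentium_mips = 4_200                # 700 MHz Pentium of 1999
processors_needed = brain_mips / pentium_mips
equivalent_mhz = processors_needed * 700
print(round(processors_needed))     # ~23,800, i.e. roughly the 24,000 quoted above
print(round(equivalent_mhz))        # ~16,700,000 MHz equivalent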

The computer - The most powerful experimental supercomputers in 1998, composed of thousands or tens of thousands of the fastest microprocessors and costing tens of millions of dollars, could do a few million MIPS. These systems were used mainly to simulate physical events for high-value scientific calculations.
Here, we have a chart of processor speeds for the past few years.
Year      Clock Speed (MHz)   Instruction Rate (MIPS)
1992      200                 200 (400)
1993.5    300                 300 (600)
1995      400                 800 (1600)
1996.5    500                 1000 (2000)
1998      600                 2400 (3600)
1999.5    700                 2800 (4200)
2000      1000                ?
From the chart above, we can observe some breakthroughs in microprocessor speeds. The current techniques used by research labs should be able to sustain such improvements for about a decade. By then, prototype multiprocessor chips with MIPS finally matching that of the brain may be cheap enough to develop.
Improvements in computer speed, however, have some limitations. The more memory a computer has, the slower it is, because it takes longer to run through its memory once. Computers with less memory hence have more MIPS, but are confined to less space for running big programs. The latest, greatest supercomputers can do a trillion calculations per second and can have a trillion bytes of memory. As computer memory and processors improve, the megabyte/MIPS ratio is a big factor to consider. So far, this ratio has remained roughly constant throughout the history of computers.
So who has more processing power? By estimation, the brain has about 100 million MIPS worth of processing power, while recent supercomputers only have a few million MIPS worth of processor speed. That said, the brain is still the winner in the race. Because of the cost, enthusiasm and effort still required, computer technology has some way to go before it will match the human brain's processing power.

Counting the Memory

The human brain - So far, we have never heard of anybody's brain being "overloaded" because it has run out of memory. So it seems as if the human brain has no limit to how much memory it can hold, though that may not be true.
Our best possible guess of the average human brain's capacity comes from calculating with the number of synapses connecting the neurons in the human brain. Because each synapse has different molecular states, we estimate each of them to be capable of holding roughly one byte worth of memory. Since the brain has about 100 trillion synapses, we can say that the average brain can hold about 100 million megabytes of memory!
Remember what we said about the megabyte/MIPS ratio of a computer? By calculation, scientists discovered that the brain's megabyte/MIPS ratio matches that of modern computers. The megabyte/MIPS ratio seems to hold for nervous systems too!
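Again, the arithmetic can be checked in a couple of lines; the one-byte-per-synapse figure is the rough assumption used above.

# Checking the memory estimate and the megabyte/MIPS ratio with the figures above.
synapses = 100e12                   # ~100 trillion synapses
bytes_per_synapse = 1               # rough one-byte-per-synapse assumption
brain_megabytes = synapses * bytes_per_synapse / 1e6
brain_mips = 100e6                  # processing estimate from the previous section
print(brain_megabytes)              # 1e8, i.e. 100 million megabytes
print(brain_megabytes / brain_mips) # ratio of about 1 megabyte per MIPS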
However, we all know that the memory of the brain is not absolute. It does not have set files or directories that can be deleted, copied or archived like those of a computer. For example, a particular person who thought he had memorized a telephone number for good suddenly realizes he can't recall the number, but some half a day later, he may suddenly recall it again. It is a strange phenomenon that we still can't really explain. A simple theory is that the brain treats parts and pieces of these ignored memories like an inactive "archive" section until they are required. The memory span of a part of the brain seems to depend on how often it is used. Even so, there is no such thing as deletion of data in a brain.

The computer - Computers have more than one form of memory. We can generally classify them into primary and secondary memory. Primary memory is used as temporary memory for calculation processes and for storage of temporary values that need rapid access or updating; its contents disappear when the power is turned off. Primary memory is important when executing programs, and bigger programs require more primary memory. (RAM (random access memory), caches and buffers are a few examples of primary memory.)
Secondary memory often comes in the form of hard disks, removable disk drives and tape drives. Secondary memory is used for the storage of most of a system's data, programs and all other permanent data that should remain even when the power is turned off. As a computer is fed with bigger, smarter programs and more data, it naturally needs more secondary memory to hold them.
The latest, greatest supercomputers (as of 1998) have a million megabytes of memory. The latest model of hard disk drives on the personal computer market (in early 2000) can hold about 40,000 megabytes (40 gigabytes) of memory.

So who is the Superior ?


The brain is still the overall winner in many fields when it comes to numbers. However, because of its other commitments, the brain is less efficient when a person tries to use it for one specific function. The brain is, as we might put it, a general-purpose processor compared to the computer. It therefore loses out when it comes to efficiency and performance. We have given the estimate for total human performance as 100 million MIPS, but the efficiency with which this can be applied to any one task may be only a small fraction of the total (this fraction depends on the adaptability of the brain to the task).
Deep Blue, the chess machine that bested world chess champion Garry Kasparov in 1997, used specialized chips to process chess moves at a speed equivalent to a 3 million MIPS universal computer. This is 1/30 of the estimate for total human performance. Since it is plausible that Kasparov, probably the best human player ever, could apply his brain power to the strange problems of chess with an efficiency of about 1/30, Deep Blue's near parity with Kasparov's chess skill supports this estimate of the efficiency of total performance. (Kasparov had won the first match against Deep Blue in 1996; Deep Blue won the close 1997 rematch 3.5-2.5.)

Comparison between conventional computers and neural networks

Parallel processing

One of the major advantages of the neural network is its ability to do many things at once. With traditional computers, processing is sequential--one task, then the next, then the next, and so on. The idea of threading makes it appear to the human user that many things are happening at one time. For instance, the Netscape throbber is shooting meteors at the same time that the page is loading. However, this is only an appearance; processes are not actually happening simultaneously.
The artificial neural network is an inherently multiprocessor-friendly architecture. Without much modification, it goes beyond one or even two processors of the von Neumann architecture. The artificial neural network is designed from the outset to be parallel. Humans can listen to music at the same time they do their homework--at least, that's what we tried to convince our parents of in high school. With a massively parallel architecture, the neural network can accomplish a lot in less time. The tradeoff is that processors have to be specifically designed for the neural network.
The ways in which they function

Another fundamental difference between traditional computers and artificial neural networks is the way in which they function. While computers function logically with a set of rules and calculations, artificial neural networks can function via images, pictures, and concepts.
Based upon the way they function, traditional computers have to learn by rules, while artificial neural networks learn by example, by doing something and then learning from it. Because of these fundamental differences, the applications to which we can tailor them are extremely different. We will explore some of the applications later in the presentation.

Self-programming

The "connections" or concepts learned by each type of architecture is different as well. The von Neumann computers are programmable by higher level languages like C or Java and then translating that down to the machine's assembly language. Because of their style of learning, artificial neural networks can, in essence, "program themselves." While the conventional computers must learn only by doing different sequences or steps in an algorithm, neural networks are continuously adaptable by truly altering their own programming. It could be said that conventional computers are limited by their parts, while neural networks can work to become more than the sum of their parts.
Speed

The speed of each computer is dependent upon different aspects of the processor. Von Neumann machines require either big processors or the tedious, error-prone approach of parallel processors, while neural networks require the use of multiple chips custom-built for the application.

Introduction

Introduction
The power and usefulness of artificial neural networks have been demonstrated in several applications including speech synthesis, diagnostic problems, medicine, business and finance, robotic control, signal processing, computer vision and many other problems that fall under the category of pattern recognition. For some application areas, neural models show promise in achieving human-like performance over more traditional artificial intelligence techniques.
What, then, are neural networks? And what can they be used for? Although von Neumann architecture computers are much faster than humans in numerical computation, humans are still far better at carrying out low-level tasks such as speech and image recognition. This is due in part to the massive parallelism employed by the brain, which makes it easier to solve problems with simultaneous constraints. It is with this type of problem that traditional artificial intelligence techniques have had limited success. The field of neural networks, however, looks at a variety of models with a structure roughly analogous to that of the set of neurons in the human brain.
The branch of artificial intelligence called neural networks dates back to the 1940s, when McCulloch and Pitts [1943] developed the first neural model. This was followed in 1962 by the perceptron model, devised by Rosenblatt, which generated much interest because of its ability to solve some simple pattern classification problems. This interest started to fade in 1969 when Minsky and Papert [1969] provided mathematical proofs of the limitations of the perceptron and pointed out its weakness in computation. In particular, it is incapable of solving the classic exclusive-or (XOR) problem, which will be discussed later. Such drawbacks led to the temporary decline of the field of neural networks.
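As a hypothetical illustration of this limitation, the short Python sketch below runs the classic perceptron learning rule on the XOR truth table; because the two classes are not linearly separable, the rule never reaches zero misclassifications.

# Hypothetical illustration: the perceptron learning rule applied to XOR.
# XOR is not linearly separable, so the rule never reaches zero errors.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                     # XOR targets

w, b = np.zeros(2), 0.0
for epoch in range(100):
    errors = 0
    for xi, target in zip(X, y):
        out = 1 if np.dot(w, xi) + b >= 0 else 0
        update = target - out                  # perceptron learning rule (learning rate 1)
        w += update * xi
        b += update
        errors += int(update != 0)
    if errors == 0:                            # would stop if a separating line were found
        break
print("misclassifications in final epoch:", errors)   # stays above zero for XOR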
The last decade, however, has seen renewed interest in neural networks, both among researchers and in areas of application. The development of more powerful networks, better training algorithms, and improved hardware have all contributed to the revival of the field. Neural-network paradigms in recent years include the Boltzmann machine, Hopfield's network, Kohonen's network, Rumelhart's competitive learning model, Fukushima's model, and Carpenter and Grossberg's Adaptive Resonance Theory model [Wasserman 1989; Freeman and Skapura 1991]. The field has generated interest from researchers in such diverse areas as engineering, computer science, psychology, neuroscience, physics, and mathematics. We describe several of the more important neural models, followed by a discussion of some of the available hardware and software used to implement these models, and a sampling of applications.