
questions in artificial neural networks

chapter 2
threshold logic

what is the abstract neuron model?


what is the abstract network model?
which execution models are possible?
definition of the mcculloch-pitts neuron
how to define a logical function?
what is a monotonic function?
how to build a network out of a logical expression?
how to build a network for a boolean function?
which kinds of primitive neurons are necessary to build logical networks?
how to express absolute inhibition by relative?
how to express relative inhibition by absolute?
how to express weights in mcculloch-pitts nets?
how to construct a net for a finite state machine (fsm)?
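
as a reference point for the questions above, a minimal python sketch of a mcculloch-pitts unit (the helper names are my own): the unit fires iff no inhibitory input is active (absolute inhibition) and the number of active excitatory inputs reaches the threshold.

```python
def mp_neuron(excitatory, inhibitory, threshold):
    """mcculloch-pitts unit: any active inhibitory input vetoes firing;
    otherwise fire iff enough excitatory inputs are active."""
    if any(inhibitory):                 # absolute inhibition
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# the primitive logical gates as threshold units (no inhibition needed)
AND = lambda a, b: mp_neuron([a, b], [], 2)
OR  = lambda a, b: mp_neuron([a, b], [], 1)
```

any logical expression can then be realized by wiring such units together, which is the point of the network-construction questions.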

chapter 3
perceptron

what is the difference between mcculloch-pitts neurons and a perceptron?


why can a diameter-limited perceptron not decide whether a figure is connected?
definition of a simple perceptron?
what is a perceptron with bias?
why can a single perceptron not compute the xor function?
what is the duality of input space and weight space?
how to define an error function?
what is the decision curve of a simple perceptron?
what are applications of perceptrons?
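
a small illustration of the xor question above (function names are mine): a brute-force search over an integer grid of weights and threshold finds no single threshold unit realizing the xor truth table, because xor is not linearly separable.

```python
from itertools import product

def perceptron(w1, w2, theta, x1, x2):
    """single threshold unit: fires iff the weighted sum reaches theta."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# exhaustive search over a small integer grid: no parameter choice works
hits = [(w1, w2, t)
        for w1, w2, t in product(range(-3, 4), repeat=3)
        if all(perceptron(w1, w2, t, x1, x2) == y
               for (x1, x2), y in XOR_TABLE.items())]
print(hits)   # [] : no single unit computes xor
```

the empty result matches the standard argument: the constraints theta > 0, w1 >= theta, w2 >= theta and w1 + w2 < theta are contradictory.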

chapter 4
perceptron learning

what is the difference between supervised and unsupervised learning?


what is absolute linear separability?
how to use the error function for learning?
what is the rosenblatt learning algorithm for perceptrons?
why does adding the input vector to the weight vector work?
why does the rosenblatt learning algorithm converge?
what is the complexity of the rosenblatt learning algorithm?
are there alternatives to the rosenblatt learning algorithm?
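
the rosenblatt rule asked about above can be sketched as follows (a minimal python version, assuming a step unit with an extended bias input): add the input vector on a false negative, subtract it on a false positive, and stop once an epoch passes without errors.

```python
def train_perceptron(samples, epochs=100):
    """rosenblatt rule on the extended weight vector (weights + bias)."""
    w = [0.0, 0.0, 0.0]                          # two weights + bias
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in samples:
            x = (x1, x2, 1.0)                    # extended input vector
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if out != y:
                sign = 1 if y == 1 else -1       # add on false negative,
                w = [wi + sign * xi for wi, xi in zip(w, x)]  # subtract on false positive
                errors += 1
        if errors == 0:                          # converged on the training set
            break
    return w

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data)
print(w)    # a separating weight vector for the and function
```

the convergence theorem guarantees this loop terminates whenever the samples are linearly separable, as the and function is.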

chapter 5
clustering

what is the learning algorithm for clustering with k perceptrons?


what is batch update?
how to define the energy function corresponding to a learning task?
when is a cluster stable?
what is a principal component?
how to define the nth principal component?
what does oja's algorithm compute?
what does sanger's network compute?
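
a sketch of oja's rule (the toy data and step size are my own choices): each update adds the hebbian term eta * y * x and a decay -eta * y^2 * w, which keeps the weight vector normalized and drives it toward the first principal component of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=1000)
X = np.column_stack([t, t + 0.1 * rng.normal(size=1000)])  # principal axis ~ (1, 1)

w = np.array([1.0, 0.0])
eta = 0.01
for _ in range(5):                     # a few passes over the data
    for x in X:
        y = w @ x                      # unit output
        w += eta * y * (x - y * w)     # hebbian term plus normalizing decay

print(w / np.linalg.norm(w))           # ~ the first principal component
```

sanger's network repeats this idea per output unit after subtracting the projections onto the earlier units, extracting the principal components in order.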

chapter 6
1 and 2 layered networks
what is a layered network?
how many different realizations of xor are possible with a 2-layered net of 3 neurons?
what is the classification space?
how to construct the boolean sphere?
how many points can be separated by 2 lines?
which regions of fig 6.17 can a perceptron separate?
which regions are missing in fig. 6.18?
how many regions are there in an m-dimensional space defined by n hyperplanes?
what does the vapnik-chervonenkis dimension say?
can a given neural net compute all logical functions?
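
for the region-counting questions in this chapter, the standard formula: n hyperplanes in general position divide m-dimensional space into the sum of C(n, i) for i = 0..m regions. a quick check in python:

```python
from math import comb

def regions(m, n):
    """regions that n hyperplanes in general position define
    in m-dimensional space: sum of C(n, i) for i = 0..m."""
    return sum(comb(n, i) for i in range(m + 1))

print(regions(2, 2))   # 4: two lines cut the plane into four regions
print(regions(2, 3))   # 7: three lines in general position give seven
```

each region corresponds to one possible labeling pattern of the hidden threshold units, which is why this count matters for the classification-space questions.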

chapter 7
the backpropagation algorithm

what are the convenient properties of the sigmoid?


what are the drawbacks of the sigmoid?
what is a gradient method?
why compute backwards?
how is the net extended to compute error and initialize the derivatives?
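
one of the convenient properties asked about above, checked numerically: the sigmoid's derivative can be written in terms of its own value, s'(x) = s(x)(1 - s(x)), so the backward pass can reuse the activations stored during the forward pass.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x = 0.7
s = sigmoid(x)
analytic = s * (1.0 - s)                                   # s'(x) = s(x)(1 - s(x))
numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6   # central difference
print(abs(analytic - numeric) < 1e-8)                      # True
```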

chapter 8
fast learning algorithms

what are the main problems of backpropagation?


what is the difference between offline/batch and online backpropagation?
what is the difference between first-order and second-order algorithms?
what is a relaxation method?

chapter 9
statistics and neural networks
what is the problem of linear regression?
which technique can be used to cover sigmoids with regression?
what is the purpose of a hidden layer in case of a network of linear associators?
how to compute the pseudoinverse?
how to forecast time series?
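
the pseudoinverse question has a direct numerical answer: for a linear associator, the weight vector minimizing the quadratic error is w = X+ y, with X+ the moore-penrose pseudoinverse. a sketch with numpy (the toy data is mine):

```python
import numpy as np

# toy regression problem: targets lie exactly on the line y = 2x
X = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [3.0, 1.0]])           # inputs with a constant bias column
y = np.array([2.0, 4.0, 6.0])

w = np.linalg.pinv(X) @ y            # least-squares weights via the pseudoinverse
print(w)                             # ~ [2, 0]: slope 2, intercept 0
```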

chapter 10
complexity of learning

is there a network of algebraic nodes which can compute the roots of a 7th-degree polynomial?
what are the nodes necessary to compute any continuous function?
which functions can be used to approximate any unary continuous function?
how to build networks for binary or n-ary functions?
how to formulate a learning problem?
what is a complexity class?
what is nondeterministic polynomial?
what is the typical proof method to show that a problem is np-complete?
apply this to the learning problem!

chapter 12
associative networks

which kinds of associative networks exist?


how many layers does the typical associative network have?
when is an autoassociative network stable?
which eigenvalue will dominate the iteration?
which matrices don't have eigenvalues?
how to learn in an associative network?
why is bipolar coding interesting?
what is the crosstalk?
how to compute the pseudoinverse?
when do we need the pseudoinverse?
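
a sketch of hebbian learning and crosstalk in an autoassociative net (the patterns are my own choice): the weight matrix is a sum of outer products; because the two stored bipolar patterns are orthogonal, each is a fixed point, and one recall step corrects a one-bit error.

```python
import numpy as np

# two orthogonal bipolar patterns (their scalar product is zero)
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])
W = sum(np.outer(p, p) for p in patterns)   # hebb rule: sum of outer products

probe = patterns[0].copy()
probe[0] = -probe[0]                        # flip one bit
recalled = np.sign(W @ probe)               # one synchronous recall step
print(recalled)                             # recovers the first pattern
```

when the stored patterns are not orthogonal, the crosstalk term no longer vanishes; that is where the pseudoinverse asked about above comes in.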

chapter 13
hopfield networks (drop 13.6)

what characterizes the energy function for bams (bidirectional associative memories)?


how do we apply the concept of bam to hopfield nets?
what form must the weight matrix of a hopfield net have to enable stable states?

when will hebb-learning achieve good results for a hopfield net?


why is there an equivalence between the hopfield learning problem and a perceptron learning problem?
how do we apply hopfield nets?
what are the main problems of applying hopfield nets to function minimization?
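
a minimal hopfield sketch tying several of these questions together (the pattern and start state are mine): with a symmetric, zero-diagonal hebbian weight matrix, asynchronous updates never increase the energy E(s) = -1/2 s^T W s, so the net settles into a stable state.

```python
import numpy as np

p = np.array([1, -1, 1, 1, -1])      # stored bipolar pattern
W = np.outer(p, p)                   # hebb rule
np.fill_diagonal(W, 0)               # symmetric with zero diagonal

def energy(s):
    return -0.5 * s @ W @ s          # E(s) = -1/2 s^T W s

s = np.array([1, 1, 1, 1, -1])       # noisy version of p (one bit flipped)
print(energy(s))                     # -2.0
for i in range(len(s)):              # one asynchronous sweep over the units
    s[i] = 1 if W[i] @ s >= 0 else -1
print(s, energy(s))                  # settles into p, energy drops to -10.0
```

function minimization with hopfield nets works the other way around: the function to minimize is encoded as the energy, and the dynamics search for a low-energy state, with local minima as the main obstacle.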
