The goal of this project is to implement the feed-forward algorithm for applying a multilayer perceptron to a data point and the backpropagation algorithm for training a multilayer perceptron. To simplify this problem, we will assume that this MLP does binary classification.
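For reference, here is one common formulation of these two algorithms (only a sketch of one reasonable setup, not a requirement of the assignment): a single hidden layer of M sigmoid units, a sigmoid output unit, squared-error loss on a single training point x with target t, and learning rate \(\eta\).

\[
z_j = \sigma\Big(\sum_{i=1}^{d} w^{(1)}_{ji} x_i + b^{(1)}_j\Big), \qquad
\hat{y} = \sigma\Big(\sum_{j=1}^{M} w^{(2)}_{j} z_j + b^{(2)}\Big), \qquad
\sigma(a) = \frac{1}{1 + e^{-a}}
\]

With \(E = \tfrac{1}{2}(\hat{y}-t)^2\), backpropagation computes

\[
\delta^{(2)} = (\hat{y}-t)\,\hat{y}(1-\hat{y}), \qquad
\delta^{(1)}_{j} = \delta^{(2)}\, w^{(2)}_{j}\, z_j (1-z_j),
\]
\[
\frac{\partial E}{\partial w^{(2)}_{j}} = \delta^{(2)} z_j, \qquad
\frac{\partial E}{\partial w^{(1)}_{ji}} = \delta^{(1)}_{j} x_i, \qquad
w \leftarrow w - \eta \frac{\partial E}{\partial w},
\]

with the bias gradients given by the same \(\delta\) terms. Other activations or loss functions are equally acceptable.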
Write two functions in Python 3 with the following signatures:
    classify(mlp, inputs)
    train(M, data, targets)
Where
- mlp is something representing a multilayer perceptron.
- inputs is an array-like with shape (x, d), containing x new data points, each of dimension d.
- M is an integer, the number of perceptrons in a hidden layer.
- data is an array-like with shape (n, d), containing n data points, each of dimension d.
- targets is a list of size n containing the target values for the data points in data.
The train function returns something that represents a multilayer perceptron with M hidden units in one hidden layer and a single output unit that returns 0 or 1 (or -1 or 1; or True or False). The number of input units is determined by the dimensionality of the training data. How the MLP is represented is up to you, but your train function must return something that can be passed to the first parameter of your classify function.
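Purely as an illustration of one possible shape for that interface (not a requirement), the MLP could be a plain dict of NumPy arrays; the key names W1, b1, W2, b2 and the 0.5 decision threshold below are assumptions of this sketch:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def classify(mlp, inputs):
        """Feed-forward pass: return a 0/1 label for each row of inputs."""
        X = np.asarray(inputs, dtype=float)                # shape (x, d)
        hidden = sigmoid(X @ mlp["W1"] + mlp["b1"])        # W1: (d, M), b1: (M,)
        outputs = sigmoid(hidden @ mlp["W2"] + mlp["b2"])  # W2: (M,), b2: scalar
        return (outputs >= 0.5).astype(int)                # shape (x,)

Under that representation, train(M, data, targets) would build such a dict, run backpropagation to set the weights, and return it; a class or a tuple of arrays would work just as well.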
Test your implementation on at least two data sets and report on the performance.
Submit your code in two files, mlp.py and test_mlp.py, so that the following code will work:

    import mlp
    clsfyr = mlp.train(30, X_train, y_train)
    results = mlp.classify(clsfyr, X_test)
Moreover, the following should work from the command line:

    python3 test_mlp.py

This should display information about the performance of your classifier.
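A minimal sketch of what test_mlp.py could look like, using an assumed synthetic two-dimensional data set and a simple hold-out split (neither of which is required; use whatever data sets you report on):

    import numpy as np
    import mlp

    # Synthetic binary-classification data: label is 1 when the two features sum to a positive number.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    # Hold-out split: first 150 points for training, last 50 for testing.
    X_train, y_train = X[:150], y[:150]
    X_test, y_test = X[150:], y[150:]

    clsfyr = mlp.train(30, X_train, y_train)
    results = mlp.classify(clsfyr, X_test)

    print("Accuracy on held-out data:", np.mean(np.asarray(results) == y_test))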
Finally, include a file README that (briefly) describes how you tested your classifier and with what results, plus anything else you think would be good for me to know in order to give your submission the fairest grading.
Specific guidelines:
- You may not use sklearn.neural_network.MLPClassifier or any similar available implementation.
- You may use library functions for pieces of your implementation as long as they do not violate the spirit of an assignment to implement feed-forward and backpropagation. (You may use sklearn.neural_network.MLPClassifier or something similar to generate results for comparison with those of your classifier; see the sketch after this list.)
- For loading data sets and evaluating results in your tests, you may use sklearn or similar libraries.
- The book Data Science from Scratch, which I placed in the lab as a reference book for this course, contains Python code for these two algorithms. I have removed it from the lab for the duration of this project.
- Cite any other sources you consult in your README file. (This is a policy for our course, not just for this project.)
- If you want to build on the Perceptron class that I used for in-class examples, you can get that code from /homes/tvandrun/Public/cs394/perceptron.py. But that's up to you.
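For the comparison mentioned above, a sketch along these lines would work, assuming the same X_train, y_train, and X_test used with your own classifier (the hidden_layer_sizes and max_iter values are arbitrary choices):

    from sklearn.neural_network import MLPClassifier

    # Baseline with one hidden layer of 30 units, to compare against mlp.train(30, ...).
    baseline = MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000)
    baseline.fit(X_train, y_train)
    baseline_results = baseline.predict(X_test)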
Update, March 25. It seems like those students who have put time into this project so far are having a lot of trouble getting backpropagation to work. So that we can make something of this project, here's what I am going to do and suggest:
If backpropagation is not fully working by the deadline, turn in what you have and describe its current state in your README file.
Also, I'll probably end up reading more code than I normally would, so make sure your code is sufficiently documented.
To turn in:
Copy mlp.py, test_mlp.py, README, and any other files your code needs (such as data sets for testing) to

    /cslab/class/cs394/[your userid]/mlp

Due Fri, March 29.