佳礼资讯网

Views: 797 | Replies: 2

Seeking help from a Python expert!! Urgent!!

Posted on 20-10-2017 04:06 AM
Below is the source code for a single-output-neuron perceptron. Could someone please help modify it into a two-output-neuron perceptron that can discriminate training examples from four class labels? The training examples are as follows:
  
Input features       Target outputs
Input 1   Input 2    Output 1   Output 2
  1         1          -1         -1
  1         2          -1         -1
  2        -1          -1          1
  2         0          -1          1
 -1         2           1         -1
 -2         1           1         -1
 -1        -1           1          1
 -2        -2           1          1
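For reference, the table can be loaded as a NumPy array in the same style as the `logicAnd` array in the code below; the pair of ±1 target outputs jointly encodes one of four class labels (a sketch; the names `data`, `X`, `Y` are my own):

```python
import numpy as np

# The table above, one row per training example:
# [Input 1, Input 2, Output 1, Output 2]
data = np.array([[ 1,  1, -1, -1],
                 [ 1,  2, -1, -1],
                 [ 2, -1, -1,  1],
                 [ 2,  0, -1,  1],
                 [-1,  2,  1, -1],
                 [-2,  1,  1, -1],
                 [-1, -1,  1,  1],
                 [-2, -2,  1,  1]])
X = data[:, :2]   # input features
Y = data[:, 2:]   # two target outputs per example
print(len(set(map(tuple, Y))))  # -> 4 : the output pairs encode 4 distinct classes
```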



import numpy as np

class Perceptron(object):

    # learning_rate: float - learning rate
    # weights: array - weight values (weights[0] is the bias)
    # errors: list - number of misclassifications per epoch
    # epoch: int - maximum number of iterations (epochs)

    def __init__(self, learning_rate=0.01, epoch=10):
        self.learning_rate = learning_rate
        self.epoch = epoch

    def training(self, X, y):
        # initialize the weights randomly (one per feature, plus the bias)
        self.weights = np.random.rand(1 + X.shape[1])
        self.errors = []
        no_of_samples = X.shape[0]

        for i in range(self.epoch):
            no_errors = 0

            for j in range(no_of_samples):
                target = y[j]
                xj = X[j]
                predicted = self.predict(xj)
                diff = target - predicted           # compute the error
                update = self.learning_rate * diff  # perceptron weight-update rule

                self.weights[1:] += update * xj
                self.weights[0] += update

                if diff != 0.0:
                    no_errors += 1
            self.errors.append(no_errors)
            print("epoch #" + str(i) + ": no of errors = " + str(no_errors))
        return self

    def predict(self, X):
        sum_of_input = self.net_input(X)  # generate the predicted output
        if sum_of_input >= 0:
            return 1
        else:
            return -1

    def net_input(self, X):
        return np.dot(X, self.weights[1:]) + self.weights[0]  # np.dot is a dot product here

    def model(self):
        print("Weight values of the NN (the first is the bias):")
        for val in self.weights:
            print(str(val))

####
# start to play: AND-gate truth table, last column is the target
logicAnd = np.array([[1, 1,  1],
                     [1, 0, -1],
                     [0, 1, -1],
                     [0, 0, -1]])

# start training
andgate = Perceptron(0.1, epoch=10)
X = np.array(logicAnd[:, 0:2])
y = np.array(logicAnd[:, 2])
andgate.training(X, y)  # pass in inputs and targets
andgate.model()

## test the NN
print(andgate.predict(X[0]))
print(andgate.predict(X[1]))
print(andgate.predict(X[2]))
print(andgate.predict(X[3]))
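Not a definitive answer, but one way to extend the code above to two output neurons is to keep one weight vector (plus bias) per neuron and apply the same update rule to each neuron independently. A sketch under that assumption; the class name `MultiOutputPerceptron` and variable names are my own:

```python
import numpy as np

class MultiOutputPerceptron(object):
    # One row of weights per output neuron: weights has shape
    # (n_outputs, 1 + n_features), with column 0 holding the biases.

    def __init__(self, learning_rate=0.01, epoch=10):
        self.learning_rate = learning_rate
        self.epoch = epoch

    def training(self, X, Y):
        # Y has one +/-1 column per output neuron
        n_outputs = Y.shape[1]
        self.weights = np.random.rand(n_outputs, 1 + X.shape[1])
        self.errors = []

        for i in range(self.epoch):
            no_errors = 0
            for xj, target in zip(X, Y):
                predicted = self.predict(xj)
                diff = target - predicted           # per-neuron error, shape (n_outputs,)
                update = self.learning_rate * diff
                self.weights[:, 1:] += np.outer(update, xj)  # each neuron updates its own row
                self.weights[:, 0] += update
                if np.any(diff != 0):
                    no_errors += 1
            self.errors.append(no_errors)
        return self

    def predict(self, X):
        # net input of every neuron at once, thresholded to +1 / -1
        net = self.weights[:, 1:] @ X + self.weights[:, 0]
        return np.where(net >= 0, 1, -1)

np.random.seed(0)  # for reproducibility

# Training examples from the table above: two inputs, two target outputs.
data = np.array([[ 1,  1, -1, -1],
                 [ 1,  2, -1, -1],
                 [ 2, -1, -1,  1],
                 [ 2,  0, -1,  1],
                 [-1,  2,  1, -1],
                 [-2,  1,  1, -1],
                 [-1, -1,  1,  1],
                 [-2, -2,  1,  1]])
X, Y = data[:, :2], data[:, 2:]

net = MultiOutputPerceptron(0.1, epoch=200).training(X, Y)
for xj, target in zip(X, Y):
    # once training has converged, each prediction should match its target pair
    print(xj, "->", net.predict(xj), "target:", target)
```

Both output columns of this dataset happen to be linearly separable (Output 1 splits on the sign of Input 1, Output 2 on whether Input 2 is above zero), so the per-neuron perceptron rule converges; for data that is not separable per output, the loop would run all epochs without the error count reaching zero.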






Posted on 5-11-2017 01:40 AM

Posted on 20-11-2017 10:47 AM
Haha, any interest in coming to work at an IT company in Singapore? If interested, please send your resume to zouqing777@gmail.com

Copyright © 1996-2023 Cari Internet Sdn Bhd (483575-W)


Powered by Discuz! X3.4

Copyright © 2001-2021, Tencent Cloud.
