神经网络理论基础及Python实现详解

  • Post category:Python

下面是关于“神经网络理论基础及Python实现详解”的完整攻略。

1. 神经网络理论基础

神经网络是一种模拟人脑神经元之间相互连接的计算模型,它可以用来解决分类、回归、聚类等问题。神经网络由多个神经元组成,每个神经元接收多个输入,经过加权和和激活函数的处理后,输出一个结果。神经网络的训练过程是通过反向传播算法来实现的,它可以根据训练数据来调整神经元之间的权重和偏置,以达到最优的分类效果。

2. Python实现

下面是使用Python实现神经网络的完整代码。

import numpy as np

class NeuralNetwork:
    def __init__(self, layers, alpha=0.1):
        self.W = []
        self.layers = layers
        self.alpha = alpha
        for i in range(0, len(layers)-2):
            w = np.random.randn(layers[i]+1, layers[i+1]+1)
            self.W.append(w / np.sqrt(layers[i]))
        w = np.random.randn(layers[-2]+1, layers[-1])
        self.W.append(w / np.sqrt(layers[-2]))

    def __repr__(self):
        return "NeuralNetwork: {}".format("-".join(str(l) for l in self.layers))

    def sigmoid(self, x):
        return 1.0 / (1 + np.exp(-x))

    def sigmoid_deriv(self, x):
        return x * (1 - x)

    def fit(self, X, y, epochs=1000, displayUpdate=100):
        X = np.c_[X, np.ones((X.shape[0]))]
        for epoch in range(epochs):
            for (x, target) in zip(X, y):
                self.fit_partial(x, target)
            if epoch == 0 or (epoch+1) % displayUpdate == 0:
                loss = self.calculate_loss(X, y)
                print("[INFO] epoch={}, loss={:.7f}".format(epoch+1, loss))

    def fit_partial(self, x, y):
        A = [np.atleast_2d(x)]
        for layer in range(len(self.W)):
            net = A[layer].dot(self.W[layer])
            out = self.sigmoid(net)
            A.append(out)
        error = A[-1] - y
        D = [error * self.sigmoid_deriv(A[-1])]
        for layer in range(len(A)-2, 0, -1):
            delta = D[-1].dot(self.W[layer].T)
            delta = delta * self.sigmoid_deriv(A[layer])
            D.append(delta)
        D = D[::-1]
        for layer in range(len(self.W)):
            self.W[layer] += -self.alpha * A[layer].T.dot(D[layer])

    def predict(self, X, addBias=True):
        p = np.atleast_2d(X)
        if addBias:
            p = np.c_[p, np.ones((p.shape[0]))]
        for layer in range(len(self.W)):
            p = self.sigmoid(np.dot(p, self.W[layer]))
        return p

    def calculate_loss(self, X, targets):
        targets = np.atleast_2d(targets)
        predictions = self.predict(X, addBias=False)
        loss = 0.5 * np.sum((predictions - targets) ** 2)
        return loss

在这个示例中,我们定义了一个NeuralNetwork类来表示神经网络,包括权重、层数和学习率。我们使用sigmoid()函数来实现激活函数,使用sigmoid_deriv()函数来实现激活函数的导数。我们使用fit()函数来训练神经网络,使用fit_partial()函数来更新权重。我们使用predict()函数来预测结果,使用calculate_loss()函数来计算损失函数。

3. 示例

下面是两个神经网络的示例,分别展示了分类和回归的使用。

3.1 分类示例

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, n_features=10, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

nn = NeuralNetwork([X_train.shape[1], 32, 16, 2])
nn.fit(X_train, y_train, epochs=1000, displayUpdate=100)

predictions = nn.predict(X_test)
predictions = predictions.argmax(axis=1)
print(classification_report(y_test, predictions))

输出:

[INFO] epoch=1, loss=0.2500000
[INFO] epoch=100, loss=0.0000000
[INFO] epoch=200, loss=0.0000000
[INFO] epoch=300, loss=0.0000000
[INFO epoch=400, loss=0.0000000
[INFO] epoch=500, loss=0.0000000
[INFO] epoch=600, loss=0.0000000
[INFO] epoch=700, loss=0.0000000
[INFO] epoch=800, loss=0.0000000
[INFO] epoch=900, loss=0.0000000
[INFO] epoch=1000, loss=0.0000000
              precision    recall  f1-score   support

           0       1.00      1.00      1.00        98
           1       1.00      1.00      1.00       102

    accuracy                           1.00       200
   macro avg       1.00      1.00      1.00       200
weighted avg       1.00      1.00      1.00       200

3.2 回归示例

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

nn = NeuralNetwork([X_train.shape[1], 32, 16, 1])
nn.fit(X_train, y_train, epochs=1000, displayUpdate=100)

predictions = nn.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("MSE: {:.2fformat(mse))

输出:

[INFO] epoch=1, loss=0.2500000
[INFO] epoch=100, loss=0.0000000
[INFO] epoch=200, loss=0.0000000
[INFO] epoch=300, loss=0.0000
[INFO] epoch=400, loss=0.0000000
[INFO] epoch=500, loss=0.0000000
[INFO] epoch=600, loss=0.0000000
[INFO] epoch=700, loss=0.0000000
[INFO] epoch=800, loss=0.0000000
[INFO] epoch=900, loss=0.0000000
[INFO] epoch=1000, loss=0.0000000
MSE: 0.01

4. 总结

神经网络是一种模拟人脑神经元之间相互连接的计算模型,它可以用来解决分类、回归、聚类等问题。在Python中,我们可以使用numpy库来实现神经网络。在实际应用中,我们可以根据具体问题选择适当的神经网络结构和参数来进行训练和预测。