Andrew Ng Deep Learning Programming Assignment: Convolution model - step by step - v2

This post presents the convolutional-model implementation from Andrew Ng's Deep Learning course. It walks through the forward pass, from zero padding and the convolution operation to the pooling layer, and touches on the formulas and operations of the backward pass. It serves as a programming-assignment write-up and study note.

The programming assignment ships two versions of Convolution model - step by step, v1 and v2. Completing either one is enough, though you can do both as review. I cover more of the details in another article; see my homepage if you are interested.
This post mainly presents the code, as a record for future reference.
1. Importing the basic modules

import numpy as np
import h5py
import matplotlib.pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

2. The basic structure of the network
Figure: network architecture
Note: every forward-propagation operation has a corresponding backward-propagation step. The parameters used in the forward pass are stored in a cache, and the backward pass retrieves them from that cache.
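As a minimal sketch of that cache pattern (the `linear_forward`/`linear_backward` names here are hypothetical illustrations, not part of the assignment; the point is only that the forward pass stashes its inputs and the backward pass unpacks them):

```python
import numpy as np

def linear_forward(A_prev, W, b):
    # Forward step: compute the output and stash the inputs in a cache.
    Z = A_prev @ W + b
    cache = (A_prev, W, b)          # everything the backward pass will need
    return Z, cache

def linear_backward(dZ, cache):
    # Backward step: unpack the cache saved by the forward pass.
    A_prev, W, b = cache
    dW = A_prev.T @ dZ
    db = dZ.sum(axis=0, keepdims=True)
    dA_prev = dZ @ W.T
    return dA_prev, dW, db

np.random.seed(1)
A_prev = np.random.randn(4, 3)
W, b = np.random.randn(3, 2), np.random.randn(1, 2)
Z, cache = linear_forward(A_prev, W, b)
dA_prev, dW, db = linear_backward(np.ones_like(Z), cache)
print(dW.shape, db.shape)  # (3, 2) (1, 2)
```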
3. Convolutional neural network
Figure: input and output values
3.1 Zero padding
Figure: zero-padding an RGB image

# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, 
    as illustrated in Figure 1.
    
    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions
    
    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """
    
    ### START CODE HERE ### (≈ 1 line) pad with zeros
    X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), "constant", constant_values=0)
    ### END CODE HERE ###
    
    return X_pad

Figure: output of the test run
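For reference, the check behind that output looks roughly like the assignment's test cell (a sketch; `zero_pad` is the function defined above, repeated here so the snippet runs on its own):

```python
import numpy as np

def zero_pad(X, pad):
    # Same implementation as above: pad only the height/width axes.
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                  "constant", constant_values=0)

np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)      # batch of 4 images, 3x3, 2 channels
x_pad = zero_pad(x, 2)
print("x.shape =", x.shape)          # (4, 3, 3, 2)
print("x_pad.shape =", x_pad.shape)  # (4, 7, 7, 2)

# The interior is untouched and the border is all zeros:
assert np.allclose(x_pad[:, 2:-2, 2:-2, :], x)
assert np.all(x_pad[:, :2, :, :] == 0)
```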
3.2 A single step of convolution
Figure: how one convolution step is applied

# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation 
    of the previous layer.
    
    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
    
    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice_prev and W. Do not add the bias yet.
    s = a_slice_prev * W
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)

Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)

3.3 Forward propagation for a convolutional network
Figure: the forward pass
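Before the code, it helps to keep the output-size formula in mind: n_H = floor((n_H_prev - f + 2*pad) / stride) + 1, and likewise for n_W. A quick numeric sketch:

```python
# Output height/width of a convolution: floor((n_prev - f + 2*pad)/stride) + 1
def conv_output_size(n_prev, f, pad, stride):
    return (n_prev - f + 2 * pad) // stride + 1

# e.g. a 4x4 input, 2x2 filter, pad=2, stride=2 gives a 4x4 output
print(conv_output_size(4, f=2, pad=2, stride=2))  # 4
# a "same" convolution: 5x5 input, 3x3 filter, pad=1, stride=1 keeps the size
print(conv_output_size(5, f=3, pad=1, stride=1))  # 5
```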

# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function
    
    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"
        
    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """
    
    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)  
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    
    # Retrieve dimensions from W's shape (≈1 line)
    (f, f, n_C_prev, n_C) = W.shape
    
    # Retrieve information from "hparameters" (≈2 lines)
    stride = hparameters["stride"]
    pad = hparameters["pad"]
    
    # Compute the dimensions of the CONV output volume using the formula. (≈2 lines)
    n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = int((n_W_prev - f + 2 * pad) / stride) + 1
    
    # Initialize the output volume Z with zeros, then pad A_prev. (≈2 lines)
    Z = np.zeros((m, n_H, n_W, n_C))
    A_prev_pad = zero_pad(A_prev, pad)
    
    for i in range(m):                         # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]             # select the i-th padded activation
        for h in range(n_H):                   # vertical axis of the output volume
            for w in range(n_W):               # horizontal axis of the output volume
                for c in range(n_C):           # output channels (= number of filters)
                    # Corners of the current slice
                    vert_start, vert_end = h * stride, h * stride + f
                    horiz_start, horiz_end = w * stride, w * stride + f
                    # Convolve the (f, f, n_C_prev) slice with the c-th filter and bias
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:, :, :, c], b[:, :, :, c])
    ### END CODE HERE ###
    
    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))
    
    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)
    
    return Z, cache
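To sanity-check the whole forward pass, here is a compact, self-contained sketch of the three functions in this post plus a small test (the input shapes below are illustrative choices, not prescribed by the text):

```python
import numpy as np

def zero_pad(X, pad):
    # Pad only the height/width axes of the (m, n_H, n_W, n_C) batch.
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                  "constant", constant_values=0)

def conv_single_step(a_slice_prev, W, b):
    # One filter on one slice: elementwise product, sum, add scalar bias.
    return np.sum(a_slice_prev * W) + float(b)

def conv_forward(A_prev, W, b, hparameters):
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    (f, f, n_C_prev, n_C) = W.shape
    stride, pad = hparameters["stride"], hparameters["pad"]
    n_H = (n_H_prev - f + 2 * pad) // stride + 1
    n_W = (n_W_prev - f + 2 * pad) // stride + 1
    Z = np.zeros((m, n_H, n_W, n_C))
    A_prev_pad = zero_pad(A_prev, pad)
    for i in range(m):
        for h in range(n_H):
            for w in range(n_W):
                for c in range(n_C):
                    a_slice = A_prev_pad[i, h*stride:h*stride+f,
                                         w*stride:w*stride+f, :]
                    Z[i, h, w, c] = conv_single_step(a_slice, W[:, :, :, c],
                                                     b[:, :, :, c])
    return Z, (A_prev, W, b, hparameters)

np.random.seed(1)
A_prev = np.random.randn(10, 4, 4, 3)
W = np.random.randn(2, 2, 3, 8)
b = np.random.randn(1, 1, 1, 8)
Z, cache = conv_forward(A_prev, W, b, {"pad": 2, "stride": 2})
print("Z.shape =", Z.shape)  # (10, 4, 4, 8)

# With pad=0, stride=1, one output entry is easy to verify by hand:
Z0, _ = conv_forward(A_prev, W, b, {"pad": 0, "stride": 1})
assert np.isclose(Z0[0, 0, 0, 0],
                  np.sum(A_prev[0, :2, :2, :] * W[:, :, :, 0]) + float(b[0, 0, 0, 0]))
```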