
Converting Keras (TensorFlow) Convolutional Neural Networks to PyTorch Convolutional Networks?

Keras and PyTorch use different arguments for padding: Keras expects a string, while PyTorch works with numbers. What is the difference, and how can one be translated to the other?

Solution 1:

Regarding padding:

Keras => 'valid' means no padding; 'same' means the input is padded so that the output shape matches the input shape (for stride 1).

PyTorch => you explicitly specify the padding amount.

Valid padding

>>> model = keras.Sequential()
>>> model.add(keras.layers.Conv2D(filters=10, kernel_size=3, padding='valid', input_shape=(28,28,3)))
>>> model.layers[0].output_shape
(None, 26, 26, 10)

>>> x = torch.randn((1,3,28,28))
>>> conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)
>>> conv(x).shape
torch.Size([1, 10, 26, 26])

Same padding

>>> model = keras.Sequential()
>>> model.add(keras.layers.Conv2D(filters=10, kernel_size=3, padding='same', input_shape=(28,28,3)))
>>> model.layers[0].output_shape
(None, 28, 28, 10)

>>> x = torch.randn((1,3,28,28))
>>> conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, padding=1)
>>> conv(x).shape
torch.Size([1, 10, 28, 28])
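If you prefer not to work the padding out by hand, the mapping from Keras padding strings to an explicit PyTorch padding amount can be sketched as a small helper. This is a minimal sketch assuming an odd kernel size and stride 1; the function name is just for illustration:

```python
def keras_padding_to_pytorch(padding, kernel_size):
    """Translate a Keras padding string to an explicit PyTorch padding amount.

    Assumes an odd kernel_size and stride 1, where 'same' padding
    reduces to (kernel_size - 1) // 2 on each side.
    """
    if padding == 'valid':
        return 0  # no padding at all
    if padding == 'same':
        if kernel_size % 2 == 0:
            raise ValueError("'same' with an even kernel needs asymmetric padding")
        return (kernel_size - 1) // 2  # pad so the output size equals the input size
    raise ValueError(f"unknown padding: {padding!r}")

print(keras_padding_to_pytorch('valid', 3))  # 0, matching the 'valid' example above
print(keras_padding_to_pytorch('same', 3))   # 1, matching the 'same' example above
```

Note also that recent PyTorch versions (1.9 and later) accept padding='same' directly in torch.nn.Conv2d, for stride 1 only.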

W - input width, F - filter (or kernel) size, P - padding, S - stride, Wout - output width

Wout = ((W − F + 2P) / S) + 1

Similarly for the height. With this formula you can calculate the amount of padding required to retain the input width or height in the output: for stride 1 and an odd kernel, it works out to P = (F − 1) / 2.
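The formula can be checked directly against the two examples above in plain Python, no framework needed:

```python
def conv_output_width(w, f, p=0, s=1):
    """Wout = ((W - F + 2P) / S) + 1, using integer (floor) division."""
    return (w - f + 2 * p) // s + 1

# 'valid' example: width-28 input, 3x3 kernel, no padding
print(conv_output_width(28, 3, p=0))  # 26
# 'same' example: padding of 1 keeps the width at 28
print(conv_output_width(28, 3, p=1))  # 28
```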

http://cs231n.github.io/convolutional-networks/

Regarding in_channels, out_channels and filters:

filters is the same as out_channels. In Keras, in_channels is inferred automatically from the previous layer's output shape (or from input_shape, in the case of the first layer).
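When actually porting trained weights, note that the kernel layouts also differ: Keras stores a Conv2D kernel as (kernel_h, kernel_w, in_channels, out_channels), while PyTorch's Conv2d weight has shape (out_channels, in_channels, kernel_h, kernel_w). A sketch of the conversion with NumPy (the array names here are illustrative, and the random kernel stands in for real trained weights):

```python
import numpy as np

# A Keras-style Conv2D kernel: (kernel_h, kernel_w, in_channels, out_channels)
keras_kernel = np.random.randn(3, 3, 3, 10).astype(np.float32)

# Reorder axes to PyTorch's (out_channels, in_channels, kernel_h, kernel_w)
pytorch_weight = np.transpose(keras_kernel, (3, 2, 0, 1))

print(keras_kernel.shape)    # (3, 3, 3, 10)
print(pytorch_weight.shape)  # (10, 3, 3, 3)
```

The transposed array can then be copied into the PyTorch layer, e.g. via torch.from_numpy and the layer's weight tensor.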
