
Embedding Layer In Python: How To Use Correctly With Torchsummary?

This is a minimally working/reproducible example:

import torch
import torch.nn as nn
from torchsummary import summary

class Network(nn.Module):
    def __init__(self, channels_im...

Solution 1:

The problem lies here:

self.embed(labels)...

An embedding layer is essentially a mapping between discrete indices and continuous vectors, as stated here. That is, its inputs should be integers and it will give you back floats. In your case, for example, you are embedding the MNIST class labels, which range from 0 to 9, into a continuum (for some reason that I don't know, as I'm not familiar with GANs :)). In short, that embedding layer gives you a 10 -> 784 transformation, and the indices you feed into it must be integers, as PyTorch requires.
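For intuition, here is a minimal, self-contained sketch (using a standalone nn.Embedding, not the exact Network from the question) that maps the 10 MNIST labels to 784-dimensional float vectors:

import torch
import torch.nn as nn

# 10 possible labels (0..9), each mapped to a learned 784-dimensional float vector
embed = nn.Embedding(num_embeddings=10, embedding_dim=784)

labels = torch.tensor([3, 7, 1])     # integer (int64) class indices
vectors = embed(labels)              # float lookup result
print(vectors.shape, vectors.dtype)  # torch.Size([3, 784]) torch.float32

Feeding a float tensor into the embedding instead raises a type error, because the lookup indices are expected to be of type Long; that is exactly what the casts below address.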

In PyTorch, this integer type is called "long" (torch.long), so you need to make sure that whatever goes into self.embed has that data type. There are a few ways to do that:

self.embed(labels.long())

or

self.embed(labels.to(torch.long))

or

self.embed(labels.to(torch.int64))

The long datatype is really a 64-bit integer (see here), so all of these work.
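As a quick sanity check (a minimal sketch, not part of the original answer), all three casts end up at the same int64 dtype:

import torch

labels = torch.tensor([0., 5., 9.])    # e.g. labels that arrive as floats
print(labels.long().dtype)             # torch.int64
print(labels.to(torch.long).dtype)     # torch.int64
print(labels.to(torch.int64).dtype)    # torch.int64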
