Exchange last layer in InceptionTime Model after pretraining task. #891

Open
benehiebl opened this issue Mar 28, 2024 · 0 comments

benehiebl commented Mar 28, 2024

After a pretraining task, it is not possible to access and change the last layer of the InceptionTimePlus model via model.head[-1]. This is probably because the head, which is already an nn.Sequential, is wrapped in a second nn.Sequential in the model class, so model.head[-1] returns the whole inner head rather than its last layer.

The model:

  class InceptionTimePlus(nn.Sequential):
      def __init__(self, c_in, c_out, seq_len=None, nf=32, nb_filters=None,
                   flatten=False, concat_pool=False, fc_dropout=0., bn=False, y_range=None, custom_head=None, **kwargs):
          
          if nb_filters is not None: nf = nb_filters
          else: nf = ifnone(nf, nb_filters) # for compatibility
          backbone = InceptionBlockPlus(c_in, nf, **kwargs)
          
          #head
          self.head_nf = nf * 4
          self.c_out = c_out
          self.seq_len = seq_len
          if custom_head is not None: 
              if isinstance(custom_head, nn.Module): head = custom_head
              else: head = custom_head(self.head_nf, c_out, seq_len)
          else: head = self.create_head(self.head_nf, c_out, seq_len, flatten=flatten, concat_pool=concat_pool, 
                                        fc_dropout=fc_dropout, bn=bn, y_range=y_range)
              
          layers = OrderedDict([('backbone', nn.Sequential(backbone)), ('head', nn.Sequential(head))])
          super().__init__(layers)
          
      def create_head(self, nf, c_out, seq_len, flatten=False, concat_pool=False, fc_dropout=0., bn=False, y_range=None):
          if flatten: 
              nf *= seq_len
              layers = [Flatten()]
          else: 
              if concat_pool: nf *= 2
              layers = [GACP1d(1) if concat_pool else GAP1d(1)]
          layers += [LinBnDrop(nf, c_out, bn=bn, p=fc_dropout)]
          if y_range: layers += [SigmoidRange(*y_range)]
          return nn.Sequential(*layers)
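
To make the double wrapping concrete, here is a minimal sketch using plain torch modules as stand-ins for GAP1d/LinBnDrop (the concrete layer types and sizes are placeholders): wrapping an nn.Sequential head in another nn.Sequential gives a container with a single child, so [-1] returns the whole inner head instead of the final Linear.

  import torch.nn as nn

  # stand-in for the head returned by create_head (GAP1d + LinBnDrop in the real model)
  inner_head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, 10))

  # what __init__ currently builds: nn.Sequential(head)
  wrapped_head = nn.Sequential(inner_head)

  print(len(wrapped_head))   # 1 -> only one child, the inner Sequential
  print(wrapped_head[-1])    # the whole inner head, not the last nn.Linear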

Only if I change the return of the create_head function to "return layers" and subsequently build the head as "layers = OrderedDict([('backbone', nn.Sequential(backbone)), ('head', nn.Sequential(*head))])" is it possible to, e.g., load a pretrained InceptionTimePlus model and change the last Linear in the LinBnDrop layer like:
self.model.head[-1][1] = torch.nn.Linear(last_layer.in_features, my_out_channels)
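
For context, a hedged sketch of the fine-tuning workflow this would enable, assuming the create_head change above has been applied. The checkpoint path and the new number of output channels are placeholders, and the index of the Linear inside the LinBnDrop block depends on the bn/fc_dropout settings:

  import torch
  from tsai.models.InceptionTimePlus import InceptionTimePlus

  model = InceptionTimePlus(c_in=3, c_out=10)           # head built for the pretraining task
  model.load_state_dict(torch.load("pretrained.pth"))   # placeholder checkpoint path

  my_out_channels = 5                                    # placeholder: new number of outputs
  last_layer = model.head[-1][-1]                        # Linear inside the LinBnDrop block (last entry when bn=False, fc_dropout=0)
  model.head[-1][-1] = torch.nn.Linear(last_layer.in_features, my_out_channels)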
