How to use AlexNet with one channel












I am new to PyTorch and have a problem with channels in AlexNet.
I am using it for a 'GTA San Andreas self-driving car' project. I collected a dataset of black-and-white images (one channel) and am trying to train AlexNet with this script:



from AlexNetPytorch import *
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn          # needed for nn.CrossEntropyLoss below
import torch.optim as optim
import torch.utils.data
import numpy as np
import torch
from IPython.core.debugger import set_trace

AlexNet = AlexNet()

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(AlexNet.parameters(), lr=0.001, momentum=0.9)

all_data = np.load('training_data.npy')
inputs = all_data[:, 0]
labels = all_data[:, 1]
inputs_tensors = torch.stack([torch.Tensor(i) for i in inputs])
labels_tensors = torch.stack([torch.Tensor(i) for i in labels])

data_set = torch.utils.data.TensorDataset(inputs_tensors, labels_tensors)
data_loader = torch.utils.data.DataLoader(data_set, batch_size=3, shuffle=True, num_workers=2)




if __name__ == '__main__':
    for epoch in range(8):
        running_loss = 0.0
        for i, data in enumerate(data_loader, 0):
            inputs = data[0]
            inputs = torch.FloatTensor(inputs)
            labels = data[1]
            labels = torch.FloatTensor(labels)
            optimizer.zero_grad()
            # set_trace()
            inputs = torch.unsqueeze(inputs, 1)  # add the channel dimension: (N, H, W) -> (N, 1, H, W)
            outputs = AlexNet(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
            if i % 2000 == 1999:    # print every 2000 mini-batches
                print('[%d, %5d] loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 2000))
                running_loss = 0.0
    print('finished')


I am using AlexNet from the link:
https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py



I changed line 18 from:



nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2)


to:



nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2)


because I am using only one channel in my training images. But I get this error:



 File "training_script.py", line 44, in <module>
outputs = AlexNet(inputs)
File "C:UsersMukhtarAnaconda3libsite-packagestorchnnmodulesmodule.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "C:UsersMukhtarDocumentsAI_projectsgtaAlexNetPytorch.py", line 34, in forward
x = self.features(x)
File "C:UsersMukhtarAnaconda3libsite-packagestorchnnmodulesmodule.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "C:UsersMukhtarAnaconda3libsite-packagestorchnnmodulescontainer.py", line 91, in forward
input = module(input)
File "C:UsersMukhtarAnaconda3libsite-packagestorchnnmodulesmodule.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "C:UsersMukhtarAnaconda3libsite-packagestorchnnmodulespooling.py", line 142, in forward
self.return_indices)
File "C:UsersMukhtarAnaconda3libsite-packagestorchnnfunctional.py", line 396, in max_pool2d
ret = torch._C._nn.max_pool2d_with_indices(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (256x1x1). Calculated output size: (256x0x0). Output size is too small at c:programdataminiconda3conda-bldpytorch-cpu_1532499824793workatensrcthnngeneric/SpatialDilatedMaxPooling.c:67


I don't know what is wrong. Is it wrong to change the channel size like this? If it is, can you please point me to a network that works with one-channel input? As I said, I am new to PyTorch and I don't want to write the network myself.










      neural-network computer-vision pytorch torchvision






edited Jan 2 at 4:57 by bahman parsamanesh
asked Jan 1 at 22:57 by flybrain
2 Answers






Your error is not related to using gray-scale images instead of RGB. It is about the spatial dimensions of the input: while "forwarding" an input image through the net, its size (in feature space) becomes zero - that is the error you see. You can use this nice guide to see what happens to the output size of each layer (conv/pooling) as a function of kernel size, stride and padding.

AlexNet expects its input images to be 224 by 224 pixels - make sure your inputs are of that size.
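
To make that concrete, here is a small sketch (my addition, not part of the original answer) that traces the spatial size through the conv/pooling layers of torchvision's AlexNet using the standard output-size formula floor((W - K + 2*P) / S) + 1. With a 32x32 input the size collapses to 0x0 at the last pooling layer - exactly the RuntimeError above - while a 224x224 input ends at the expected 6x6:

# Sketch: trace the spatial size through AlexNet's feature extractor.
# (kernel, stride, padding) values are taken from torchvision's alexnet.py.

def out_size(w, kernel, stride, padding=0):
    # standard conv/pool formula: floor((W - K + 2*P) / S) + 1
    return (w - kernel + 2 * padding) // stride + 1

layers = [
    ("conv1", 11, 4, 2),
    ("pool1", 3, 2, 0),
    ("conv2", 5, 1, 2),
    ("pool2", 3, 2, 0),
    ("conv3", 3, 1, 1),
    ("conv4", 3, 1, 1),
    ("conv5", 3, 1, 1),
    ("pool3", 3, 2, 0),
]

for w in (32, 224):
    size = w
    print("input %dx%d" % (w, w))
    for name, k, s, p in layers:
        size = out_size(size, k, s, p)
        print("  after %s: %dx%d" % (name, size, size))
    # 32x32 ends at 0x0 (too small); 224x224 ends at 6x6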



          Other things you overlooked:





• You are using the AlexNet architecture, but you are initializing it with random weights instead of pretrained weights (trained on ImageNet). To get a trained copy of AlexNet you need to instantiate the net like this:



            AlexNet = alexnet(pretrained=True)


• Once you decide to use a pretrained net, you cannot change its first layer from three input channels to one (the trained weights simply won't fit). The easiest fix is to make your input images "colorful" by simply repeating the single channel three times; see repeat() for more info (sketch below).
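
Putting both points together in a minimal sketch (my addition, not from the original answer; it assumes the grayscale frames are already 224x224 float tensors and skips the ImageNet mean/std normalization a pretrained net normally expects):

import torch
from torchvision.models import alexnet

# Sketch: pretrained AlexNet fed with single-channel images.
model = alexnet(pretrained=True)      # ImageNet weights; first conv still expects 3 channels
model.eval()

gray = torch.rand(4, 1, 224, 224)     # hypothetical batch of grayscale frames, shape (N, 1, H, W)
rgb = gray.repeat(1, 3, 1, 1)         # copy the single channel into R, G and B -> (N, 3, H, W)

with torch.no_grad():
    out = model(rgb)
print(out.shape)                      # torch.Size([4, 1000])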







answered Jan 2 at 6:24 by Shai
























• Yes, this was the problem, thank you. Because of you the network now accepts the input.

  – flybrain
  Jan 2 at 18:59

































The problem was the size of my input: I gave it 32x32 images when I should have given it 224x224 (I am new to AlexNet, so I did not know it expects that size).
I reshaped my images to 224x224 and now I am training the CNN.
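
For reference, one way to do that resizing on the saved dataset (a sketch under assumptions: the training_data.npy layout of (image, label) pairs comes from the question above, and OpenCV is just one convenient choice for resizing):

import cv2            # assumption: OpenCV is installed; any image-resizing routine would do
import numpy as np

# Sketch: upscale every stored grayscale frame to the 224x224 size AlexNet expects.
all_data = np.load('training_data.npy', allow_pickle=True)

resized = []
for image, label in all_data:
    big = cv2.resize(image, (224, 224))   # e.g. from 32x32 up to 224x224
    resized.append([big, label])

np.save('training_data_224.npy', np.array(resized, dtype=object))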






answered Jan 2 at 19:09 by flybrain






















