GitHub Collection (weight copy, pruning)

soyeonland 2020. 8. 16. 23:09

<NestedNet Implementation>

--------------Weight copying--------------

https://discuss.pytorch.org/t/copying-weights-from-one-net-to-another/1492/3

 

Copying weights from one net to another

How does a deep copy (a canonical copy) differ from normal weight loading?
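The standard answer here is to go through state_dict. A minimal sketch, assuming two nets with identical architectures:

import torch
import torch.nn as nn

# Two networks with the same architecture.
net_a = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
net_b = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))

# Copy every parameter and buffer from net_a into net_b.
net_b.load_state_dict(net_a.state_dict())

# Sanity check: the first layer's weights now match.
assert torch.equal(net_a[0].weight, net_b[0].weight)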

https://discuss.pytorch.org/t/copy-deepcopy-vs-clone/55022/3

 

Copy.deepcopy() vs clone()

Hi, thanks a lot. So this means, when I do clone() on tensors, their clones will still be on the graph, and any operations on them will be reflected in the graph, right?
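The distinction the thread draws: clone() is a differentiable op that stays on the autograd graph, while copy.deepcopy() on a leaf tensor gives a fully independent copy. A quick illustration:

import copy
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2

# clone() stays on the graph: gradients flow back to x.
z = y.clone().sum()
z.backward()
print(x.grad)             # tensor([2., 2., 2.])

# detach().clone() gives an independent tensor with no graph history.
frozen = y.detach().clone()

# copy.deepcopy() works on leaf tensors and copies the storage;
# deep-copying a non-leaf tensor like y raises a RuntimeError.
x2 = copy.deepcopy(x)
print(x2.requires_grad)   # True, but x2 shares nothing with x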

https://discuss.pytorch.org/t/deep-copying-pytorch-modules/13514

 

Deep copying PyTorch modules

Hi, I am new to PyTorch, and was wondering if there is an API defined for deep copying Modules? I have a function that accepts a Module as input and trains it.

https://discuss.pytorch.org/t/are-there-any-recommended-methods-to-clone-a-model/483/14

 

Are there any recommended methods to clone a model?

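For whole modules, these two threads converge on copy.deepcopy(model) simply working. A small sketch:

import copy
import torch.nn as nn

model = nn.Linear(4, 2)

# An independent copy: parameters and buffers are all duplicated.
model_copy = copy.deepcopy(model)

# Touching the copy leaves the original untouched.
model_copy.weight.data.zero_()
print(model.weight.abs().sum() > 0)   # original weights unchanged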

 

-------------Pruning-------------

https://github.com/zepx/pytorch-weight-prune/blob/86426abb37ad626a1abce1e3da23eff37d395942/pruning/methods.py

 

zepx/pytorch-weight-prune

PyTorch implementation of weight pruning for the Murata Group's CREST project.
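I haven't traced this repo's exact code, but unstructured weight pruning of this kind usually boils down to thresholding per-layer weight magnitudes and keeping a mask around. A sketch of that pattern (the per-layer percentile threshold is my assumption, not necessarily what the repo uses):

import torch
import torch.nn as nn

def magnitude_prune(model, prune_pct):
    # Zero out the smallest-magnitude weights; return the masks per layer.
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:       # skip biases
            continue
        threshold = torch.quantile(p.detach().abs().flatten(), prune_pct / 100.0)
        mask = (p.detach().abs() > threshold).float()
        p.data.mul_(mask)     # hard-zero the pruned weights
        masks[name] = mask
    return masks

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
masks = magnitude_prune(model, prune_pct=50.0)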

https://github.com/jacobgil/pytorch-pruning/blob/master/prune.py

 

jacobgil/pytorch-pruning

PyTorch implementation of [1611.06440], Pruning Convolutional Neural Networks for Resource Efficient Inference.
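The paper this repo implements ranks conv filters with a Taylor-expansion criterion computed from activations and gradients. As a simpler stand-in, here is what a plain L1-norm ranking of filters looks like (the L1 criterion is my substitution, not the repo's):

import torch
import torch.nn as nn

def rank_filters_by_l1(conv):
    # One L1 norm per output filter: shape (out_channels,)
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    return torch.argsort(norms)   # least important first

conv = nn.Conv2d(3, 16, kernel_size=3)
order = rank_filters_by_l1(conv)
print(order[:4])                  # the four filters a pruner would drop first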

https://discuss.pytorch.org/t/how-to-set-gradients-manually-and-update-weights-using-that/17428/4

 

How to set gradients manually and update weights using that?

Do you have any idea of how to make it work for the following scenario? (The thread defines a small two-layer Network, nn.Linear(20, 15) followed by nn.Linear(15, 10), and sets its gradients by hand.)
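Setting gradients manually is just a matter of assigning to .grad before optimizer.step(). A minimal sketch:

import torch
import torch.nn as nn

net = nn.Linear(4, 2)
opt = torch.optim.SGD(net.parameters(), lr=0.1)

# Assign gradients by hand instead of calling backward().
opt.zero_grad()
for p in net.parameters():
    p.grad = torch.ones_like(p)   # any tensor of the matching shape

opt.step()                        # the optimizer consumes the manual grads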

https://stackoverflow.com/questions/53544901/how-to-mask-weights-in-pytorch-weight-parameters

 

How to mask weights in PyTorch weight parameters?

I am attempting to mask (force to zero) specific weight values in PyTorch, where the weights are defined in the __init__ of an LSTM_MASK(nn.Module) class.
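One direct way to do what the question asks: multiply the weights by the mask under torch.no_grad(), and reapply after every optimizer step, since gradient updates can make the zeros nonzero again. Sketch, with a random mask standing in for the real one:

import torch
import torch.nn as nn

layer = nn.Linear(5, 3)
mask = (torch.rand_like(layer.weight) > 0.5).float()   # placeholder mask

# Force the masked weights to zero without recording it on the graph.
with torch.no_grad():
    layer.weight.mul_(mask)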

https://discuss.pytorch.org/t/weights-disconnection-implementation/19314/10

 

Weights disconnection implementation

Care to share the code for your experiment?
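A "disconnection" has two halves: zero the weights once, and make sure gradients never revive them. A tensor-level backward hook handles the second half:

import torch
import torch.nn as nn

layer = nn.Linear(5, 3)
mask = (torch.rand_like(layer.weight) > 0.5).float()   # placeholder mask

# Zero the masked weights once...
with torch.no_grad():
    layer.weight.mul_(mask)

# ...then kill their gradients on every backward pass, so the
# optimizer can never resurrect them.
layer.weight.register_hook(lambda grad: grad * mask)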

https://discuss.pytorch.org/t/masking-module-parameters/67279/4

 

Masking module parameters

So here you have two parameters in your module: the original weights of the module, and mask_params that are used to compute the mask. I would modify the module to have all the right Parameters and recompute weight for each forward (a sketch of that pattern follows the next link).

https://discuss.pytorch.org/t/custom-connections-in-neural-network-layers/3027/6

 

Custom connections in neural network layers

If you have a mask already, then you could do an element-wise multiply between the mask and the weights on every forward; an element-wise multiply should not be too slow to run each forward pass.
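Both of the last two threads land on the same pattern: keep the raw weights as the Parameter and recompute the effective (masked) weight on each forward. A minimal sketch for a Linear layer, with the bias handling kept simple:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    # Linear layer whose effective weight is recomputed every forward.
    def __init__(self, in_features, out_features, mask):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.register_buffer("mask", mask)   # fixed 0/1 connectivity

    def forward(self, x):
        # Gradients only flow through the unmasked entries.
        return F.linear(x, self.weight * self.mask, self.bias)

mask = (torch.rand(3, 5) > 0.5).float()
layer = MaskedLinear(5, 3, mask)
out = layer(torch.randn(2, 5))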

https://www.kaggle.com/sironghuang/understanding-pytorch-hooks

 

Understanding Pytorch hooks

A Kaggle notebook that walks through PyTorch forward and backward hooks on a backprop toy example.
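Hooks in a nutshell, for context: a forward hook fires after each module's forward() runs, which makes it an easy way to capture activations. For example:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on every Linear layer.
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(save_activation(name))

model(torch.randn(1, 4))
print({k: tuple(v.shape) for k, v in activations.items()})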

https://blog.paperspace.com/pytorch-hooks-gradient-clipping-debugging/

 

Debugging and Visualisation in PyTorch using Hooks

In this post, we cover debugging and visualisation in PyTorch. We go over PyTorch hooks and how to use them to debug our backward pass, visualise activations, and modify gradients.
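One trick this post covers is modifying gradients in flight with tensor hooks, for example clipping each parameter's gradient before the optimizer sees it:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# Tensor-level backward hooks can rewrite gradients as they are computed.
for p in model.parameters():
    p.register_hook(lambda grad: grad.clamp(-1.0, 1.0))

loss = model(torch.randn(8, 4)).pow(2).sum()
loss.backward()   # every entry of p.grad now lies in [-1, 1]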

https://discuss.pytorch.org/t/updating-only-some-values-in-backward-pass/38192/4

 

Updating only some values in backward pass

If I understand your problem, you only want to backpropagate through those pixels that are True in the mask. In that case, you cannot use the approach defined in the question, as that would backpropagate through all the pixels.
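The fix the thread points toward is to make the loss itself depend only on the masked elements, so only they ever receive gradient. A sketch:

import torch
import torch.nn.functional as F

pred = torch.randn(4, 4, requires_grad=True)
target = torch.randn(4, 4)
mask = torch.rand(4, 4) > 0.5          # True where updates are allowed

# The loss only involves the masked elements, so everything
# else gets exactly zero gradient.
loss = F.mse_loss(pred[mask], target[mask])
loss.backward()
print(pred.grad[~mask].abs().sum())    # tensor(0.)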

https://github.com/jacobgil/pytorch-pruning/search?q=prune_vgg16_conv_layer&unscoped_q=prune_vgg16_conv_layer

 

jacobgil/pytorch-pruning: search results for the prune_vgg16_conv_layer function, which removes a chosen filter from a VGG16 conv layer and patches the following layer to match.
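Removing a filter from a trained conv layer means rebuilding the layer one output channel smaller and copying the surviving weights over, which is roughly what a prune_vgg16_conv_layer-style function has to do. This standalone version is my sketch, not the repo's code, and it ignores the matching surgery needed on the next layer:

import torch
import torch.nn as nn

def remove_filter(conv, idx):
    # Rebuild the conv with one fewer output filter and copy the rest over.
    new_conv = nn.Conv2d(conv.in_channels, conv.out_channels - 1,
                         kernel_size=conv.kernel_size, stride=conv.stride,
                         padding=conv.padding, bias=conv.bias is not None)
    keep = [i for i in range(conv.out_channels) if i != idx]
    with torch.no_grad():
        new_conv.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            new_conv.bias.copy_(conv.bias[keep])
    return new_conv

conv = nn.Conv2d(3, 16, kernel_size=3)
smaller = remove_filter(conv, idx=4)   # the next layer's in_channels must shrink too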