SSD - PyTorch
The purpose of this document is to study the prior-box generation code of SSD.
from itertools import product
from math import sqrt

import torch


class PriorBox(object):
    """Compute priorbox coordinates in center-offset form for each source feature map."""

    def __init__(self, cfg):
        super(PriorBox, self).__init__()
        self.image_size = cfg['min_dim']             # 300
        self.num_priors = len(cfg['aspect_ratios'])  # 6
        self.variance = cfg['variance'] or [0.1]
        self.feature_maps = cfg['feature_maps']      # [38, 19, 10, 5, 3, 1]
        self.min_sizes = cfg['min_sizes']            # [30, 60, 111, 162, 213, 264]
        self.max_sizes = cfg['max_sizes']            # [60, 111, 162, 213, 264, 315]
        self.steps = cfg['steps']                    # [8, 16, 32, 64, 100, 300]
        self.aspect_ratios = cfg['aspect_ratios']    # [[2], [2, 3], [2, 3], [2, 3], [2], [2]]
        self.clip = cfg['clip']                      # True
        self.version = cfg['name']                   # VOC
        for v in self.variance:
            if v <= 0:
                raise ValueError('Variances must be greater than 0')

    def forward(self):
        mean = []
        for k, f in enumerate(self.feature_maps):
            for i, j in product(range(f), repeat=2):  # every cell of the f x f feature map
                f_k = self.image_size / self.steps[k]
                # 300 / [8, 16, 32, 64, 100, 300] = [37.5, 18.75, 9.375, 4.6875, 3, 1]
                cx = (j + 0.5) / f_k
                cy = (i + 0.5) / f_k

                # aspect ratio: 1
                # rel size: min_size
                s_k = self.min_sizes[k] / self.image_size
                # [30, 60, 111, 162, 213, 264] / 300 = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88]
                mean += [cx, cy, s_k, s_k]

                # aspect ratio: 1
                # rel size: sqrt(s_k * s_(k+1))
                s_k_prime = sqrt(s_k * (self.max_sizes[k] / self.image_size))
                mean += [cx, cy, s_k_prime, s_k_prime]

                # rest of the aspect ratios: [[2], [2, 3], [2, 3], [2, 3], [2], [2]]
                # each ratio ar adds two boxes, (ar : 1) and (1 : ar)
                for ar in self.aspect_ratios[k]:
                    mean += [cx, cy, s_k * sqrt(ar), s_k / sqrt(ar)]
                    mean += [cx, cy, s_k / sqrt(ar), s_k * sqrt(ar)]
                    # (center x, center y, width, height) -- may need to revisit this calculation later

        output = torch.Tensor(mean).view(-1, 4)
        if self.clip:
            output.clamp_(max=1, min=0)  # valid because the coordinates are relative, i.e. within [0, 1]
        return output
Each output row is (center x, center y, scaled width, scaled height).
torch.Tensor.clamp_
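As a sanity check on the loops above, the per-layer box counts can be tallied directly: each cell gets 2 "square" priors (scales s_k and sqrt(s_k * s_(k+1))) plus 2 priors per extra aspect ratio. A minimal sketch (not part of the original code) using the config values from the comments should reproduce the well-known 8,732 priors of SSD300:

```python
# Config values from the comments above (SSD300 / VOC)
feature_maps = [38, 19, 10, 5, 3, 1]
aspect_ratios = [[2], [2, 3], [2, 3], [2, 3], [2], [2]]

total = 0
for f, ars in zip(feature_maps, aspect_ratios):
    boxes_per_cell = 2 + 2 * len(ars)  # 4 or 6 priors per cell
    total += f * f * boxes_per_cell

print(total)  # 8732
```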
a = []
a += [1, 2, 3, 4]
a += [5, 6, 7, 8]
output = torch.Tensor(a).view(-1, 4)
print(output.clamp_(min=1, max=3))
Output
tensor([[1., 2., 3., 3.],
[3., 3., 3., 3.]])
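The trailing underscore marks PyTorch's in-place variant. For contrast, a small sketch of `clamp` (returns a new tensor) versus `clamp_` (modifies the tensor itself), assuming PyTorch is installed:

```python
import torch

a = torch.Tensor([[1., 2., 3., 4.],
                  [5., 6., 7., 8.]])
b = a.clamp(min=1, max=3)  # out-of-place: returns a new tensor, `a` is unchanged
print(a[1, 3].item())      # 8.0
a.clamp_(min=1, max=3)     # in-place: `a` itself is modified
print(a[1, 3].item())      # 3.0
print(torch.equal(a, b))   # True
```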
[References]
GitHub: https://github.com/amdegroot/ssd.pytorch
PyTorch documentation: https://pytorch.org/docs/
'Study > Code Review' 카테고리의 다른 글
Github 모음(weight copy , pruning) (0) | 2020.08.16 |
---|---|
3D Convolution (0) | 2020.04.01 |
Learning both weights and connections for Efficient Neural Networks-(2) (0) | 2020.03.22 |
Learning both weights and connections for Efficient Neural Networks-(1) (0) | 2020.03.22 |
Pruning Tutorial (0) | 2020.03.22 |