Transforms
Data does not always come in the final, processed form required for training machine learning algorithms. We use transforms to manipulate the data and make it suitable for training.

All TorchVision datasets have two parameters that accept callables containing the transformation logic: transform to modify the features and target_transform to modify the labels. The torchvision.transforms module offers several commonly used transforms out of the box.

The FashionMNIST features are in PIL Image format, and the labels are integers. For training, we need the features as normalized tensors and the labels as one-hot encoded tensors. To make these transformations, we use ToTensor and Lambda.
import torch
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda

ds = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
    target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1))
)
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw
ToTensor()

ToTensor converts a PIL Image or NumPy ndarray into a FloatTensor and scales the image's pixel intensity values into the range [0., 1.].
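The scaling that ToTensor performs can be sketched with plain torch operations. This is a minimal illustration of the dtype conversion and [0., 1.] scaling, not torchvision's actual implementation, and the tiny "image" below is made up for the example:

```python
import torch

# A fake 2x2 grayscale "image" with uint8 pixel intensities in [0, 255],
# standing in for the PIL Image / NumPy ndarray that ToTensor receives.
pixels = torch.tensor([[0, 64], [128, 255]], dtype=torch.uint8)

# ToTensor converts to float32 and scales intensities into [0., 1.]
# by dividing by 255; it also puts channels first (HWC -> CHW),
# which we mimic here by adding a leading channel dimension.
scaled = pixels.to(torch.float32).div(255.0).unsqueeze(0)

print(scaled.dtype)   # torch.float32
print(scaled.shape)   # torch.Size([1, 2, 2])
print(scaled.max())   # tensor(1.)
```

In practice you would simply pass ToTensor() as the dataset's transform, as in the code above; this sketch only shows what happens to the values.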
Lambda Transforms

Lambda transforms apply any user-defined lambda function. Here, we define a function to turn the integer label into a one-hot encoded tensor. It first creates a zero tensor of size 10 (the number of labels in our dataset) and calls scatter_, which assigns value=1 at the index given by the label y.
target_transform = Lambda(lambda y: torch.zeros(
    10, dtype=torch.float).scatter_(dim=0, index=torch.tensor(y), value=1))
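To see what this transform produces, you can run its body on a sample label directly. The label 3 below is an arbitrary example value:

```python
import torch

# One-hot encode a single integer label the same way the Lambda transform does:
# start from a zero tensor of size 10 and scatter value=1 at index y.
y = 3  # an arbitrary example label
one_hot = torch.zeros(10, dtype=torch.float).scatter_(dim=0, index=torch.tensor(y), value=1)

print(one_hot)  # tensor([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
```

For comparison, torch.nn.functional.one_hot produces a similar result (as an integer tensor), but writing it out with scatter_ makes the mechanics of the transform explicit.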
Further Reading
Total running time of the script: (0 minutes 15.621 seconds)