Hyperparameter Tuning with Ray Tune#

Translated by: 심형준

Hyperparameter tuning can make the difference between an average model and a highly accurate one. Often, simple things like choosing a different learning rate or changing a network layer size can have a dramatic impact on your model performance.

Fortunately, there are tools that help with finding the best combination of parameters. Ray Tune is an industry-standard tool for distributed hyperparameter tuning. Ray Tune includes the latest hyperparameter search algorithms, integrates with various analysis libraries, and natively supports distributed training through Ray's distributed machine learning engine.

In this tutorial, we will show you how to integrate Ray Tune into your PyTorch training workflow. We will extend this tutorial from the PyTorch documentation for training a CIFAR10 image classifier.

As you will see, we only need to add some slight modifications:

  1. wrap the data loading and training in functions,

  2. make some network parameters configurable,

  3. add checkpointing (optional),

  4. and define the search space for the model tuning.


To run this tutorial, please make sure the following packages are installed (a typical install command follows the list):

  • ray[tune]: distributed hyperparameter tuning library

  • torchvision: for the data transforms
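
If they are not installed yet, both can usually be set up with pip; the exact invocation below is a typical example and may vary with your environment:

pip install "ray[tune]" torchvision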

Setup / Imports#

Let's start with the imports:

from functools import partial
import os
import tempfile
from pathlib import Path
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import random_split
import torchvision
import torchvision.transforms as transforms
from ray import tune
from ray import train
from ray.train import Checkpoint, get_checkpoint
from ray.tune.schedulers import ASHAScheduler
import ray.cloudpickle as pickle

Most of the imports are needed for building the PyTorch model. Only the last few imports are for Ray Tune.

Data loaders#

We wrap the data loaders in their own function and pass a global data directory. This way we can share a data directory between different experiments.

def load_data(data_dir="./data"):
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
    )

    trainset = torchvision.datasets.CIFAR10(
        root=data_dir, train=True, download=True, transform=transform
    )

    testset = torchvision.datasets.CIFAR10(
        root=data_dir, train=False, download=True, transform=transform
    )

    return trainset, testset

Configurable neural network#

We can only tune those parameters that are configurable. In this example, we can specify the layer sizes of the fully connected layers:

class Net(nn.Module):
    def __init__(self, l1=120, l2=84):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, l1)
        self.fc2 = nn.Linear(l1, l2)
        self.fc3 = nn.Linear(l2, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except the batch dimension
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

The train function#

Now it gets interesting, because we introduce some changes to the example from the PyTorch documentation.

We wrap the training script in a function train_cifar(config, data_dir=None). The config parameter receives the hyperparameters we would like to train with. The data_dir specifies the directory where we load and store the data, so that multiple runs can share the same data source. We also load the model and optimizer state at the start of the run, if a checkpoint is provided. Further down in this tutorial you will find information on how to save the checkpoint and what it is used for.

net = Net(config["l1"], config["l2"])

checkpoint = get_checkpoint()
if checkpoint:
    with checkpoint.as_directory() as checkpoint_dir:
        data_path = Path(checkpoint_dir) / "data.pkl"
        with open(data_path, "rb") as fp:
            checkpoint_state = pickle.load(fp)
        start_epoch = checkpoint_state["epoch"]
        net.load_state_dict(checkpoint_state["net_state_dict"])
        optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
else:
    start_epoch = 0

The learning rate of the optimizer is made configurable, too:

optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

We also split the training data into a training and validation subset. We thus train on 80% of the data and calculate the validation loss on the remaining 20%. The batch sizes with which we iterate through the training and test sets are configurable as well, as shown in the excerpt below.
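
Inside train_cifar, the split and the corresponding data loaders look like this (an excerpt from the full training function shown further below):

test_abs = int(len(trainset) * 0.8)
train_subset, val_subset = random_split(
    trainset, [test_abs, len(trainset) - test_abs]
)

trainloader = torch.utils.data.DataLoader(
    train_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
)
valloader = torch.utils.data.DataLoader(
    val_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
)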

Adding (multi) GPU support with DataParallel#

Image classification benefits largely from GPUs. Luckily, we can continue to use PyTorch's abstractions in Ray Tune. Thus, we can wrap our model in nn.DataParallel to support data parallel training on multiple GPUs:

device = "cpu"
if torch.cuda.is_available():
    device = "cuda:0"
    if torch.cuda.device_count() > 1:
        net = nn.DataParallel(net)
net.to(device)

By using a device variable, we make sure that training also works when no GPU is available. PyTorch requires us to send our data to the GPU memory explicitly, like this:

for i, data in enumerate(trainloader, 0):
    inputs, labels = data
    inputs, labels = inputs.to(device), labels.to(device)

The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray also supports fractional GPUs, so we can share GPUs among trials, as long as the model still fits in GPU memory. We will come back to that later.

Communicating with Ray Tune#

The most interesting part is the communication with Ray Tune:

checkpoint_data = {
    "epoch": epoch,
    "net_state_dict": net.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
with tempfile.TemporaryDirectory() as checkpoint_dir:
    data_path = Path(checkpoint_dir) / "data.pkl"
    with open(data_path, "wb") as fp:
        pickle.dump(checkpoint_data, fp)

    checkpoint = Checkpoint.from_directory(checkpoint_dir)
    train.report(
        {"loss": val_loss / val_steps, "accuracy": correct / total},
        checkpoint=checkpoint,
    )

Here we first save a checkpoint and then report some metrics back to Ray Tune. Specifically, we send the validation loss and accuracy back to Ray Tune. Ray Tune can then use these metrics to decide which hyperparameter configuration leads to the best results. These metrics can also be used to stop badly performing trials early, in order to avoid wasting resources on those trials.

Saving the checkpoint is optional; however, it is necessary if we want to use advanced schedulers like Population Based Training. Saving the checkpoint also allows us to later load the trained models and validate them on a test set.
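
Population Based Training is not part of this tutorial's run, but as an illustration, a minimal sketch of such a scheduler could look roughly like the following. The perturbation_interval and the mutation ranges here are placeholder values, and the "lr" and "batch_size" keys are assumed to match our config:

from ray.tune.schedulers import PopulationBasedTraining

pbt_scheduler = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="loss",
    mode="min",
    perturbation_interval=2,  # placeholder: perturb every 2 training iterations
    hyperparam_mutations={
        # hyperparameters to resample or perturb during training
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": [2, 4, 8, 16],
    },
)

Such a scheduler would then be passed to tune.run() via the scheduler argument, in place of the ASHAScheduler used below.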

Full training function#

The full code example looks like this:

def train_cifar(config, data_dir=None):
    net = Net(config["l1"], config["l2"])

    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if torch.cuda.device_count() > 1:
            net = nn.DataParallel(net)
    net.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

    checkpoint = get_checkpoint()
    if checkpoint:
        with checkpoint.as_directory() as checkpoint_dir:
            data_path = Path(checkpoint_dir) / "data.pkl"
            with open(data_path, "rb") as fp:
                checkpoint_state = pickle.load(fp)
            start_epoch = checkpoint_state["epoch"]
            net.load_state_dict(checkpoint_state["net_state_dict"])
            optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
    else:
        start_epoch = 0

    trainset, testset = load_data(data_dir)

    test_abs = int(len(trainset) * 0.8)
    train_subset, val_subset = random_split(
        trainset, [test_abs, len(trainset) - test_abs]
    )

    trainloader = torch.utils.data.DataLoader(
        train_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )
    valloader = torch.utils.data.DataLoader(
        val_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )

    for epoch in range(start_epoch, 10):  # loop over the dataset multiple times
        running_loss = 0.0
        epoch_steps = 0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            epoch_steps += 1
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print(
                    "[%d, %5d] loss: %.3f"
                    % (epoch + 1, i + 1, running_loss / epoch_steps)
                )
                running_loss = 0.0

        # Validation loss
        val_loss = 0.0
        val_steps = 0
        total = 0
        correct = 0
        for i, data in enumerate(valloader, 0):
            with torch.no_grad():
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)

                outputs = net(inputs)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

                loss = criterion(outputs, labels)
                val_loss += loss.cpu().numpy()
                val_steps += 1

        checkpoint_data = {
            "epoch": epoch,
            "net_state_dict": net.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
        }
        with tempfile.TemporaryDirectory() as checkpoint_dir:
            data_path = Path(checkpoint_dir) / "data.pkl"
            with open(data_path, "wb") as fp:
                pickle.dump(checkpoint_data, fp)

            checkpoint = Checkpoint.from_directory(checkpoint_dir)
            train.report(
                {"loss": val_loss / val_steps, "accuracy": correct / total},
                checkpoint=checkpoint,
            )

    print("Finished Training")

As you can see, most of the code is adapted directly from the original example.

Test set accuracy#

Commonly, the performance of a machine learning model is tested on a held-out test set with data that has not been used for training the model. We also wrap this in a function:

def test_accuracy(net, device="cpu"):
    trainset, testset = load_data()

    testloader = torch.utils.data.DataLoader(
        testset, batch_size=4, shuffle=False, num_workers=2
    )

    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    return correct / total

The function also expects a device parameter, so we can do the test set validation on a GPU.
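
For example, given a trained net, a call might look like this (a minimal usage sketch; net is assumed to be a trained instance of our Net class):

device = "cuda:0" if torch.cuda.is_available() else "cpu"
test_acc = test_accuracy(net, device)
print(f"Test set accuracy: {test_acc}")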

Configuring the search space#

Lastly, we need to define Ray Tune's search space. Here is an example:

config = {
    "l1": tune.choice([2 ** i for i in range(9)]),
    "l2": tune.choice([2 ** i for i in range(9)]),
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([2, 4, 8, 16])
}

The tune.choice() function accepts a list of values that are uniformly sampled from. In this example, the l1 and l2 parameters are powers of 2 between 1 and 256, i.e. 1, 2, 4, 8, 16, 32, 64, 128, or 256. The lr (learning rate) is sampled log-uniformly between 0.0001 and 0.1. Lastly, the batch size is a choice between 2, 4, 8, and 16.
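
tune.choice() and tune.loguniform() are only two of Ray Tune's sampling primitives. A few others are illustrated below; the parameter names (dropout, hidden, activation) are hypothetical and not part of this tutorial's model:

example_space = {
    "dropout": tune.uniform(0.0, 0.5),                 # uniform over a continuous range
    "hidden": tune.randint(16, 256),                   # random integer in [16, 256)
    "activation": tune.grid_search(["relu", "tanh"]),  # exhaustively try each listed value
}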

At each trial, Ray Tune now randomly samples a combination of parameters from these search spaces. It then trains a number of models in parallel and finds the best performing one among them. We also use the ASHAScheduler, which terminates badly performing trials early; its configuration is shown in the main function further below.

We wrap the train_cifar function with functools.partial to set the constant data_dir parameter. We can also tell Ray Tune what resources should be available for each trial:

gpus_per_trial = 2
# ...
result = tune.run(
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 8, "gpu": gpus_per_trial},
    config=config,
    num_samples=num_samples,
    scheduler=scheduler,
    checkpoint_at_end=True)

You can specify the number of CPUs, which are then available, e.g., to increase the num_workers of the PyTorch DataLoader instances. The selected number of GPUs is made visible to PyTorch in each trial. Trials do not have access to GPUs that have not been requested for them, so you do not have to care about two trials using the same set of resources.

Here you can also specify fractional GPUs, so something like gpus_per_trial=0.5 is completely valid. The trials will then share a GPU among each other. You just have to make sure that the models still fit in the GPU memory.
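
For example, GPU sharing is just a change to the resources_per_trial dictionary in the call shown above (a sketch; the CPU count here is arbitrary):

result = tune.run(
    partial(train_cifar, data_dir=data_dir),
    # with 0.5 GPUs per trial, two trials can be packed onto one physical GPU
    resources_per_trial={"cpu": 2, "gpu": 0.5},
    config=config,
    num_samples=num_samples,
    scheduler=scheduler,
)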

After training the models, we will find the best performing one and load the trained network from the checkpoint file. We then obtain the test set accuracy and report everything by printing.

The full main function looks like this:

def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
    data_dir = os.path.abspath("./data")
    load_data(data_dir)
    config = {
        "l1": tune.choice([2**i for i in range(9)]),
        "l2": tune.choice([2**i for i in range(9)]),
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([2, 4, 8, 16]),
    }
    scheduler = ASHAScheduler(
        metric="loss",
        mode="min",
        max_t=max_num_epochs,
        grace_period=1,
        reduction_factor=2,
    )
    result = tune.run(
        partial(train_cifar, data_dir=data_dir),
        resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
        config=config,
        num_samples=num_samples,
        scheduler=scheduler,
    )

    best_trial = result.get_best_trial("loss", "min", "last")
    print(f"Best trial config: {best_trial.config}")
    print(f"Best trial final validation loss: {best_trial.last_result['loss']}")
    print(f"Best trial final validation accuracy: {best_trial.last_result['accuracy']}")

    best_trained_model = Net(best_trial.config["l1"], best_trial.config["l2"])
    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if gpus_per_trial > 1:
            best_trained_model = nn.DataParallel(best_trained_model)
    best_trained_model.to(device)

    best_checkpoint = result.get_best_checkpoint(trial=best_trial, metric="accuracy", mode="max")
    with best_checkpoint.as_directory() as checkpoint_dir:
        data_path = Path(checkpoint_dir) / "data.pkl"
        with open(data_path, "rb") as fp:
            best_checkpoint_data = pickle.load(fp)

        best_trained_model.load_state_dict(best_checkpoint_data["net_state_dict"])
        test_acc = test_accuracy(best_trained_model, device)
        print("Best trial test set accuracy: {}".format(test_acc))


if __name__ == "__main__":
    # You can change the number of GPUs per trial here:
    main(num_samples=10, max_num_epochs=10, gpus_per_trial=0)
100%|██████████| 170M/170M [00:10<00:00, 15.5MB/s]
2025-10-03 22:26:01,958 INFO worker.py:1642 -- Started a local Ray instance.
2025-10-03 22:26:04,049 INFO tune.py:228 -- Initializing Ray automatically. For cluster usage or custom Ray initialization, call `ray.init(...)` before `tune.run(...)`.
2025-10-03 22:26:04,051 INFO tune.py:654 -- [output] This will use the new output engine with verbosity 2. To disable the new output and use the legacy output engine, set the environment variable RAY_AIR_NEW_OUTPUT=0. For more information, please see https://github.com/ray-project/ray/issues/36949
╭────────────────────────────────────────────────────────────────────╮
│ Configuration for experiment     train_cifar_2025-10-03_22-26-04   │
├────────────────────────────────────────────────────────────────────┤
│ Search algorithm                 BasicVariantGenerator             │
│ Scheduler                        AsyncHyperBandScheduler           │
│ Number of trials                 10                                │
╰────────────────────────────────────────────────────────────────────╯

View detailed results here: /root/ray_results/train_cifar_2025-10-03_22-26-04
To visualize your results with TensorBoard, run: `tensorboard --logdir /root/ray_results/train_cifar_2025-10-03_22-26-04`

Trial status: 10 PENDING
Current time: 2025-10-03 22:26:04. Total running time: 0s
Logical resource usage: 20.0/256 CPUs, 0/8 GPUs (0.0/1.0 accelerator_type:H200)
╭───────────────────────────────────────────────────────────────────────────────╮
│ Trial name                status       l1     l2            lr     batch_size │
├───────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_f2b95_00000   PENDING       2    256   0.000305994             16 │
│ train_cifar_f2b95_00001   PENDING     128     32   0.010341                16 │
│ train_cifar_f2b95_00002   PENDING       4    256   0.000582548              4 │
│ train_cifar_f2b95_00003   PENDING       8      2   0.03878                  4 │
│ train_cifar_f2b95_00004   PENDING     128     64   0.0275418                2 │
│ train_cifar_f2b95_00005   PENDING       4      8   0.000769138              4 │
│ train_cifar_f2b95_00006   PENDING      64      8   0.00236933              16 │
│ train_cifar_f2b95_00007   PENDING       8    128   0.00365739               2 │
│ train_cifar_f2b95_00008   PENDING      16    128   0.000192995             16 │
│ train_cifar_f2b95_00009   PENDING       8    256   0.00117126               8 │
╰───────────────────────────────────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00009 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                     8 │
│ l1                                             8 │
│ l2                                           256 │
│ lr                                       0.00117 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00001 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                    16 │
│ l1                                           128 │
│ l2                                            32 │
│ lr                                       0.01034 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00007 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00007 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                     2 │
│ l1                                             8 │
│ l2                                           128 │
│ lr                                       0.00366 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00004 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00004 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                     2 │
│ l1                                           128 │
│ l2                                            64 │
│ lr                                       0.02754 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00008 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00008 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                    16 │
│ l1                                            16 │
│ l2                                           128 │
│ lr                                       0.00019 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00003 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00003 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                     4 │
│ l1                                             8 │
│ l2                                             2 │
│ lr                                       0.03878 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00000 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00000 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                    16 │
│ l1                                             2 │
│ l2                                           256 │
│ lr                                       0.00031 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00002 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00002 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                     4 │
│ l1                                             4 │
│ l2                                           256 │
│ lr                                       0.00058 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00006 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00006 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                    16 │
│ l1                                            64 │
│ l2                                             8 │
│ lr                                       0.00237 │
╰──────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00005 started with configuration:
╭──────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00005 config             │
├──────────────────────────────────────────────────┤
│ batch_size                                     4 │
│ l1                                             4 │
│ l2                                             8 │
│ lr                                       0.00077 │
╰──────────────────────────────────────────────────╯
(func pid=12600) [1,  2000] loss: 2.083

Trial train_cifar_f2b95_00001 finished iteration 1 at 2025-10-03 22:26:20. Total running time: 16s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                  13.00158 │
│ time_total_s                                      13.00158 │
│ training_iteration                                       1 │
│ accuracy                                            0.4763 │
│ loss                                               1.45586 │
╰────────────────────────────────────────────────────────────╯
(func pid=12592) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000000)
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000000

Trial train_cifar_f2b95_00008 finished iteration 1 at 2025-10-03 22:26:21. Total running time: 17s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00008 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                  13.74059 │
│ time_total_s                                      13.74059 │
│ training_iteration                                       1 │
│ accuracy                                            0.1092 │
│ loss                                                2.3008 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00008 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00008_8_batch_size=16,l1=16,l2=128,lr=0.0002_2025-10-03_22-26-04/checkpoint_000000

Trial train_cifar_f2b95_00008 completed after 1 iterations at 2025-10-03 22:26:21. Total running time: 17s

Trial train_cifar_f2b95_00000 finished iteration 1 at 2025-10-03 22:26:21. Total running time: 17s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00000 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                  14.05161 │
│ time_total_s                                      14.05161 │
│ training_iteration                                       1 │
│ accuracy                                            0.2076 │
│ loss                                               2.05941 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00000 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00000_0_batch_size=16,l1=2,l2=256,lr=0.0003_2025-10-03_22-26-04/checkpoint_000000

Trial train_cifar_f2b95_00000 completed after 1 iterations at 2025-10-03 22:26:21. Total running time: 17s

Trial train_cifar_f2b95_00006 finished iteration 1 at 2025-10-03 22:26:22. Total running time: 17s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00006 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                    14.296 │
│ time_total_s                                        14.296 │
│ training_iteration                                       1 │
│ accuracy                                            0.3684 │
│ loss                                               1.73915 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00006 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00006_6_batch_size=16,l1=64,l2=8,lr=0.0024_2025-10-03_22-26-04/checkpoint_000000
(func pid=12594) [1,  4000] loss: 1.162 [repeated 11x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/ray-logging.html#log-deduplication for more options.)

Trial train_cifar_f2b95_00009 finished iteration 1 at 2025-10-03 22:26:27. Total running time: 22s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                   19.5243 │
│ time_total_s                                       19.5243 │
│ training_iteration                                       1 │
│ accuracy                                            0.4436 │
│ loss                                               1.53569 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000000
(func pid=12593) [1,  6000] loss: 0.613 [repeated 7x across cluster]
(func pid=12600) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000000) [repeated 4x across cluster]

Trial train_cifar_f2b95_00001 finished iteration 2 at 2025-10-03 22:26:30. Total running time: 25s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000001 │
│ time_this_iter_s                                   9.42768 │
│ time_total_s                                      22.42926 │
│ training_iteration                                       2 │
│ accuracy                                            0.5172 │
│ loss                                               1.35872 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 2 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000001

Trial train_cifar_f2b95_00006 finished iteration 2 at 2025-10-03 22:26:31. Total running time: 27s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00006 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000001 │
│ time_this_iter_s                                    9.2087 │
│ time_total_s                                       23.5047 │
│ training_iteration                                       2 │
│ accuracy                                            0.4657 │
│ loss                                               1.48145 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00006 saved a checkpoint for iteration 2 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00006_6_batch_size=16,l1=64,l2=8,lr=0.0024_2025-10-03_22-26-04/checkpoint_000001

Trial train_cifar_f2b95_00006 completed after 2 iterations at 2025-10-03 22:26:31. Total running time: 27s
(func pid=12593) [1,  8000] loss: 0.445 [repeated 7x across cluster]

Trial status: 3 TERMINATED | 7 RUNNING
Current time: 2025-10-03 22:26:34. Total running time: 30s
Logical resource usage: 14.0/256 CPUs, 0/8 GPUs (0.0/1.0 accelerator_type:H200)
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_f2b95_00001   RUNNING       128     32   0.010341                16        2            22.4293   1.35872       0.5172 │
│ train_cifar_f2b95_00002   RUNNING         4    256   0.000582548              4                                                    │
│ train_cifar_f2b95_00003   RUNNING         8      2   0.03878                  4                                                    │
│ train_cifar_f2b95_00004   RUNNING       128     64   0.0275418                2                                                    │
│ train_cifar_f2b95_00005   RUNNING         4      8   0.000769138              4                                                    │
│ train_cifar_f2b95_00007   RUNNING         8    128   0.00365739               2                                                    │
│ train_cifar_f2b95_00009   RUNNING         8    256   0.00117126               8        1            19.5243   1.53569       0.4436 │
│ train_cifar_f2b95_00000   TERMINATED      2    256   0.000305994             16        1            14.0516   2.05941       0.2076 │
│ train_cifar_f2b95_00006   TERMINATED     64      8   0.00236933              16        2            23.5047   1.48145       0.4657 │
│ train_cifar_f2b95_00008   TERMINATED     16    128   0.000192995             16        1            13.7406   2.3008        0.1092 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=12593) [1, 10000] loss: 0.339 [repeated 7x across cluster]

Trial train_cifar_f2b95_00001 finished iteration 3 at 2025-10-03 22:26:39. Total running time: 35s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000002 │
│ time_this_iter_s                                   9.45501 │
│ time_total_s                                      31.88427 │
│ training_iteration                                       3 │
│ accuracy                                            0.5172 │
│ loss                                               1.34038 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 3 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000002
(func pid=12592) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000002) [repeated 3x across cluster]

Trial train_cifar_f2b95_00003 finished iteration 1 at 2025-10-03 22:26:40. Total running time: 36s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00003 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                  32.93671 │
│ time_total_s                                      32.93671 │
│ training_iteration                                       1 │
│ accuracy                                            0.0976 │
│ loss                                               2.34242 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00003 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00003_3_batch_size=4,l1=8,l2=2,lr=0.0388_2025-10-03_22-26-04/checkpoint_000000

Trial train_cifar_f2b95_00003 completed after 1 iterations at 2025-10-03 22:26:40. Total running time: 36s

Trial train_cifar_f2b95_00002 finished iteration 1 at 2025-10-03 22:26:40. Total running time: 36s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00002 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                  33.11155 │
│ time_total_s                                      33.11155 │
│ training_iteration                                       1 │
│ accuracy                                            0.3566 │
│ loss                                               1.63892 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00002 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00002_2_batch_size=4,l1=4,l2=256,lr=0.0006_2025-10-03_22-26-04/checkpoint_000000

Trial train_cifar_f2b95_00005 finished iteration 1 at 2025-10-03 22:26:41. Total running time: 37s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00005 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                  33.66846 │
│ time_total_s                                      33.66846 │
│ training_iteration                                       1 │
│ accuracy                                            0.3463 │
│ loss                                               1.70408 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00005 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00005_5_batch_size=4,l1=4,l2=8,lr=0.0008_2025-10-03_22-26-04/checkpoint_000000

Trial train_cifar_f2b95_00009 finished iteration 2 at 2025-10-03 22:26:43. Total running time: 39s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000001 │
│ time_this_iter_s                                  16.19927 │
│ time_total_s                                      35.72356 │
│ training_iteration                                       2 │
│ accuracy                                            0.5049 │
│ loss                                               1.36047 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 2 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000001
(func pid=12598) [1, 14000] loss: 0.276 [repeated 5x across cluster]

Trial train_cifar_f2b95_00001 finished iteration 4 at 2025-10-03 22:26:48. Total running time: 44s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000003 │
│ time_this_iter_s                                   9.31968 │
│ time_total_s                                      41.20394 │
│ training_iteration                                       4 │
│ accuracy                                            0.5287 │
│ loss                                               1.35828 │
╰────────────────────────────────────────────────────────────╯
(func pid=12592) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000003) [repeated 5x across cluster]
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 4 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000003
(func pid=12596) [1, 14000] loss: 0.334 [repeated 7x across cluster]
(func pid=12593) [2,  6000] loss: 0.534 [repeated 6x across cluster]

Trial train_cifar_f2b95_00001 finished iteration 5 at 2025-10-03 22:26:57. Total running time: 53s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000004 │
│ time_this_iter_s                                   9.13369 │
│ time_total_s                                      50.33764 │
│ training_iteration                                       5 │
│ accuracy                                            0.5405 │
│ loss                                               1.33497 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 5 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000004
(func pid=12592) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000004)

Trial train_cifar_f2b95_00009 finished iteration 3 at 2025-10-03 22:26:59. Total running time: 54s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000002 │
│ time_this_iter_s                                  15.66666 │
│ time_total_s                                      51.39022 │
│ training_iteration                                       3 │
│ accuracy                                            0.5157 │
│ loss                                               1.30244 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 3 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000002
(func pid=12600) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000002)
(func pid=12595) [2,  8000] loss: 0.401 [repeated 5x across cluster]

Trial status: 4 TERMINATED | 6 RUNNING
Current time: 2025-10-03 22:27:04. Total running time: 1min 0s
Logical resource usage: 12.0/256 CPUs, 0/8 GPUs (0.0/1.0 accelerator_type:H200)
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_f2b95_00001   RUNNING       128     32   0.010341                16        5            50.3376   1.33497       0.5405 │
│ train_cifar_f2b95_00002   RUNNING         4    256   0.000582548              4        1            33.1116   1.63892       0.3566 │
│ train_cifar_f2b95_00004   RUNNING       128     64   0.0275418                2                                                    │
│ train_cifar_f2b95_00005   RUNNING         4      8   0.000769138              4        1            33.6685   1.70408       0.3463 │
│ train_cifar_f2b95_00007   RUNNING         8    128   0.00365739               2                                                    │
│ train_cifar_f2b95_00009   RUNNING         8    256   0.00117126               8        3            51.3902   1.30244       0.5157 │
│ train_cifar_f2b95_00000   TERMINATED      2    256   0.000305994             16        1            14.0516   2.05941       0.2076 │
│ train_cifar_f2b95_00003   TERMINATED      8      2   0.03878                  4        1            32.9367   2.34242       0.0976 │
│ train_cifar_f2b95_00006   TERMINATED     64      8   0.00236933              16        2            23.5047   1.48145       0.4657 │
│ train_cifar_f2b95_00008   TERMINATED     16    128   0.000192995             16        1            13.7406   2.3008        0.1092 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00007 finished iteration 1 at 2025-10-03 22:27:05. Total running time: 1min 0s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00007 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                  57.44913 │
│ time_total_s                                      57.44913 │
│ training_iteration                                       1 │
│ accuracy                                            0.2941 │
│ loss                                               1.85399 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00007 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00007_7_batch_size=2,l1=8,l2=128,lr=0.0037_2025-10-03_22-26-04/checkpoint_000000

Trial train_cifar_f2b95_00007 completed after 1 iterations at 2025-10-03 22:27:05. Total running time: 1min 0s
(func pid=12598) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00007_7_batch_size=2,l1=8,l2=128,lr=0.0037_2025-10-03_22-26-04/checkpoint_000000)

Trial train_cifar_f2b95_00001 finished iteration 6 at 2025-10-03 22:27:07. Total running time: 1min 3s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000005 │
│ time_this_iter_s                                   9.20401 │
│ time_total_s                                      59.54165 │
│ training_iteration                                       6 │
│ accuracy                                            0.5458 │
│ loss                                               1.38728 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 6 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000005
(func pid=12596) [1, 20000] loss: 0.233 [repeated 6x across cluster]
(func pid=12592) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000005)

Trial train_cifar_f2b95_00002 finished iteration 2 at 2025-10-03 22:27:09. Total running time: 1min 5s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00002 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000001 │
│ time_this_iter_s                                  28.47496 │
│ time_total_s                                      61.58651 │
│ training_iteration                                       2 │
│ accuracy                                            0.4365 │
│ loss                                               1.50182 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00002 saved a checkpoint for iteration 2 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00002_2_batch_size=4,l1=4,l2=256,lr=0.0006_2025-10-03_22-26-04/checkpoint_000001

Trial train_cifar_f2b95_00002 completed after 2 iterations at 2025-10-03 22:27:09. Total running time: 1min 5s

Trial train_cifar_f2b95_00005 finished iteration 2 at 2025-10-03 22:27:09. Total running time: 1min 5s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00005 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000001 │
│ time_this_iter_s                                  28.50694 │
│ time_total_s                                       62.1754 │
│ training_iteration                                       2 │
│ accuracy                                            0.4021 │
│ loss                                               1.57171 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00005 saved a checkpoint for iteration 2 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00005_5_batch_size=4,l1=4,l2=8,lr=0.0008_2025-10-03_22-26-04/checkpoint_000001

Trial train_cifar_f2b95_00005 completed after 2 iterations at 2025-10-03 22:27:09. Total running time: 1min 5s

Trial train_cifar_f2b95_00004 finished iteration 1 at 2025-10-03 22:27:13. Total running time: 1min 9s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00004 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000000 │
│ time_this_iter_s                                   66.0309 │
│ time_total_s                                       66.0309 │
│ training_iteration                                       1 │
│ accuracy                                            0.0981 │
│ loss                                               2.32891 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00004 saved a checkpoint for iteration 1 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00004_4_batch_size=2,l1=128,l2=64,lr=0.0275_2025-10-03_22-26-04/checkpoint_000000

Trial train_cifar_f2b95_00004 completed after 1 iterations at 2025-10-03 22:27:13. Total running time: 1min 9s
(func pid=12596) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00004_4_batch_size=2,l1=128,l2=64,lr=0.0275_2025-10-03_22-26-04/checkpoint_000000) [repeated 3x across cluster]
(func pid=12592) [7,  2000] loss: 1.145 [repeated 2x across cluster]

Trial train_cifar_f2b95_00009 finished iteration 4 at 2025-10-03 22:27:14. Total running time: 1min 10s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000003 │
│ time_this_iter_s                                  15.57003 │
│ time_total_s                                      66.96025 │
│ training_iteration                                       4 │
│ accuracy                                            0.5614 │
│ loss                                               1.22414 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 4 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000003

Trial train_cifar_f2b95_00001 finished iteration 7 at 2025-10-03 22:27:16. Total running time: 1min 12s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000006 │
│ time_this_iter_s                                   9.23141 │
│ time_total_s                                      68.77305 │
│ training_iteration                                       7 │
│ accuracy                                             0.551 │
│ loss                                               1.34653 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 7 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000006
(func pid=12600) [5,  2000] loss: 1.201
(func pid=12592) [8,  2000] loss: 1.135

Trial train_cifar_f2b95_00001 finished iteration 8 at 2025-10-03 22:27:25. Total running time: 1min 21s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000007 │
│ time_this_iter_s                                   9.14384 │
│ time_total_s                                      77.91689 │
│ training_iteration                                       8 │
│ accuracy                                            0.5385 │
│ loss                                               1.37364 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 8 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000007
(func pid=12592) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000007) [repeated 3x across cluster]

Trial train_cifar_f2b95_00009 finished iteration 5 at 2025-10-03 22:27:30. Total running time: 1min 26s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000004 │
│ time_this_iter_s                                  16.20591 │
│ time_total_s                                      83.16616 │
│ training_iteration                                       5 │
│ accuracy                                            0.5578 │
│ loss                                                1.2334 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 5 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000004
(func pid=12600) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000004)
(func pid=12592) [9,  2000] loss: 1.111 [repeated 2x across cluster]

Trial status: 8 TERMINATED | 2 RUNNING
Current time: 2025-10-03 22:27:34. Total running time: 1min 30s
Logical resource usage: 4.0/256 CPUs, 0/8 GPUs (0.0/1.0 accelerator_type:H200)
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_f2b95_00001   RUNNING       128     32   0.010341                16        8            77.9169   1.37364       0.5385 │
│ train_cifar_f2b95_00009   RUNNING         8    256   0.00117126               8        5            83.1662   1.2334        0.5578 │
│ train_cifar_f2b95_00000   TERMINATED      2    256   0.000305994             16        1            14.0516   2.05941       0.2076 │
│ train_cifar_f2b95_00002   TERMINATED      4    256   0.000582548              4        2            61.5865   1.50182       0.4365 │
│ train_cifar_f2b95_00003   TERMINATED      8      2   0.03878                  4        1            32.9367   2.34242       0.0976 │
│ train_cifar_f2b95_00004   TERMINATED    128     64   0.0275418                2        1            66.0309   2.32891       0.0981 │
│ train_cifar_f2b95_00005   TERMINATED      4      8   0.000769138              4        2            62.1754   1.57171       0.4021 │
│ train_cifar_f2b95_00006   TERMINATED     64      8   0.00236933              16        2            23.5047   1.48145       0.4657 │
│ train_cifar_f2b95_00007   TERMINATED      8    128   0.00365739               2        1            57.4491   1.85399       0.2941 │
│ train_cifar_f2b95_00008   TERMINATED     16    128   0.000192995             16        1            13.7406   2.3008        0.1092 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00001 finished iteration 9 at 2025-10-03 22:27:35. Total running time: 1min 31s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000008 │
│ time_this_iter_s                                   9.95506 │
│ time_total_s                                      87.87195 │
│ training_iteration                                       9 │
│ accuracy                                            0.5591 │
│ loss                                               1.31092 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 9 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000008
(func pid=12592) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000008)
(func pid=12600) [6,  4000] loss: 0.585 [repeated 2x across cluster]

Trial train_cifar_f2b95_00001 finished iteration 10 at 2025-10-03 22:27:45. Total running time: 1min 41s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00001 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000009 │
│ time_this_iter_s                                    9.5594 │
│ time_total_s                                      97.43135 │
│ training_iteration                                      10 │
│ accuracy                                            0.5712 │
│ loss                                               1.29955 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00001 saved a checkpoint for iteration 10 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000009

Trial train_cifar_f2b95_00001 completed after 10 iterations at 2025-10-03 22:27:45. Total running time: 1min 41s
(func pid=12592) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00001_1_batch_size=16,l1=128,l2=32,lr=0.0103_2025-10-03_22-26-04/checkpoint_000009)

Trial train_cifar_f2b95_00009 finished iteration 6 at 2025-10-03 22:27:46. Total running time: 1min 42s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000005 │
│ time_this_iter_s                                  16.12082 │
│ time_total_s                                      99.28699 │
│ training_iteration                                       6 │
│ accuracy                                            0.5753 │
│ loss                                               1.20882 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 6 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000005
(func pid=12600) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000005)
(func pid=12600) [7,  2000] loss: 1.131 [repeated 2x across cluster]
(func pid=12600) [7,  4000] loss: 0.576

Trial train_cifar_f2b95_00009 finished iteration 7 at 2025-10-03 22:28:03. Total running time: 1min 58s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000006 │
│ time_this_iter_s                                  16.09692 │
│ time_total_s                                     115.38391 │
│ training_iteration                                       7 │
│ accuracy                                            0.5877 │
│ loss                                               1.15432 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 7 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000006
(func pid=12600) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000006)

Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-10-03 22:28:04. Total running time: 2min 0s
Logical resource usage: 2.0/256 CPUs, 0/8 GPUs (0.0/1.0 accelerator_type:H200)
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_f2b95_00009   RUNNING         8    256   0.00117126               8        7           115.384    1.15432       0.5877 │
│ train_cifar_f2b95_00000   TERMINATED      2    256   0.000305994             16        1            14.0516   2.05941       0.2076 │
│ train_cifar_f2b95_00001   TERMINATED    128     32   0.010341                16       10            97.4314   1.29955       0.5712 │
│ train_cifar_f2b95_00002   TERMINATED      4    256   0.000582548              4        2            61.5865   1.50182       0.4365 │
│ train_cifar_f2b95_00003   TERMINATED      8      2   0.03878                  4        1            32.9367   2.34242       0.0976 │
│ train_cifar_f2b95_00004   TERMINATED    128     64   0.0275418                2        1            66.0309   2.32891       0.0981 │
│ train_cifar_f2b95_00005   TERMINATED      4      8   0.000769138              4        2            62.1754   1.57171       0.4021 │
│ train_cifar_f2b95_00006   TERMINATED     64      8   0.00236933              16        2            23.5047   1.48145       0.4657 │
│ train_cifar_f2b95_00007   TERMINATED      8    128   0.00365739               2        1            57.4491   1.85399       0.2941 │
│ train_cifar_f2b95_00008   TERMINATED     16    128   0.000192995             16        1            13.7406   2.3008        0.1092 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(func pid=12600) [8,  2000] loss: 1.116
(func pid=12600) [8,  4000] loss: 0.563

Trial train_cifar_f2b95_00009 finished iteration 8 at 2025-10-03 22:28:19. Total running time: 2min 14s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000007 │
│ time_this_iter_s                                  16.00349 │
│ time_total_s                                     131.38739 │
│ training_iteration                                       8 │
│ accuracy                                            0.5892 │
│ loss                                                 1.172 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 8 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000007
(func pid=12600) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000007)
(func pid=12600) [9,  2000] loss: 1.089
(func pid=12600) [9,  4000] loss: 0.556

Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-10-03 22:28:34. Total running time: 2min 30s
Logical resource usage: 2.0/256 CPUs, 0/8 GPUs (0.0/1.0 accelerator_type:H200)
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_f2b95_00009   RUNNING         8    256   0.00117126               8        8           131.387    1.172         0.5892 │
│ train_cifar_f2b95_00000   TERMINATED      2    256   0.000305994             16        1            14.0516   2.05941       0.2076 │
│ train_cifar_f2b95_00001   TERMINATED    128     32   0.010341                16       10            97.4314   1.29955       0.5712 │
│ train_cifar_f2b95_00002   TERMINATED      4    256   0.000582548              4        2            61.5865   1.50182       0.4365 │
│ train_cifar_f2b95_00003   TERMINATED      8      2   0.03878                  4        1            32.9367   2.34242       0.0976 │
│ train_cifar_f2b95_00004   TERMINATED    128     64   0.0275418                2        1            66.0309   2.32891       0.0981 │
│ train_cifar_f2b95_00005   TERMINATED      4      8   0.000769138              4        2            62.1754   1.57171       0.4021 │
│ train_cifar_f2b95_00006   TERMINATED     64      8   0.00236933              16        2            23.5047   1.48145       0.4657 │
│ train_cifar_f2b95_00007   TERMINATED      8    128   0.00365739               2        1            57.4491   1.85399       0.2941 │
│ train_cifar_f2b95_00008   TERMINATED     16    128   0.000192995             16        1            13.7406   2.3008        0.1092 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Trial train_cifar_f2b95_00009 finished iteration 9 at 2025-10-03 22:28:34. Total running time: 2min 30s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000008 │
│ time_this_iter_s                                  15.62216 │
│ time_total_s                                     147.00955 │
│ training_iteration                                       9 │
│ accuracy                                            0.5913 │
│ loss                                               1.16831 │
╰────────────────────────────────────────────────────────────╯
(func pid=12600) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000008)
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 9 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000008
(func pid=12600) [10,  2000] loss: 1.090
(func pid=12600) [10,  4000] loss: 0.545

Trial train_cifar_f2b95_00009 finished iteration 10 at 2025-10-03 22:28:51. Total running time: 2min 47s
╭────────────────────────────────────────────────────────────╮
│ Trial train_cifar_f2b95_00009 result                       │
├────────────────────────────────────────────────────────────┤
│ checkpoint_dir_name                      checkpoint_000009 │
│ time_this_iter_s                                  16.46125 │
│ time_total_s                                     163.47079 │
│ training_iteration                                      10 │
│ accuracy                                            0.6025 │
│ loss                                                1.1439 │
╰────────────────────────────────────────────────────────────╯
Trial train_cifar_f2b95_00009 saved a checkpoint for iteration 10 at: (local)/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000009

Trial train_cifar_f2b95_00009 completed after 10 iterations at 2025-10-03 22:28:51. Total running time: 2min 47s

Trial status: 10 TERMINATED
Current time: 2025-10-03 22:28:51. Total running time: 2min 47s
Logical resource usage: 2.0/256 CPUs, 0/8 GPUs (0.0/1.0 accelerator_type:H200)
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ train_cifar_f2b95_00000   TERMINATED      2    256   0.000305994             16        1            14.0516   2.05941       0.2076 │
│ train_cifar_f2b95_00001   TERMINATED    128     32   0.010341                16       10            97.4314   1.29955       0.5712 │
│ train_cifar_f2b95_00002   TERMINATED      4    256   0.000582548              4        2            61.5865   1.50182       0.4365 │
│ train_cifar_f2b95_00003   TERMINATED      8      2   0.03878                  4        1            32.9367   2.34242       0.0976 │
│ train_cifar_f2b95_00004   TERMINATED    128     64   0.0275418                2        1            66.0309   2.32891       0.0981 │
│ train_cifar_f2b95_00005   TERMINATED      4      8   0.000769138              4        2            62.1754   1.57171       0.4021 │
│ train_cifar_f2b95_00006   TERMINATED     64      8   0.00236933              16        2            23.5047   1.48145       0.4657 │
│ train_cifar_f2b95_00007   TERMINATED      8    128   0.00365739               2        1            57.4491   1.85399       0.2941 │
│ train_cifar_f2b95_00008   TERMINATED     16    128   0.000192995             16        1            13.7406   2.3008        0.1092 │
│ train_cifar_f2b95_00009   TERMINATED      8    256   0.00117126               8       10           163.471    1.1439        0.6025 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

(func pid=12600) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/root/ray_results/train_cifar_2025-10-03_22-26-04/train_cifar_f2b95_00009_9_batch_size=8,l1=8,l2=256,lr=0.0012_2025-10-03_22-26-04/checkpoint_000009)
Best trial config: {'l1': 8, 'l2': 256, 'lr': 0.001171259491329369, 'batch_size': 8}
Best trial final validation loss: 1.143903388774395
Best trial final validation accuracy: 0.6025
Best trial test set accuracy: 0.5994
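The final "Best trial …" lines are printed after tuning finishes, by querying the tuning result for the best trial and re-evaluating its last checkpoint on the held-out test set. Below is a minimal sketch of that step, assuming result is the ExperimentAnalysis object returned by tune.run, and that the test_accuracy helper and the "net_state_dict" checkpoint key follow the training function defined earlier in this tutorial (both are assumptions here, not verbatim output of this page):

# Pick the trial with the lowest final validation loss.
best_trial = result.get_best_trial("loss", "min", "last")
print(f"Best trial config: {best_trial.config}")
print(f"Best trial final validation loss: {best_trial.last_result['loss']}")
print(f"Best trial final validation accuracy: {best_trial.last_result['accuracy']}")

# Rebuild the network with the best trial's layer sizes.
best_trained_model = Net(best_trial.config["l1"], best_trial.config["l2"])
device = "cuda:0" if torch.cuda.is_available() else "cpu"
best_trained_model.to(device)

# Restore the weights saved in the best trial's best checkpoint.
best_checkpoint = result.get_best_checkpoint(trial=best_trial, metric="accuracy", mode="max")
with best_checkpoint.as_directory() as checkpoint_dir:
    data_path = Path(checkpoint_dir) / "data.pkl"
    with open(data_path, "rb") as fp:
        best_checkpoint_data = pickle.load(fp)
    best_trained_model.load_state_dict(best_checkpoint_data["net_state_dict"])

# Evaluate on the test split (test_accuracy is assumed to be defined earlier).
test_acc = test_accuracy(best_trained_model, device)
print(f"Best trial test set accuracy: {test_acc}")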

If you run the code yourself, an example output could look like this:

Number of trials: 10/10 (10 TERMINATED)
+-----+--------------+------+------+-------------+--------+---------+------------+
| ... |   batch_size |   l1 |   l2 |          lr |   iter |    loss |   accuracy |
|-----+--------------+------+------+-------------+--------+---------+------------|
| ... |            2 |    1 |  256 | 0.000668163 |      1 | 2.31479 |     0.0977 |
| ... |            4 |   64 |    8 | 0.0331514   |      1 | 2.31605 |     0.0983 |
| ... |            4 |    2 |    1 | 0.000150295 |      1 | 2.30755 |     0.1023 |
| ... |           16 |   32 |   32 | 0.0128248   |     10 | 1.66912 |     0.4391 |
| ... |            4 |    8 |  128 | 0.00464561  |      2 | 1.7316  |     0.3463 |
| ... |            8 |  256 |    8 | 0.00031556  |      1 | 2.19409 |     0.1736 |
| ... |            4 |   16 |  256 | 0.00574329  |      2 | 1.85679 |     0.3368 |
| ... |            8 |    2 |    2 | 0.00325652  |      1 | 2.30272 |     0.0984 |
| ... |            2 |    2 |    2 | 0.000342987 |      2 | 1.76044 |     0.292  |
| ... |            4 |   64 |   32 | 0.003734    |      8 | 1.53101 |     0.4761 |
+-----+--------------+------+------+-------------+--------+---------+------------+

Best trial config: {'l1': 64, 'l2': 32, 'lr': 0.0037339984519545164, 'batch_size': 4}
Best trial final validation loss: 1.5310075663924216
Best trial final validation accuracy: 0.4761
Best trial test set accuracy: 0.4737

Most trials have been stopped early in order to avoid wasting resources. The best performing trial achieved a validation accuracy of about 47%, which could be confirmed on the test set.
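The early termination is the work of the ASHAScheduler imported at the top of this tutorial: it compares trials against each other at fixed rungs and stops the ones whose metric lags behind their peers. A minimal sketch of a typical configuration follows; the specific values are illustrative assumptions, not necessarily the exact ones used for the output above:

# ASHA aggressively stops underperforming trials at successive rungs.
scheduler = ASHAScheduler(
    metric="loss",
    mode="min",
    max_t=10,            # upper bound on training iterations (epochs) per trial
    grace_period=1,      # every trial runs at least one iteration before it can be stopped
    reduction_factor=2,  # at each rung, roughly the better half of trials survives
)

The scheduler is then handed to tune.run via its scheduler argument, alongside the search space config.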

So that's it! You can now tune the parameters of your PyTorch models.

Total running time of the script: (3 minutes 16.218 seconds)