
classification_nn

February 16, 2024

1 ECE 176 Assignment 4: Classification using Neural Network

Now that you have developed and tested your model on the toy dataset, it's time to get your hands dirty with a standard dataset such as CIFAR-10. You will now use the provided training data to tune the hyper-parameters of your network so that it works on CIFAR-10 for the task of multi-class classification.

Important: Recall that we now have non-linear decision boundaries, so we do not need to do one-vs-all classification. We instead learn a single non-linear decision boundary; our non-linear boundaries (thanks to the ReLU non-linearity) take care of differentiating between all the classes.

TO SUBMIT: PDF of this notebook with all the required outputs and answers.

[2]: # Prepare packages
     import numpy as np
     import matplotlib.pyplot as plt

     from utils.data_processing import get_cifar10_data
     from utils.evaluation import get_classification_accuracy

     %matplotlib inline
     plt.rcParams["figure.figsize"] = (10.0, 8.0)  # set default size of plots

     # For auto-reloading external modules
     # See http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
     %load_ext autoreload
     %autoreload 2

     # Use a subset of CIFAR10 for the assignment
     dataset = get_cifar10_data(
         subset_train=5000,
         subset_val=250,
         subset_test=500,
     )

     print(dataset.keys())
     print("Training Set Data Shape: ", dataset["x_train"].shape)
     print("Training Set Label Shape: ", dataset["y_train"].shape)
     print("Validation Set Data Shape: ", dataset["x_val"].shape)
     print("Validation Set Label Shape: ", dataset["y_val"].shape)
     print("Test Set Data Shape: ", dataset["x_test"].shape)
     print("Test Set Label Shape: ", dataset["y_test"].shape)

dict_keys(['x_train', 'y_train', 'x_val', 'y_val', 'x_test', 'y_test'])
Training Set Data Shape: (5000, 3072)
Training Set Label Shape: (5000,)
Validation Set Data Shape: (250, 3072)
Validation Set Label Shape: (250,)
Test Set Data Shape: (500, 3072)
Test Set Label Shape: (500,)
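Each row of x_train is a single image flattened into a vector: 3 color channels x 32 x 32 pixels = 3072 values. As a quick sanity check (an illustrative snippet, not part of the assignment; it reuses the dataset dict from the cell above), one row can be reshaped back into an image:

    assert 3 * 32 * 32 == 3072
    img = dataset["x_train"][0].reshape(3, 32, 32).transpose(1, 2, 0)  # CHW -> HWC
    print(img.shape)  # (32, 32, 3), the layout plt.imshow expects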
print ( "Training Set Label Shape: " , dataset[ "y_train" ] . shape) print ( "Validation Set Data Shape: " , dataset[ "x_val" ] . shape) print ( "Validation Set Label Shape: " , dataset[ "y_val" ] . shape) print ( "Test Set Data Shape: " , dataset[ "x_test" ] . shape) print ( "Test Set Label Shape: " , dataset[ "y_test" ] . shape) dict_keys(['x_train', 'y_train', 'x_val', 'y_val', 'x_test', 'y_test']) Training Set Data Shape: (5000, 3072) Training Set Label Shape: (5000,) Validation Set Data Shape: (250, 3072) Validation Set Label Shape: (250,) Test Set Data Shape: (500, 3072) Test Set Label Shape: (500,) [3]: x_train = dataset[ "x_train" ] y_train = dataset[ "y_train" ] x_val = dataset[ "x_val" ] y_val = dataset[ "y_val" ] x_test = dataset[ "x_test" ] y_test = dataset[ "y_test" ] [4]: # Import more utilies and the layers you have implemented from layers.sequential import Sequential from layers.linear import Linear from layers.relu import ReLU from layers.softmax import Softmax from layers.loss_func import CrossEntropyLoss from utils.optimizer import SGD from utils.dataset import DataLoader from utils.trainer import Trainer 1.1 Visualize some examples from the dataset. [5]: # We show a few examples of training images from each class. classes = [ "airplane" , "automobile" , "bird" , "cat" , "deer" , "dog" , "frog" , "horse" , "ship" , ] samples_per_class = 7 2
[5]: # We show a few examples of training images from each class.
     classes = [
         "airplane",
         "automobile",
         "bird",
         "cat",
         "deer",
         "dog",
         "frog",
         "horse",
         "ship",
         "truck",
     ]
     samples_per_class = 7

     def visualize_data(data, labels, classes, samples_per_class):
         num_classes = len(classes)
         for y, cls in enumerate(classes):
             # Pick random training examples belonging to class y
             idxs = np.flatnonzero(labels == y)
             idxs = np.random.choice(idxs, samples_per_class, replace=False)
             for i, idx in enumerate(idxs):
                 # Grid has samples_per_class rows and num_classes columns;
                 # this index puts sample i of class y in row i, column y
                 plt_idx = i * num_classes + y + 1
                 plt.subplot(samples_per_class, num_classes, plt_idx)
                 plt.imshow(data[idx])
                 plt.axis("off")
                 if i == 0:
                     plt.title(cls)
         plt.show()

     # Visualize the 10 classes (reshape flat rows back into 32x32 RGB images)
     visualize_data(
         x_train.reshape(5000, 3, 32, 32).transpose(0, 2, 3, 1),
         y_train,
         classes,
         samples_per_class,
     )
1.2 Initialize the model

[6]: input_size = 3072
     hidden_size = 100  # Hidden layer size (hyper-parameter)
     num_classes = 10   # Output size (one score per CIFAR-10 class)

     # For the default setting we use the same model we used for the toy dataset.
     # This shows the power of a 2-layer neural network: by the Universal
     # Approximation Theorem, a 2-layer network with a non-linearity can
     # approximate any continuous function, given a large enough hidden layer.

     def init_model():
         # np.random.seed(0)  # No need to fix the seed here
         l1 = Linear(input_size, hidden_size)
         l2 = Linear(hidden_size, num_classes)
         r1 = ReLU()
         softmax = Softmax()
         return Sequential([l1, r1, l2, softmax])
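For intuition, the forward pass of this Sequential model computes h = ReLU(x W1 + b1), then scores = h W2 + b2, then a softmax over the 10 scores. A minimal NumPy sketch of the same math (illustrative variable names only; the Linear, ReLU, and Softmax layers above implement this, plus the backward passes):

    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 3072))            # a batch of 4 flattened images
    W1 = rng.standard_normal((3072, 100)) * 0.01  # first linear layer weights
    b1 = np.zeros(100)
    W2 = rng.standard_normal((100, 10)) * 0.01    # second linear layer weights
    b2 = np.zeros(10)

    h = np.maximum(0, x @ W1 + b1)                # ReLU non-linearity
    scores = h @ W2 + b2                          # raw class scores (logits)
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    probs = exp / exp.sum(axis=1, keepdims=True)  # each row sums to 1
    print(probs.shape)                            # (4, 10)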
[7]: # Initialize the dataset with the dataloader class
     dataset = DataLoader(x_train, y_train, x_val, y_val, x_test, y_test)
     net = init_model()
     optim = SGD(net, lr=0.01, weight_decay=0.01)
     loss_func = CrossEntropyLoss()
     epoch = 200       # (Hyper-parameter)
     batch_size = 200  # (Reduce the batch size if your computer is unable to handle it)

[8]: # Initialize the trainer class by passing the above modules
     trainer = Trainer(
         dataset, optim, net, loss_func, epoch, batch_size, validate_interval=3
     )
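Assuming the provided SGD implements standard L2 weight decay, each update is w <- w - lr * (grad + weight_decay * w), and the cross-entropy loss for one example with correct class y is -log p_y. A standalone sketch under those assumptions (hypothetical helper names, not the utils.optimizer or layers.loss_func API):

    def sgd_step(w, grad, lr=0.01, weight_decay=0.01):
        # Gradient step with L2 weight decay folded into the gradient
        return w - lr * (grad + weight_decay * w)

    def cross_entropy(probs, y):
        # Mean negative log-probability assigned to the correct classes
        return -np.log(probs[np.arange(len(y)), y]).mean()

    # An untrained 10-class model predicts roughly uniform probabilities,
    # so the starting loss should be about ln(10)
    probs = np.full((4, 10), 0.1)
    y = np.array([0, 1, 2, 3])
    print(cross_entropy(probs, y), np.log(10))  # both ~2.3026

This matches the first logged loss below (2.302534), and chance-level accuracy for 10 classes is 0.10, which is roughly where the validation accuracy starts.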
[9]: # Call the trainer function we have already implemented for you. This trains
     # the model for the given hyper-parameters, following the same procedure as
     # in the last IPython notebook you used for the toy dataset.
     train_error, validation_accuracy = trainer.train()

Epoch Average Loss: 2.302534
Validate Acc: 0.084
Epoch Average Loss: 2.302361
Epoch Average Loss: 2.302147
Epoch Average Loss: 2.301862
Validate Acc: 0.108
Epoch Average Loss: 2.301437
Epoch Average Loss: 2.300839
Epoch Average Loss: 2.299990
Validate Acc: 0.096
Epoch Average Loss: 2.298824
Epoch Average Loss: 2.297339
Epoch Average Loss: 2.295516
Validate Acc: 0.084
Epoch Average Loss: 2.293398
Epoch Average Loss: 2.290907
Epoch Average Loss: 2.287820
Validate Acc: 0.084
Epoch Average Loss: 2.284062
Epoch Average Loss: 2.278987
Epoch Average Loss: 2.272846
Validate Acc: 0.096
Epoch Average Loss: 2.265936
Epoch Average Loss: 2.258480
Epoch Average Loss: 2.250872
Validate Acc: 0.100
Epoch Average Loss: 2.243156
Epoch Average Loss: 2.235648
Epoch Average Loss: 2.228603
Validate Acc: 0.124
Epoch Average Loss: 2.221921
Epoch Average Loss: 2.215911
Epoch Average Loss: 2.210334
Validate Acc: 0.124
Epoch Average Loss: 2.204742
Epoch Average Loss: 2.200037
Epoch Average Loss: 2.195513
Validate Acc: 0.128
Epoch Average Loss: 2.191241
Epoch Average Loss: 2.187065
Epoch Average Loss: 2.183359
Validate Acc: 0.132
Epoch Average Loss: 2.179895
Epoch Average Loss: 2.176374
Epoch Average Loss: 2.173382
Validate Acc: 0.144
Epoch Average Loss: 2.170088
Epoch Average Loss: 2.167192
Epoch Average Loss: 2.164379
Validate Acc: 0.140
Epoch Average Loss: 2.161388
Epoch Average Loss: 2.159176
Epoch Average Loss: 2.156676
Validate Acc: 0.144
Epoch Average Loss: 2.154465
Epoch Average Loss: 2.152342
Epoch Average Loss: 2.149955
Validate Acc: 0.144
Epoch Average Loss: 2.148316
Epoch Average Loss: 2.145852
Epoch Average Loss: 2.144280
Validate Acc: 0.148
Epoch Average Loss: 2.142039
Epoch Average Loss: 2.140363
Epoch Average Loss: 2.138541
Validate Acc: 0.152
Epoch Average Loss: 2.136884
Epoch Average Loss: 2.135169
Epoch Average Loss: 2.133874
Validate Acc: 0.148
Epoch Average Loss: 2.132104
Epoch Average Loss: 2.130449
Epoch Average Loss: 2.129021
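To see the trend at a glance, the returned train_error and validation_accuracy can be plotted. This sketch assumes trainer.train() returns them as a per-epoch sequence and a per-validation-interval sequence respectively, which the log format suggests:

    plt.figure(figsize=(10, 4))
    plt.subplot(1, 2, 1)
    plt.plot(train_error)
    plt.xlabel("epoch")
    plt.ylabel("average training loss")
    plt.subplot(1, 2, 2)
    plt.plot(validation_accuracy)
    plt.xlabel("validation step (every 3 epochs)")
    plt.ylabel("validation accuracy")
    plt.show()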