Once I started training on image data, I realized just how much hardware specs matter. When I bought my laptop I never expected that practicing the basics of development would call for a high-spec machine, so I compromised on a Ryzen 4650U (Renoir) CPU and only 8 GB of RAM, figuring I could upgrade later if I ever needed to. Now that I'm doing machine learning, I keep wishing I had a graphics card and more RAM.
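As a quick sanity check (a minimal sketch I'm adding here, not part of the original notes), TensorFlow can report which devices it actually sees; on a machine without a discrete GPU the GPU list simply comes back empty and training runs on the CPU.
import tensorflow as tf

# List the devices TensorFlow can use; an empty GPU list means training falls back to the CPU.
print(tf.config.list_physical_devices('GPU'))
print(tf.config.list_physical_devices('CPU'))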
# CIFAR-10 image classification example using a CNN
from tensorflow.keras import datasets
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint , EarlyStopping
from tensorflow import keras
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
seed= 0
np.random.seed(seed)
tf.random.set_seed(seed)
cifar_mnist = datasets.cifar10
(X_train, Y_train), (X_test, Y_test) = cifar_mnist.load_data()
class_names = ['Airplane', 'Car', 'Bird', 'Cat', 'Deer', 'Dog', 'Frog', 'Horse', 'Ship', 'Truck']
print(X_train.shape)  # (50000, 32, 32, 3)
print(Y_train)
print(X_test.shape)   # (10000, 32, 32, 3)
print(len(Y_test))
# First image in the training set: check that the pixel values fall in the 0~255 range
plt.figure()
plt.imshow(X_train[0])
plt.colorbar()
plt.grid(False)
plt.show()
# Scale the pixel values to the 0~1 range
X_train=X_train.reshape(X_train.shape[0],32,32,3).astype('float32')/255
X_test=X_test.reshape(X_test.shape[0],32,32,3).astype('float32')/255
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(X_train[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[int(Y_train[i])])
plt.show()
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
model=Sequential()
model.add(Conv2D(32,kernel_size=(3,3),input_shape=(32,32,3),activation='relu'))
model.add(Conv2D(64,(3,3),activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10,activation='softmax'))
# sparse_categorical_crossentropy: the labels are plain integer class indices (no one-hot encoding needed); the softmax output already sums to 1
model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
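# (Quick check, not in the original notebook) sparse_categorical_crossentropy works directly on
# integer class indices: Y_train here has shape (50000, 1) and holds values 0-9, so no
# to_categorical() conversion is needed before calling fit().
print(Y_train.shape, Y_train[:3].ravel())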
MODEL_DIR = './model/'
if not os.path.exists(MODEL_DIR):
    os.mkdir(MODEL_DIR)
modelpath = "./model/{epoch:02d}-{val_loss:.4f}.hdf5"
Checkpointer= ModelCheckpoint(filepath=modelpath, monitor='val_loss', verbose=1,save_best_only=True)
early_stopping_callback=EarlyStopping(monitor='val_loss',patience=10)
history=model.fit(X_train, Y_train , validation_data=(X_test,Y_test), epochs=30,
batch_size=200,verbose=0, callbacks=[early_stopping_callback,Checkpointer])
print('\n Test Accuracy : %.4f'%(model.evaluate(X_test, Y_test)[1]))
predictions=model.predict(X_test)
# Output:
# Epoch 00029: val_loss did not improve from 0.85649
# 313/313 [==============================] - 9s 27ms/step - loss: 0.9212 - accuracy: 0.7058
# Test Accuracy : 0.7058
# Each prediction is an array of 10 numbers: the model's confidence for each of the 10 classes
print(predictions[0])
np.argmax(predictions[0])
print(Y_test[0])
y_vloss=history.history['val_loss']
y_loss=history.history['loss']
x_len=np.arange(len(y_loss))
plt.plot(x_len, y_vloss,marker='.',c='red',label='Testset_loss')
plt.plot(x_len,y_loss,'o',c='blue',label='Trainset_loss')
plt.legend(loc='upper right')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
# CIFAR-10 image classification example using a CNN (second version: one-hot labels and categorical_crossentropy)
from tensorflow.keras import datasets
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint , EarlyStopping
from tensorflow import keras
from tensorflow.keras import utils
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
seed= 0
np.random.seed(seed)
tf.random.set_seed(seed)
cifar_mnist=datasets.cifar10
(train_images,train_labels),(test_images,test_labels) = cifar_mnist.load_data()
class_names = ['Airplane', 'Car', 'Bird', 'Cat', 'Deer', 'Dog', 'Frog', 'Horse', 'Ship', 'Truck']
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[int(train_labels[i])])
plt.show()
num_classes = 10
train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255
# One-hot encode the labels to match categorical_crossentropy
train_labels = utils.to_categorical(train_labels, num_classes)
test_labels = utils.to_categorical(test_labels, num_classes)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
model = keras.Sequential([Conv2D(32,kernel_size=(3,3), padding='same',input_shape=train_images.shape[1:],
activation=tf.nn.relu),
MaxPooling2D(pool_size=(2,2)),
Dropout(0.25),
Conv2D(64,kernel_size=(3,3),padding='same',activation=tf.nn.relu),
MaxPooling2D(pool_size=(2,2)),
Dropout(0.25),
Flatten(),
Dense(64,activation=tf.nn.relu),
Dropout(0.25),
Dense(num_classes, activation=tf.nn.softmax)])
model.summary()
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
epochs = 100
batch_size = 64
early_stopping = EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(train_images, train_labels, epochs=epochs, batch_size=batch_size,
                    validation_data=(test_images, test_labels),
                    shuffle=True, callbacks=[early_stopping])
loss,acc=model.evaluate(test_images,test_labels)
print('\n Loss:{},Acc:{}'.format(loss,acc))
def plt_show_loss(history):
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model loss')
    plt.xlabel('Epoch')
    plt.ylabel('loss')
    plt.legend(['Train', 'Test'], loc=0)

def plt_show_acc(history):
    plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
    plt.title('Model accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('accuracy')
    plt.legend(['Train', 'Test'], loc=0)
plt_show_loss(history)
plt.show()
plt_show_acc(history)
plt.show()
# Prediction
predictions=model.predict(test_images)
# Plot the predictions for all 10 classes as a bar chart alongside each image
def plot_image(i, predictions_array, true_label, img):
    predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img, cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == np.argmax(true_label):
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel('{} {:2.0f}% ({})'.format(class_names[predicted_label], 100 * np.max(predictions_array),
                                         class_names[np.argmax(true_label)]), color=color)
# predictions_array: 10 prediction scores / true_label: one-hot encoded true class
def plot_value_array(i, predictions_array, true_label):
    predictions_array, true_label = predictions_array[i], true_label[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[np.argmax(true_label)].set_color('blue')  # true_label is one-hot, so take argmax to get the index
num_rows=5
num_cols=3
num_images=num_rows*num_cols
plt.figure(figsize=(2*2*num_cols,2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2 * num_cols, 2 * i + 1)
    plot_image(i, predictions, test_labels, test_images)
    plt.subplot(num_rows, 2 * num_cols, 2 * i + 2)
    plot_value_array(i, predictions, test_labels)
plt.show()
# Data augmentation: generate many variants of a single image (rotation, shifts, zoom, flips, etc.)
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img,img_to_array,load_img
datagen= ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
img = load_img('/content/drive/MyDrive/Colab Notebooks/MLdata/cat/cat.1.jpg')
x = img_to_array(img)  # numpy array of shape (height, width, 3)
# x.reshape((1,) + x.shape) adds the batch dimension that flow() expects;
# the model's input_shape itself never includes the batch dimension
x = x.reshape((1,) + x.shape)
i=0
for batch in datagen.flow(x, batch_size=1,
                          save_to_dir='/content/drive/MyDrive/Colab Notebooks/MLdata/preview',
                          save_prefix='cat', save_format='jpeg'):
    i += 1
    if i > 24:
        break
# Classifying dogs and cats
import os
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.preprocessing import image
# from tensorflow import keras
# import numpy as np
rootPath='/content/drive/MyDrive/Colab Notebooks/MLdata'
imageGenerator = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
width_shift_range=0.1,
height_shift_range=0.1,
brightness_range=[.2, .2],
horizontal_flip=True,
validation_split=0.2,
fill_mode='nearest')
trainGen=imageGenerator.flow_from_directory(
os.path.join(rootPath,'train'),
target_size=(128,128),
class_mode='binary',
subset='training')
validationGen=imageGenerator.flow_from_directory(
os.path.join(rootPath,'train'),
target_size=(128,128),
class_mode='binary',
subset='validation')
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
import tensorflow
model=Sequential()
model.add(layers.Conv2D(32,(3,3),activation='relu',
input_shape=(128,128,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(128,(3,3),activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(128,(3,3),activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Flatten())
model.add(layers.Dense(512,activation='relu'))
model.add(layers.Dense(1,activation='sigmoid'))
model.summary()
model.compile(optimizer=tensorflow.keras.optimizers.RMSprop(learning_rate=0.0001),
              loss='binary_crossentropy', metrics=['accuracy'])
MODEL_DIR = './model/'
if not os.path.exists(MODEL_DIR):
    os.mkdir(MODEL_DIR)
modelpath = './model/{epoch:02d}-{val_loss:.4f}-{val_accuracy:.4f}.hdf5'
Checkpointer= ModelCheckpoint(filepath=modelpath,monitor='val_loss',verbose=1,save_best_only=False)
early_stopping_callback = EarlyStopping(patience=5)
epochs=20
history = model.fit(trainGen, epochs=epochs, steps_per_epoch=(trainGen.samples * 4) // epochs,
                    validation_data=validationGen,
                    validation_steps=(validationGen.samples * 4) // epochs,
                    callbacks=[early_stopping_callback, Checkpointer])
accuracy = history.history['accuracy']
val_accuracy=history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(1,len(loss)+1)
plt.figure(figsize=(16,1))
plt.subplot(121)
plt.subplots_adjust(top=2)
plt.plot(epochs,accuracy,'ro',label='Training accuracy')
plt.plot(epochs,val_accuracy,'r',label='Validation accuracy')
plt.title('Training and validation accuracy and loss')
plt.xlabel('Epochs')
plt.ylabel('Accuracy and Loss')
plt.legend(loc='upper center',bbox_to_anchor=(0.5,-0.1),
fancybox=True,shadow=True,ncol=5)
plt.subplot(122)
plt.plot(epochs,loss,'bo',label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='upper center',bbox_to_anchor=(0.5,-0.1),
fancybox=True,shadow=True,ncol=5)
plt.show()
from tensorflow.keras.preprocessing.image import array_to_img
import numpy as np
import matplotlib.pyplot as plt
'''
classes: optional list of class subdirectories, e.g. ['dogs', 'cats'].
Default: None. If no value is given, the list of classes is inferred automatically from the
names/structure of the subdirectories under `directory`, with each subdirectory treated as a
different class (the order of the classes, which maps to the label indices, is alphabetical).
The mapping from class names to class indices can be obtained via the class_indices attribute.
'''
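As a quick check (added here, not part of the original post), the mapping the generator actually inferred from the folder names can be printed from the trainGen defined above:
# Inspect the directory-name -> label-index mapping inferred by flow_from_directory
print(trainGen.class_indices)  # expected to look like {'cat': 0, 'dog': 1}, depending on the folder names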
cls_index = ['cat', 'dog']  # with alphabetical class ordering: cat = 0, dog = 1
n = 20
test_num = [0] * n
for i in range(10):
    f_name = 'cat.' + str(i + 1)
    file_path = '/content/drive/MyDrive/Colab Notebooks/MLdata/train/cat/' + f_name + '.jpg'
    print(file_path)
    test_num[i] = image.load_img(file_path, target_size=(128, 128))
for i in range(10, 20):
    f_name = 'dog.' + str(i - 9)
    file_path = '/content/drive/MyDrive/Colab Notebooks/MLdata/train/dog/' + f_name + '.jpg'
    print(file_path)
    test_num[i] = image.load_img(file_path, target_size=(128, 128))
# Try predictions on a few images
for i in range(20):
    plt.figure()
    plt.imshow(test_num[i])
    test_num[i] = image.img_to_array(test_num[i]) / 255.  # rescale to match the generator's rescale=1./255
    test_num[i] = np.expand_dims(test_num[i], axis=0)
    # predict_classes() is deprecated/removed in newer TF versions; threshold the sigmoid output instead
    result = int(model.predict(test_num[i])[0][0] > 0.5)
    if result == 0:
        strName = 'cat'
    else:
        strName = 'dog'
    plt.xlabel(strName)
    if i < 10:
        if result == 0:
            print('cat : predicted = cat')
        else:
            print('cat : predicted = dog')
    else:
        if result == 0:
            print('dog : predicted = cat')
        else:
            print('dog : predicted = dog')