Keras + TensorFlow weird results: I have built the following sequential model using the Pima Indians diabetes dataset:

import matplotlib.pyplot as plt 
import numpy 
from keras import callbacks 
from keras import optimizers 
from keras.layers import Dense 
from keras.models import Sequential 
from keras.callbacks import ModelCheckpoint 
from sklearn.preprocessing import StandardScaler 

#TensorBoard callback for visualization of training history 
tb = callbacks.TensorBoard(log_dir='./logs/latest', histogram_freq=10, batch_size=32, 
          write_graph=True, write_grads=True, write_images=False, 
          embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None) 


# Early stopping - Stop training before overfitting 
early_stop = callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto') 

# fix random seed for reproducibility 
seed = 42 
numpy.random.seed(seed) 
# load pima indians dataset 
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",") 
# split into input (X) and output (Y) variables 
X = dataset[:, 0:8] 
Y = dataset[:, 8] 

# Standardize features by removing the mean and scaling to unit variance 
scaler = StandardScaler() 
X = scaler.fit_transform(X) 


#ADAM Optimizer with learning rate decay 
opt = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0001) 

## Create our model 
model = Sequential() 

model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu')) 
model.add(Dense(8, kernel_initializer='uniform', activation='relu')) 
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid')) 

# Compile the model using binary crossentropy since we are predicting 0/1 
model.compile(loss='binary_crossentropy', 
       optimizer=opt, 
       metrics=['accuracy']) 

# checkpoint 
filepath="./checkpoints/weights.best.hdf5" 
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max') 

# Fit the model 
history = model.fit(X, Y, validation_split=0.33, epochs=10000, batch_size=10, verbose=0, callbacks=[tb,early_stop,checkpoint]) 
# list all data in history 
print(history.history.keys()) 
# summarize history for accuracy 
plt.plot(history.history['acc']) 
plt.plot(history.history['val_acc']) 
plt.title('model accuracy') 
plt.ylabel('accuracy') 
plt.xlabel('epoch') 
plt.legend(['train', 'test'], loc='upper left') 
plt.show() 
# summarize history for loss 
plt.plot(history.history['loss']) 
plt.plot(history.history['val_loss']) 
plt.title('model loss') 
plt.ylabel('loss') 
plt.xlabel('epoch') 
plt.legend(['train', 'test'], loc='upper left') 
plt.show() 

I have added early stopping, checkpoint, and TensorBoard callbacks, and got the following results:

Epoch 00000: val_acc improved from -inf to 0.67323, saving model to ./checkpoints/weights.best.hdf5 
Epoch 00001: val_acc did not improve 
... 
Epoch 00024: val_acc improved from 0.67323 to 0.67323, saving model to ./checkpoints/weights.best.hdf5 
... 
Epoch 00036: val_acc improved from 0.76378 to 0.76378, saving model to ./checkpoints/weights.best.hdf5 
... 
Epoch 00044: val_acc improved from 0.79921 to 0.80709, saving model to ./checkpoints/weights.best.hdf5 
... 
Epoch 00050: val_acc improved from 0.80709 to 0.80709, saving model to ./checkpoints/weights.best.hdf5 
... 
Epoch 00053: val_acc improved from 0.80709 to 0.81102, saving model to ./checkpoints/weights.best.hdf5 
... 
Epoch 00257: val_acc improved from 0.81102 to 0.81102, saving model to ./checkpoints/weights.best.hdf5 
... 
Epoch 00297: val_acc improved from 0.81102 to 0.81496, saving model to ./checkpoints/weights.best.hdf5 
Epoch 00298: val_acc did not improve 
Epoch 00299: val_acc did not improve 
Epoch 00300: val_acc did not improve 
Epoch 00301: val_acc did not improve 
Epoch 00302: val_acc did not improve 
Epoch 00302: early stopping 

So according to the log, my model's validation accuracy is 0.81496. The strange thing is that the validation accuracy is higher than the training accuracy (0.81 vs. 0.76), and the validation loss is lower than the training loss (0.41 vs. 0.47).

[Plots of model accuracy and model loss over epochs]

Question: What am I missing? What do I need to change in my code to fix this problem?

I would try shuffling the dataset to see if that fixes the problem. –

Is there a way to do that? –

Hm, [*if the `shuffle` argument in `model.fit` is set to `True` (which is the default), the training data will be randomly shuffled at each epoch*](https://keras.io/getting-started/faq/#how-is-the-data-shuffled-during-training). –
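
Note that `shuffle=True` in `model.fit` only reshuffles the *training* portion at each epoch; per the Keras docs, `validation_split` slices the validation set off the *end* of the arrays before any shuffling happens. Here is a minimal sketch of that internal behavior plus a quick diagnostic, reusing `X` and `Y` from the code above (the variable names below are illustrative, not Keras internals):

# Keras takes the validation set from the LAST rows of X and Y,
# before shuffle=True ever runs. If the rows are not randomly
# ordered, train and validation can have different distributions.
n_val = int(len(X) * 0.33)   # roughly what validation_split=0.33 slices off
X_train, Y_train = X[:-n_val], Y[:-n_val]
X_val, Y_val = X[-n_val:], Y[-n_val:]

# Quick check: compare the positive-class rate in each slice.
print("train positive rate:", Y_train.mean())
print("val positive rate:  ", Y_val.mean())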

Answer

If you shuffle the data, the problem is solved.

[Plots of model accuracy and model loss after shuffling]

import matplotlib.pyplot as plt 
import numpy 
from keras import callbacks 
from keras import optimizers 
from keras.layers import Dense 
from keras.models import Sequential 
from keras.callbacks import ModelCheckpoint 
from sklearn.preprocessing import StandardScaler 
from sklearn.utils import shuffle 

# TensorBoard callback for visualization of training history 
tb = callbacks.TensorBoard(log_dir='./logs/4', histogram_freq=10, batch_size=32, 
          write_graph=True, write_grads=True, write_images=False, 
          embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None) 


# Early stopping - Stop training before overfitting 
early_stop = callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto') 

# fix random seed for reproducibility 
seed = 42 
numpy.random.seed(seed) 
# load pima indians dataset 
dataset = numpy.loadtxt("../Downloads/pima-indians-diabetes.csv", delimiter=",") 
# split into input (X) and output (Y) variables 
X = dataset[:, 0:8] 
Y = dataset[:, 8] 

# Standardize features by removing the mean and scaling to unit variance 
scaler = StandardScaler() 
X = scaler.fit_transform(X) 

# This is the important part 
X, Y = shuffle(X, Y) 

#ADAM Optimizer with learning rate decay 
opt = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0001) 

## Create our model 
model = Sequential() 

model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu')) 
model.add(Dense(8, kernel_initializer='uniform', activation='relu')) 
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid')) 

# Compile the model using binary crossentropy since we are predicting 0/1 
model.compile(loss='binary_crossentropy', 
       optimizer=opt, 
       metrics=['accuracy']) 

# checkpoint 
# filepath="./checkpoints/weights.best.hdf5" 
# checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max') 

# Fit the model 
history = model.fit(X, Y, validation_split=0.33, epochs=1000, batch_size=10, verbose=0, callbacks=[tb,early_stop]) 
# list all data in history 
print(history.history.keys()) 
# summarize history for accuracy 
plt.plot(history.history['acc']) 
plt.plot(history.history['val_acc']) 
plt.title('model accuracy') 
plt.ylabel('accuracy') 
plt.xlabel('epoch') 
plt.legend(['train', 'test'], loc='upper left') 
plt.show() 
# summarize history for loss 
plt.plot(history.history['loss']) 
plt.plot(history.history['val_loss']) 
plt.title('model loss') 
plt.ylabel('loss') 
plt.xlabel('epoch') 
plt.legend(['train', 'test'], loc='upper left') 
plt.show() 
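
As a side note (not part of the original answer): `sklearn.utils.shuffle` accepts a `random_state` parameter, so the shuffle, and with it the train/validation split, can be made reproducible by reusing the seed already defined in the script:

X, Y = shuffle(X, Y, random_state=seed)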

My bad, I was doing it wrong. –
