Explaining what 1D convnets are and how to apply them to the 1D case. A Conv1D layer extracts local patches (subsequences) from a sequence and applies the same transformation to every patch, so the patterns it learns are translation-invariant in time, and it is much cheaper to compute than an RNN.
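As a minimal sketch of the underlying operation (plain NumPy, a single channel, hypothetical toy values; Keras' layers.Conv1D additionally handles multiple input and output channels plus a bias term):

import numpy as np

def conv1d_valid(sequence, kernel):
    # Cross-correlation with 'valid' padding, as layers.Conv1D computes it:
    # output length = input length - kernel length + 1
    n = len(sequence) - len(kernel) + 1
    return np.array([np.dot(sequence[i:i + len(kernel)], kernel)
                     for i in range(n)])

x = np.array([0., 1., 2., 3., 4., 5.])
w = np.array([1., 0., -1.])
print(conv1d_valid(x, w))  # 4 outputs: 6 - 3 + 1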
Implementing a 1D convnet on the IMDB sentiment dataset
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
Using TensorFlow backend.
Loading data...
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
17465344/17464789 [==============================] - 2s 0us/step
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
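Each review is encoded as a sequence of word indices, padded with zeros to length 500. If you want to inspect what the model actually sees, the indices can be mapped back to words; a quick sketch using imdb.get_word_index(), where the offset of 3 accounts for the reserved indices 0-2 (padding, start-of-sequence, unknown):

word_index = imdb.get_word_index()
reverse_word_index = {index: word for word, index in word_index.items()}
decoded = ' '.join(reverse_word_index.get(i - 3, '?') for i in x_train[0])
print(decoded[:200])  # first 200 characters of the first (padded) training review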
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_1 (Embedding)      (None, 500, 128)          1280000
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 494, 32)           28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32)            0
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 92, 32)            7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32)                0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 69s 3ms/step - loss: 0.9044 - acc: 0.5033 - val_loss: 0.6892 - val_acc: 0.5614
Epoch 2/10
20000/20000 [==============================] - 68s 3ms/step - loss: 0.6759 - acc: 0.6307 - val_loss: 0.6718 - val_acc: 0.6138
Epoch 3/10
20000/20000 [==============================] - 69s 3ms/step - loss: 0.6389 - acc: 0.7502 - val_loss: 0.6346 - val_acc: 0.7098
Epoch 4/10
20000/20000 [==============================] - 69s 3ms/step - loss: 0.5736 - acc: 0.8034 - val_loss: 0.5479 - val_acc: 0.7924
Epoch 5/10
20000/20000 [==============================] - 70s 3ms/step - loss: 0.4527 - acc: 0.8400 - val_loss: 0.4428 - val_acc: 0.8274
Epoch 6/10
20000/20000 [==============================] - 70s 4ms/step - loss: 0.3631 - acc: 0.8656 - val_loss: 0.4262 - val_acc: 0.8282
Epoch 7/10
20000/20000 [==============================] - 70s 3ms/step - loss: 0.3148 - acc: 0.8746 - val_loss: 0.4062 - val_acc: 0.8378
Epoch 8/10
20000/20000 [==============================] - 70s 4ms/step - loss: 0.2789 - acc: 0.8740 - val_loss: 0.4294 - val_acc: 0.8258
Epoch 9/10
20000/20000 [==============================] - 70s 4ms/step - loss: 0.2517 - acc: 0.8563 - val_loss: 0.4402 - val_acc: 0.8070
Epoch 10/10
20000/20000 [==============================] - 71s 4ms/step - loss: 0.2260 - acc: 0.8428 - val_loss: 0.4526 - val_acc: 0.7894
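The Output Shape column can be verified by hand: a 'valid' Conv1D shrinks the time axis by kernel_size - 1, and MaxPooling1D divides it by the pool size, flooring. A quick check of the arithmetic behind the summary above:

seq_len = 500
conv1 = seq_len - 7 + 1   # 494: Conv1D with kernel size 7
pool1 = conv1 // 5        # 98:  MaxPooling1D(5)
conv2 = pool1 - 7 + 1     # 92:  second Conv1D with kernel size 7
print(conv1, pool1, conv2)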
Plotting the results
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
# Recreate the variables used in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = '/content/drive/My Drive/Deep_Learning/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
with open(fname) as f:
    data = f.read()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
    values = [float(x) for x in line.split(',')[1:]]
    float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
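Note that the mean and standard deviation are computed on the first 200,000 timesteps only, i.e. the portion that will serve as training data, so no statistics leak in from the validation or test periods. A quick sanity check (it just verifies that the training slice is now roughly standardized):

train_slice = float_data[:200000]
print(train_slice.mean(axis=0))  # approximately 0 for every feature
print(train_slice.std(axis=0))   # approximately 1 for every feature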
def generator(data, lookback, delay, min_index, max_index,
              shuffle=False, batch_size=128, step=6):
    if max_index is None:
        max_index = len(data) - delay - 1
    i = min_index + lookback
    while 1:
        if shuffle:
            rows = np.random.randint(
                min_index + lookback, max_index, size=batch_size)
        else:
            if i + batch_size >= max_index:
                i = min_index + lookback
            rows = np.arange(i, min(i + batch_size, max_index))
            i += len(rows)
        samples = np.zeros((len(rows),
                            lookback // step,
                            data.shape[-1]))
        targets = np.zeros((len(rows),))
        for j, row in enumerate(rows):
            indices = range(rows[j] - lookback, rows[j], step)
            samples[j] = data[indices]
            targets[j] = data[rows[j] + delay][1]
        yield samples, targets
lookback = 1440    # observations go back 10 days (the data is recorded every 10 minutes)
step = 6           # sample one data point per hour
delay = 144        # target is the temperature 24 hours in the future
batch_size = 128
train_gen = generator(float_data,
                      lookback=lookback,
                      delay=delay,
                      min_index=0,
                      max_index=200000,
                      shuffle=True,
                      step=step,
                      batch_size=batch_size)
val_gen = generator(float_data,
                    lookback=lookback,
                    delay=delay,
                    min_index=200001,
                    max_index=300000,
                    step=step,
                    batch_size=batch_size)
test_gen = generator(float_data,
                     lookback=lookback,
                     delay=delay,
                     min_index=300001,
                     max_index=None,
                     step=step,
                     batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
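To see what the generator yields, it helps to draw a single batch and inspect its shape. A quick sketch; the 14 in the feature axis assumes the Jena CSV's 14 weather variables that remain after dropping the timestamp column:

samples, targets = next(train_gen)
print(samples.shape)  # (128, 240, 14): batch, lookback // step timesteps, features
print(targets.shape)  # (128,): one future temperature per sample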
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
                        input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=20,
                              validation_data=val_gen,
                              validation_steps=val_steps)
Epoch 1/20
500/500 [==============================] - 39s 78ms/step - loss: 0.4198 - val_loss: 0.4475
Epoch 2/20
500/500 [==============================] - 39s 79ms/step - loss: 0.3667 - val_loss: 0.4569
Epoch 3/20
500/500 [==============================] - 39s 78ms/step - loss: 0.3448 - val_loss: 0.4594
Epoch 4/20
500/500 [==============================] - 38s 76ms/step - loss: 0.3329 - val_loss: 0.4418
Epoch 5/20
500/500 [==============================] - 38s 77ms/step - loss: 0.3209 - val_loss: 0.4831
Epoch 6/20
500/500 [==============================] - 39s 78ms/step - loss: 0.3113 - val_loss: 0.4537
Epoch 7/20
500/500 [==============================] - 39s 78ms/step - loss: 0.3021 - val_loss: 0.5077
Epoch 8/20
500/500 [==============================] - 39s 79ms/step - loss: 0.2962 - val_loss: 0.4664
Epoch 9/20
500/500 [==============================] - 39s 78ms/step - loss: 0.2900 - val_loss: 0.4604
Epoch 10/20
500/500 [==============================] - 40s 80ms/step - loss: 0.2841 - val_loss: 0.4778
Epoch 11/20
500/500 [==============================] - 39s 78ms/step - loss: 0.2811 - val_loss: 0.4599
Epoch 12/20
500/500 [==============================] - 39s 79ms/step - loss: 0.2769 - val_loss: 0.4906
Epoch 13/20
500/500 [==============================] - 40s 80ms/step - loss: 0.2727 - val_loss: 0.4843
Epoch 14/20
500/500 [==============================] - 40s 80ms/step - loss: 0.2722 - val_loss: 0.4855
Epoch 15/20
500/500 [==============================] - 40s 81ms/step - loss: 0.2696 - val_loss: 0.4654
Epoch 16/20
500/500 [==============================] - 41s 81ms/step - loss: 0.2668 - val_loss: 0.4682
Epoch 17/20
500/500 [==============================] - 41s 82ms/step - loss: 0.2644 - val_loss: 0.4901
Epoch 18/20
500/500 [==============================] - 41s 82ms/step - loss: 0.2646 - val_loss: 0.4828
Epoch 19/20
500/500 [==============================] - 41s 82ms/step - loss: 0.2619 - val_loss: 0.4833
Epoch 20/20
500/500 [==============================] - 41s 82ms/step - loss: 0.2581 - val_loss: 0.5027
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
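The validation MAE stalls in the 0.44-0.50 range (normalized units), clearly worse than the combined conv + GRU model trained below: global max pooling throws away the temporal ordering, and for weather data the order of observations matters. To read the error in degrees Celsius, multiply back by the temperature column's original standard deviation (a quick sketch; temperature is feature index 1 in float_data, matching the target used in the generator):

celsius_mae = 0.44 * std[1]
print(celsius_mae)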
# `step` was previously set to 6 (one point per hour);
# now 3 (one point per 30 min), doubling the temporal resolution.
step = 3
lookback = 720   # halved from 1440, so lookback // step stays at 240 timesteps per sample
delay = 144      # unchanged: still predicting 24 hours ahead
train_gen = generator(float_data,
                      lookback=lookback,
                      delay=delay,
                      min_index=0,
                      max_index=200000,
                      shuffle=True,
                      step=step)
val_gen = generator(float_data,
                    lookback=lookback,
                    delay=delay,
                    min_index=200001,
                    max_index=300000,
                    step=step)
test_gen = generator(float_data,
                     lookback=lookback,
                     delay=delay,
                     min_index=300001,
                     max_index=None,
                     step=step)
val_steps = (300000 - 200001 - lookback) // 128    # 128 is the generator's default batch_size
test_steps = (len(float_data) - 300001 - lookback) // 128
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
                        input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=20,
                              validation_data=val_gen,
                              validation_steps=val_steps)
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_6 (Conv1D)            (None, None, 32)          2272
_________________________________________________________________
max_pooling1d_4 (MaxPooling1 (None, None, 32)          0
_________________________________________________________________
conv1d_7 (Conv1D)            (None, None, 32)          5152
_________________________________________________________________
gru_1 (GRU)                  (None, 32)                6240
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 33
=================================================================
Total params: 13,697
Trainable params: 13,697
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
500/500 [==============================] - 80s 159ms/step - loss: 0.3368 - val_loss: 0.2939
Epoch 2/20
500/500 [==============================] - 78s 156ms/step - loss: 0.3066 - val_loss: 0.2837
Epoch 3/20
500/500 [==============================] - 78s 156ms/step - loss: 0.2942 - val_loss: 0.2756
Epoch 4/20
500/500 [==============================] - 77s 155ms/step - loss: 0.2856 - val_loss: 0.2774
Epoch 5/20
500/500 [==============================] - 77s 154ms/step - loss: 0.2797 - val_loss: 0.2844
Epoch 6/20
500/500 [==============================] - 77s 155ms/step - loss: 0.2754 - val_loss: 0.2818
Epoch 7/20
500/500 [==============================] - 78s 155ms/step - loss: 0.2687 - val_loss: 0.2816
Epoch 8/20
500/500 [==============================] - 77s 154ms/step - loss: 0.2647 - val_loss: 0.2868
Epoch 9/20
500/500 [==============================] - 77s 154ms/step - loss: 0.2604 - val_loss: 0.2848
Epoch 10/20
500/500 [==============================] - 77s 153ms/step - loss: 0.2562 - val_loss: 0.2865
Epoch 11/20
500/500 [==============================] - 76s 152ms/step - loss: 0.2519 - val_loss: 0.2945
Epoch 12/20
500/500 [==============================] - 76s 152ms/step - loss: 0.2482 - val_loss: 0.2912
Epoch 13/20
500/500 [==============================] - 76s 151ms/step - loss: 0.2464 - val_loss: 0.2911
Epoch 14/20
500/500 [==============================] - 76s 152ms/step - loss: 0.2419 - val_loss: 0.2944
Epoch 15/20
500/500 [==============================] - 76s 153ms/step - loss: 0.2400 - val_loss: 0.2889
Epoch 16/20
500/500 [==============================] - 76s 152ms/step - loss: 0.2397 - val_loss: 0.2968
Epoch 17/20
500/500 [==============================] - 76s 152ms/step - loss: 0.2352 - val_loss: 0.2966
Epoch 18/20
500/500 [==============================] - 76s 152ms/step - loss: 0.2319 - val_loss: 0.2991
Epoch 19/20
500/500 [==============================] - 77s 153ms/step - loss: 0.2313 - val_loss: 0.3063
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
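The combined model reaches its best validation MAE of about 0.276 at epoch 3 and then starts to overfit, and it runs much faster than a GRU over the full-resolution sequence would, because the convolutional base downsamples the input before the recurrent layer sees it. For a final figure you could evaluate on the held-out test generator; a quick sketch using the same Keras 2.x generator API as the rest of this notebook:

test_loss = model.evaluate_generator(test_gen, steps=test_steps)
print('Test MAE (normalized):', test_loss)
print('Test MAE (degrees Celsius):', test_loss * std[1])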