The Keras package makes it easy to change the network architecture, introduce more than one hidden layer, and switch between activation functions.
The TensorFlow backend automatically computes the gradients for the learning algorithm. Without this feature we would have to re-derive and re-code the gradients by hand for every change to the architecture, which quickly becomes tedious or practically infeasible; the small sketch below illustrates the idea.
Architectures with more than one hidden layer are commonly referred to as deep learning.
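As an aside, the automatic differentiation can be seen directly in TensorFlow. A minimal sketch using the TensorFlow 1.x graph API (as used at the time of writing); the expression and input value are made up for illustration:
import tensorflow as tf
x = tf.placeholder(tf.float32)
y = x**2 + 3*x                              # some differentiable expression
g = tf.gradients(y, x)[0]                   # TensorFlow derives dy/dx = 2x + 3 automatically
with tf.Session() as sess:
    print(sess.run(g, feed_dict={x: 2.0}))  # prints 7.0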
The Pima Indians dataset has been used extensively in machine learning. It contains 768 observations consisting of 8 diagnostic values and a boolean variable indicating cases of diabetes within 5 years after the examination.
We will employ a deep architecture on this dataset. Since the number of diabetes cases is only 268, we use class weights to account for the class imbalance.
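To make the class weighting concrete: the 'balanced' heuristic used below weights each class inversely proportional to its frequency, i.e. n_samples / (n_classes * n_class_samples). A quick back-of-the-envelope calculation (assuming all 768 rows are kept):
n, n_pos = 768, 268
n_neg = n - n_pos
print('weight positive:', n / (2 * n_pos))   # about 1.43: diabetes cases are weighted up
print('weight negative:', n / (2 * n_neg))   # about 0.77: non-cases are weighted down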
In practical applications we often encounter data with missing values. The numpy function genfromtxt() can still read such data, inserting nan (not a number) for missing entries, but this often causes problems later on. We therefore use the numpy function isnan() to find observations with missing values and remove them from the data.
The numerical values in this dataset are of very different magnitudes: some columns measure in the hundreds while others are small fractions. To facilitate gradient descent learning we scale all columns of X by dividing them by their standard deviations.
import numpy as np
import io
from collections import Counter
from keras.preprocessing.sequence import pad_sequences
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import shuffle, class_weight
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, Dropout
data = np.genfromtxt('pima_indians_diabetes.txt', delimiter=',')
# remove lines with missing values, if any
data = data[~np.isnan(data).any(axis=1)]
print('observations:', data.shape[0])
# last col is value to predict
print('positive cases:', sum(data[:,-1]==1.0))
data = np.random.permutation(data)
X, y = data[:,:-1], data[:,-1]
# scale: divide by std dev
X = X / np.std(X, axis=0)
# class weights as a dict {class index: weight}, as expected by Keras
cw = dict(enumerate(class_weight.compute_class_weight('balanced', np.unique(y), y)))
model = Sequential()
model.add(Dense(100, input_dim=X.shape[1], activation='tanh'))
model.add(Dropout(0.3))
model.add(Dense(50, activation='tanh'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, class_weight=cw)
Many machine learning applications work on sequences of data, such as natural language texts of variable length.
Converting this type of data into some fixed-length format for processing with feed-forward networks is possible, but it is difficult to preserve the information contained in the order of the input.
Recurrent architectures naturally deal with sequences of variable length.
A basic recurrent net maintains a memory state $h_t$ which is updated in each input step $t$. This type of net is suitable for processing short sequences, such as numerically encoded sentences:
The hidden state $h$ depends on the previous hidden state and the current input:
$h_t = \sigma(W x_t + U h_{t-1})$
The output is computed from the hidden state:
$o_t = \sigma(V h_t)$
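To make the recurrence concrete, here is a minimal NumPy sketch of these two equations with toy dimensions and random (untrained) weights; in a real network W, U and V are learned:
import numpy as np

def sigmoid(z): return 1 / (1 + np.exp(-z))

np.random.seed(0)
W = np.random.randn(3, 4) * 0.1   # input-to-hidden weights
U = np.random.randn(3, 3) * 0.1   # hidden-to-hidden weights
V = np.random.randn(2, 3) * 0.1   # hidden-to-output weights

h = np.zeros(3)                   # initial hidden state
for x in np.random.randn(5, 4):   # a sequence of 5 input vectors
    h = sigmoid(W @ x + U @ h)    # h_t = sigma(W x_t + U h_{t-1})
    o = sigmoid(V @ h)            # o_t = sigma(V h_t)
    print(o)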
The problem with this approach is that if the gap between the relevant piece of input and the output gets too large, the net 'forgets' and cannot make the proper association.
The problem of long-term dependencies is tackled by the LSTM (Long Short-Term Memory) architecture [HS97].
Instead of updating only the hidden state in each time step, the LSTM introduces an additional cell state $C_t$ which is managed in a more sophisticated manner:
From the current input and the previous hidden state, the values of the gates and the candidate values $\tilde{C}_t$ for the cell state are computed:
$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$
$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$
The new cell state $C_t$ is computed by 'forgetting' part of the previous state $C_{t-1}$ and (based on the current input) adding part of the candidate values $\tilde{C}_t$:
$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$
The new hidden state is based on the cell state and the output gate:
$h_t = o_t * \tanh (C_t)$
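Putting these equations together, a single LSTM time step can be sketched in NumPy as follows (a minimal sketch: the weight matrices $W_f, W_i, W_o, W_C$ and biases are assumed given here, while in practice they are learned; '*' is element-wise multiplication):
import numpy as np

def sigmoid(z): return 1 / (1 + np.exp(-z))

def lstm_step(x, h_prev, C_prev, Wf, bf, Wi, bi, Wo, bo, Wc, bc):
    z = np.concatenate([h_prev, x])   # [h_{t-1}, x_t]
    f = sigmoid(Wf @ z + bf)          # forget gate f_t
    i = sigmoid(Wi @ z + bi)          # input gate i_t
    o = sigmoid(Wo @ z + bo)          # output gate o_t
    C_tilde = np.tanh(Wc @ z + bc)    # candidate cell state
    C = f * C_prev + i * C_tilde      # new cell state C_t
    h = o * np.tanh(C)                # new hidden state h_t
    return h, C

# toy driver: 4-dimensional inputs, 3-dimensional hidden/cell state, random weights
np.random.seed(0)
Wf, Wi, Wo, Wc = [np.random.randn(3, 7) * 0.1 for _ in range(4)]
bf = bi = bo = bc = np.zeros(3)
h, C = np.zeros(3), np.zeros(3)
for x in np.random.randn(5, 4):       # a sequence of 5 input vectors
    h, C = lstm_step(x, h, C, Wf, bf, Wi, bi, Wo, bo, Wc, bc)
print(h)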
The details of training a recurrent neural net are involved, but fortunately the Keras package takes care of that and allows us to concentrate on the data preparation and parameter tuning.
As always we start with a number of imports to make use of Keras and sklearn code.
We are using a rather small dataset here to allow for fast download. It contains about 10,000 sentences of variable length in two sets of equal size, labeled positive and negative. The task is to automatically predict the correct label (sentiment analysis).
The code below assumes that the two files rt-polarity.pos and rt-polarity.neg are present in the current directory.
We read the files line by line into a nested list of individual words and a vector of the associated sentiment labels.
We also update the word count; this will allow us to identify the most common words.
# http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz
sents = []
labs = []
c = Counter()
for suffix in ['pos', 'neg']:
    for line in io.open('rt-polarity.' + suffix, 'r', encoding='utf-8', errors='ignore'):
        words = line.strip().split()
        sents += [ words ]
        labs += [ int(suffix == 'pos') ]
        c.update(words)
Next we shuffle the nested list and the corresponding labels.
The parameter topwords is the size of the vocabulary; words not present in this set will be ignored.
The words are then substituted by their index in the vocabulary. We arrive at a nested list of indices.
Note how the first sentence is encoded, and compare with the most common words in the vocabulary.
sents, labs = shuffle(np.array(sents), np.array(labs))
topwords = 10000
wlst = [ w for w, n in c.most_common(topwords) ]
vocd = { wlst[i]: i for i in range(len(wlst)) }
print('vocabulary:', wlst[:30], '...')
print('sentences', len(sents))
X = []
for s in sents:
    X += [ [ vocd[w]+1 if w in vocd else 0 for w in s ] ]
X = np.array(X)
y = np.array(labs)
print('input shape:', X.shape)
print('first sentence:', sents[0], 'label:', y[0])
print('encoding:', X[0])
We pad the sequences so they all have the same length, which the Keras package requires. Padding is done by adding zeros at the start of each sequence. The value zero stands for 'unknown', which is also used for words that are not in the vocabulary.
This padding with zeros is the reason we shifted the word indices by one in the previous step. In this case it does not make much of a difference, since the first entry in the vocabulary is the dot, which does not convey much information anyway.
A sample sentence is printed so we can check the encoding.
maxlen=30
X = pad_sequences(X, maxlen=maxlen)
print(X[0])
We are now ready to build our model. The Keras package provides a convenient embedding layer that translates each word index into a vector of floating point numbers. This encoding is learned along with the other network parameters and saves us the trouble of coming up with our own word feature encoding.
As usual with this type of machine learning approach, the accuracy on the training set is higher than on the test set.
def nn():
    model = Sequential()
    model.add(Embedding(topwords+1, 50, input_length=maxlen))
    model.add(LSTM(100, dropout=0.2))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
    print(model.summary())
    model.fit(X, y, validation_split=0.2, epochs=10, batch_size=64)
nn()
At the time of writing (May 2019) TensorFlow only supports NVIDIA graphics cards with CUDA compute capability 3.5 or higher. You also need a card with reasonable performance to see any speedup at all; affordable entry-level models are the GTX 1050 Ti and the RTX 2060.
Installing all the required drivers and CUDA software can be a tedious task, but you only need to do it once (until you buy a new card or significantly change your system).
If a GPU device is shown when you execute the code below, the computations can run faster by a factor of 2x to 10x or even 20x, depending on your graphics card and the task. The speedup only shows for demanding applications; in this example you may need to increase the number of units in the LSTM to 200 or 300. Since there is considerable overhead in using the GPU, less demanding tasks run faster on the CPU.
import tensorflow as tf
print('GPU Device:', tf.test.gpu_device_name())
If no GPU device is available, the Keras code will still run fine on the CPU. It should use all available CPU cores if the linear algebra libraries are installed and configured properly; otherwise only one core is used and performance suffers significantly.
The code below shows how to measure the speedup when comparing CPU and GPU computing.
import tensorflow as tf
from time import time
with tf.device('/cpu:0'):
    t = time()
    nn()
    tcpu = time()-t
    print('Time CPU:', tcpu)
with tf.device('/gpu:0'):
    t = time()
    nn()
    tgpu = time()-t
    print('Time GPU:', tgpu)
print('Speedup:', tcpu/tgpu)
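As a side note to the CPU case above, the number of threads TensorFlow uses can also be set explicitly. A sketch assuming the TensorFlow 1.x API and standalone Keras used here; the thread counts 4 and 2 are arbitrary example values:
import tensorflow as tf
from keras import backend as K
config = tf.ConfigProto(intra_op_parallelism_threads=4,
                        inter_op_parallelism_threads=2)
K.set_session(tf.Session(config=config))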
In this method of encoding, each word is associated with a floating point vector of fixed dimension; typically between 25 and 500 dimensions are used. These vectors have been computed from very large text corpora in such a way that words with similar meanings are assigned similar vectors.
The original GloVe downloads are very large; an abbreviated version is provided here, containing only the 10,000 most common words and their embeddings in 50 dimensions.
http://balrog.wu.ac.at/~mitloehn/glove.10k.txt
The code below reads the encodings into a numpy array and checks some similarities. The length of the difference vector is computed with np.linalg.norm().
glove = np.genfromtxt('glove.10k.txt', dtype=str)
print(glove)
vocab = glove.shape[0]
idx = { glove[i,0] : i for i in range(vocab) }
E = glove[:,1:].astype(float)
dog, cat, house = idx['dog'], idx['cat'], idx['house']
d1 = E[dog] - E[house]
d2 = E[dog] - E[cat]
for x in (E[dog], E[cat], E[house], d1, d2):
    print(np.linalg.norm(x))
Some surprising operations with word embeddings are possible, such as the difference of vectors representing relationships:
E[France] - E[Paris] is similar to E[Italy] - E[Rome]
The relationship 'capital' has been captured. Note that this relies on co-occurrence of words in very large corpora, e.g. the Glove embeddings we use here are based on text corpora of 6 billion tokens.
france, paris, italy, rome = idx['france'], idx['paris'], idx['italy'], idx['rome']
cap1 = E[france] - E[paris]
cap2 = E[italy] - E[rome]
for x in (E[france], E[paris], E[italy], E[rome], cap1, cap2, cap1 - cap2):
    print(np.linalg.norm(x))
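As a further check, we can look up which vocabulary words are closest to the analogy vector E[paris] - E[france] + E[italy]; with these GloVe vectors, 'rome' should appear among the nearest neighbours apart from the query words themselves. This is a sketch reusing E and glove from above; the exact ranking depends on the embedding file:
v = E[paris] - E[france] + E[italy]
dists = np.linalg.norm(E - v, axis=1)      # distance of every vocabulary word to v
for i in np.argsort(dists)[:5]:            # five closest words
    print(glove[i,0], dists[i])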
Word embeddings can also be used to embed whole sentences by simply averaging the word vectors. The code below defines a function that encodes a list of words into a single embedding vector. We then check whether similarity of meaning can still be observed; as we can see, the results are less convincing for whole sentences.
def embed(lst):
    e = np.array([ E[idx[w]] for w in lst if w in idx ])
    if len(e) == 0: return E[idx['.']]
    else: return np.sum(e, axis=0) / len(e)
e1 = embed('the cat enjoys relaxing'.split())
e2 = embed('the dog likes to sleep'.split())
e3 = embed('the house is on fire'.split())
for x in (e1, e2, e3, e1-e2, e1-e3, e2-e3):
    print(np.linalg.norm(x))
We can now use the pre-trained word embeddings for the sentiment task on the movie reviews by supplying the Keras embedding layer with the GloVe weights.
print(len(sents), len(labs))
print(sents[0], labs[0])
indices = [ [ idx[w]+1 if w in idx else 0 for w in s ] for s in sents ]
maxlen = 30
W = np.append(np.zeros((1,50)), E, axis=0)
X = pad_sequences(np.array(indices), maxlen)
y = labs
model = Sequential()
model.add(Embedding(len(W), 50, input_length=maxlen, weights=[W], trainable=True))
model.add(LSTM(100, dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2)
It turns out that in this case using the pre-trained word embeddings does not improve performance. However, the overfitting on the training set is less pronounced than with the default randomly initialised Keras embeddings, which move more quickly to task-specific values.
Using pre-trained word embeddings is similar to taking a pre-trained part of a neural net and applying it to a different problem. This idea is taken further by the latest advances in machine learning, exemplified by BERT, the Bidirectional Encoder Representations from Transformers [BERT]. Essentially, BERT is a component trained as a language model, i.e. to predict words in sentences.
Training a neural architecture like BERT on a sufficiently large corpus is computationally very expensive and only feasible on high-performance hardware. However, pre-trained versions of BERT can be downloaded and used as ready-made components in other tasks that only require fine-tuning, which is feasible on more readily available hardware.
[HS97] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory", Neural Computation 9(8): 1735-1780 (1997).
[BERT] J. Devlin et al., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", arXiv preprint arXiv:1810.04805 (2018).