
Text Analytics: Words, Numbers, and Vectors

10/5/2021

Humans understand words; machines work with numbers. In the world of text analytics, we must somehow convert our human words into numbers a computer can understand.

Of course, it is not enough to assign each word some sort of key value. One number cannot encompass the meaning and application of a word any more than your social insurance number conveys information about your eye color or your favorite sport. One approach is to define a multidimensional abstract space in which each individual characteristic of a word is represented by some "distance" in that direction. Each word is then represented by a vector in this abstract space. There are many possible ways that such an abstract vector space can be defined, and just as assuredly there is no abstract space that represents a word perfectly.


In practice, we most often look for a word embedding. Unfortunately, this phrase is not always used consistently. Some authors observe that words are "defined" for the computer by describing the myriad ways in which they are "embedded" among other words in phrases, sentences, and documents. A more mathematical definition would be that words are converted into vectors that are "embedded" in a vector space of lower dimensionality than the space of all uses of all words. These word vector embeddings fulfill our requirement of a numerical word representation that computers can work with. Embeddings, of course, are not unique; there are many possible embeddings to choose from, and each has its own strengths and weaknesses. Embeddings can be broadly classified into two groups: frequency-based embeddings, which essentially provide word counts, and prediction-based embeddings. Many classic methods of text analysis are frequency-based. Prediction-based embeddings are sometimes called "neural word embeddings" because they use simple neural networks to organize the text data for further analysis.
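To make the frequency-based side of that distinction concrete, here is a minimal sketch of a bag-of-words count matrix built with scikit-learn's CountVectorizer (the three-sentence corpus is invented purely for illustration):

    # Frequency-based representation: raw word counts per document.
    from sklearn.feature_extraction.text import CountVectorizer

    corpus = [
        "the king addressed the court",
        "the queen addressed the court",
        "the child played in the garden",
    ]

    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(corpus)   # sparse matrix: documents x vocabulary

    print(vectorizer.get_feature_names_out())   # the vocabulary discovered in the corpus
    print(counts.toarray())                     # word counts for each document

Each row is a document and each column a vocabulary word, so the "embedding" here is nothing more sophisticated than the column of counts recording where a word appears.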

One very popular, and often quite effective, prediction-based embedding is word2vec, which is actually not so much a single method as a cluster of closely related algorithms. Word2vec uses a neural network, simple by today's standards, to group words based on their "similarity". One example, which has become a sort of "hello, world" for text analytics, is the pair of words "king" and "queen". Clearly kings and queens have distinct features, but the words often appear in similar contexts within a document.


In text analytics, as in any engineering enterprise, there are always tradeoffs. We can improve the effectiveness of a model by training it with a greater amount of text, increasing the number of dimensions in our abstract vector space, and increasing the number of words we treat as a "context" within a document. All of these choices come at the cost of increased computational complexity (and therefore increased time). Let's consider some examples.
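As a concrete illustration, here is a hedged sketch of where those knobs appear as hyperparameters in Gensim's Word2Vec class (Gensim 4.x parameter names; the two-sentence corpus is far too small to yield meaningful vectors and is only there to make the code runnable):

    from gensim.models import Word2Vec

    sentences = [
        ["the", "king", "addressed", "the", "court"],
        ["the", "queen", "addressed", "the", "court"],
    ]

    model = Word2Vec(
        sentences,
        vector_size=50,   # dimensions of the abstract vector space
        window=3,         # how many neighbouring words count as "context"
        min_count=1,      # keep every word, even in this tiny corpus
        epochs=10,        # more passes over more text: better vectors, more time
    )

Increasing vector_size, widening the window, or adding training text generally improves the vectors, and each increase makes training slower.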

Word2Vec


Word2vec uses a shallow, two-layer neural network to learn the contexts of words in a document or set of documents. Neural networks are, therefore, now being used to learn how to feed information to bigger, more powerful neural networks.

CBOW and Skip-Gram


CBOW (continuous bag-of-words) and skip-gram attack word embedding from opposite directions. CBOW attempts to build a neural network model that predicts the occurrence of a word based on its surrounding context. Skip-gram, on the other hand, attempts to predict the context based on an individual word.
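In Gensim the two objectives share the same Word2Vec class, and the sg flag selects between them; a minimal sketch (assuming Gensim 4.x, with a toy corpus invented for illustration):

    from gensim.models import Word2Vec

    sentences = [
        ["the", "king", "addressed", "the", "court"],
        ["the", "queen", "addressed", "the", "court"],
    ]

    # sg=0 trains CBOW (predict a word from its context);
    # sg=1 trains skip-gram (predict the context from a word).
    cbow_model = Word2Vec(sentences, sg=0, vector_size=50, window=3, min_count=1)
    skipgram_model = Word2Vec(sentences, sg=1, vector_size=50, window=3, min_count=1)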

Remember, however, that this is not a parlour game in which the goal is to predict the missing word. It is an attempt to generate vectors that in some way encapsulate the use and meaning of words.

Concepts of Similarity


Many fields of artificial intelligence grapple with the necessity of providing a mathematical description for the somewhat vague notion of similarity. Your movie streaming service wants to quantify films that are similar to the ones you have already enjoyed. An autonomous vehicle needs to infer if objects in a video are similar to children in a school crosswalk.

In the world of text analytics a common measure of similarity is cosine similarity. Cosine similarity is closely related to the Pearson correlation coefficient of classic statistics. If two word vectors point in nearly the same direction in our abstract vector space, the angle between them is small and its cosine is close to one. If one word is straight ahead of us and the other is off our left shoulder, then the angle between the vectors is close to 90° and the cosine is close to zero.
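In code, cosine similarity is just the dot product of the two vectors divided by the product of their lengths; a minimal NumPy sketch with made-up three-dimensional "word vectors":

    import numpy as np

    def cosine_similarity(a, b):
        # cos(theta) = (a . b) / (|a| * |b|)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([2.0, 3.9, 6.1])   # points in nearly the same direction as v1
    v3 = np.array([3.0, 0.0, -1.0])  # perpendicular to v1

    print(cosine_similarity(v1, v2))  # close to 1
    print(cosine_similarity(v1, v3))  # close to 0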

The definition of cosine similarity provides a direct path to the prediction of word analogies. For the text analytics "hello, world" example, we would expect the cosine similarity between "male" and "king" to be close to the similarity between "female" and "queen".
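With pretrained vectors, this kind of analogy query is a one-liner. A hedged sketch using Gensim's downloader module (the GloVe dataset named here is one of the pretrained sets the downloader offers; the file is fetched from the internet on first use):

    import gensim.downloader as api

    # Pretrained 50-dimensional GloVe vectors; returns a KeyedVectors object.
    vectors = api.load("glove-wiki-gigaword-50")

    print(vectors.similarity("king", "queen"))  # cosine similarity between the two word vectors
    # The classic analogy: king - man + woman is close to queen
    print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))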

Real Language is Always More Complicated


Ultimately, however, our concern with language is a concern with meaning, not words. In English, many words have multiple meanings. In Chinese, the meaningful interpretation of a word without its context is virtually impossible.

The idea of word embeddings can be extended to include sense embeddings, that is, embeddings that include the multiple senses of individual words. Indeed, the popular skip-gram algorithm has been modified and extended into the multi-sense skip-gram, or MSSG.

Word embeddings more sophisticated than those described here must be enlisted to meet the needs of machine translation. Examples include ELMo and XLNet, not to mention BERT and ERNIE from Google and Baidu, respectively.

Turning Theory Into Practice


In the next blog we will look at some actual examples that apply word2vec techniques using the popular library Gensim.


Written by Dan Buskirk

"The pleasures of the table belong to all ages." Actually, Brillat-Savaron was talking about the dinner table, but the quote applies equally well to Dan’s other big interest, tables of data. Dan has worked with Microsoft Excel since the Dark Ages and has utilized SQL Server since Windows NT first became available to developers as a beta (it was 32 bits! wow!). Since then, Dan has helped corporations and government agencies gather, store, and analyze data and has also taught and mentored their teams using the Microsoft Business Intelligence Stack to impose order on chaos. Dan has taught Learning Tree in Learning Tree’s SQL Server & Microsoft Office curriculums for over 14 years. In addition to his professional data and analysis work, Dan is a proponent of functional programming techniques in general, especially Microsoft’s new .NET functional language F#. Dan enjoys speaking at .NET and F# user’s groups on these topics.
