Using Text

This article summarizes how DataRobot handles text features using state-of-the-art Natural Language Processing (NLP) tools such as Matrix of Word Ngram, Auto Tuned Word Ngram text modelers, Word2Vec, Fasttext, cosine similarity, and Vowpal Wabbit. It also covers NLP visualization techniques such as the Frequency Values Table and word clouds.

The following video explains how DataRobot uses text features for machine learning models.

Your dataset contains one or more text variables, as shown in Figure 1, and you may be wondering whether DataRobot can incorporate this information into the modeling process.

Figure 1. Input dataset with one or more text variables

DataRobot lets you explore word frequency through the Frequency Values Table, a histogram of the most frequent terms in your data, and a General Table, which presents the same information in tabular format (Figure 2).
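The view in Figure 2 can be approximated outside DataRobot when exploring a dataset. The sketch below is not DataRobot's internal code; it uses scikit-learn and pandas on a hypothetical text column to build a comparable table of term counts and document frequencies.

```python
# A minimal sketch (not DataRobot's implementation) of a term-frequency table.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical text column.
docs = [
    "late delivery, item arrived damaged",
    "great product, fast delivery",
    "delivery was late but support was great",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)

# Total occurrences and document frequency for each term, most frequent first.
freq_table = pd.DataFrame({
    "term": vec.get_feature_names_out(),
    "count": counts.sum(axis=0).A1,
    "documents": (counts > 0).sum(axis=0).A1,
}).sort_values("count", ascending=False)

print(freq_table.head(10))
```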

Figure 2a. Frequency Values Table for word frequency visualization

Figure 2b. General Table for word frequency visualization

Moving to modeling, DataRobot commonly incorporates the Matrix of Word Ngram in blueprints (Figure 3). This matrix is produced with the widely used TF-IDF (term frequency-inverse document frequency) technique and combines multiple text columns. For dense data, DataRobot offers the Auto Tuned Word Ngram text modelers (Figure 4), which look at only one text column at a time: a separate n-gram model is fit to each text feature in the input dataset, and the predictions from these models are then used as inputs to other models.
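Conceptually, the two approaches differ in scope: the Matrix of Word Ngram builds one TF-IDF matrix over the combined text columns, while the Auto Tuned approach fits a model per text column and passes its predictions downstream. The sketch below illustrates both ideas with scikit-learn on hypothetical columns; it is a simplified stand-in, not the actual blueprint code.

```python
# Simplified illustration (not DataRobot's blueprints) of the two text strategies.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset: two text columns and a binary target.
title = ["late delivery", "great product", "refund requested", "works great"]
body = ["the parcel arrived a week late",
        "does exactly what it promises",
        "asking for my money back",
        "no complaints so far"]
y = np.array([1, 0, 1, 0])

# 1) Matrix of Word Ngram style: one TF-IDF matrix over the combined text columns.
combined = [f"{t} {b}" for t, b in zip(title, body)]
tfidf_all = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(combined)
model_all = LogisticRegression().fit(tfidf_all, y)

# 2) Auto Tuned style: one n-gram model per text column; its predictions
#    become numeric inputs to a downstream model. (In practice these
#    predictions would be generated out-of-fold to avoid leakage.)
stacked_features = []
for column in (title, body):
    vec = TfidfVectorizer(ngram_range=(1, 2))
    X_col = vec.fit_transform(column)
    col_model = LogisticRegression().fit(X_col, y)
    stacked_features.append(col_model.predict_proba(X_col)[:, 1])

downstream_X = np.column_stack(stacked_features)
downstream_model = LogisticRegression().fit(downstream_X, y)
```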

Figure 3. An example blueprint that uses a Matrix of Word Ngram as a preprocessing step

Figure 4. An example blueprint that uses an Auto Tuned Word Ngram text modeler as a preprocessing step

Auto Tuned models for a given sample size are visualized as Word Clouds (Figure 5). These can be found in the Insights > Word Cloud tab. The top 200 terms with the highest coefficients are shown, along with the frequency with which each term appears in the text.

Figure 5. Text visualization using Word Cloud

In Figure 5, terms are displayed in a color spectrum from blue to red with blue indicating a negative effect and red indicating a positive effect relative to the target values. Terms that appear more frequently are displayed in a larger font size, and those that appear less frequently are displayed in a smaller font size.
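A comparable visualization can be produced outside DataRobot. The sketch below uses the open-source wordcloud package with hypothetical per-term coefficients and frequencies (for example, values exported from the Text Mining tab), coloring each term by the sign of its coefficient and sizing it by frequency.

```python
# Minimal sketch (not DataRobot's implementation) of a coefficient-colored word cloud.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Hypothetical term statistics: coefficient sign drives color, frequency drives size.
terms = {
    "refund":    {"coefficient": 1.8,  "frequency": 0.042},
    "thank you": {"coefficient": -1.1, "frequency": 0.031},
    "cancel":    {"coefficient": 2.3,  "frequency": 0.015},
}

def color_by_coefficient(word, **kwargs):
    # Red for a positive effect on the target, blue for a negative one (as in Figure 5).
    return "red" if terms[word]["coefficient"] > 0 else "blue"

wc = WordCloud(background_color="white", color_func=color_by_coefficient)
wc.generate_from_frequencies({t: s["frequency"] for t, s in terms.items()})

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```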

There are a number of things you can do with this display:

  • View the coefficient value specific to a term by mousing over the term
  • View the word cloud of another model by clicking the dropdown arrow above the word cloud
  • View class-specific word clouds (for multiclass classification projects)

The coefficients for the Auto Tuned Word Ngram text models are available in the Insights > Text Mining tab (see Figure 6). It shows the most relevant terms in the text variable and the strength of each coefficient. You can download all the coefficients as a spreadsheet by clicking the Export button.
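Outside DataRobot, a similar term-and-coefficient table can be assembled from any linear model fit on TF-IDF features. The short sketch below, using scikit-learn and pandas on a hypothetical text column, is only a rough approximation of what the Text Mining export contains.

```python
# Rough approximation (not DataRobot's Text Mining export) of a term/coefficient table.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["late delivery", "great product", "refund requested", "works great"]
y = [1, 0, 1, 0]

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(docs)
model = LogisticRegression().fit(X, y)

# Terms ordered by coefficient magnitude, then written to a spreadsheet-friendly CSV.
coef_table = pd.DataFrame({
    "term": vec.get_feature_names_out(),
    "coefficient": model.coef_[0],
}).sort_values("coefficient", key=abs, ascending=False)

coef_table.to_csv("text_coefficients.csv", index=False)
print(coef_table.head(10))
```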

Figure 6. Text Mining tab

Finally, DataRobot also offers additional NLP approaches in the Repository, such as Fasttext (Figure 7a) and Word2Vec (Figure 7b). You can find these by typing ‘Word2Vec’ or ‘Fasttext’ in the search box; DataRobot will retrieve all blueprints that contain these preprocessing steps.

Figure 7a. Example blueprints with Fasttext as part of their preprocessing steps

Figure 7b. Example blueprints with Word2Vec as part of their preprocessing steps
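Both techniques learn dense word embeddings that can be averaged into document vectors for downstream models. The sketch below shows the general idea with the open-source gensim library (assuming gensim 4.x); it is not DataRobot's implementation, and the toy corpus is hypothetical.

```python
# Illustrative only: training small Word2Vec and FastText embeddings with gensim 4.x.
import numpy as np
from gensim.models import Word2Vec, FastText

corpus = [
    ["late", "delivery", "damaged", "item"],
    ["great", "product", "fast", "delivery"],
    ["refund", "requested", "for", "damaged", "item"],
]

w2v = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)
ft = FastText(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)

# Turn a document into a single vector by averaging its word vectors;
# this vector can then feed a downstream classifier or regressor.
def doc_vector(model, tokens):
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

print(doc_vector(w2v, ["late", "delivery"]).shape)  # (50,)
print(doc_vector(ft, ["late", "delivery"]).shape)   # (50,)
```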


Beyond these, DataRobot has other techniques such as pairwise cosine similarity (Figure 8a), used when there are multiple text features, and Vowpal Wabbit-based classifiers, which use n-grams (Figure 8b).

Figure 8a. Example blueprints with Pairwise Cosine Similarity as part of their preprocessing steps

Figure 8b. Example blueprints with Vowpal Wabbit-based classifiers
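The pairwise cosine similarity idea can be sketched as follows: vectorize two text columns with a shared TF-IDF vocabulary, then compute, for each row, the cosine of the angle between the two vectors; that number becomes a numeric feature for downstream models. The scikit-learn sketch below uses hypothetical columns and is illustrative only, not DataRobot's preprocessing code.

```python
# Illustrative sketch of a pairwise cosine-similarity feature between two text columns.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical dataset with two text columns describing the same record.
titles = ["late delivery", "great product"]
reviews = ["the delivery arrived very late", "product works great, thank you"]

# Fit one vocabulary over both columns so their vectors share the same space.
vec = TfidfVectorizer().fit(titles + reviews)
A = vec.transform(titles)
B = vec.transform(reviews)

# Row-wise similarity: the diagonal of the full similarity matrix pairs
# each title with its own review. The result can be used as a model feature.
row_similarity = cosine_similarity(A, B).diagonal()
print(row_similarity)
```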
