How to Extract Keywords from Text in R?

An image illustrating the R interface with a highlighted code snippet used for keyword extraction from text.

What is Keyword Extraction?

Keyword extraction is a text analysis technique that automatically pulls the most frequent and most important words and expressions out of a text. It helps summarize the content of a text and identify the main topics discussed, which is useful for market research, sentiment analysis, and other kinds of data analysis. Some approaches also leverage sentence embeddings, and the technique can analyze large sets of data in real time.

Natural language processing (NLP), a field of computer science that combines machine learning and artificial intelligence (AI), breaks down human language so that machines can understand and analyze it. Keyword extraction automates workflows such as tagging incoming survey responses or routing urgent customer queries. The technique uses linguistic and semantic information about texts and the words they contain, and there are several machine learning algorithms and techniques for extracting the most relevant keywords from a text.

Why Is Keyword Extraction Important?

Keyword extraction and keyphrase extraction are important for several reasons:

  1. Search Engine Optimization (SEO): Keyword extraction helps to identify the most important words and phrases in a document, which can be used to optimize website content for search engines.
  2. Text summarization: Keyword extraction can be used to summarize a document by identifying the most important words and phrases that represent the main theme of the text.
  3. Text classification: Keyword extraction can be used to classify text documents into different categories based on the keywords they contain. This is useful in applications such as sentiment analysis.
  4. Information retrieval: Keyword extraction can be used to improve the accuracy of information retrieval systems by identifying relevant keywords that match a user’s search query.

How to Perform Keyword Extraction in R?

Here are some keyword extraction techniques and their use cases:

  • Find keywords by doing parts-of-speech tagging to identify nouns
  • Find keywords based on collocations and co-occurrences
  • Find keywords based on the Textrank algorithm
  • Find keywords based on RAKE (Rapid Automatic Keyword Extraction)
  • Find keywords based on the results of dependency parsing (getting the subject of the text)

Extracting Nouns

An easy way to find keywords is to look at nouns. Each term carries a parts-of-speech tag if you annotated the text with the udpipe package:


## x is the annotation data frame produced by udpipe::udpipe_annotate()
library(udpipe)
library(lattice)
stats <- subset(x, upos %in% "NOUN")                     # keep only nouns
stats <- txt_freq(x = stats$lemma)                       # frequency of each lemma
stats$key <- factor(stats$key, levels = rev(stats$key))  # order bars by frequency
barchart(key ~ freq, data = head(stats, 30), col = "cadetblue",
         main = "Most occurring nouns", xlab = "Freq")

Collocation & co-occurrences

Obtain multi-word expressions by looking either at collocations (words that follow one another), at word co-occurrences within each sentence, or at co-occurrences of words that appear close to one another.


## Collocation (words following one another)

stats <- keywords_collocation(x = x,
                              term = "token",
                              group = c("doc_id", "paragraph_id", "sentence_id"),
                              ngram_max = 4)

## Co-occurrences: how frequently words occur in the same sentence
## (here only nouns and adjectives)

stats <- cooccurrence(x = subset(x, upos %in% c("NOUN", "ADJ")),
                      term = "lemma",
                      group = c("doc_id", "paragraph_id", "sentence_id"))

## Co-occurrences: how frequently words follow one another

stats <- cooccurrence(x = x$lemma,
                      relevant = x$upos %in% c("NOUN", "ADJ"))

## Co-occurrences: how frequently words follow one another,
## even if we skip up to 2 words in between

stats <- cooccurrence(x = x$lemma,
                      relevant = x$upos %in% c("NOUN", "ADJ"),
                      skipgram = 2)

Textrank (word network ordered by Google Pagerank)

Textrank is another keyword extraction method. The textrank R package implements the Textrank algorithm, which supports both text summarization and keyword extraction. To construct a word network, the algorithm checks whether words follow one another; the Google Pagerank algorithm is then run on that network to identify the relevant words. Relevant words that follow one another are combined into keywords. Because TextRank is a graph-based approach, it ranks words by their position in the network rather than by raw frequency.


stats <- textrank_keywords(x$lemma,
                           relevant = x$upos %in% c("NOUN", "ADJ"),
                           ngram_max = 8, sep = " ")
stats <- subset(stats$keywords, ngram > 1 & freq >= 5)
library(wordcloud)
wordcloud(words = stats$keyword, freq = stats$freq)

Rapid Automatic Keyword Extraction: RAKE

RAKE, short for Rapid Automatic Keyword Extraction, is the next algorithm. It is a domain-independent keyword extraction algorithm in natural language processing that works in two steps:

  1. A score is calculated for each word that is part of any candidate keyword:
    • among the words of the candidate keywords, the algorithm counts how many times each word occurs and how many times it co-occurs with other words
    • each word gets a score: the ratio of the word degree (how many times it co-occurs with other words) to the word frequency
  2. The RAKE score of a full candidate keyword is the sum of the scores of the words that make up that candidate
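As an illustration of the two steps above, here is a small hand-rolled sketch of the RAKE scoring (a toy example with made-up candidate keywords, not the udpipe implementation):

```r
# Toy candidate keywords, as if already extracted from a text
candidates <- list(c("keyword", "extraction"),
                   c("text"),
                   c("keyword", "extraction", "algorithm"))
words <- unlist(candidates)
freq <- table(words)  # word frequency across all candidates
# word degree: each occurrence co-occurs with all words of its
# candidate (itself included), i.e. contributes the candidate length
degree <- sapply(names(freq), function(w) {
  sum(sapply(candidates, function(k) if (w %in% k) length(k) else 0))
})
score <- degree / freq                                  # degree-to-frequency ratio
rake  <- sapply(candidates, function(k) sum(score[k]))  # sum per candidate
rake
# "keyword extraction" = 5, "text" = 1, "keyword extraction algorithm" = 8
```

Longer candidates accumulate more degree, which is why RAKE tends to favor multi-word keyphrases over isolated frequent words.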


stats <- keywords_rake(x = x, term = "token",
                       group = c("doc_id", "paragraph_id", "sentence_id"),
                       relevant = x$upos %in% c("NOUN", "ADJ"),
                       ngram_max = 4)
head(subset(stats, freq > 3))

Use dependency parsing output to get the nominal subject and its adjective

When you annotate text with udpipe, the dep_rel field indicates how words relate to one another: a token is linked to its parent via token_id and head_token_id. The relation types follow the Universal Dependencies annotation scheme. Below, we pick out nominal subjects (nsubj) and the adjectives attached to them:


library(wordcloud)
library(RColorBrewer)
stats <- merge(x, x,
               by.x = c("doc_id", "paragraph_id", "sentence_id", "head_token_id"),
               by.y = c("doc_id", "paragraph_id", "sentence_id", "token_id"),
               all.x = TRUE, all.y = FALSE,
               suffixes = c("", "_parent"), sort = FALSE)
stats <- subset(stats, dep_rel %in% "nsubj" & upos %in% c("NOUN") & upos_parent %in% c("ADJ"))
stats$term <- paste(stats$lemma_parent, stats$lemma, sep = " ")
stats <- txt_freq(stats$term)
wordcloud(words = stats$key, freq = stats$freq, min.freq = 3, max.words = 100,
          random.order = FALSE, colors = brewer.pal(6, "Dark2"))

What is Text-Mining?

Text mining in R refers to the process of analyzing and extracting insights from text data using the R programming language and associated libraries and packages. Text mining involves several steps, including data cleaning and preprocessing, feature extraction, statistical modeling, and visualization.
The tm package provides functions for reading text data, cleaning and preprocessing the data, and creating document-term matrices, which are commonly used for analyzing text data. The tidytext package provides tools for converting text data into tidy data frames.
Some common tasks in text mining include sentiment analysis, topic modeling, document clustering, and text classification. These tasks involve applying statistical and machine-learning techniques to identify patterns and relationships within text data.
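As a minimal sketch of the preprocessing pipeline (assuming the tm package is installed; the two example documents are made up), here is how a small document-term matrix can be built:

```r
library(tm)

docs <- c("Keyword extraction in R",
          "Text mining extracts keywords from text")
corpus <- VCorpus(VectorSource(docs))
corpus <- tm_map(corpus, content_transformer(tolower))  # lowercase
corpus <- tm_map(corpus, removePunctuation)             # strip punctuation
dtm <- DocumentTermMatrix(corpus)  # rows = documents, columns = terms
inspect(dtm)
```

The resulting matrix of term counts is the usual starting point for clustering, classification, and topic modeling.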

What are Keyword Extraction APIs?

Keyword extraction APIs are software interfaces that allow developers to extract keywords and key phrases from text using pre-built algorithms and machine learning models. These APIs provide an easy-to-use and scalable solution for automating the process of keyword extraction, without the need for developers to build and train their own models.

What is R?

R is an open-source programming language and software environment for statistical computing, data analysis, and graphics. It is widely used in academia, research, and industry for tasks such as statistical modeling, data visualization, machine learning, and data mining, and it interfaces with other programming languages such as Python and C++.

Frequently Asked Questions

What is CSV?

CSV stands for “Comma-Separated Values”. It is a file format that stores and exchanges data in a plain-text format, where each row represents a record, and each column represents a field or attribute of the record. The first row of a CSV file typically stores the column headers, which provide a label for each field in the dataset.
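In R, such a file is typically read with base read.csv; the inline text below stands in for a file on disk:

```r
# A small CSV given inline; in practice you would pass a file path
csv_text <- "id,response\n1,Great product\n2,Too expensive"
data <- read.csv(text = csv_text)  # first row becomes the column headers
data$response
# "Great product" "Too expensive"
```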

What is TF-IDF?

TF-IDF stands for “Term Frequency-Inverse Document Frequency”. It is a numerical statistic that reflects how important a term is within a document relative to a corpus of documents. TF-IDF is commonly used in text mining, information retrieval, and natural language processing applications.
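A common formulation multiplies the term frequency by the log of the inverse document frequency; a minimal worked example with made-up counts:

```r
# tf-idf = tf * log(N / df)
tf    <- 3 / 100   # term occurs 3 times in a 100-word document
N     <- 10        # number of documents in the corpus
df    <- 2         # number of documents containing the term
tfidf <- tf * log(N / df)
tfidf  # about 0.048; a term found in every document would score 0
```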

What are Stopwords?

Stopwords are common words that natural language processing (NLP) tasks exclude because they carry little meaning or significance in text analysis. Examples of stopwords include “the”, “and”, “of”, “to”, “in”, “a”, “an”, “is”, and “for”.
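Removing them in R is a simple filter; the short list below is for illustration (packages such as tm ship fuller lists via stopwords()):

```r
stop_words <- c("the", "and", "of", "to", "in", "a", "an", "is", "for")
tokens <- c("the", "extraction", "of", "keywords", "is", "useful")
tokens[!tokens %in% stop_words]
# "extraction" "keywords" "useful"
```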
