We are witnessing a digital revolution in the field of Natural Language Processing (NLP). Recent developments and use cases of Large Language Models (LLMs) have created opportunities for knowledge discovery and novel uses of data, making it easy to ask human-like questions and get generated answers back from an LLM. Thanks to OpenAI's ChatGPT, everyone is now talking about LLMs.
What is GPT?
Generative Pre-trained Transformers, commonly known as GPT, are a family of neural network models that use the transformer architecture and are a key advancement in artificial intelligence (AI), powering generative AI applications such as ChatGPT. GPT models give applications the ability to create human-like text and content (images, music, and more) and to answer questions in a conversational manner. Organizations across industries are using GPT models and generative AI for Q&A bots, text summarization, content generation, and search. (text borrowed from https://aws.amazon.com/what-is/gpt/)
How easy is it to use an LLM?
In this article we will use the Hugging Face transformers Python library and the flan-t5-large LLM to summarise text, running it on Google Colab.
There is no need for any user credentials such as an OpenAI API key. The pipeline function simply downloads the model as part of its task.
Steps
- install transformers
- choose a model_id (you can find one on the Hugging Face Hub website)
- import pipeline and create a summarization pipeline (see the code snippets and usage example below)
Code Snippets
!pip install -q transformers
from transformers import pipeline

# Model identifier from the Hugging Face Hub
model_id = "google/flan-t5-large"

# Create a summarization pipeline; the model weights are downloaded automatically
summarizer = pipeline("summarization", model=model_id)
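As a minimal usage sketch, you can then pass a passage of text to the summarizer and read the generated summary from the returned list; the sample text and the generation parameters (max_length, min_length) below are only illustrative assumptions, not fixed requirements.

# Sample input text (purely illustrative)
text = """Generative Pre-trained Transformers, commonly known as GPT, are a family
of neural network models that use the transformer architecture. GPT models give
applications the ability to create human-like text and content and to answer
questions in a conversational manner."""

# The pipeline returns a list of dicts; the summary is under "summary_text"
result = summarizer(text, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])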