# BERT-For-Multi-Class-Classification

**Repository Path**: liuerin/BERT-For-Multi-Class-Classification

## Basic Information

- **Project Name**: BERT-For-Multi-Class-Classification
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-01-28
- **Last Updated**: 2021-01-28

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# BERT-For-Multi-Class-Classification

Predicting News Category With BERT In TensorFlow

Bidirectional Encoder Representations from Transformers, or BERT for short, is a popular NLP model from Google known for producing state-of-the-art results across a wide variety of NLP tasks. Natural Language Processing (NLP) is of profound importance in the artificial intelligence domain: the most abundant data in the world today is text, so a powerful text-processing system is more than just a necessity. In this article we look at implementing multi-class classification using the state-of-the-art BERT model.

Pre-requisites: an understanding of BERT.

## About the Dataset

For this article, we will use MachineHack's Predict The News Category Hackathon data. The data consists of a collection of news articles categorized into four sections.

The features of the dataset are as follows:

- Size of training set: 7,628 records
- Size of test set: 2,748 records

**Features:**

- **STORY**: A part of the main content of the article to be published as a piece of news.
- **SECTION**: The genre/category the STORY falls into. There are four distinct sections into which each story may fall.

The sections are labelled as follows:

- Politics: 0
- Technology: 1
- Entertainment: 2
- Business: 3

Published @ www.analyticsindiamaga.com
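The four SECTION labels described above can be captured as a simple id-to-name mapping, which is handy both for encoding the training targets and for decoding the model's predictions back to category names. A minimal sketch in Python (the function name `decode_predictions` is a hypothetical helper, not part of the hackathon code):

```python
# Mapping between the hackathon's integer SECTION labels and category names,
# exactly as listed in the dataset description above.
ID_TO_SECTION = {0: "Politics", 1: "Technology", 2: "Entertainment", 3: "Business"}

# Inverse mapping, useful when encoding the SECTION column as training targets.
SECTION_TO_ID = {name: idx for idx, name in ID_TO_SECTION.items()}

def decode_predictions(label_ids):
    """Translate a list of predicted label ids back to category names."""
    return [ID_TO_SECTION[i] for i in label_ids]

print(decode_predictions([2, 0, 3]))  # ['Entertainment', 'Politics', 'Business']
```

In a full pipeline, `decode_predictions` would be applied to the argmax of the classifier's output logits over the four classes before writing the submission file.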