Jigsaw Toxic Comment Classification

Identify and classify toxic online comments

  • End-to-end NLP multi-label classification problem.
  • The Kaggle dataset can be found here.

Steps to run the project can be found here.

Dataset Description

We are provided with a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. The types of toxicity are:

  • toxic
  • severe_toxic
  • obscene
  • threat
  • insult
  • identity_hate

The goal is to create a model that predicts the probability of each type of toxicity for each comment.
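As a rough illustration of the task (not the repo's actual pipeline), a minimal multi-label baseline can fit one binary classifier per toxicity label and output a probability for each. The sketch below uses TF-IDF features with one-vs-rest logistic regression on toy stand-in data; the real project trains on the full Kaggle dataset.

```python
# Minimal sketch of multi-label toxicity classification, assuming a
# TF-IDF + per-label logistic regression baseline (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Toy stand-in for the Kaggle training data (each row: one 0/1 flag per label).
comments = [
    "you are awful and stupid",
    "have a nice day",
    "I will hurt you",
    "thanks for the edit",
]
targets = [
    [1, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 0],
]

# One independent binary classifier per label; predict_proba then yields
# a separate probability estimate for each toxicity type.
model = make_pipeline(
    TfidfVectorizer(),
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(comments, targets)

# List of 6 arrays (one per label), each of shape (n_samples, 2).
probs = model.predict_proba(["you are awful"])
for label, p in zip(LABELS, probs):
    print(f"{label}: {p[0][1]:.3f}")  # probability of the positive class
```

A stronger solution would typically swap the linear model for a neural text encoder, but the input/output contract (one comment in, six probabilities out) stays the same.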

The following screenshots show a sample request and the corresponding response.

  • Sample request

  • Sample response
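For readers who cannot see the screenshots, the request and response payloads for a service like this are roughly the following JSON shapes. The field names and values here are illustrative assumptions, not taken from the repo's actual API.

```python
import json

# Hypothetical request body: a single comment to score.
request_body = {"comment_text": "example comment to classify"}

# Hypothetical response body: one probability per toxicity label
# (values are placeholders, not real model output).
response_body = {
    label: 0.0
    for label in ["toxic", "severe_toxic", "obscene",
                  "threat", "insult", "identity_hate"]
}

payload = json.dumps(request_body)
print(payload)
print(sorted(response_body))
```

The actual endpoint path, field names, and serialization used by the project may differ; consult the "Steps to run the project" instructions above.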