Runs a client-side model built on top of the Universal Sentence Encoder to detect hateful content in text you select for analysis.
Do you want to find out if a text fragment you see on a website in your Chrome browser is offensive in any way? This extension features a deep learning model running entirely on your computer that can confirm or refute your gut feeling about a text being (severely) toxic, a threat, sexually explicit, obscene, an insult, or an identity attack. That way, you can check whether it is just your opinion or whether the text is actually hateful.
Apart from that, the purpose of this app is to experiment with TensorFlow.js so that every user can see what a built-in deep learning model based on the Universal Sentence Encoder is capable of, without needing to understand anything about AI. The classification works for English only and considers the seven categories mentioned above. You can customize the highlighting colors and the metric used for highlighting, and hovering over highlighted text shows the detected categories.
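For the curious, the classification described above can be sketched with the TensorFlow.js toxicity model, which is built on the Universal Sentence Encoder and reports exactly these seven categories. This is an illustrative sketch, not the extension's actual source; the function name `detectToxicity` and the threshold value are assumptions for the example.

```javascript
// The seven categories the extension reports, as named by the
// TensorFlow.js toxicity model.
const TOXICITY_LABELS = [
  "identity_attack",
  "insult",
  "obscene",
  "severe_toxicity",
  "sexual_explicit",
  "threat",
  "toxicity",
];

// Assumed minimum confidence before a label counts as a match.
const THRESHOLD = 0.9;

// Hypothetical helper: classify a text fragment and return the
// labels that matched. Requires the @tensorflow-models/toxicity
// package, so it is defined here but not invoked.
async function detectToxicity(text) {
  const toxicity = await import("@tensorflow-models/toxicity");
  const model = await toxicity.load(THRESHOLD, TOXICITY_LABELS);
  const predictions = await model.classify([text]);
  // Each prediction has { label, results: [{ probabilities, match }] };
  // match is true, false, or null when below the threshold either way.
  return predictions
    .filter((p) => p.results[0].match === true)
    .map((p) => p.label);
}
```

In the extension, the returned labels would drive the highlighting and the hover tooltip for the selected text.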
For more information, please visit the GitHub page: https://github.com/daniel-rychlewski/hateblock