Over the past few years, public shaming has grown in online social networks and associated internet forums such as Twitter. Such incidents are known to have a devastating effect on the social, political, and economic lives of the victims. Despite these well-known harms, popular social media platforms have done little to remedy the problem, often citing the sheer volume and variety of such remarks and the consequently infeasible number of human moderators that would be required. In this project, we automate the detection of public shaming on Twitter from the victim's point of view, investigating two elements in particular: events and shamers. Shaming tweets are categorized into six types: abusive, comparative, passing judgement, religious/ethnic, sarcasm/joke, and whataboutery; each tweet is classified as either non-shaming or one of these types. We observe that most users who post remarks on a particular shaming event are likely to shame the victim. Interestingly, shamers' follower counts also grow faster than those of non-shamers on Twitter. Finally, a web application called Block Shame was designed and implemented for on-the-fly muting or blocking of shamers attacking a victim on Twitter, based on the categorization and classification of shaming tweets.
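The six-category labeling scheme can be sketched as a minimal classification interface. This is an illustrative sketch only: the category names come from the abstract, but the keyword cues and the rule-based labeler below are hypothetical stand-ins for the actual trained classifier, which the abstract does not specify.

```python
# The six shaming categories named in the abstract, plus a "non-shaming" default.
SHAMING_TYPES = [
    "abusive",
    "comparative",
    "passing judgement",
    "religious/ethnic",
    "sarcasm/joke",
    "whataboutery",
]

# Hypothetical keyword cues per category; a real system would use a
# trained text classifier rather than this toy lookup.
KEYWORD_CUES = {
    "abusive": {"idiot", "pathetic"},
    "comparative": {"worse than", "unlike you"},
    "passing judgement": {"should be ashamed", "disgrace"},
    "religious/ethnic": {"your religion", "your people"},
    "sarcasm/joke": {"lol", "nice one"},
    "whataboutery": {"what about"},
}


def classify_tweet(text: str) -> str:
    """Return one of SHAMING_TYPES, or 'non-shaming' if no cue matches."""
    lowered = text.lower()
    for label, cues in KEYWORD_CUES.items():
        if any(cue in lowered for cue in cues):
            return label
    return "non-shaming"
```

A downstream tool like the blocking application could then act only on tweets whose label is not "non-shaming".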