Arguments happen all the time: over opinions, movies, politics, even plain facts. The least interesting are the ones revolving around facts, because a simple lookup could settle them right away. Many arguments are purely subjective, and it's hard to change someone's opinion, but when it does happen the feeling is great.
There are ways to poke holes in someone's argument, using fallacies, hypothetical scenarios and whatnot, but a few things should be kept in mind when you really want to win. Calling someone's opinion outright stupid or insane is certainly not the way to go, because then your opponent has to accept that label before conceding defeat.
There are blogs, life pro tips, quotes and suggestions all over the internet claiming to guide you towards a healthy discussion. It's true that having an open mind is necessary, and one should consider losing an argument an option as well, but here we are only talking about winning. So let's look at some actual data.
For this analysis I am using the subreddit changemyview. This subreddit is for people to share a view and ask fellow redditors for counterarguments that might change it. To be fair, this is not an ideal setting, since the poster is by definition open to new ideas and clearly willing to change their view given a good enough explanation. When that happens, the poster marks those counters by awarding them a delta, a virtual award.
I scraped the top thousand such threads of all time and did some analysis to understand what was being said in the successful counters.
I structured the data so that each thread was split into three parts: the view text, a list of successful counters, and a list of failed counters.
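As a sketch, the grouped data might look like this in Python (the field names and placeholder strings are my own illustration, not the exact schema from the scrape):

```python
# One record per thread, split the three ways described above.
# Field names and contents here are illustrative only.
threads = [
    {
        "view": "...",                   # the original post's text
        "successful_counters": ["..."],  # replies that earned a delta
        "failed_counters": ["..."],      # replies that changed nothing
    },
]
```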
spaCy Similarity Analysis
The first test was to see how much each counter talks about the view itself, or whether it derails the view and talks about something else entirely. This is a very well moderated community, so a lot of the failed counters I collected were removed posts, and the test turned out to be mostly inconclusive: almost all surviving counters were very much on point. I used spaCy to do the similarity detection.
import spacy

nlp = spacy.load("en_core_web_md")  # medium model, ships with word vectors
view_doc = nlp(view)
counter_doc = nlp(counter)
sim = counter_doc.similarity(view_doc)  # cosine similarity of document vectors
The result: 93.34% of successful counters and 87.06% of failed counters were on point.
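The post doesn't state the similarity cut-off used to call a counter "on point", so here is a minimal sketch of how such a percentage could be computed, with an assumed threshold of 0.8:

```python
def on_point_rate(similarities, threshold=0.8):
    """Percentage of counters whose spaCy similarity to the view clears
    a threshold (0.8 is an assumption; the post doesn't state its value)."""
    hits = sum(1 for s in similarities if s >= threshold)
    return 100.0 * hits / len(similarities)

# e.g. on_point_rate([0.91, 0.72, 0.85, 0.95]) -> 75.0
```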
HuggingFace Sentiment Analysis
The second test was a simple sentiment analysis to see what kind of tone was used in counters. For this I used the Hugging Face transformers library.
from transformers import pipeline
The results here are slightly misleading, since counters are expected to read as negative: they are, after all, opposing a view.
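A minimal sketch of the sentiment pass, assuming the default sentiment-analysis pipeline (the post doesn't name a specific model); the tallying helper is my own addition:

```python
def negative_rate(labels):
    """Percentage of counters labelled NEGATIVE (pure helper, no ML deps)."""
    return 100.0 * sum(label == "NEGATIVE" for label in labels) / len(labels)

if __name__ == "__main__":
    # Imported lazily so the helper above works without transformers installed.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a default model
    counters = ["You make a fair point, but the data says otherwise."]
    results = classifier(counters)  # list of {"label": ..., "score": ...}
    print(negative_rate([r["label"] for r in results]))
```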
TensorFlow.js Toxicity Analysis
So the next test was to check the toxicity of the texts. For this I used the toxicity model TensorFlow publishes as a JavaScript library. It detects whether text contains toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language, and was trained on the Civil Comments dataset.
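Since this model ships as a JavaScript library, here is a sketch in Node. The `load`/`classify` calls follow the published `@tensorflow-models/toxicity` API, but the `toxicRate` helper and the 0.9 threshold are my own choices, not the post's code:

```javascript
// Pure helper: percentage of texts flagged on a given label.
// `predictions` follows the shape returned by model.classify():
// [{ label, results: [{ probabilities, match }] }, ...]
function toxicRate(predictions, label) {
  const entry = predictions.find((p) => p.label === label);
  const flagged = entry.results.filter((r) => r.match === true).length;
  return (100 * flagged) / entry.results.length;
}

// Usage sketch -- requires @tensorflow/tfjs and @tensorflow-models/toxicity:
// const toxicity = require('@tensorflow-models/toxicity');
// toxicity.load(0.9).then((model) =>
//   model.classify(counters).then((preds) =>
//     console.log(toxicRate(preds, 'toxicity'))));
```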
Here the results got interesting: only 0.4% of successful counters were toxic, compared to 4.5% of failed counters. About 2.5% of failed counters were also insulting. None of the other labels produced a meaningful signal. Another noteworthy point: 12.5% of the failed counters I gathered had been removed by the moderators, so these numbers come from the counters that survived moderation and don't include low-effort or outright bad ones.
Ok, maybe the results are not that exciting but it was a good exploration.
I proved nothing.
Anywho, the exercise gave me a refresher on the TensorFlow toxicity model, spaCy and Hugging Face, so here is the code.
Feel free to send me an email with any suggestions or feedback. Follow me on Twitter and GitHub.