A Rutgers study examining how caste hierarchy in India is reinforced through discourse on X (formerly known as Twitter) has found that the platform's functional properties enable representatives of upper-caste or dominant-caste communities and interests to marginalize members of lower-caste communities and to undermine anti-caste efforts. The authors argue there is a need "to frame marginalization as a severe manifestation of online harm that demands the attention of researchers, practitioners, governments, and civil society."
"Despite decades of anti-caste efforts, sociocultural practices that marginalize lower-caste groups in India remain prevalent and have even proliferated with the use of social media," said SC&I Ph.D. student Nayana Kirasur, lead author of the study. "Our research examines how groups engaged in caste-based discrimination leverage platform affordances of the social media site X (formerly Twitter) to circulate and reinforce caste ideologies. Further, though mobile and internet penetration in India has increased, and social media has been used in innovative ways to challenge the caste hierarchy, the hold of caste has not weakened. Lower-caste communities continue to be relegated to the margins politically, socially, economically, and culturally."
Written with Assistant Professor of Library and Information Science Shagun Jhaver, the paper, "Understanding the Prevalence of Caste: A Critical Discourse Analysis of Caste-based Marginalization on X," won a best paper award at CSCW '25, the 28th ACM Conference on Computer-Supported Cooperative Work and Social Computing, held October 18-22 in Bergen, Norway.
"Caste is a system of social stratification that ascribes one’s status at birth and dictates much of life in India and among its diasporas," Kirasur said. "Recent reports of caste-based discrimination such as the CISCO case and growing demands for caste to be recognized as a protected category in places like California, Seattle, and various universities (including Rutgers) has highlighted the globalized nature of caste and how caste is implicated in the tech world. Building upon this, our research wanted to understand how caste shapes, and is shaped by, the design and use of computing technologies. More specifically, we wanted to understand how caste hierarchy is reinforced through discourses on X, formerly known as Twitter."
"Online harm is not just explicit hate speech. Online harm also occurs when discourses (in this case on social media) continue to uphold upper-caste and dominant caste narratives while also simultaneously undermining lower caste communities and their resistance. Our paper asks, how do we include marginalization in our understanding of online harm?"
Their research showed, Kirasur said, "X accounts that represented upper-caste or dominant-caste communities and interests posted content that contributed to two broad, complementary narratives. Together, these narratives created positive self-presentation (i.e., of upper and some dominant middle castes) and negative other-presentation (i.e., of lower-caste communities and anti-caste efforts). This was enabled by the different affordances of X, including its support for sharing, searching, and posting content of different media types (image, video, and audio), as well as the potential visibility it offers."
Further, they found that these two narrative strands drew on a combination of rhetorical and organizing strategies. "These included hate speech, online hashtag campaigns to promote their interests, assertions of caste pride and superiority, and the mocking and undermining of anti-caste efforts," Kirasur said.
To conduct the research, Kirasur and Jhaver critically analyzed 50 community profiles that represented upper-caste interests or were in favor of the caste system (they referred to these as caste-positive accounts).
They examined each profile's information, latest posts (including images, text, and videos), and hashtags to assess whether and how the account reinforced caste ideology. They sampled these profiles primarily through keyword search and expanded the sample by locating additional profiles in the follower and followee lists of profiles already identified (a method known as recursive searching), as sketched below.
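As a rough sketch of how such recursive (snowball-style) sampling might be implemented, the following Python outline assumes hypothetical helper functions for fetching a profile's follower/followee lists and for applying the study's inclusion criterion; the paper does not publish collection code, so all names here are illustrative.

```python
from collections import deque

def recursive_sample(seed_profiles, get_connections, matches_criteria, max_profiles=50):
    """Expand a keyword-derived seed set of profiles by recursively visiting
    follower/followee lists, keeping only profiles that meet the inclusion
    criterion (here, so-called caste-positive accounts).
    `get_connections` and `matches_criteria` are hypothetical helpers."""
    sampled = []
    visited = set()
    queue = deque(seed_profiles)
    while queue and len(sampled) < max_profiles:
        profile = queue.popleft()
        if profile in visited:
            continue
        visited.add(profile)
        if matches_criteria(profile):  # in the study, a manual critical reading of the profile
            sampled.append(profile)
            # Add this profile's followers and followees to the search frontier.
            queue.extend(get_connections(profile))
    return sampled
```

In the study itself, inclusion decisions were made through manual critical analysis of each profile rather than an automated check; the sketch only illustrates how keyword-derived seeds can be expanded through follower and followee lists.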
The findings are significant in several ways, Kirasur said. "We highlight the importance of, and offer some suggestions for, studying the online activities of powerful communities, such as the dominant castes. This is in direct conversation with existing research that looks at groups such as white supremacists and anti-feminist groups."
Through this research, Kirasur added, "we intend to expand the conversation about dominant communities and their mechanics of power by focusing on the category of caste. We hope that this study will trigger additional conversations on how we can frame marginalization of oppressed groups as a manifestation of online harm that all stakeholders, including platforms, the state, and civil society, must address."
"Online harm is not just explicit hate speech. Online harm also occurs when discourses (in this case on social media) continue to uphold upper-caste and dominant caste narratives while also simultaneously undermining lower caste communities and their resistance. Our paper asks, how do we include marginalization in our understanding of online harm?"
Another implication of the research, Kirasur said, concerns the measures commonly used to address online harm. "The main tool for action in a punitive justice model, as embodied in the content moderation that most social media platforms rely on, is punishing the rule violator. Moderation efforts to sanction individual instances of caste-based hate speech may be helpful, but such efforts are far from sufficient. Such measures do not challenge the status quo and are often incompatible with social justice goals. So, our paper insists that careful deliberation on these issues, along with striving for structural change, is required as we continue to build mechanisms to address identity-based online harm."
Learn more about the Ph.D. Program at the Rutgers School of Communication and Information on the website.