
Text profiling is the hidden force shaping our online world 

November 5, 2024 at 2:00 pm

In this photo illustration, the logos of social media applications are displayed on a smartphone screen in Ankara, Turkey, on September 30, 2021 [Ali Balıkçı/Anadolu Agency]

We live in an era in which our attention has become the hottest commodity. Text profiling technology has emerged as one of the most powerful, yet controversial, tools shaping our digital experiences. Every post, comment and reaction we generate is a data point, quietly harvested, processed and interpreted alongside countless others to understand who we are, what we believe and how we might act next.

Text profiling is not some futuristic concept; it is operating beneath the surface of every social media platform right now. As this technology grows in sophistication, we must ask ourselves: how much power are we giving to algorithms that increasingly know us better than we know ourselves?

What is text profiling?

Text profiling involves analysing our online text to identify patterns in our personalities, preferences, behaviour and moods. By leveraging Natural Language Processing (NLP), machine learning algorithms study the large volumes of text we generate to gain insights into who we are. This information is then used to categorise us into demographic groups, predict our actions or tailor content specifically to our interests.
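To make the idea concrete, here is a minimal sketch in Python of the kind of classifier this describes: a toy model that assigns posts to interest categories. The posts, labels and categories are invented for illustration; real systems are trained on vastly larger and richer data.

```python
# A toy text profiler: assign posts to interest categories.
# All data below is fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training posts, each labelled with an interest category.
posts = [
    "Loved the match last night, what a goal!",
    "New GPU benchmarks are out, impressive numbers.",
    "Trying a new sourdough recipe this weekend.",
    "Transfer rumours are heating up before the deadline.",
    "Finally upgraded my laptop's RAM and SSD.",
    "This pasta dish needs more fresh basil.",
]
labels = ["sports", "tech", "food", "sports", "tech", "food"]

# TF-IDF turns each post into a weighted word-frequency vector;
# logistic regression then learns which words signal which category.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Profile a new, unseen post.
print(model.predict(["Who else is watching the cup final tonight?"]))
```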

The potential of this technology lies in the language generation process itself. As we put words together to form meaningful sentences, we unconsciously reveal aspects of our behavioural blueprint. Research shows that, during text creation, numerous demographic and social characteristics leak out through our preferences for specific linguistic elements.
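Even the most content-free words carry signal. The sketch below illustrates one classic stylometric technique: measuring how often a writer uses a handful of function words (pronouns, articles, negations), frequencies which the research literature links to traits such as age, gender and personality. The word list and sample sentence are a small, illustrative subset.

```python
# Stylometric sketch: relative frequencies of function words,
# a classic signal used for demographic and personality profiling.
from collections import Counter
import re

# A tiny, illustrative subset of English function words.
FUNCTION_WORDS = {"i", "we", "you", "the", "a", "of", "in", "but", "not"}

def stylometric_profile(text: str) -> dict[str, float]:
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in FUNCTION_WORDS)
    total = len(tokens) or 1
    # Each feature is the word's share of all tokens in the text.
    return {w: counts[w] / total for w in sorted(FUNCTION_WORDS)}

print(stylometric_profile("I think we should not rush, but the plan is solid."))
```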

Social media platforms utilise this technology to improve the user experience by recommending relevant or interesting content. It powers advertising engines that allow companies to target potential customers precisely, enhancing the effectiveness of marketing campaigns. It even helps detect harmful behaviour, such as hate speech, harassment or misinformation, providing tools to moderate content and keep online communities safer.
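The moderation use case can be sketched with an off-the-shelf classifier. The model named below, unitary/toxic-bert, is one publicly available toxicity model chosen purely as an example; its labels and scores are model-specific, and platforms layer far larger proprietary systems and human review on top.

```python
# Sketch: scoring comments with a publicly available toxicity model.
# Model choice, labels and scores are illustrative, not prescriptive.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

for comment in ["Have a great day!", "Nobody wants you here, get lost."]:
    result = toxicity(comment)[0]  # top label with a confidence score
    print(comment, "->", result["label"], round(result["score"], 3))
```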

However, with such power comes an inevitable dilemma.

When we hand over the keys to our digital identities, what are the long-term consequences for our privacy, autonomy and society at large?

Beyond personalisation

The idea of having a feed tailored to our exact interests seems advantageous. Text profiling makes this possible, turning endless streams of content into curated experiences that align with our tastes.

The content we consume, though, is shaped by the profiles inferred about us, which are often based on incomplete, biased or inaccurate information. Algorithms tend to reinforce existing preferences and beliefs, pushing users further into ideological echo chambers. While we may think that we are simply scrolling for leisure, we might actually be sinking deeper into entrenched worldviews and becoming disconnected from other perspectives.
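The reinforcement dynamic is easy to reproduce in a toy simulation: a recommender that simply upweights whatever earns clicks will, over time, narrow the feed towards the user's existing leaning. Every topic, weight and click rule below is invented for illustration.

```python
# Toy echo-chamber simulation: engagement-driven reweighting
# gradually crowds out everything the user does not already click.
import random

random.seed(0)
topics = ["politics_left", "politics_right", "sports", "science"]
weights = {t: 1.0 for t in topics}   # start with a balanced feed
preferred = "politics_left"          # the user's latent leaning

for _ in range(500):
    # Recommend a topic in proportion to its current weight.
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # The user clicks their preferred topic far more often than the rest.
    clicked = random.random() < (0.9 if shown == preferred else 0.1)
    if clicked:
        weights[shown] *= 1.05       # reinforce what got engagement

share = {t: round(w / sum(weights.values()), 3) for t, w in weights.items()}
print(share)  # the preferred topic ends up dominating the feed
```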

There is also the unsettling reality that text profiling can be exploited for manipulation.

This technology was at the heart of the Cambridge Analytica scandal, where data from millions of Facebook users was used to build psychological profiles that helped target political advertising. By identifying users’ fears and biases, targeted ads were crafted to sway public opinion. The line between personalisation and manipulation became perilously thin, demonstrating just how much is at stake.

The biases behind the technology

Like any technology, text profiling carries with it the biases of its creators. Profiling systems are trained on vast amounts of data that often include biases present in society, leading them to reproduce and even amplify these biases in their decision-making. This means that marginalised groups may be misrepresented or stereotyped in the profiling process, with potentially harmful outcomes.

For example, language use varies significantly across different cultures and socio-economic backgrounds. An algorithm failing to understand these different contexts may interpret a user’s intent incorrectly or assign them to a misleading profile. A casual remark might be taken out of context, leading to real-world consequences, such as being flagged unjustly for inappropriate content or receiving irrelevant (or even offensive) ads.
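One basic safeguard is a per-group error audit: measuring how often the profiler is wrong for each community it serves, since an aggregate accuracy figure can hide exactly this kind of failure. The records below are fabricated to show the bookkeeping; a large gap between groups is the statistical fingerprint of the bias described above.

```python
# Fairness audit sketch: per-group error rates for a content profiler.
# The (group, truth, prediction) records are fabricated for illustration.
from collections import defaultdict

records = [
    ("dialect_A", "benign", "benign"),
    ("dialect_A", "benign", "benign"),
    ("dialect_A", "offensive", "offensive"),
    ("dialect_B", "benign", "offensive"),   # benign slang misread as offensive
    ("dialect_B", "benign", "offensive"),
    ("dialect_B", "offensive", "offensive"),
]

errors = defaultdict(lambda: [0, 0])        # group -> [mistakes, total]
for group, truth, pred in records:
    errors[group][0] += truth != pred
    errors[group][1] += 1

for group, (wrong, total) in sorted(errors.items()):
    print(f"{group}: error rate {wrong / total:.0%}")
```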

As text profiling technology is increasingly adopted beyond social media, in areas such as hiring, credit assessment and policing, the potential for biased outcomes grows. If these technologies are not developed and deployed with fairness and transparency in mind, they risk perpetuating existing inequalities rather than alleviating them.

Where do we go from here?

It’s clear that text profiling is neither purely good nor evil. It’s a tool, which means its impact depends on how it is used. Several actions are needed to harness its benefits while mitigating its harms.

First, transparency must be a priority. Social media platforms and tech companies need to provide greater clarity on how they use text profiling technologies, what data is being collected and how these insights shape users’ experiences.

Users deserve to know how their digital footprints are being interpreted and used to influence what they see and do online.

Second, regulation must catch up with technology. Governments worldwide have started introducing data privacy laws, but the pace of technological advancement often outstrips policy-making. Thoughtful regulation that protects user privacy while fostering innovation is crucial to ensure that these technologies are developed responsibly.

Finally, there must be an emphasis on ethical AI development. This means training algorithms on diverse datasets, testing for biases and involving ethicists and social scientists in the design process. The creators of these tools need to ensure that their systems work fairly for all users, not just for the majority or those who fit a narrow profile.

A call for digital literacy

As a researcher who has been working in this field for the past 20 years, I often find myself struggling with the ethical implications of my work. Every time I publish new research on text profiling, I question whether it will have a positive impact on society or contribute inadvertently to harm.

As we navigate the new digital landscape, digital literacy (understanding how our data is used and being critical of the content we engage with) is becoming as important as traditional literacy. We should question why we see certain ads, why particular posts are prioritised, and how our online personas are constructed through the data we leave behind.

Text profiling is a powerful tool with incredible potential to enhance our digital experiences, but without scrutiny and careful stewardship, it could easily be weaponised against us, shaping our worldviews in ways we may not even recognise. As users, we must push for transparency and fairness in the technologies that shape our lives so profoundly. Moreover, as creators, researchers and policymakers, we must ensure that the unseen hands guiding our experiences are working in service of a fairer, more informed society, not just for profit, but for the common good.

The views expressed in this article belong to the author and do not necessarily reflect the editorial policy of Middle East Monitor.