Your Data Is Powering Bad Purchases

AI and big data can manipulate consumer choices, reducing welfare. Policies must ensure data minimisation, transparency, and ethical use.


By Amitrajeet A. Batabyal*

Batabyal is a Distinguished Professor of economics and the Head of the Sustainability Department at the Rochester Institute of Technology, NY. His research interests span environmental, trade, and development economics.

May 12, 2025 at 11:16 AM IST

In his 1930 essay Economic Possibilities for Our Grandchildren, the distinguished economist John Maynard Keynes worried that rapid advancements in technology could outpace the creation of new forms of employment, leading to widespread job displacement that he called technological unemployment.

Even though Keynes's fear of lasting technological unemployment did not come to pass, many researchers and policymakers, looking at the recent advances in artificial intelligence, once again believe that AI will give rise to massive job losses.

As such, there is now a vigorous debate about AI's employment-reducing potential.

Surprisingly, there is no analogous debate about another insidious aspect of using AI, which concerns AI's potential to manipulate consumer behaviour. 

Only very recently has research appeared that demonstrates that AI, particularly in concert with big data, can manipulate consumer behaviour and thereby diminish consumer welfare. To formulate an appropriate policy to tackle this disquieting possibility, it is necessary to first comprehend how this negative impact on consumers can arise. 

Let us investigate.

Nobel laureate Daron Acemoglu and his colleagues have recently analysed the question of consumer behaviour manipulation. They first point out that tech companies such as Google and Meta increasingly possess vast amounts of consumer data, which they can use to their advantage and to the detriment of consumers. 

Specifically, AI and big data can empower online platforms such as Amazon or eBay to manipulate user behaviour, potentially reducing consumer welfare. These researchers developed a theoretical model to assess when a platform’s behavioural influence is beneficial or detrimental to users.

This research is particularly interesting because it uniquely emphasises behavioural manipulation through the distortion of product choice rather than price alone. In addition, the research contributes to debates on the ethics of data use, consumer autonomy, and the societal implications of AI-driven recommendation systems.

At the core of the analysis is the concept of "glossiness", a feature of low-quality products that makes them appear more attractive than they truly are. This glossiness temporarily masks a product's true quality. Platforms use AI and data from consumers with similar characteristics to predict which products will appear glossy to a given user, and they exploit this knowledge for strategic gain.

Consumers, however, are unaware of the extent to which their behaviour is being predicted and influenced, leading to a clear behavioural bias. 

This research distinguishes between two environments: a pre-AI setting, where platforms and consumers share the same information, and a post-AI setting, where platforms use big data to gain an informational advantage.

The research focuses on equilibrium behaviour in both environments and compares platform profits, consumer utility, and overall welfare outcomes. Three key findings emanating from this research are worth emphasising. 

First, there are positive outcomes when glossiness is short-term. If the glossiness wears off quickly, platforms are incentivised to recommend high-quality products to consumers. AI benefits consumers and platforms in this case, leading to better product matches and improved welfare.

Second, behavioural manipulation arises when glossiness is long-term. In other words, when glossiness is long-lasting, platforms exploit users by recommending low-quality but glossy products, resulting in higher profits for the platform but reduced consumer welfare. This occurs because users do not immediately realise the true low quality of recommended products.

Finally, behavioural manipulation is magnified when there are more products. As the number of available products increases, platforms find it easier to exploit consumers by selecting from a wider pool of glossy but low-quality products. This creates a "double whammy": consumers are overwhelmed by choice and, at the same time, manipulated more effectively because of the platform's superior product knowledge.
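The intuition behind these three findings can be illustrated with a deliberately simple toy calculation. This is my own sketch, not the authors' formal model: the margins, horizon, and probabilities below are illustrative assumptions, chosen only to show how a profit-maximising platform's recommendation flips with the durability of gloss and the size of the catalogue.

```python
def platform_choice(gloss_duration, horizon=10, margin_high=0.4, margin_low=1.0):
    # Assumption: a consumer who receives a genuinely good recommendation keeps
    # buying for the whole horizon; one sold a glossy low-quality product leaves
    # as soon as the gloss wears off. All numbers are illustrative.
    profit_high = margin_high * horizon
    profit_glossy = margin_low * min(gloss_duration, horizon)
    return "high-quality" if profit_high >= profit_glossy else "glossy low-quality"

def manipulation_chance(n_products, p_glossy=0.2):
    # Probability that at least one of n candidate products looks glossy to a
    # given consumer, if each does so independently with probability p_glossy.
    return 1 - (1 - p_glossy) ** n_products

# Finding 1: short-lived gloss -> repeat business from a good match dominates.
print(platform_choice(gloss_duration=1))   # high-quality
# Finding 2: long-lived gloss -> exploiting the consumer pays more.
print(platform_choice(gloss_duration=8))   # glossy low-quality
# Finding 3: a larger catalogue makes a glossy match easier to find.
print(round(manipulation_chance(5), 2), round(manipulation_chance(20), 2))
```

With five products the platform finds a glossy match about two-thirds of the time; with twenty, almost always. That is the "double whammy" in miniature: more choice hands the better-informed side more ways to mislead.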

Given these disquieting findings, how might policy safeguard consumer interests? 

First, policymakers could mandate "data minimisation". In other words, firms would be required to collect only the data needed for a specific purpose and be proscribed from collecting data for any other purpose unless consumers have explicitly provided their consent.

Second, policy could promote transparency by educating consumers to better comprehend platform strategies and protect their data. Finally, policymakers could increase competition by encouraging the entry of new platforms that commit to ethical data-use practices.

By addressing the above-mentioned challenges, policymakers can harness the benefits of AI and big data while mitigating the risks that consumers are likely to face.  

* Views expressed in the article are personal.