The Future Examined: Firms must consider ethical implications when using personal info or risk backlash

‘Transparency is absolutely key,’ Deakin’s Heather Round says of big data use.

Companies seeking to maximise the commercial potential of big data must do so with regard to complex ethical questions, according to academics.

The backlash that followed the Facebook-Cambridge Analytica scandal and recent concerns surrounding the proposed My Health Record platform highlight the extent of public unease about how organisations capture and use personal data.

The potential for customising products, reaching more customers and increasing business efficiency comes with significant risk if consumers perceive unethical behaviour, according to Heather Round, director of innovation and entrepreneurship at Deakin Business School.

“The impacts can be dramatic,” she said. “For everyday organisations or retailers, they’re still seeing the positive sides of using the data and using AI in terms of their business processes. They haven’t fully thought through the implications of those actions yet and thought about the consumer more holistically.”

She said the commercial risk was similar to that of manufacturing in a sweatshop, where costs might be lower but with the unwanted consequence of deterring consumers.

“Organisations who are basing their decisions on strong values and ethics are not going to go down that path which may have an impact on their profit margins,” she said. “I think there will be a counterbalance swing in terms of consumers selecting organisations who can show they are being ethical in their practices and transparent in their use of data.”

Marketers differ in their perspectives on what’s ethical, according to Nicholas McClaren, a marketing ethics specialist at Deakin Business School.

“Some marketers will see some activities such as the use of big data or certain aspects of using big data as right or wrong in a black and white sense, and others will see it in a relative sense,” he said.

“Generally speaking, marketers are likely to balance, in their way of thinking, the amount of harm or the amount of good that is created in the use of big data and make some sort of decision based on that.”

Dr Round has called on the marketing profession to ensure that any strategy involving big data is formulated with ethical priorities in mind, the most important being transparency.

“Transparency is absolutely key and I think that’s something that organisations are now really starting to embrace,” she said. “How is that data being used and is it being sold? In what way will this data be used?”

She said data retention and data security considerations were also of critical importance.

Matt Mount, an expert in strategy and innovation at Deakin, emphasised the role that transparency could play in navigating what was “a giant shade of grey”.

“If there’s a huge disclaimer saying, ‘This data will be used for innumerable purposes’, I think there’d be a lot less people who would actually buy into using those social media sites and putting their information on shopping websites,” he said.

“I don’t think people really realise how much data is out there to profile them and for people to be in the know about a lot of their personal life. So every time they tag in and check into a place, every click leaves that virtual footprint.”

The ethical considerations surrounding big data usage aren’t necessarily new, according to Dr McClaren.

“The use of AI and machine learning has been with us for decades,” he said. “It’s become more and more sophisticated as time has progressed. The benefits, or the downsides, have increased as computing power has increased.”

He said artificial intelligence and machine learning were always the product of human input.

“A human has to decide what the algorithm will do; that human will be a marketing manager or IT specialist,” he said.

“The interesting question is, at what point in the decision-making process, if any, does human input occur and how is this built in?”


Dr Round said she was a proponent of human-centred AI, so that machines ultimately didn’t make all the decisions.

“We have to be very careful in terms of how we monitor and make sure that decisions are fair and that we can explain them,” Dr Round said.

“Even if we are automating processes, even if we are integrating AI into our organisation, we still have to have a human-centred approach to that decision-making.”

When asked about the level of control humans should retain, Dr Mount said it was a bit of a paradox: “As soon as we have human control over an algorithm, does it take the power out of the algorithm to generate the insights that are needed?”

Dr McClaren said the benefits of big data were significant: it could identify patterns and make predictions, but he warned it must be paired with adequate regulation over the use and storage of data.

Originally published on The Australian.
