In data we trust - or do we? The human-machine relationship examined
As a society, we have each developed a significant yet quiet relationship over the past decade, many of us without even realising it. It's a relationship that progresses and evolves every time you log on to your computer, turn on your smartphone, or make a purchase online. It's your relationship with data, and it's a complex one.
For years, this relationship has sat in the background as companies collected, stored, and used our data without much challenge from us. But that has changed. As the use of data becomes increasingly apparent in our digital world (for instance, from the ads and content we are served), we have naturally become more attuned to and interested in how it is used, and how we want our relationship with it to develop moving forward.
There is one big barrier to that development, however, and that is trust. It pervades every conversation around data, particularly as artificial intelligence (AI) and machines play an ever-expanding role in our lives. The question on everyone's lips: with the advancing role of AI, is our relationship with data poised to change in relation to machines?
To help answer that question, Professor of Emergent Technologies Dr Sally Eaves shared her perspective in Qlik's new Active Intelligence magazine.
Unsurprisingly, her answer was: it's complicated. "It's 'Yes' with respect to the human-machine interface evolving from information system to automation to autonomous agent (to varying degrees). In other words, a move from master-servant to teammates or partners bringing together complementary strengths. But it is 'No' with respect to the question of intent. One could argue that, in its current state, AI is not close to having its own intentions or mental states."
Despite the duality of the answer, Eaves provides clarity around AI "trustworthiness" in the shape of three domains – "the technology, the system it is in, and the people behind/interacting with it" – along with five key pillars within these domains: "the capacity for AI development and decision-making to be human-led, trainable, transparent, explainable and reversible".
Two pillars in particular stand out. The first is transparency: trust comes from transparency and consistency. Good governance, good lineage and good data underpin Active Intelligence, the state of continuous intelligence from real-time, up-to-date information designed to trigger immediate actions. Ultimately, you can't trust the output if you don't understand the input.
The second is the human-led element, on which Head of Qlik Research Elif Tutuk is unwavering. Tutuk agrees with Eaves that there is a need to bring a more human element into the loop, stating: "we need to get human trust into analytics and data and provide good collaboration between data producer and consumer to ultimately unlock the future of data and data analytics."
Looking ahead, the people component will become core: enhancing collaboration is the next critical step in changing how we think about data, and in building trust in data analytics.
Like every relationship, our relationship with data is a constantly changing one, and should be built on trust. By enhancing collaboration, we enhance trust in data, enabling organisations to ensure the relationship is fruitful and benefits all parties equally.
Click here to read Dr Sally Eaves' article on trust in data in full.