Intel drafts model legislation to spur data privacy discussion
Fri, 9th Nov 2018

Intel Corporation released model legislation designed to inform policymakers and spur discussion on personal data privacy.

Prompted by the rapid rise of new technologies like artificial intelligence (AI), Intel's model bill is open for review and comment from privacy experts and the public on an interactive website.

The bill's language and comments received should provide useful insight for those interested in meaningful data privacy legislation.

Why privacy is important: 

Data is the lifeblood of many critical new industries, including precision medicine, automated driving, workplace safety and smart cities. But the growing amount of personal data collected, sometimes without consumers' awareness, raises serious privacy concerns.

People need assurances that information that is shared – both knowingly and unknowingly – will be used in beneficial, responsible ways and that they will be appropriately protected. The U.S. needs a comprehensive federal law to create the framework in which companies can demonstrate responsible behaviour.

How it works: 

Intel's model data privacy bill aims to bring together policymakers and others in a transparent and open process that helps drive the development of actual data privacy legislation. Intel has launched a website where interested parties can review and comment on the model bill.

More context: 

Privacy is an important and ongoing issue in our data-centric world. In a white paper published last month, Intel's Global Privacy team laid out six policy principles for safety and privacy in the age of AI, a technical domain with significant privacy implications.

These principles, summarised here, were among the factors that influenced Intel's draft legislation:

  • New legislative and regulatory initiatives should be comprehensive, technology neutral and support the free flow of data.
  • Organisations should embrace risk-based accountability approaches, putting in place technical or organisational measures to minimise privacy risks in AI.
  • Automated decision-making should be fostered, with safeguards in place to protect individuals.
  • Governments should promote access to data, supporting the creation of reliable data sets available to all, fostering incentives for data sharing, and promoting cultural diversity in data sets.