
Intel drafts model legislation to spur data privacy discussion

09 Nov 18

Intel Corporation released model legislation designed to inform policymakers and spur discussion on personal data privacy. 

Prompted by the rapid rise of new technologies like artificial intelligence (AI), Intel’s model bill is open for review and comment from privacy experts and the public on an interactive website. 

The bill’s language, together with the comments received, should provide useful insight for anyone interested in meaningful data privacy legislation.

Why privacy is important: 

Data are the lifeblood of many critical new industries, including precision medicine, automated driving, workplace safety and smart cities. But the growing amount of personal data collected, sometimes without consumers’ awareness, raises serious privacy concerns.

People need assurances that information that is shared – both knowingly and unknowingly – will be used in beneficial, responsible ways and that they will be appropriately protected. The U.S. needs a comprehensive federal law to create the framework in which companies can demonstrate responsible behaviour.

How it works: 

Intel’s model data privacy bill aims to bring together policymakers and others in a transparent and open process that helps drive the development of actual data privacy legislation. Intel has launched a website where interested parties can review and comment on the model bill. 

More context: 

Privacy is an important and ongoing issue in our data-centric world. In a white paper published last month, Intel’s Global Privacy team laid out six policy principles for safety and privacy in the age of AI, a technical domain with significant privacy implications.

These principles, summarised here, were among the factors that influenced Intel’s draft legislation:

  • New legislative and regulatory initiatives should be comprehensive, technology neutral and support the free flow of data.
  • Organisations should embrace risk-based accountability approaches, putting in place technical or organisational measures to minimise privacy risks in AI.
  • Automated decision-making should be fostered while being augmented with safeguards to protect individuals.
  • Governments should promote access to data, supporting the creation of reliable data sets available to all, fostering incentives for data sharing, and promoting cultural diversity in data sets.