Taking responsibility for cyber security in a truly virtual world
Charlotte Walker-Osborn, international head of technology sector at Eversheds Sutherland, sheds light on cyber security responsibility
Covid-19 has made virtual working a true reality in 2020. Millions of workers are engaging with each other remotely in video conferencing meetings via various online platforms, including Skype and Zoom. The surge in usage of these platforms has been dramatic: Zoom, for example, is reported to have grown from 10 million daily meeting participants in December 2019 to 300 million in April 2020. But with this success have come threats to cyber security, and a question of responsibility.
Zoom’s cyber security issues have been reported extensively in 2020, although concerns had already been raised in 2019. Whilst Zoom stated it would freeze the rollout of new features so that it could address the security issues, the PR coverage alone is a reminder of the cost of culpability for cyber risk and breach.
Who is legally responsible for cyber security?
Legal responsibility varies significantly across countries. As most readers will be aware, an increasing number of countries have privacy and/or security legislation which specifically addresses responsibility for cyber security. Through 2020 and beyond, expect to see new cyber security-specific laws and guidance, not least given the proliferation of artificial intelligence and the need for security in this area.
Often, legislation places legal responsibility on both the technology supplier and the company adopting the technology. In Europe and the UK, for example, whilst responsibility for security is embedded in security-specific laws and guidance, as well as in upcoming laws, arguably the most impactful security-focused requirements are enshrined in privacy legislation.
EU / UK data protection laws (the GDPR and local implementing legislation) place significant responsibility on both technology suppliers and the companies adopting their technology to ensure security and to deal with breaches in a compliant way. Put simply, under those laws the data controller (generally, in the above scenario, the company using the video conferencing facilities) should only use processors (here, generally the tech companies providing those facilities) which provide sufficient guarantees to implement appropriate technical and organisational measures, in such a manner that processing will meet the requirements of privacy laws and protect the rights of the data subject (here, generally the employee).
Such legislation also requires the legal contract to place security obligations on the technology supplier. Under Article 32 of the GDPR, for example, the controller and the processor must implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including (as appropriate) encryption of personal data; ensuring confidentiality and resilience; and a process for regularly testing, assessing and evaluating the effectiveness of those technical and organisational measures.
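By way of illustration only, the short Python sketch below shows one of those technical measures in its simplest form: encrypting personal data at rest with a symmetric key. The data value and key handling are hypothetical assumptions for the example; in practice keys would be drawn from a managed key store, and nothing here should be read as a statement of what Article 32 requires in a given case.

```python
# Minimal, illustrative sketch of one Article 32-style "technical measure":
# symmetric encryption of personal data at rest.
from cryptography.fernet import Fernet

# Hypothetical key handling: a real deployment would load the key from a
# managed key store, since key management is itself a security measure.
key = Fernet.generate_key()
cipher = Fernet(key)

personal_data = b"jane.doe@example.com"        # hypothetical data subject detail
token = cipher.encrypt(personal_data)          # ciphertext suitable for storage
assert cipher.decrypt(token) == personal_data  # recoverable only with the key
```

The point of the sketch is the contractual one made above: whether the supplier or the adopter operates such a measure, and who holds the keys, is exactly the kind of allocation the contract should record.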
As many readers know, non-compliance with the GDPR and local implementing legislation can lead to fines of up to 20 million Euros or 4% of global annual turnover, whichever is higher, and even imprisonment of certain staff. Responsibility and risk can be partially re-allocated in the legal contract between the parties. It is therefore paramount (currently more so than ever) to understand contractually where responsibility lies, or should lie, when adopting or rolling out IT.
Frequently, the actual security issue comes down to human error in usage: for example, the user or employee not closing the conference call once the meeting has finished; weak passwords; or the use of non-essential functionality, which is less well protected. During the pandemic, there has also been discussion of disgruntled employees becoming an increasing insider threat to cyber security.
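On the weak-password point, some of that human error can be closed off with simple automation. The hedged Python sketch below validates a meeting password against a minimal policy before it is accepted; the length threshold and character-class rules are illustrative assumptions, not requirements drawn from any of the laws discussed here.

```python
import re

def password_meets_policy(password: str, min_length: int = 12) -> bool:
    """Illustrative check: minimum length plus a spread of character classes.
    The thresholds are hypothetical, chosen for the example only."""
    if len(password) < min_length:
        return False
    required_classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(pattern, password) for pattern in required_classes)

print(password_meets_policy("zoom123"))           # False: short and predictable
print(password_meets_policy("T7!rainy-Harbour"))  # True: long, mixed classes
```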
So, whose responsibility is this? Well, that is a big question, and the answer is: it depends. In any event, more focus needs to be placed on this area, given the upsurge in the use of technology platforms during the pandemic and the pressure already placed on often small in-house security teams. How many employees actually understand that their own usage can cause a security issue, or are adequately trained on the technology being rolled out? And how much more could automation be used?
Rapid adoption means that due diligence around security is not always performed to the level required by law, exposing companies to huge fines and PR pressure in the event of a cyber breach. Responsibility for security is often not adequately addressed in contractual documentation, either with the technology supplier or with the employee. When there is pressure to sign up, the legal terms with suppliers often end up being pilot agreements, or agreements on supplier standard terms, which may not adequately address security risk, including responsibility, rights of audit, cooperation on cyber breach reporting, and security rectification. At times, this means the risk arguably lies unfairly with the adopter/employer.
Whilst suppliers should not be forced to accept unfair levels of risk and financial responsibility, the contract is the place where a fair allocation of risk should be set out. And, with employees, training and clear IT usage policies should help to instil more responsibility too. The pressure on security teams is clear; but it is, undoubtedly, time to get back to basics more than ever.