Media & technology

Employee Data Security – A Potential Pitfall
6 December, 2017

Morrisons has hit the headlines recently after a judge ruled that it was vicariously liable for the deliberate leaking of employee data by one of its own employees.

The facts

In 2013, an internal auditor employed by Morrisons, Andrew Skelton, secretly copied a master payroll file and leaked parts of it online and to the press. Skelton was arrested and sentenced to eight years in prison, but the current case relates to a claim against Morrisons by a group of over 5,000 of its employees at the time who were affected by the leak. Even though Morrisons was held to be innocent in respect of the misuse itself (and so not directly liable for the breach), it was liable for Skelton's actions. This hearing was concerned only with liability; the actual damages which Morrisons may have to pay (it has indicated it will appeal the decision) will be determined later.

The case raises a few key points for businesses when considering personal data.

Employee data is also a risk

Many businesses fall into the trap of thinking that, because they don't hold much "consumer data", they have a very low risk profile in terms of personal data. Data leaks like those suffered by Experian and TalkTalk are a clear danger for businesses, but this case reinforces that employee data is still personal data, and still presents a risk if it is misused.

Don’t just worry about hackers

While high profile data leaks will often be linked with external hackers, this isn't the only risk. Here, an employee was responsible for the breach, and all the firewalls and external-facing measures in the world would not have prevented it. A significant part of the judge's decision was dedicated to examining whether Morrisons had in fact taken "appropriate technical and organisational measures" against misuse of the data. Although in this case the judge found that it had done so (and so was not directly liable), businesses need to consider what measures may be needed to prevent a malicious employee from misusing data.

Not every leak is a breach

The decision that Morrisons was not directly liable for a breach of data protection law demonstrates that a data leak does not always mean a business is in breach of data protection law. In this case, the judge (broadly) found that Morrisons had in place all the measures it ought to have had which could have prevented the leak. Morrisons' liability is based only on the fact that it employed Skelton, not that it should have stopped him.

The lawsuits are coming

While this is the first High Court ruling on a data leak group litigation, it’s likely that, with the new obligations and rights under the GDPR, these will become more and more frequent.

What to do now?

As part of any data protection compliance project, businesses should be evaluating their current security measures around data, including employee data. Businesses should evaluate the risk involved, and decide whether to strengthen those measures, as even if the level of security is compliant with the law, a business may still be liable for the leaks of its employees.


For more information on data protection, please contact Elliot Fry on +44 (0)1732 224 034

For updates from us and the latest Tech news follow us on Twitter @CrippsTechLaw

Consequences of breaching the GDPR
30 November, 2017


As the General Data Protection Regulation (GDPR) will affect most businesses, it is prudent to be aware of the consequences if you find yourself in breach of its provisions.

Who polices the GDPR?

The Information Commissioner’s Office is the supervisory authority in the UK responsible for overseeing and enforcing compliance with the GDPR. The ICO website contains extremely helpful guidance on compliance with data protection law.

Duty to notify of a breach

If a personal data breach results in a likely risk to a data subject’s rights and freedoms, data controllers must notify the ICO of the breach “without undue delay and, where feasible, not later than 72 hours after having become aware of it”. When there is a high risk to a data subject, prescribed information must be communicated to the subject as well.
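The 72-hour window can be illustrated with a short sketch (our own illustration, not legal advice; the helper name is hypothetical, and the "where feasible" qualification means the real deadline is a matter of judgment, not arithmetic):

```python
from datetime import datetime, timedelta, timezone

# Art. 33 GDPR: notify "without undue delay and, where feasible,
# not later than 72 hours after having become aware" of the breach.
NOTIFY_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware: datetime) -> datetime:
    """Latest point at which the ICO should normally be notified."""
    return became_aware + NOTIFY_WINDOW

# Example: a breach discovered at 09:00 UTC on 25 May 2018
aware = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware).isoformat())  # 2018-05-28T09:00:00+00:00
```

Note that the clock runs from the moment of *awareness* of the breach, not from the breach itself.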

The ICO must be told:

  • the nature of the breach;
  • the name and contact details of your Data Protection Officer (if applicable);
  • the likely consequences of the breach; and
  • measures taken or proposed to be taken to address and mitigate the breach.

Powers of the ICO

The ICO will have investigatory and corrective powers under the GDPR. Corrective powers include, amongst other things: issuing warnings; ordering compliance with a data subject’s requests to exercise their data protection rights; ordering compliance with the GDPR; and, ordering restrictions on data processing activities.

Whilst the ICO also has the power to impose fines, in instances of relatively minor breaches the exercise of the corrective powers above may be sufficient to deal with a data breach. Failure to provide notification of a breach, however, is one of the aggravating factors for imposing a fine.

Fines under the GDPR

Organisations may be fined up to the higher of €20,000,000 or 4% of total worldwide annual turnover for the worst kinds of breaches. However, there will be a number of factors to which the ICO must give “due regard” when deciding the imposition and level of a fine:

  • Nature, gravity, and duration of the breach;
  • Damage caused;
  • Intention or negligence;
  • Mitigation by the data controller;
  • Appropriateness of existing safeguards;
  • Relevant previous breaches, corrective action ordered, and compliance with any orders;
  • Degree of cooperation with the ICO;
  • How the ICO found out, including whether (and to what extent) the organisation gave notification; and
  • Any other aggravating or mitigating factors.
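The cap itself is a simple "higher of" calculation, sketched below (the figures are from the GDPR; the function name is our own, and the actual fine imposed depends on the factors listed above):

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Cap on fines for the most serious GDPR breaches: the higher of
    EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# For a business turning over EUR 1bn, the 4% limb governs:
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# For a smaller business, the EUR 20m floor applies:
print(max_gdpr_fine(100_000_000))   # 20000000.0
```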


When breaches of the GDPR inevitably occur, properly reporting the breach to and working with the ICO will always be the best option.


For more information about the GDPR and how we can help you prepare for its implementation on 25 May 2018, visit our comprehensive GDPR hub. Alternatively, contact George Fahey on 01732 224 059

Is it all Uber for tech giant?
29 November, 2017

Uber’s travails show the importance of correctly categorising your staff and of good data protection management.

Worker Status

The success of Uber’s business model rests on the immediacy with which customers can be picked up by a driver.

To do this, at present, Uber’s drivers log into the company’s app and wait for jobs to become available.

In short, to function as it does Uber requires a certain number of drivers to be available in each territory at all times to satisfy demand – herein lies the problem for Uber.

The Employment Appeal Tribunal recently upheld the Employment Tribunal’s decision that any Uber driver who was available/waiting and willing to accept a job, within their operating territory, was on the clock and working for Uber as a ‘worker’.

This means that Uber drivers are entitled to National Minimum Wage for the hours that they work, along with other worker rights.

Somewhat unsurprisingly, Uber has chosen to apply directly to the Supreme Court (rather than to the Court of Appeal) to appeal the decision.

This is a big moment not only for Uber but for the rest of the gig economy.

If the Supreme Court were to uphold the decisions of the EAT and the Employment Tribunal the impact could be devastating for companies with similar business models to Uber.

Uber argues that in response to such a decision it would have to have drivers on scheduled shifts rather than simply allowing drivers to work when they wish to.

The impact of such an approach would be interesting.

To stay ahead of the competition, Uber needs to have drivers readily available, meaning it may need to recruit additional drivers.

However, if costs are an issue, Uber may need to offset the increase in drivers by reducing the hours drivers can work; additionally, the loss of flexibility may turn some drivers away.  

Data Protection

If worker status issues weren’t enough, Uber has also found itself in the news due to a data protection breach.

It has been reported that hackers compromised the personal data of more than 57 million of Uber’s users and drivers worldwide, and Uber’s response is a classic example of how not to deal with a data protection breach.

Uber is said to have discovered the hack back in December 2016 but, rather than reporting it to the authorities, it paid the hackers $100,000 to delete the data.

In a word of warning, the ICO’s statement regarding Uber’s data protection breach stated: “Deliberately concealing breaches from regulators and citizens could attract higher fines for companies.”

With the GDPR coming into force on 25 May 2018, visit the Cripps GDPR Hub and our guidance on data protection in a post-Brexit world for more information.

If you require any employment law advice, please feel free to contact Chris Hovenden on 01732 224 166

Twitter – the advertising veil is lifted
15 November, 2017

Twitter has recently announced the launch of a “Transparency Centre”, intended to give Twitter’s users greater visibility of who is advertising on the platform. The Transparency Centre will specifically show:

  1. All ads that are currently running on Twitter, including Promoted-Only ads (ads that are only shown to users targeted in the particular advertising campaign);
  2. How long ads have been running;
  3. Ad creative associated with those campaigns; and
  4. Ads targeted to you, as well as personalised information on which ads you are eligible to receive based on targeting.

In addition, users will be able to give feedback on any advert running on Twitter, whether they are being targeted with the ads or not. It will be interesting to see if Twitter is able to harness this potentially valuable data in respect of the appeal of particular adverts to users.

The introduction of the Transparency Centre appears to be an attempt by Twitter to head off regulation following allegations that unidentified political adverts on the platform were used to interfere with and influence the US election. New US legislation has been introduced which would impose similar disclosure requirements on all platforms with at least 50m monthly viewers.

The announcement singles out election ads and states there will be a special section for electioneering ads. The information users will be able to access will be particularly revealing and includes:

  1. Disclosure on total campaign ad spend by the advertiser;
  2. Transparency about the identity of the organisation funding the campaign;
  3. Targeting demographics, such as age, gender and geography; and
  4. Historical data about all electioneering ad spending by advertiser.

Along with similar sponsorship and advertising announcements by Facebook and Instagram, social media platforms are becoming more aware of their responsibilities and the need to be transparent with users. Watch this space to see whether the changes have any perceptible effect on advertising and elections in the future.


For more information on advertising and Twitter, please contact Harry Partridge on +44 (0)1732 224 092

For updates from us and the latest Tech news follow us on Twitter @CrippsTechLaw

Artificial intelligence and the legal sector – what next?
8 November, 2017

Artificial intelligence continues to make headlines, this time for its application to the legal sector. With wider integration of artificial intelligence and law envisaged, lawyers and their regulatory bodies must grapple with some difficult questions.

Man vs machine

Recently, legal technology firm CaseCrunch invited lawyers from leading London firms to pit their wits against its CaseCruncher Alpha software. The software is a ‘legal decision predictor’ designed to resolve specific problems. The challenge involved each side predicting the outcomes of real, previously-decided payment protection insurance (PPI) mis-selling claims. The result was a clear victory for artificial intelligence: 86.2% of CaseCruncher Alpha’s predictions were accurate, compared with only 62.3% of the lawyers’ predictions.

These figures suggest that an increasing role for artificial intelligence within the legal services sector would be beneficial. After all, accurate decision-making saves time and money.

A closer look

Despite technological advances, the advent of artificial intelligence in law should not cause too much alarm for lawyers at present.

Firstly, by CaseCrunch’s own admission its software requires precisely defined questions with binary responses. Its victory in this prediction challenge also came against lawyers with little experience of dealing with PPI claims in practice. The software was provided with a database of information to use. In contrast, its human adversaries had to research their materials from scratch in a limited timeframe.

The issues highlighted above suggest that the way forward is for lawyers to work alongside legal artificial intelligence resources. Humans can still conduct the fact finding exercise and fulfil a supervisory role whilst artificial intelligence carries out intensive data-heavy processes. This way, clients retain an empathetic point of contact who continues to provide a personal service.

The regulatory conundrum

Having established that ‘the machines’ are not imminently taking over, we have to consider how the artificial intelligence-lawyer relationship will work. The legal sector is a restricted one, with only regulated individuals permitted to provide certain reserved activities.

The Solicitors Regulation Authority (SRA) maintains a Handbook containing its Code of Conduct, but this does not currently reference artificial intelligence. This leaves several important questions unanswered. For example, should clients have a right to know of the role artificial intelligence played in the advice given to them? In litigation, should the use of artificial intelligence be revealed to opponents? Artificial intelligence predictions of case outcomes could have tactical implications in deciding whether to settle or proceed to trial.

Additionally, the SRA’s handbook uses non-binding ‘indicative behaviours’ to guide solicitors. This introduces a level of subjectivity which goes beyond the binary capabilities of software such as CaseCrunch Alpha.

What next?

The forward march of artificial intelligence is inevitable but as lawyers and clients, we have it in our grasp to determine the direction of progress in the legal sector. Until the relationship between law and artificial intelligence is more clearly defined, we must keep regulatory concerns at the forefront of our minds.


For more information on artificial intelligence, please contact Phil Bilney on +44 (0)1732 224 046.

For updates on the latest Tech news, follow us on Twitter @CrippsTechLaw
