Cultural Differences

I have had the honour of working with the US Secret Service in the past, a role that involved moments of tension, good humour, a fair bit of coffee drinking, and some very intelligent conversations.

One conversation related to how the approach to Protection work differs with the cultural background of the host country. For a Presidential Visit, the USSS work with the local security teams to agree how the President will be protected – a balance between the expectations of the USSS and the local knowledge of the hosts. For example, what is ideal in the US may be problematic in the host country, and a better alternative will be suggested.

Most of this is a pragmatic conversation between experts; culturally, however, there may be fundamental differences that lead to certain responses.

  • In some countries, if a VIP is attacked they will be moved away from the threat.
  • In other countries, if the VIP is attacked they will be defended at the scene.

Culturally, running away may not be seen as acceptable, and to expect it may therefore meet with resistance. The planned response may not be followed.

The existence of these cultural differences also applies within companies, especially multinationals or companies formed by mergers, where different teams have different cultures that may, in the event of an emergency, clash with the preplanned corporate responses. In the worst cases, you can find that you are reacting not only to an attacker but also to your own side.

Running exercises to identify the issues is important, as is clearly defining expectations and roles in handling an incident.

Silver Cyber Security Commander is probably one of the greatest job titles I’ve ever had.

Bad things will happen

How you react to them, and whether you manage them or they control you, is a matter of planning – but no one likes to plan for a risk becoming an issue.

Risk Registers are built and Controls are put in place to manage the risks. A control, however, only reduces a risk rather than eliminating it completely. There is still a possibility of the events in the risk actually occurring. It is a common failing to believe that identifying a risk and associating a Control with it makes the risk disappear.
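The distinction can be made concrete with a toy calculation. This is an illustrative sketch, not a standard scoring formula: the point is simply that a control scales a risk down without ever reaching zero.

```python
def residual_risk(likelihood: float, impact: float,
                  control_effectiveness: float) -> float:
    """Risk score = likelihood x impact, with the likelihood reduced
    (never eliminated) by the control.

    likelihood and control_effectiveness are in the range 0.0-1.0;
    impact is in whatever units the Risk Register uses.
    """
    if not 0.0 <= control_effectiveness < 1.0:
        raise ValueError("a control reduces a risk; it cannot eliminate it")
    return likelihood * (1.0 - control_effectiveness) * impact

# A 90%-effective control still leaves roughly a tenth of the
# inherent risk score on the register.
inherent = 0.5 * 100_000
residual = residual_risk(0.5, 100_000, 0.9)
```

The residual figure is what the incident response plan exists to deal with.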

Planning for a Controlled Risk actually happening often feels like a worthless activity, and so there is little effort or enthusiasm in performing it. There is also a view that says “We don’t know what will happen (if we did, we’d have stopped it happening), therefore we can’t plan for it.” This is largely true, but a general structure, and roles for addressing an event, can be established.

The aim of an incident response plan is to reduce the opportunity for chaos, enabling a business to recover as quickly as possible and to reduce the losses.

What is in the plan?

  • Pre-agreed Roles and Responsibilities.
  • How the Incident Team is triggered.
  • Who owns the Incident.
  • The support they can call on: Technical and Security experts, Media Relations, Property and Transport.
  • How the Team will Communicate, both between the members of the team and with other stakeholders.
  • What records they will keep.
  • What authority they have.
  • What limitations they will have on funding and resources.
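The contents above can be sketched as a structured record, which makes gaps visible before an incident rather than during one. The field names below are illustrative, not taken from any standard.

```python
from dataclasses import dataclass

@dataclass
class IncidentResponsePlan:
    roles: dict[str, str]            # role -> named person, agreed in advance
    trigger_criteria: list[str]      # what convenes the Incident Team
    incident_owner: str              # who owns the Incident
    support_contacts: dict[str, str] # e.g. "Media Relations" -> contact
    comms_channels: list[str]        # within the team and to stakeholders
    record_keeping: str              # what records the team will keep
    authority: str                   # what the team may decide alone
    funding_limit: float             # spend limit before escalation

    def gaps(self) -> list[str]:
        """Return the fields still left empty -- the plan's weak points."""
        return [name for name, value in vars(self).items() if not value]
```

Reviewing the gaps during an exercise is one way of clearly defining expectations before the plan is needed in anger.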

Doesn’t this sound similar to a Business Continuity and Recovery Plan?

Three Common Technical Failings

A regular part of my work is to undertake technical audits of IT systems, and report on my findings.
This is different to a Penetration Test in that an audit confirms the general health of a system rather than attempting to break into it. It is similar to the difference between the local Crime Prevention Officer having a look at your house and giving advice to help you remove weaknesses, and a burglar casing the joint. There are three things that are seen in almost every audit: I could almost write part of the report in advance.

Patching is not maintained.
Implementing the vendor’s patches remains the best way of reducing the number of security issues within a system. This should be done regularly and against a defined policy.
The policy is important, as it defines the maximum window of opportunity that an organisation is willing to tolerate for its systems to be vulnerable. For systems where the protection of the information is critical, a policy of applying patches immediately on release may be appropriate, with the risk of disruption accepted. This is often known as “Patch and be Damned”.
For less critical systems, patches can be tested before a larger monthly patch deployment.
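A minimal sketch of such a policy check follows, assuming a two-tier policy with made-up window lengths; real policies will have more tiers and different numbers.

```python
from datetime import date

# Maximum days a released patch may remain unapplied, by system tier.
# These tier names and windows are illustrative, not from any standard.
POLICY_WINDOW_DAYS = {
    "critical": 0,    # "Patch and be Damned": apply immediately
    "standard": 30,   # test, then deploy in the monthly cycle
}

def patch_overdue(released: date, today: date, tier: str) -> bool:
    """True if the patch has been available longer than the policy allows."""
    return (today - released).days > POLICY_WINDOW_DAYS[tier]
```

An audit then becomes a mechanical question: which systems have patches for which `patch_overdue` is true?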
Antivirus is not up to date.
Antivirus systems, despite their limitations, remain the last line of defence against malware. They are at their most effective when they are able to deal with new attacks, and to do this they must have the latest signature files and the latest versions of the detection engine.
As with the patching of software, antivirus software (properly called malware detection software) should be kept up to date in accordance with policy.
Antivirus is not foolproof, and should not be the only form of defence, but if you’ve got it, then make sure it is effective.
It is reasonable to have one policy document that covers patching and antivirus.
Firewalls are open.
The purpose of a firewall is to limit the connections that can be made through it. Restricting which devices can talk directly to others, and which protocols or ports they can use, will hamper the spread of any attack. It must only allow connections that are required for the solution to work and no more. Any additional rule, however convenient it may be, reduces its effectiveness.
And if I see a bunch of rules followed with “default = allow all” I will comment on it.
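First-match evaluation with a default-deny stance can be sketched as follows. The rule format is illustrative rather than any particular firewall’s syntax; the point is the final line.

```python
# A rule is (source network tag, destination port, action).
Rule = tuple[str, int, str]

RULES: list[Rule] = [
    ("app-tier", 5432, "allow"),   # app servers may reach the database
    ("any", 443, "allow"),         # HTTPS to the web front end
]

def evaluate(source: str, port: int, rules: list[Rule]) -> str:
    """Return the action of the first matching rule; deny otherwise."""
    for rule_source, rule_port, action in rules:
        if rule_source in (source, "any") and rule_port == port:
            return action
    return "deny"   # the crucial default: no rule, no connection
```

Swap that last line for `return "allow"` and every rule above it becomes decoration.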

Take note and consider this a free bit of advice. Following it makes you safer (and adds variety to my work).

The President’s Phone

It appears that President Trump remains committed to his elderly Android phone. This has caused a flurry of speculation on the risks of doing so. There was a similar debate about President Obama retaining his Blackberry when he took office.

This was a subject I discussed with the US Secret Service during one of President Obama’s State Visits; sensibly, there are limits on what can be said publicly, so I will avoid going into specifics.

So, what are the Risks here?

  • Access to sensitive information on an unencrypted device? Physical access to the device could expose data at rest, or account credentials – usernames and passwords. Unlikely: the device is in the jacket pocket of the US President – there are adequate physical controls. This was viewed as adequately managed.
  • Remote access to data? This is a more significant risk. If it is a stock device running an old version of Android, there are unpatched vulnerabilities that allow an attacker to obtain information from it. At-rest storage encryption doesn’t help here. This remains a risk, and is a good reason for retiring the device – especially given that obtaining credentials for e-mail and Twitter accounts could be extremely useful to an attacker. This could happen both as a result of a targeted attack on President Trump and as part of an untargeted attack that simply sweeps up credentials from any device it can find.
  • Eavesdropping? Again, these are attacks against vulnerable, unpatched devices, and they are available to foreign intelligence services. They enable the device to become a sophisticated bug in any room. Such an attack would be of great interest to foreign governments, giving access to sensitive, non-public discussions. This would be a highly targeted attack by highly capable attackers, and a significant security threat.
  • Location tracking. A mobile phone of any nature has to talk to a network to operate, and that – as well as any compromise of a smartphone that gets it to report its location – can be used to establish where the device is at any time, and any pattern of movement. This is extremely valuable information that can support both a direct attack on an individual and the gathering of information on who they meet and their regular patterns of behaviour. Location can also leak by many other paths – data embedded in images, social media posts – and is frequently overlooked. The location tracking threat is a critical concern for the physical protection of a VIP.

So, if your Risk Assessment indicates risks that you are unable to accept as part of your business, what controls can be used?

Require a device with patched software and applications that are still under support. Your employee may like their old Galaxy S3, but is their using it a risk to your business? This is the point where BYOD meets security, and it should be addressed in the mobile working policy (“BYOD, but not any device”) – unless you are confident in securing and supporting every mobile phone made in the last fifteen years.
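A hypothetical admission check for a “BYOD, but not any device” policy could look like this; the OS names and minimum versions are placeholder assumptions, not a real support matrix.

```python
# Oldest OS version per platform still receiving security patches.
# These figures are illustrative placeholders only.
MINIMUM_SUPPORTED = {"android": (12, 0), "ios": (15, 0)}

def device_admissible(os_name: str, version: tuple[int, int]) -> bool:
    """True if the device's OS is recent enough to receive patches.

    Unknown platforms are rejected: if you cannot support it,
    it does not join the network.
    """
    minimum = MINIMUM_SUPPORTED.get(os_name.lower())
    return minimum is not None and version >= minimum
```

The beloved Galaxy S3, stuck on an ancient Android release, fails this check and stays off the corporate estate.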

The use of an always-on VPN, bringing all data traffic back to a point where it can be monitored for abnormal or unusual behaviour. Not all data can be monitored unless you wish to intercept TLS connections, and users may (quite rightly) object to that on devices they also conduct personal business on. Monitoring, and acting on, unusual connections and activities is normally sufficient.

Periodic audit of the device to identify any unusual or unauthorised software or apps. Either using software continually running on the device, or by periodically inspecting the device with tools.

Controls over where devices are not allowed. If there are particularly sensitive areas in a business, or sensitive conversations, then ban devices. Provide small lockers to allow users to store them outside of the area.

Advice, guidance and training for users. And record when it has been delivered.

Remote disabling or wiping of the device.

Remote application execution: VDI for mobiles, or deliver all information through TLS browser sessions without requiring apps and data storage on the device.

The only control normally beyond the reach of a business is to use a private, closed, network with technologies to prevent RF or network based user tracking. But you would only use that in the most critical of cases…

Fake News

“Fake News”, misinformation, has been in the News recently.

2016 was the year when deliberately misleading “news” stories, used to manipulate public opinion and to directly influence events, really came to the fore. So, what has this to do with businesses, and why, as an Information Security Specialist am I interested in it?

It can be a significant threat to a business – either directly, where fake news has been created to damage a company or influence its share price, or as a result of a different attack.

This relates to the circulation of “incorrect information” that is a threat to the business – analogous to the use of incorrect information within the business. The key difference is that the incorrect information is not held and managed by the business.

So, how do you mitigate the risk?

  • Be able to respond quickly to fake news:
    Have a Response Plan, know in advance who will be involved in the response, and how you will co-ordinate and manage this.
  • A credible and engaging reply to the story.
    This will depend on the nature of the threat facing you.
  • You also want to ensure that any insurance you have covers such events to mitigate any significant financial loss or damage to the business.

A major problem in addressing such a fake news story is that responding and not responding may both cause an escalation by the attacker:

  • Well, it must be true because they obviously can’t correct it or deny it. Or,
  • Well, they would deny it, wouldn’t they.

This means that you will need to have expertise available to support you in minimising the damage.

Cyber Insurance

There are three ways of managing IT security risks in a Risk Treatment Plan.

  • Accept the Risk – a positive decision to accept a risk to the business as being something you are comfortable with.
  • Mitigate the Risk – put a control in place to reduce the risk.
  • Transfer the Risk – move the cost of the event happening to someone else.

Transferring the Risk is commonly done by Insurance, although there are other methods of Risk Transfer.
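As an illustration (not an actuarial model), the three treatments can be compared by expected annual cost; all of the figures would be assumptions drawn from your own risk assessment.

```python
def cheapest_treatment(expected_loss: float, control_cost: float,
                       reduced_loss: float, premium: float) -> str:
    """Return the risk treatment with the lowest expected annual cost.

    expected_loss: annualised loss if the risk is simply accepted
    control_cost:  annual cost of the mitigating control
    reduced_loss:  annualised residual loss with the control in place
    premium:       annual cost of transferring the risk to an insurer
    """
    costs = {
        "accept": expected_loss,                  # live with the risk
        "mitigate": control_cost + reduced_loss,  # control plus residual
        "transfer": premium,                      # insurer carries the loss
    }
    return min(costs, key=costs.get)
```

In practice the answer is usually a blend, and, as the rest of this section explains, the “transfer” figure only holds if the policy actually pays out.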

Cyber Liability Insurance Cover (CLIC) is often overlooked as an option to help a company survive a critical loss of data or a major security incident. The market and take-up of such insurance is variable. In the US, some form of CLIC is often a regulatory requirement, so take-up is high. However, in the UK, where there is no requirement for a business to be able to survive such an event, take-up is very low (approximately 1% of UK companies have some form of CLIC).

While some form of Insurance is invaluable to aid a business during a disaster – be it flooding, the loss of a critical member of staff, or a massive, business-crippling data loss – the devil is, as always, in the detail.

I have worked with several large organisations in reviewing their compliance with the expectations of their insurers, and have drawn two key lessons:

  • The Cyber Insurance market is relatively recent and has little historical record to generate risk profiles against; additionally, the market is relatively small at the moment, giving insurers a low spread to work against. This has a direct impact on Premiums.
  • Cyber Insurance Policies place obligations on the policy holder to have a good level of security management and protections in place. In many cases, this obligation is not complied with – leading to the possibility that the insurer will not pay out.

Insurance services are adapting to this with schemes to reduce Insurance Premiums based on the results of security audits – both to inform the Insurer of the risks they are running and to give the policy holder assurance of the validity of the policy.

Appropriate Risk

What does it take to make a Ship secure?

  • It depends on the type of cargo it will be carrying and the areas it is going to be sailing in.
  • It depends on the level of risk that the owners, and the insurers, are willing to accept.

The armour used on a battleship may not be appropriate for a ship carrying grain in coastal waters. Rounding Cape Horn requires a higher freeboard than going upstream on the Thames.

Understand what the ship is for and what threats it faces. Then you can make it secure.

  • Part will be the design and the equipment fitted.
  • Part will be teaching people how to use the equipment.
  • Part will be checking that all is being operated as planned.

So:

Ask of the IT department if the Security Controls being applied are appropriate to the risks the Business faces:

  • Or are they just armour plating the rowing boat because “Armour Plate is best practice”?
  • Are they just adding technology because they find it interesting? And are they actually training people to keep the business secure?
  • And how do they know it is all working as intended? And what did they intend to achieve?

Use of Threat Intelligence

I have just read Max Hastings’s book, The Secret War: Spies, Codes and Guerrillas 1939–1945. It contains a key message on the use of Intelligence: it often isn’t used.

If you don’t use the information, then the cost of obtaining it is wasted.

When protecting an Organisation, there are two sources of information about what the “opponent” is up to:

  • Threat Analysis, Threat Landscaping, and similar studies that identify what may happen to you.
  • Security Event Logs that identify what is happening and has happened to you.
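The point about actually using the information can be sketched simply: indicators from a threat report are only worth their cost once they are checked against your own event logs. The indicator addresses below are documentation-range placeholders, not real intelligence.

```python
# Indicators of compromise taken from a (hypothetical) threat report.
KNOWN_BAD = {"198.51.100.23", "203.0.113.9"}

def actionable_hits(event_log: list[dict]) -> list[dict]:
    """Return log events whose source appears in the threat indicators.

    The intelligence only earns its keep if this list is reviewed
    and acted upon -- the lesson of this section.
    """
    return [event for event in event_log
            if event.get("src_ip") in KNOWN_BAD]
```

A SIEM does this at scale, but the principle is the same: no review, no value.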

What Max Hastings, and others, identified is that during WWII information was often not believed because it conflicted with an existing mindset, or because there was no ability to act on it.

For example,

  • The Japanese Intelligence services identified in 1942 that the key threat was from the USA; however, the Japanese High Command were obsessed with the threat from Russia until 1944 and ignored the warnings.
  • The Germans had good information on Allied intentions in late 1944 and 1945, but did not have the capacity to exploit that knowledge.

An Organisation looking to deploy information-gathering tools – commissioning a Threat Landscape report, or purchasing a Security Information and Event Management (SIEM) system – should consider what it is going to do with the information and how it can best benefit from the expenditure on it. It may be culturally challenging to do so.

Availability First

Availability means that Correct Data is available when required.
Correct Data requires Integrity of the data to confirm that only Authorised people have changed it.
Authorisation requires Authenticated individuals.
Authentication requires a Shared Confidential secret.

While Compliance with regulatory standards is a cost to the business, the Availability of information will often be of critical importance to a business.

Availability can be used to drive any Information Assurance activities.

Big Data and AI

I have the pleasure of working on the Advisory Group for CSP2017, the 2017 Cyber Security Practitioners Conference – the annual York Cyber Security Conference. As such, I get the chance to talk with Colin Williamson, the driving force behind the conference, on a regular basis. We share the view that, as Professionals, we should not be worrying about what is happening today, but about what is likely to impact businesses over the next few years. Forewarned is Forearmed.

One recent conversation related to Big Data and Artificial Intelligence.

If you have a self-learning system being used with a large data set to make automatic decisions as part of a business process, then who is liable for the judgements that this AI makes? For example, if it acts in an illegal discriminatory way towards groups of individuals.

  • Is it the developer of the AI system?
  • Is it those that set the Learning System its objectives?
  • Is it the owner of the Big Data set that it has been taught with?
  • Is it the company blindly acting on the AI’s conclusions? (“The AI’s Employer.”)
  • Is it a particular individual that the AI is acting on behalf of? (“The AI’s line manager.”)

Clarifying upfront the responsibility for Information and the ownership of the business process that uses it, and understanding the risks of using that Information in this way, will save a lot of difficulty, and the cost of legal fees and possible penalties, later on.

Of course, such a system starts to produce challenges for “traditional” IT security methods and controls:

  • Inadvertent data leakage – can repeated questioning of an AI give an insight into the Data it was trained on?
  • How do you train and test an AI in a test environment without using real data sets? Is synthetic data good enough for the potentially subtle relationships the AI will see?
  • How do you define a test for an AI system to demonstrate correct processing?
  • How do you validate that the AI is continuing to function correctly and appropriately?
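As one hypothetical ongoing check on the last point, the AI’s positive-decision rate can be compared across groups. The 0.8 threshold borrows the “four-fifths” rule of thumb from employment-selection guidance, used here purely as an illustrative trigger for human review.

```python
def disparate_impact(decisions: dict[str, list[bool]]) -> float:
    """Ratio of the lowest group approval rate to the highest.

    decisions maps a group label to that group's individual
    outcomes (True = positive decision). A ratio near 1.0 means
    the groups are treated similarly.
    """
    rates = [sum(group) / len(group) for group in decisions.values()]
    return min(rates) / max(rates)

def needs_review(decisions: dict[str, list[bool]]) -> bool:
    """Flag for human review when the ratio falls below four-fifths."""
    return disparate_impact(decisions) < 0.8
```

Such a check does not prove the AI is fair; it only tells you when a human should look at what it has learned.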

At the moment, standards for Information Management for Artificial Intelligences are rather thin on the ground(!).

The conversation with Colin was fascinating because it dealt with the AI systems themselves having their own legal existence and rights – something that lawyers are beginning to discuss seriously.