There is a reason to protect information
This article in the Guardian caught my eye.
Two months after a visit by Chinese officials, a company in Scotland with an innovative Wave Power design was burgled and several laptops were stolen. The burglars went straight to the company's offices on the second floor of the building, bypassing the companies on the lower floors.
A couple of years later, a Chinese company with close ties to the Chinese government started making very similar Wave Power devices.
The Scottish company is now out of business.
So, assuming that there was nefarious activity here – and that is not proven, just a series of odd coincidences – what can we learn?
Information Security is not just about protecting personal or financial information. It also matters commercially: the designs, software and business model that a company holds are the heart of what the company is actually worth.
There is a cost to the loss of that information, so it is not unreasonable to spend an appropriate amount of money securing it. How much depends on the capability of any threat, and on the risk appetite of the company.
Could the information have realistically been protected against theft on a laptop? Probably, though we do not know what precautions this company had in place. A good hard drive encryption system, tied to the TPM found in modern laptops, would have defeated most attackers. Add good physical security, alarms and CCTV; lock the laptops away when the office is unoccupied; and follow general good security hygiene.
Commercial espionage does happen.
Evolution of Cyber Attacks
There is a common view that malware is something that allows an attacker access to your computer so they can steal your data. Remote Access Trojans, delivered via a contaminated attachment, are a typical example.
The attacker then profits by selling your data to others who exploit it. However, this involves trusting a larger number of people and increases the risks to the attacker of being caught. It also involves a lot of additional work, blending details from many attacks to hide where the data was taken from, and who took it.
Ultimately, criminal activity is driven by a desire to make money. And to survive to be able to enjoy your gains.
There has been a well-publicised rise in ransomware, where the malware encrypts files or disables a system and the attacker demands money to release them. This is a result of attackers wishing to remove the risk from monetising a successful compromise of a system. They are reducing the number of people needed to realise the profit, and exploiting the anonymity of Bitcoin to remain hidden.
The consequences of the two types of malware on a business that is unprepared for them are different: the former is an attack against confidentiality, while ransomware attacks availability.
Both are Security Incidents, and both are in part mitigated by anti-malware systems, but the method of surviving the attack is different.
Confidentiality losses can be reduced by the appropriate use of encryption, ensuring that if data is compromised there is another layer of defence in place.
Availability losses can be mitigated by a suitable business continuity plan, ensuring the business can still operate in the absence of whatever technology is being held to ransom. It just has to last long enough for affected items to be repaired or replaced, and data recovered from backups. And keep the backups offline: finding they have also been encrypted will not improve matters.
The likelihood of both can also be reduced by user education: Be aware of Phishing Attacks, report odd events. The impact is reduced by Incident Response planning.
At the moment, ransomware is commonly attacking traditional IT systems and more recently mobile phones and other devices. In the future ransomware will be deployed against smart connected things.
Pay One Bitcoin to get your Roomba back out from under the sofa.
Internet of Things: Confidence not Confidentiality
The Network of Autonomous Devices is forming; small things talking to each other, making decisions based on their exchanged information about how to manage the world around us.
Attacks are now being seen against these networks, both by researchers and by those with malice aforethought. In addition to using the devices for traditional computer-based activities, such as launching Denial of Service attacks, many of these attacks have had an end objective: to take control of machinery.
Much has been said about security within cars, where attacks are performed by, for example, presenting fake throttle data to the engine management unit, or pretending to be the vehicle’s wheel rotation sensor to get the ABS controller to release the brakes – because if it believes that the wheels are skidding it will do what it is designed to do.
An attack against a building can be imagined where wireless temperature sensors are blocked and spoofed to misinform the HVAC system, which in turn renders the building unreasonably hot or cold, making it unusable; or overheats a datacentre, shutting it down. Either way, a disruption and a cost to the business.
The opportunities for spoofing information to create a change are endless.
The Internet of Things requires there to be confidence in the information being used.
- Are you confident that the device you are getting the information from is actually what it claims to be? Is it really the front left wheel rotation sensor on this car, or is it something else pretending to be it?
- Are you confident that the information it is sending has not been tampered with? Is the temperature you receive really the temperature that sensor sent?
- What do you do if you mistrust the device? What assumptions do you make? How do you re-establish trust with that device? How do you report it, and will whoever is informed react to it correctly?
Yes, Confidentiality is important: the data you are sending may be personally identifiable. However, the Integrity of the data, the Confidence you can have in it, is crucial.
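That integrity question can be addressed by having each device authenticate its readings. The sketch below uses a shared-key HMAC; the key handling, message format and field names are illustrative assumptions, not taken from any particular product. The receiver can check both that a reading came from a holder of the key and that it was not altered in transit:

```python
import hashlib
import hmac
import os

# Illustrative shared key; a real deployment would provision a unique
# key per device so that identity can also be established.
SENSOR_KEY = os.urandom(32)

def sign_reading(key, reading):
    """Sensor side: compute an HMAC tag to send alongside the reading."""
    return hmac.new(key, reading, hashlib.sha256).digest()

def verify_reading(key, reading, tag):
    """Receiver side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, reading, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

reading = b"temp_c=21.5;sensor=front-left-wheel"
tag = sign_reading(SENSOR_KEY, reading)

assert verify_reading(SENSOR_KEY, reading, tag)             # genuine reading accepted
assert not verify_reading(SENSOR_KEY, b"temp_c=99.9", tag)  # tampered reading rejected
```

A shared key alone does not fully answer the identity question – if the key leaks, anything can impersonate the sensor – which is why per-device keys or certificate-based schemes are used where spoofing is a realistic threat.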
Financial Services Information loss report
Bitglass have released their latest report into Information Breaches. It addresses the current ways in which information is being compromised. These reports are useful as they provide input into developing both the risk models for companies, and in selecting appropriate security controls to manage those risks.
Some of the results are unsurprising:
- The trend of an increasing amount of data being subject to an unauthorised release is continuing.
- Most organisations have had an incident.
- Many organisations have had multiple incidents, often repeats of the same problem.
- Attackers aim for where the money is.
The interesting part is in how information is released.
Since 2006:
- A third of incidents were directly a result of human action, evenly split between accidental and malicious action.
- A quarter of incidents related to lost or stolen devices; laptops, company phones, USB sticks, private phones and so forth.
- A fifth of incidents were caused by external attacks against the IT systems; this includes phishing attacks where the initial compromise is inadvertently aided by someone inside the company.
- The rest was a mix of mislaid paperwork, payment card fraud and a worryingly large amount of “we don’t know what happened”.
In the US, where the study was done, the average cost of a lost record (one person’s details) was $260. This is about 20% higher than the typical non-financial cost/record impact. One key reason for this is the increasing impact of regulatory fines – PCI-DSS penalties alone can reach half-a-million dollars per incident.
So:
- Information Security is a People issue, not solely an IT issue. Appropriate and relevant awareness among individuals handling the information is critical.
- Methods should be in place to ensure that data on devices that can be lost or stolen is adequately protected.
Risk Registers
Something like this always appears on a Security Risk Register from the IT department:
“High Risk: The USB ports on the Servers are not locked down.”
And the security team in the IT Department sit and shake their heads and wonder why The Business doesn’t seem to understand how important IT security is. They’ve said it there: High Risk. Probably in an Excel Spreadsheet cell with a bright red background.
This isn’t a Risk to the Business. Something like “A journalist could obtain our list of sensitive clients, which would be extremely embarrassing and lead to loss of our client base and thus income” is a Business Risk. There is an understandable Threat (a journalist), a target (the List of Sensitive Clients), and an outcome of that threat succeeding that they can understand (loss of clients leading to a loss of business). You can also judge how keen and capable that threat source is (can journalists hack? Can they persuade someone to give them information? How interested are they likely to be?), and that gives you a measure of how probable it is that the Risk will be realised and become an issue.
Now you have a realistic risk the Business can understand, and importantly might want to do something about. Then you can look at what you could do to Control that Risk. In this case, that could be such things as “Check employee backgrounds when we recruit them to jobs with access to the List of Sensitive Clients”, “Educate users not to plug strange USB devices into computers”, maybe even: “Lock the USB ports”.
Then “the USB ports aren’t locked” becomes a Risk Control that is not currently effective, meaning that the Business Risk of “A journalist obtaining our Sensitive Client List” isn’t being controlled to the full expectation of the business. There are still other good controls, such as employees knowing not to plug stuff in, but there is a chink in the armour. Now you can tell a story that the business will understand, and they might well want to act on it. That one Control may actually mitigate a whole range of Business Risks, in which case its not being effective would be a larger concern.
Of course, you could just fill the USB Ports with a two part epoxy…
Updates to Password Standards
Password best practice is changing. Two of the important governmental standards bodies, CESG in the UK and NIST in the US, have issued, or are about to issue, new guidance.
The CESG Guidance is now published, and makes a series of “tips” for managing and designing password systems.
The NIST Guidance is in draft form and is more prescriptive in its requirements. The SHALLs and MAYs are here, but the context and reasoning are important and worth understanding.
Both revisions are focused on making Passwords usable for people and have the following characteristics:
- Length is more important than convolution. The days of “must include numbers, letters, mixed cases, and four symbols from the DaVinci Code” have gone. However, all characters should be permitted within passwords, and longer passwords (up to 64 characters in the case of NIST) should be possible.
- Passwords proposed by the user should be checked against a Blacklist and matches rejected. There is no recommended blacklist, however using available password dictionaries may be a good start.
- Allow the use of password management tools: allow users to paste passwords into the entry fields.
- No default passwords. No sharing of passwords between users.
- Limit the rate at which users can attempt to log in, rather than the number of attempts. This reduces the possibility of brute-forcing an account while also reducing the number of account lockouts requiring a reset – a form of Denial of Service. Combine this with threat detection based on seeing multiple failed login attempts by a user, reporting it and acting on it.
- Store passwords properly. You never store passwords themselves, only a suitably salted, stretched and hashed value.
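The rate-limiting tip can be sketched as a per-account back-off rather than a hard lockout. In this sketch the two-second base interval and the doubling schedule are my own illustrative choices, not figures from the CESG or NIST guidance:

```python
import time

class LoginRateLimiter:
    """Throttle login attempts per account instead of locking the account."""

    def __init__(self, base_interval=2.0):
        self.base_interval = base_interval
        self.last_attempt = {}   # username -> time of last permitted attempt
        self.failures = {}       # username -> consecutive failure count

    def allow_attempt(self, username, now=None):
        """Return True if this attempt may proceed, False if throttled."""
        now = time.monotonic() if now is None else now
        # Each consecutive failure doubles the required wait.
        wait = self.base_interval * (2 ** self.failures.get(username, 0))
        last = self.last_attempt.get(username)
        if last is not None and now - last < wait:
            return False
        self.last_attempt[username] = now
        return True

    def record_failure(self, username):
        self.failures[username] = self.failures.get(username, 0) + 1

    def record_success(self, username):
        self.failures.pop(username, None)
```

Because attempts are slowed rather than blocked outright, a legitimate user who mistypes a password a few times is delayed, not locked out; the failure counts also feed naturally into the threat detection mentioned above.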
Storing Passwords
Simple: Don’t store passwords.
If a plain text, or badly encrypted, password database is leaked, either by an attacker, or inadvertently by an employee or user, then all users are compromised.
The purpose of a password is to confirm that the user who registered with the system some time ago is the same person wishing to use the system now.
- When a user registers they submit a password. Process the password with a non-reversible calculation and store the result.
- When a user tries to log in, process that password using the same non-reversible calculation.
- If the originally stored result and the log in attempt match, the original passwords must have matched.
There are complexities in this process, and errors in logic or mathematics can fatally weaken this approach. Therefore a developer should not build their own system, but use a published library solution.
There are three fundamental concepts in processing these passwords. Each is designed to make life difficult for an attacker trying to reverse the calculation, or pre-calculate all the possible outputs.
- Salting: The process of adding a system specific piece of data (the Salt) to the password. This makes the result of the password calculation different on different systems, a breach on one does not compromise all.
- Stretching: Increasing the effective length of a password. A user may only have a 12-character password, but stretching it with a suitable Salt increases its effective complexity. This calculation is designed (by repetition) to take a long time, slowing an attacker down by reducing the number of password calculations they can test per second.
- Hashing: A non-reversible function that takes a stretched password and turns it into a value that can be safely stored. Cryptographic Hash functions are normally designed to be very quick to perform as they are used to validate documents, cryptographic certificates and so forth, hence the stretching phase is required to slow an attacker down.
The process of handling passwords, and other vital data, is a specialism. It is worth obtaining assistance in the design of such systems.
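Following that advice, the register-and-verify process above can be sketched with Python's standard-library PBKDF2 implementation, which provides the salting, stretching and hashing in one call. The iteration count here is an illustrative choice, and production systems should follow current guidance or prefer a dedicated scheme such as bcrypt, scrypt or Argon2:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    """Registration: derive and store (salt, iterations, digest) -
    never the password itself."""
    salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, stored_digest):
    """Login: repeat the same calculation and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, stored_digest)
```

Storing the iteration count alongside the salt lets it be raised over time as hardware gets faster, re-hashing each user's password at their next successful login.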
OpenSSL End of Life
One of the most commonly used pieces of security software is the OpenSSL cryptographic library. It is used in almost all Linux based systems, is packaged in a significant number of Windows applications and, of course, is used in Internet of Things devices, as well as various security appliances and VPN firewalls.
OpenSSL is an Open Source project, one which has – for good reasons – now adopted a formal release model for its various versions.
Support for version 1.0.0 and earlier has already ended. OpenSSL support for version 1.0.1 ends on the 31st December 2016 – the end of this year.
Why is this important?
The OpenSSL library manages the creation and operation of secure network connections.
- It manages the HTTPS bit for most secure web browsing.
- It manages the connections of secure VPNs connecting users into corporate networks.
- It manages the security for administrators remotely logging into linux servers.
- It manages the hashing of passwords.
There is almost nothing in the security space that OpenSSL does not touch. It is a large and complex piece of software, and has been the subject of some of the most highly publicised and critical IT security issues, such as Heartbleed, which have led to a lot of developer effort being expended on it across all of the versions – hence the new support plan.
As OpenSSL is used for so many functions, any upgrade has to be taken with extreme caution. Internal testing by Trusted Management indicates that a version upgrade frequently breaks dependencies requiring additional effort to implement workarounds or alternative systems.
- The current major Linux Distributions, with the exception of the latest Ubuntu release, use OpenSSL version 1.0.1 and thus will require patching. For supported distribution versions it is expected, though not confirmed, that patches will be available through the repositories.
- Applications on Windows platforms that use the OpenSSL library will need patching. These should become available from the software vendor.
- Devices, such as Firewalls and VPN terminators, will need patching. These should become available from the vendors for supported devices.
- Internet of Things devices will become increasingly vulnerable if they are not patched.
Failing to patch OpenSSL will leave systems exposed to any new vulnerabilities (such as another Heartbleed) discovered in the unsupported versions.
Unpatched systems will also be a major non-compliance on any regulatory or contractually required audits such as PCI-DSS.
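As a first step, it helps to know which OpenSSL build a system is actually running. One quick, if partial, check is to ask a runtime what it was linked against; for example, Python exposes this through its ssl module. The output varies by platform, and applications may bundle their own copies of the library, so this is only a starting point:

```python
import ssl

# Version string of the OpenSSL (or compatible) library this Python
# interpreter was built against; the exact string is platform-dependent.
print(ssl.OPENSSL_VERSION)

# The same information as a (major, minor, fix, patch, status) tuple,
# which is easier to compare against an end-of-life threshold.
print(ssl.OPENSSL_VERSION_INFO)
```

A full audit would also need to check vendor appliances, Windows applications and embedded devices, since each carries its own copy of the library.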
Patch and Be Damned
Why do you test manufacturer patches before releasing them? So you don’t disrupt the Business when you apply them.
What if the exposure to a Business Disrupting Security Event is greater because you haven’t patched?
What is the balance of the two risks?
Security Patches, issued by a large software company, are a nicely packaged guide to where Security Vulnerabilities can be found. They are rapidly reverse engineered by the malware and criminal hacking community – and they work very quickly, since their window of opportunity is small but the rewards are great. This is why critical Vulnerabilities, such as Heartbleed, are announced only after the major affected systems have patches available or implemented.
So, should you test Security Patches before deployment? Why not just deploy patches before an attacker uses their vulnerabilities against you?
But know you are doing it. Have an approved Patching Policy, and have a routine for it – a Patching Procedure: Backup or Snapshot systems beforehand (one of the big benefits of virtualisation) then if there is an issue you can quickly revert and start to investigate the cause of the issues.
After all, you have a business continuity plan to handle minor disruptions don’t you? Better to have a test of that than a real use of your Security Incident Plan.
(And you do have a Security Incident Plan don’t you?)
ISO27001 Mandatory Documents
The update from ISO27001:2005 to ISO27001:2013 changed the requirements in two key areas: the documents required, and the Security Management process. I will deal with the management process in a later posting.
The original six documents still apply and are absolutely needed for any ISMS. An Auditor will expect to see these before the audit, and will normally enquire about them during an audit.
- Scope of the ISMS (clause 4.3)
- Information security policy and objectives (clauses 5.2 and 6.2)
- Risk assessment and risk treatment methodology (clause 6.1.2)
- Statement of Applicability (clause 6.1.3 d)
- Risk treatment plan (clauses 6.1.3 e and 6.2)
- Risk assessment report (clause 8.2)
However, ISO27001:2013 has tightened up its expectations of the documentation needed to support the security control implementations. As these are within the Security Controls Annex (Annex A), and thus subject to the Statement of Applicability, it may be that some are not implemented because no identified risk requires them. Again, an Auditor may ask for them prior to an audit, either expressly, or more commonly as “Can you send me all your security documents please?”. The expected documents in Annex A are:
- Definition of security roles and responsibilities (clauses A.7.1.2 and A.13.2.4)
- Inventory of assets (clause A.8.1.1)
- Acceptable use of assets (clause A.8.1.3)
- Access control policy (clause A.9.1.1)
- Operating procedures for IT management (clause A.12.1.1)
- Secure system engineering principles (clause A.14.2.5)
- Supplier security policy (clause A.15.1.1)
- Incident management procedure (clause A.16.1.5)
- Business continuity procedures (clause A.17.1.2)
- Statutory, regulatory, and contractual requirements (clause A.18.1.1)
Additionally, specific security records are required to be kept:
- Records of training, skills, experience and qualifications (clause 7.2)
- Monitoring and measurement results (clause 9.1)
- Internal audit program (clause 9.2)
- Results of internal audits (clause 9.2)
- Results of the management review (clause 9.3)
- Results of corrective actions (clause 10.1)
- Logs of user activities, exceptions, and security events (clauses A.12.4.1 and A.12.4.3) – again depending on the Statement of Applicability requiring these controls.
There is no mandated expectation of how long these records should be kept; however, there should be a policy defining the retention period, and it should be long enough to make record keeping useful. For example, two or three review cycles would be a reasonable period to keep audit reports.
