
Let me start by saying that I am fully supportive of responsible disclosure. I think it has improved the responsiveness of companies in fixing faults in their code and channelled vulnerability research effort that might otherwise have ended up in malicious code.

However, I think there are limits.

A couple of recent postings on the BugTraq mailing list have highlighted vulnerabilities in (probably) bespoke code on particular websites, and included links to exploit the code. The authors of these reports followed the normal disclosure course, having notified the site administrator rather than the vendor. When they didn’t receive a response, they published anyway.

I read an interesting article in The Daily Telegraph today: Is Facebook aiding Lawless Britain? It raised some interesting points about the fact that online behaviour seems so much less constrained than behaviour in “the real world”.

But why?

This prompted a discussion in the office that started from the premise that there is no such thing as society on the Internet. The observation was that, on a day-to-day basis, people live their lives by a set of social rules that are common amongst the members of that society. These rules have developed over centuries and are policed. For example, most people don’t steal things: a shop can open its doors to the public, and people come in and pay for the things they want to take away. Or public decency: on the whole, people don’t go around meaning to offend other people.

One of the reasons for this is that there are consequences for the individual, either through punishment via the Courts or rejection by a social group that the individual wishes to belong to.

These things don’t happen on the Internet. People are not ostracised by society for bad behaviour because they can change their identity and “start again”, and the Police don’t have the resources to deal with the petty crime that takes place online, concentrating instead on where they can make the most impact.

The problem is that there’s a feeling online of being able to “get away with it” that exists in the real world only as an undercurrent, not the mainstream. Perhaps the recent judgements on unmasking trolls will go some way towards bringing societal norms to the Internet by introducing the concepts of responsibility and accountability for one’s actions.

Apple has a problem. For years, the message that many Apple users heard was that Macs don’t need anti-virus because the OS was invulnerable. In the late 90s and early 2000s, Windows suffered from dozens of vulnerabilities that were massively exploited, partly because, suddenly, everyone was plugging their poorly managed PCs into broadband connections. Cue crowing from the (comparatively) small number of Apple aficionados: in their view, their age-old nemesis was getting what it deserved.

The problem was, and to some extent still is, that this perception of invulnerability was grounded in an uncomfortable fact: Apple’s share of the PC market was tiny. In 2001 it was around 3%. By 2011, it was nearly 10% (not including phones and tablets). Bearing in mind the explosion in computer ownership over that period, this amounts to a very large increase in absolute numbers.

What all this means is that Macs are now economical to exploit.

Microsoft recognised that they had a problem in the early 2000s, resulting in a huge engineering effort to reduce the number of bugs in their code. The crux of this strategy was to weed vulnerabilities out early using their Security Development Lifecycle (SDL), since the earlier you find a bug, the cheaper it is to fix. Sun, as it then was, and Oracle adopted a similar approach.

In addition, there was a real push to engage with security researchers: they promoted responsible disclosure by giving credit to those who discovered flaws, and they improved their patching processes. All in all, the process has become much more transparent.

The results of these efforts are clear: while Microsoft OSs are still the most targeted, most malware no longer attacks the OS itself; third-party apps have become the target of choice. A good example of this is the recent discovery of the “Sweet Orange” exploit kit, which goes after vulnerable browser add-ons.

(Mozilla provide a free browser plugin checker that works for most common browsers, so you can see what’s vulnerable on your system.)
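To make that concrete, here’s a minimal sketch of the kind of check such a tool performs: comparing installed plugin versions against the versions in which known flaws were fixed. The plugin names and version numbers below are entirely hypothetical, and a real checker would use a live vulnerability feed rather than a hard-coded table.

```python
# Hypothetical sketch of a plugin version check. The plugin names and
# "first fixed" versions below are invented for illustration only.

def parse(version: str) -> tuple[int, ...]:
    """Turn '10.1.3' into (10, 1, 3) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

# Anything older than the listed version is treated as vulnerable.
FIRST_FIXED = {
    "ExamplePDFReader": parse("10.1.3"),
    "ExampleMediaPlayer": parse("11.2.202"),
}

def vulnerable_plugins(installed: dict[str, str]) -> list[str]:
    """Return plugins whose installed version predates the fixed version."""
    return [name for name, version in installed.items()
            if name in FIRST_FIXED and parse(version) < FIRST_FIXED[name]]

if __name__ == "__main__":
    # Pretend this dictionary was read from the browser's plugin registry.
    print(vulnerable_plugins({"ExamplePDFReader": "9.4.0",
                              "ExampleMediaPlayer": "11.2.202"}))
    # -> ['ExamplePDFReader']
```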

Apple have taken a different approach.

Now we are hearing of increasing numbers of infected Macs, of a user base that is confused because it has been told it doesn’t need anti-virus, and of seemingly poor software QA (take the recent example of FileVault’s unencrypted passwords).

Eugene Kaspersky has said what many people in the information security world have been thinking: Apple are now ten years behind Microsoft when it comes to security. Interestingly, Apple has restated that it is “open to collaboration” with Kaspersky on improving its security.

So, if anyone from Apple reads this, I strongly urge you to swallow your pride and take a look at how other people, including Microsoft, approach security. This means changing the way software is developed and how it’s patched. Transparency and collaboration are important: people who fear that they might get treated badly when they’re trying to help will quickly stop trying to help.

There is still vastly more malware for Windows, but the tide is turning. Apple is running out of time to stop repeating Microsoft’s mistakes.

I’ve been meaning to write something up about the news that a group of what the media calls “hackers”, but who are probably more accurately termed scientists, is planning to launch satellites to provide censorship-free Internet connectivity to people in regimes that cannot otherwise get unfiltered access.

It raises a whole raft of interesting questions, but the one that particularly struck me is where the downlink will be. Satellites, on their own, cannot provide Internet connectivity. They are an elaborate mirror, allowing signals to be bounced between two or more points.

So, where is the ground-based Internet connection going to be? If there are a limited number of ground stations providing access to the rest of the Internet, these will be major targets for those regimes that want to control access to content. What better place to go to find out who your dissidents are?

How does a user of this censorship-busting network know:

  1. That they are connecting via a ground station that is in a friendly country, and
  2. That the infrastructure hasn’t been compromised?
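One partial answer to the first question is key pinning: distribute the legitimate ground stations’ certificate fingerprints out of band and refuse to talk to anything that doesn’t match. Here is a minimal sketch, with an entirely hypothetical hostname and fingerprint:

```python
# Sketch of certificate pinning against a hypothetical ground station.
# The host and pinned fingerprint are placeholders; in practice the
# fingerprint would be distributed out of band (e.g. baked into the
# client software) so a hostile network can't substitute its own cert.
import hashlib
import socket
import ssl

HOST = "groundstation.example.org"  # hypothetical
PORT = 443
PINNED_SHA256 = "0f1e2d..."         # placeholder fingerprint

def matches_pin(host: str, port: int, pin: str) -> bool:
    """Fetch the server's certificate and compare its SHA-256 to the pin."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest() == pin

if __name__ == "__main__":
    if not matches_pin(HOST, PORT, PINNED_SHA256):
        raise SystemExit("Certificate does not match the pin; not connecting")
```

Pinning does nothing for the second question, of course: a compromised ground station presents the correct certificate while logging everything that passes through it.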

With such a concentrated amount of valuable information, the temptation will be there for a number of states to covertly tap into this infrastructure, and a lot of effort will need to be made to physically secure the environment that this data passes through, regardless of which country it’s in.

Security theatre is something I talk about a lot on this blog. It has a place, but it often invades an individual’s privacy without really reducing the risk, and it costs a lot to implement. My favourite example of this is the millimetre-wave body scanners at airports.

The latest example is the news today that Oxford City taxis are to be fitted with sound-recording CCTV to capture all conversations in the backs of cabs. I have no doubt that the good, taxi-driving Burghers of Oxford are subjected, in some cases, to pretty awful behaviour by punters, but do those punters make up such a proportion that every passenger must have their private conversations recorded?

In the real world, there’s a fine line between security and privacy. Too little security and too many people take advantage, creating anarchy. Too little privacy and you suddenly find yourself living in a totalitarian state. The balance between the two is culturally subjective.

George Orwell’s 1984 describes a totalitarian society where the populace is controlled through mass surveillance, mental conditioning and fear. It is, possibly, the logical culmination of a country’s move to a security state. No-one wants to live in a world like that described in Orwell’s novel. So, how do we stop our slide into an Orwellian dystopia?

People need to realise that it is both impractical and undesirable to completely eliminate risk.

I wrote yesterday about the control systems implemented in the Boeing 787 Dreamliner, and the fact that, since the issue was reported in 2008, not much information has emerged on the way these systems interoperate, if at all. There have been references to “firewalling” the two networks from each other, and this got me thinking after I posted:

  • Modern aircraft often have 30-year, or more, lifespans
  • Some element of the safety-case given to the FAA must rest on the fact that there are no inputs into the passenger entertainment system, i.e. there aren’t any network ports in the cabin
  • Some airlines are moving to implement WiFi on aircraft, like Delta and Lufthansa.
  • Over the 30-year lifespan of an aircraft, the cabin will be upgraded, entertainment system changed and services added

Thirty years is a long time to rely on an IT system. There aren’t many operational systems now running that were implemented in 1981, and those that are still running are regarded as very vulnerable to attack and treated very carefully. This is because the types of attack have evolved massively in this time; even systems implemented just months ago turn out to be vulnerable.

My question is: how will these security systems be maintained? What if a vulnerability is found in the firewalls themselves? How will the safety case change if the parameters of the entertainment system change? Does the FAA have any recommendations on the logical segregation of traffic if data from, for instance, WiFi hotspots or GSM/3G pico-cells implemented in cabins needs to run over the same cabling infrastructure?

Again, maybe I have the wrong end of the stick, but I am concerned that, seemingly, no-one is really looking into the implications of this. Given my own experience, unless these systems are implemented by people with a very deep understanding of process control security, these issues may not have been thought about.

Boeing 787

Way back in 2008 there were a number of stories floating around that the new Boeing 787, the first production airframe of which was delivered this week, had a serious security weakness. It turns out that Boeing, in their infinite wisdom, had decided not to segregate the flight control systems from the seat-back entertainment systems and would, instead, firewall them from each other.

I’ve been searching online but can’t find any up-to-date information on whether this architecture was changed. Some good articles on this came from Wired and Bruce Schneier’s blog. Wikipedia’s 787 entry includes the following:

The airplane’s control, navigation, and communication systems are networked with the passenger cabin’s in-flight internet systems. In January 2008, Boeing responded to reports about FAA concerns regarding the protection of the 787’s computer networks from possible intentional or unintentional passenger access by stating that various hardware and software solutions are employed to protect the airplane systems. These included air gaps for the physical separation of the networks, and firewalls for their software separation. These measures prevent data transfer from the passenger internet system to the maintenance or navigation systems.

The reference to firewalls and air gaps leads me to suspect that these systems are not fully segregated. If this is the case, I really hope that they’ve had some seriously good information security advice. Process control systems, and this is a process control system of sorts, aren’t always as well implemented as they could be. Where there is a safety-critical element, air gaps or data diodes are the only ways to go.
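The property a data diode enforces is easy to state: data can flow out of the critical network, and nothing, not even an acknowledgement, can flow back in. A rough software analogy, using one-way UDP with a placeholder address, is below; real data diodes enforce this at the physical layer (for example, a fibre link with the return path simply absent), which is why they can support a safety case in a way a firewall can’t.

```python
# Software analogy of a data diode: telemetry is pushed one way, from
# the avionics side to the cabin side, over UDP. This process never
# reads from the socket, so there is no application-level return path.
# (A real diode removes the return path in hardware; the address below
# is a placeholder from the TEST-NET range.)
import socket

CABIN_SIDE = ("192.0.2.10", 9000)  # placeholder receiver on the cabin network

def publish(reading: bytes) -> None:
    """Fire-and-forget: send a reading and never listen for a reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(reading, CABIN_SIDE)

if __name__ == "__main__":
    publish(b"status=nominal")
```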

Designing out the vulnerabilities has to be better than retrofitting security afterwards.

I’d welcome comments from anyone, especially those who know more about the actual implementation.

Update: I’ve added another post about this here.

Private Emails

Michael Gove is reported to have been using his private email account and won’t reply to emails sent to his official address. There are so many reasons why this is a bad idea. Here is my (almost certainly incomplete) list just in case the Rt. Hon. Michael Gove happens to pass by:

  1. It’s not based in the UK. In fact, Google pride themselves on not telling you where the data is held (just try finding out);
  2. Google is a US-headquartered company. As per Microsoft’s announcement, the US PATRIOT Act seemingly trumps EU and UK data protection law, even if the data was in the EU;
  3. You can’t encrypt the emails at rest (the nearest workaround is encrypting content before it ever reaches the provider; see the sketch after this list);
  4. There’s no guarantee that the data will be there tomorrow, as this example from Yahoo amply demonstrates;
  5. While Gmail allows you to turn on HTTPS and a form of two-factor authentication, these are optional and probably turned off;
  6. Foreign governments are alleged to have already hacked into Gmail;
  7. On occasion, email accounts have been mixed up, where one person reads someone else’s mail;
  8. These emails may not be retrievable under the Freedom of Information Act.
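On point 3: the closest you can get is to encrypt the content on your own machine before it ever touches the provider. Here is a minimal sketch using the third-party python-gnupg package, assuming GnuPG is installed and the recipient’s public key has already been imported; the address is a placeholder.

```python
# Sketch: client-side encryption before a message touches webmail.
# Requires GnuPG and the python-gnupg package, and assumes the
# recipient's public key is already in the local keyring. The
# recipient address below is a placeholder.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory

def encrypt_for(recipient: str, plaintext: str) -> str:
    """Return ASCII-armoured ciphertext readable only by the recipient."""
    result = gpg.encrypt(plaintext, recipients=[recipient])
    if not result.ok:
        raise RuntimeError(f"Encryption failed: {result.status}")
    return str(result)

if __name__ == "__main__":
    print(encrypt_for("official@example.gov.uk", "Nothing sensitive here..."))
```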

You only risk what you don’t value. If Mr. Gove believes the emails he receives and sends to be of such low importance that they can be put at this sort of risk, is he the best person to be a cabinet minister?

The security systems at airports are an interesting example of security “theatre”, where much of what goes on is about reassurance rather than effectiveness. I’ve blogged before about this and had some interesting responses, especially around the intrusiveness of current processes versus their effectiveness and where the vulnerabilities lie. For obvious reasons, I won’t go into this.

However, the TSA in the United States is rolling out a new version of their full-body scanner, apparently in response to criticism that the old versions were a step too far: the TSA initially denied, for example, that pictures of people’s naked bodies could be stored, until several incidents emerged of security staff doing exactly that. Apparently this will be available as a software upgrade. The question is: will the UK do the same?

The new scanner overlays identified potential threats onto a generic diagram of the human form, masking who the subject is. This has to be a good thing, but as I said in my earlier post, a reliance on technology rather than intelligence-led investigation will always leave vulnerabilities while inconveniencing the majority of people.

I’d rather the people who would do me harm never made it to the airport.

Targeted Trojans

A very particular problem that we face is customised malware, aka targeted Trojans. These malicious programs are written specifically to avoid detection by our current anti-virus systems and are sent to carefully selected people within the institution. The attackers’ purpose can only be inferred from their choice of recipients.
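To see why bespoke code slips through, consider the hash-based signature matching that traditional anti-virus leans on: a blocklist of exact file hashes catches known samples byte-for-byte, but any change to the payload produces a new hash that matches nothing. A toy illustration (the “samples” are harmless strings, not malware):

```python
# Toy illustration of hash-based signature matching and why a custom
# build evades it. The "samples" are harmless strings, not malware.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-in for an anti-virus signature database of known-bad hashes.
KNOWN_BAD = {sha256(b"mass-market trojan build 1")}

def flagged(sample: bytes) -> bool:
    return sha256(sample) in KNOWN_BAD

print(flagged(b"mass-market trojan build 1"))  # True: exact known sample
print(flagged(b"mass-market trojan build 2"))  # False: recompiled variant
```

Heuristic systems, like the one described below, try to get around this by judging what a file does rather than exactly what it is.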

LSE uses MessageLabs to protect our inbound email, primarily to reduce the flood of spam to as small a trickle as possible. One of the systems that MessageLabs use is called Skeptic, which tries to identify previously unseen malicious software and block it.

We think that this has been quite successful, although it is impossible to know how many attacks have managed to get through. Using the information we get from this system, we can discuss the implications of being targeted with the people concerned.

The uncomfortable facts are that:

  1. LSE is a major target
  2. academia is being systematically attacked by a number of groups
  3. the threat is growing all of the time

There is no foolproof way of blocking every attack, but the intelligence gained from knowing the attackers’ areas of interest allows us to focus our efforts on the people at highest risk.

If you want more information on this or are at LSE and want specific advice, please contact me.

UPDATE: Martin Lee and I are proposing doing a talk about this at the RSA Conference 2012 in San Francisco. See the teaser trailer here.