For a certain definition of Secure….

Rory recently spoke at a conference about ‘cargo cults’ in security. To summarize, these are ‘security best practices’ which people follow as a kind of religious belief, without ever stopping to think about whether they are still valid in the context of today’s threat landscape. We don’t just see these implemented in info sec policies – they are actually built into commercial products.

I came across a good example recently – I won’t mention the name of the product, but we were asked to review it as part of an external infrastructure test, and it made me wince – not technically (it did after all run over SSL, aka ‘military grade encryption’), but from the perspective of user account security.

So to start on a reasonable footing, it required a strong password with at least 8 characters, a special character and a numeral. It wouldn’t accept my 20 character passphrase (which frankly will be brute forced the day hell freezes over) because of these rules – and that is where I started getting annoyed with it. Then, just in case you forget your strong password, it also has a secondary secret which is used in the password reset process. The questions are not stellar – one of them is ‘name of first pet’ and another is ‘favourite food’. The account locks after four incorrect password attempts, which in my opinion is low for email – but again, ok so far.

So what is wrong with a system where a user has to have a good password and there is a reasonable lockout policy? Well in this case – the password reset process. Having forgotten the strong password and locked the account, the user clicks on the ‘forgotten password’ link. This takes them straight to a page where they are asked for the secondary secret. Entering a correct secondary secret allows them to set a new password, and after this they are logged in. So the secondary secret is exactly equivalent to the password – but instead of having a complexity requirement, it has no restrictions at all – it will in fact accept ‘p’ or ‘1’ as a valid entry. And there is no lockout on it.

So instead of attacking the strong password with its account lockout, an attacker can just go for the one character secret with no lockout. Or better still, he can just go for a few guesses of favourite foods (chocolate anyone?). And of course because it is an email system, the username is the email address, which is trivially easy to discover. There is no attempt at any out of band step once the secondary secret has been entered (sending a password reset link to a backup account for example) – you just enter the secondary secret and a password of your choice and you are straight in.
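To put some very rough numbers on that difference, here is a back-of-envelope sketch. All of the figures below are assumptions for illustration (character set size, size of a favourite-food guess list), not measurements of the actual product.

```python
# Back-of-envelope comparison of the two guessing targets described above.
# All numbers here are illustrative assumptions, not product measurements.

charset = 26 + 26 + 10 + 10           # lower, upper, digits, ~10 common specials
strong_password_space = charset ** 8   # the minimum the password policy enforces
one_char_secret_space = charset ** 1   # what the reset form will happily accept
favourite_food_guesses = 1000          # a generous list of common favourite foods

print(f"8-char complex password keyspace : {strong_password_space:.2e}")
print(f"1-char secondary secret keyspace : {one_char_secret_space}")
print(f"Favourite-food dictionary        : {favourite_food_guesses} guesses")

# The password also locks after four failures, so online guessing against it
# is effectively dead; the secret has no lockout, so every one of those
# guesses can actually be tried.
```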

But the thing that annoyed me the most about this system was that, having used this extremely insecure mechanism to let me log in using my favourite food as a password, it then had the unmitigated gall to refuse to let me reuse my previous password. I’d love someone to explain to me where the danger of password reuse sits on a scale of 1 to 100 compared to allowing a one character account password which never locks out.

This was a cargo cult if ever I saw one and the perpetrators should have their souls devoured by the Great Old Ones….

Surface Pro as Server

Now having bought the Surface Pro 2, I was at a bit of a loss to know what to use my original Pro for. It is basically a lovely device – but with a couple of ‘if at first you don’t succeed – call it version 1.0’ flaws. The worst of these is that the battery life (about five hours at reasonable utilization) is just a bit too short to use it for travelling anywhere you are not likely to be able to plug in. So last year if I was flying anywhere, I would take the Pro and the RT, use the RT as a tablet and save the battery on the Pro for when I really needed to do some work. But the Pro 2 has fixed that – with more than seven hours of battery life it lasts for any journey I am likely to make with no available power.

So I had a few ideas for what to use the original Pro for. Firstly I thought it might be useful for doing wireless tests – but for this we would be better off with Linux, partially because the kit for doing this in Windows is very expensive, but also because the access to low level networking libraries is better. So we put Ubuntu on it – but it proved quite unsatisfactory. The core OS was there and worked after a fashion, but it was unstable and the touch screen was hardly usable. I am not a Linux fan at the best of times – but on the Surface it really turned a lovely tablet into a downright unpleasant user experience.

I then had another idea – I was much in need of a development server which could live in my office, but also be accessible when I was out and about (and even on site). So I put Windows Server 2012 R2 on the Surface. I wasn’t sure how well this would work – but as it turns out it was surprisingly easy to install, and it works well and smoothly now that it is on.

For anyone interested – these were the stages.

a) Make a bootable USB stick with the server OS on it.
b) From Update and Recovery -> Recovery -> Advanced Startup, boot the Surface from the USB stick.
c) Install the OS as usual – there were no problems or glitches with this.
d) Once the base OS is installed, go to Add Roles and Features and, under Features, add the Wireless LAN Service. Then start it (this shouldn’t really have thrown me, but I was so used to this just working in Windows 8 that I didn’t ask myself when I last saw a server running on WiFi).
e) Also add the Desktop Experience feature – this enables various ‘non-server’ bits, such as access to the Store, which are handy for a tablet.
f) Set the power management to your liking – obviously while it is just working in the background it makes no sense to have the screen fired up.

I then put Hyper-V and IIS on it. With all this installed (but no VMs running), it sits at about 30% memory utilization and the CPU is not straining at all. I think it should have no problems running one or two smallish VMs and acting as a development web server. But the other good thing is that it is still a nice tablet, and without close inspection you would never tell it apart from a consumer device. All the drivers work perfectly, the screen is just as good as in Windows 8.1, and you can even install apps from the Store, look at your photos and play Mahjong. But behind all this is the power of a full-on server OS.

Let’s see anyone do this on an iPad….

Security Testing Windows Store Apps

Rory and I recently presented at Securi-Tay again. This was the third conference organized and led by the students on the ethical hacking course at Abertay University in Dundee. As usual it was well set up and attended and it is good to see that the professional Scottish testers of the future can arrange a conference which is as good as (if not better than) many of the professional ones we have attended. We had an enjoyable day – even though it was a very long drive there and back.

I spoke about Windows Store apps and how to test them. We often find ourselves in a situation where we are asked to test things that we are not particularly familiar with – and it is very useful to be able to find some material on the Internet that gives us somewhere to start. I am going to try to write a few posts on things we have come across which may be unusual or difficult to test in some way – as usual from the perspective of a professional tester in the UK trying to achieve good coverage for a customer in the timescales of a typical test, rather than something done as a hypothetical exercise in hacking.

Metro (down the Tube) – Security Testing Windows Store Apps

So my presentation covers the purpose of these apps and how they are architected, developed and certified for the Windows Store. I then talk about where to find them, what software you need to test them and how to install and configure it. I outline how you would typically go about testing them and how they tie in with the OWASP Top Ten and Mobile Top Ten. Finally I consider whether Microsoft have managed to achieve one of their goals with these apps – improving security and confidence for the average non-technical Windows user.

Of Human Stupidity

For a number of years, I have felt that tech companies must be seriously lacking in acumen to adopt the policies they do with regard to their customers. Yesterday I noticed, however, that this is not restricted to tech companies, and it makes an interesting study in human stupidity to see it in operation.

So for example, a search for property management companies in the UK came up with this

http://www.reviewcentre.com/reviews117367.html

I have no personal knowledge of the company involved, or whether the reviews in question are in fact accurate,  but I find it inconceivable that any commercial entity would allow their customer service (or their marketing department) to be so egregiously bad as to have that kind of review show up in search engines.  I mean surely they must realize that in the modern world, potential customers are going to look for reviews – and that it is not going to be a positive thing if you have a consistent 1 star rating.

But then I got to comparing it to the behaviour of tech companies – and I am afraid that three immediately jump to mind.  I used to administer Lotus (heard of them?) technologies.  At one point they had a massive chunk of the Office Suite, email, collaborative website and instant messaging market.  Then they were bought by IBM and everything went downhill from there.  There is no doubt in my mind that Domino was a superior product to early versions of Exchange.  Equally QuickPlace was there before SharePoint was really more than a twinkle in Microsoft’s eye.  I see that support was recently finally dropped for Lotus SmartSuite – but in its day it was way ahead of MS Office.

The same can be said of Novell Netware. I loved that product back when, and I would have defended NDS as a better directory service than AD until about ten years ago. Today who under the age of 30 has even heard of Novell? And AD runs most of the world’s internal networks.

One final example….  Sadly I think Blackberry is going the same way.   Considering that they have been relegated to fourth place in the mobile market, I think it unlikely that they will still be in the race (as an independent company) in a year’s time.  And yet not so long ago they had the vast majority of the smartphone market.

And the common factor in all of this….   Intellectual arrogance and the complete inability or unwillingness to listen to their customer base, or in any way to acknowledge that their product was not automatically superior just because it was the current market leader.

So two things I would take from this…..  Firstly if I were a property management company I would seriously do something about my customer service before I allowed a simple Internet search to make me look that bad.  Secondly, I would bet a lot of money against the tech press who are writing Microsoft off as a major force compared to Apple and Google.  Consistently over the last 20 years they appear to have been the only company who genuinely care about customer service, and also one of the few who try to adapt to changing times.    They have never made any claims about ‘doing no evil’ – but it consistently appears that caring for your customers on a day to day basis, and being seen to do so, makes good commercial sense.

Windows Azure Backup

I just configured the preview version of Windows Azure Backup.  It is very nice looking and easy to use once you get it up and running – but the instructions to install it are difficult to find and a bit patchy.

First you have to create a certificate for your vault. You use a utility called makecert.exe, which is part of the Windows SDK (the link to TechNet in the documentation doesn’t work, so you can get it here):

http://msdn.microsoft.com/en-US/windows/desktop/aa904949

For reasons that are not clear to me, the utility doesn’t seem to be available as a standalone download – but the tools-only part of the SDK includes it.

Then the documentation that actually works is here (there are several wrong versions in different places dotted across their sites).

http://msdn.microsoft.com/en-us/library/windowsazure/dn169036.aspx

The key thing is to follow the instructions exactly – you need both the .cer file and the .pfx file (the public and private keys).

Once you have followed all the instructions and configured your vault, you can go ahead with the local software install. If you had the beta version of the agent installed, you need to uninstall it and then install the new one. Once the software is installed and the agent started, you can register your server – this took a few minutes on my machine, so be patient with it.

Once all this is done – configuring your backup and restores is a snip.  I hope that when it comes out of preview the documentation is improved a bit.

 

Your Framework Will Fail You – Part 2 – Network Controls

This post is part of a series based on a presentation I did for the Scottish Ruby Conference in May 2013 (part 1 here), which was around defence in depth and some of the controls companies should be looking at to help protect them when something goes wrong.

The first segment to cover is firewalling. Network firewalls get quite a bit of flak in the security world, mainly because people tend to rely too heavily on them for protection without really understanding where they are and are not useful.

The “low-risk” option that I covered is the use of egress filtering on firewalls.

One of the main limitations that I see in practical firewall deployments is that they don’t take a “default deny” position for all interfaces. In the typical Internet facing firewall setup, almost everyone will have a default deny rule from the untrusted network (e.g. the Internet) to the more trusted network (e.g. an internal network), but in many cases the other direction (from internal to Internet) will have a default allow rule set up.

Setting a default deny on connections from trusted to untrusted networks can be a really useful control for making an attacker’s life more difficult and hindering their post-exploitation activities. So in an e-commerce environment it might be possible to have rules on the firewall that restrict all servers from initiating any connections to the Internet except for a couple of hosts used for package updates. This means that someone who has access to a server and who tries to connect to any other system on the Internet will be blocked.

If you consider an attack on a web application, once the attacker has compromised a server (e.g. via SQL injection or command injection), one of the first things they might try to do is make a connection back to a system under their control to download more tools, and also to establish a shell connection to the compromised system. With egress filtering in place, this becomes considerably trickier to pull off.
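If you want to check whether egress filtering is actually doing its job from a given server, a few outbound connection attempts will tell you quickly. The sketch below is just that – the target host and port list are placeholder assumptions, not anything from a real environment.

```python
# Minimal egress-filter check: attempt outbound TCP connections on a few
# common ports and report which ones get out. Host and ports are placeholders.

import socket

TARGET = "egress-test.example.com"   # a host you control outside the network
PORTS = [22, 53, 80, 443, 8080]      # ports an attacker might use for a reverse shell

for port in PORTS:
    try:
        with socket.create_connection((TARGET, port), timeout=3):
            print(f"port {port}: open (egress allowed)")
    except OSError:
        print(f"port {port}: blocked or unreachable")
```

Run from inside the DMZ, anything that comes back open – other than the package update hosts you deliberately allowed – is a rule worth tightening.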

If you do intend to do this, I would recommend putting it in place when you’re designing the network. Retro-fitting more restrictive firewall rules can be quite difficult, as things like periodic connections that only happen once a month might not be noticed, leading to unexpected failures after the rules have been put in place.

The “high-risk” option looks at the area of network segregation. One of the setups I’ve seen quite commonly is that only one firewall is used, with all Internet facing systems in a single DMZ and then potentially all back-end systems either on the internal network or perhaps in another single DMZ. It’s a setup I call the “warm smarty” approach to security: crunchy on the outside but soft and gooey once you get past the shell.

The problem with this approach is that once an attacker has compromised a single server it’s much easier for them to attack other systems in the environment and expand their access. The reality is that most internal networks are pretty easy for a dedicated attacker to compromise as there’s always some system that doesn’t get patched somewhere, so once they’re in, it’s pretty much game over.

Addressing this isn’t cheap or easy, but effective network segmentation can make attackers’ lives much more difficult.

There are a variety of approaches that can be used for network segmentation. One is to put each individual Internet facing application in its own DMZ; this can reduce the risk of onward compromise, although it does depend on the firewall ruleset being suitably restrictive.

This approach will obviously increase management costs, for example requiring more management servers and potentially less automation of maintenance, so there is a trade-off between the desired level of security and the cost involved – but it’s something that should be considered rather than just going for the default single-firewall approach.

Scottish Ruby Conference Talk – Your Framework Will Fail You

I was presenting yesterday at the Scottish Ruby Conference, and given that the talk is relatively high-level and covers a lot of ground, I thought it would be a good idea to do a series of blog posts to provide some more detail and resources (link to the presentation here).

The title of my talk was “Your framework will fail you”. I had the idea for it when some of the security bugs in Rails came up earlier in the year, which led me to think some more about defence in depth. Anyone in security will know this as one of those things that we think is a good idea but which can be a bit of a hard sell: when someone pays for a security control (e.g. anti-virus, a firewall), it can be tricky to say “yep, that will fail sometimes so we need to buy some other things as well”.

However if anything has been proven by the increase in public vulnerabilities, exploits and compromises, it is that all security controls fail and you will be well served by having a fall-back control or detective control to notice when the main one has failed.

The way I structured the presentation was in two halves. The first looks at the important topics of threat modelling (e.g. who’s going to attack you) and a bit about why defence in depth is important. After that it looks at the various layers of a solution and talks about controls for a low-risk/budget scenario and a high-risk/budget one. The focus of the low-risk option was on controls which can be put in place easily and cheaply. They may not be super-effective all the time, but they have their uses. On the high-risk end I looked at things which can provide more protection but take more resource to manage; alternatively, in some cases it’s the same control as the low-risk version but with more time dedicated to managing it (e.g. a lot of the detective controls are only really good if well managed).

The blog posts will be coming out every other day or so looking at the solution layers and hopefully I’ll get to the end of the series without interruption :)