Welcome to the ScotSTS Blog
For a number of years, I have felt that tech companies must be seriously lacking in acumen to adopt the policies they do with regard to their customers. Yesterday I noticed, however, that this is not restricted to tech companies, and it makes an interesting study in human stupidity to see it in operation.
So, for example, a search for property management companies in the UK came up with this:
I have no personal knowledge of the company involved, or whether the reviews in question are in fact accurate, but I find it inconceivable that any commercial entity would allow their customer service (or their marketing department) to be so egregiously bad as to have that kind of review show up in search engines. I mean surely they must realize that in the modern world, potential customers are going to look for reviews – and that it is not going to be a positive thing if you have a consistent 1 star rating.
But then I got to comparing it to the behaviour of tech companies – and I am afraid that three immediately jump to mind. I used to administer Lotus (heard of them?) technologies. At one point they had a massive chunk of the Office Suite, email, collaborative website and instant messaging market. Then they were bought by IBM and everything went downhill from there. There is no doubt in my mind that Domino was a superior product to early versions of Exchange. Equally QuickPlace was there before SharePoint was really more than a twinkle in Microsoft’s eye. I see that support was recently finally dropped for Lotus SmartSuite – but in its day it was way ahead of MS Office.
The same can be said of Novell Netware. I loved that product back in the day, and I would have defended NDS as a better directory service than AD until about ten years ago. Today, who under the age of 30 has even heard of Novell? And AD runs most of the world’s internal networks.
One final example…. Sadly I think Blackberry is going the same way. Considering that they have been relegated to fourth place in the mobile market, I think it unlikely that they will still be in the race (as an independent company) in a year’s time. And yet not so long ago they had the vast majority of the smartphone market.
And the common factor in all of this…. Intellectual arrogance and the complete inability or unwillingness to listen to their customer base, or in any way to acknowledge that their product was not automatically superior just because it was the current market leader.
So two things I would take from this….. Firstly, if I were a property management company I would seriously do something about my customer service before I allowed a simple Internet search to make me look that bad. Secondly, I would bet a lot of money against the tech press who are writing Microsoft off as a major force compared to Apple and Google. Consistently over the last 20 years, Microsoft appears to have been the only company that genuinely cares about customer service, and also one of the few that try to adapt to changing times. They have never made any claims about ‘doing no evil’ – but it consistently appears that caring for your customers on a day-to-day basis, and being seen to do so, makes good commercial sense.
I just configured the preview version of Windows Azure Backup. It is very nice looking and easy to use once you get it up and running – but the instructions to install it are difficult to find and a bit patchy.
First you have to create a certificate for your vault. You use a utility called makecert.exe, which is part of the Windows SDK (the link in the documentation to TechNet doesn’t work, so you can get it here).
For reasons that are not clear to me the utility doesn’t seem to be available as a standalone download – but installing just the tools portion of the SDK will get you it.
Then the documentation that actually works is here (there are several wrong versions in different places dotted across their sites).
The key thing is to follow the instructions exactly – you need both the .cer file and the .pfx file (the public and private keys).
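Once the certificate is created, it can be worth sanity-checking the .cer before you upload it to the portal. This is just a rough sketch using the Python cryptography library – it isn’t part of the official process, and the key size and validity limits shown are my recollection of the preview requirements, so check the documentation for the current values:

```python
# Rough sanity check of the vault certificate (.cer) before upload.
# Assumes the 'cryptography' package is installed (pip install cryptography).
from cryptography import x509

def check_vault_cert(path):
    with open(path, "rb") as f:
        data = f.read()
    try:
        cert = x509.load_der_x509_certificate(data)
    except ValueError:
        cert = x509.load_pem_x509_certificate(data)

    key_bits = cert.public_key().key_size
    lifetime = cert.not_valid_after - cert.not_valid_before
    print("Subject:    ", cert.subject)
    print("Key size:   ", key_bits, "bits")
    print("Valid until:", cert.not_valid_after)

    # The limits below are assumptions from memory - check the current docs.
    if key_bits < 2048:
        print("Warning: key is shorter than 2048 bits")
    if lifetime.days > 3 * 365:
        print("Warning: validity period looks longer than three years")

check_vault_cert("MyVaultCert.cer")  # placeholder filename
```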
Once you have followed all the instructions and configured your vault you can go ahead with the local software install. If you have had the Beta version of the agent installed, you need to uninstall it and then install the new one. Once the software is installed and the agent started, you can then register your server – this took a few minutes to do on my machine – so be patient with it.
Once all this is done – configuring your backup and restores is a snip. I hope that when it comes out of preview the documentation is improved a bit.
This post is part of a series based on a presentation I did for the Scottish Ruby Conference in May 2013 (part 1 here) which was around defense in depth and some of the controls companies should be looking at to help protect them when something goes wrong.
The first segment to cover is firewalling. Network firewalls get quite a bit of flak in the security world, mainly because people tend to rely too heavily on them for protection without really understanding where they are and are not useful.
The “low-risk” setup option that I covered is around the use of egress filtering on firewalls.
One of the main limitations that I see in practical firewall deployments is that they don’t take a “default deny” position for all interfaces. In the typical Internet-facing firewall setup almost everyone will have a default deny rule from the untrusted network (e.g. the Internet) to the more trusted network (e.g. an internal network), but in many cases the other direction (from internal to Internet) will have a default allow rule set up.
Setting a default deny on connections from trusted to untrusted networks can be a really useful control in making an attacker’s life more difficult and hindering their post-exploitation activities. So in an e-commerce environment it might be possible to have rules on the firewall that restrict all servers from initiating any connections to the Internet except for a couple of hosts for package updates. This means that someone who has access to the server and who tries to connect to any other system on the Internet will get blocked.
If you consider an attack on a web application, once the attacker has compromised a server (e.g. via SQL Injection or command injection) one of the first things they might try to do is make a connection back to a system under their control to download more tools and also to make a shell connection to the compromised system. So with egress filtering this could be considerably trickier to pull off.
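If you want to verify that your egress rules actually behave this way, a quick check from the server itself is easy to script. This is just a minimal sketch – the destination host and ports are placeholders, and with a proper default-deny egress policy everything except your explicitly allowed destinations should come back blocked:

```python
# Minimal egress check: attempt a few outbound connections and report which
# ones succeed. Run this from the server whose egress filtering you want to test.
import socket

# Placeholder destinations - swap in hosts/ports relevant to your environment.
targets = [
    ("www.example.com", 80),
    ("www.example.com", 443),
    ("www.example.com", 22),
]

for host, port in targets:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        s.connect((host, port))
        print(f"ALLOWED  {host}:{port}")
    except (socket.timeout, OSError):
        print(f"BLOCKED  {host}:{port}")
    finally:
        s.close()
```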
One thing I would recommend, if you do intend to do this, is to put it in place when you’re designing the network. Retro-fitting more restrictive firewall rules can be quite difficult, as things like periodic connections that only happen once a month might not be noticed, leading to unexpected failures after the firewall rules have been put in place.
The “high-risk” setup option looks at the area of network segregation. One of the setups I’ve seen quite commonly is that only one firewall is used, with all Internet-facing systems in a single DMZ and then potentially all back-end systems on either the internal network or perhaps in another single DMZ network. It’s a setup I call the “warm Smartie” approach to security: crunchy on the outside but soft and gooey once you get past the shell.
The problem with this approach is that once an attacker has compromised a single server it’s much easier for them to attack other systems in the environment and expand their access. The reality is that most internal networks are pretty easy for a dedicated attacker to compromise as there’s always some system that doesn’t get patched somewhere, so once they’re in, it’s pretty much game over.
Addressing this isn’t cheap or easy, but effective network segmentation can make attackers’ lives much more difficult.
There are a variety of approaches that can be used for network segmentation. One is to segment individual Internet-facing applications so that each is in its own DMZ; this can reduce the risk of onward compromise, although it does depend on the firewall ruleset being suitably restrictive.
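To give a feel for what “suitably restrictive” means, it can help to think of the segmented ruleset as nothing more than a short, explicit list of allowed flows with everything else denied. The sketch below is purely illustrative – the zone names and ports are made up and it isn’t a real firewall configuration:

```python
# Illustrative model (not a real firewall config) of a default-deny ruleset
# for per-application DMZs: only the flows listed here are permitted.
ALLOWED_FLOWS = {
    ("web_dmz",  "app_dmz",    8080),  # web tier -> application tier
    ("app_dmz",  "db_dmz",     5432),  # application tier -> database
    ("mgmt_net", "web_dmz",    22),    # management network -> web tier (SSH)
    ("web_dmz",  "update_srv", 443),   # package updates only
}

def is_allowed(src_zone, dst_zone, port):
    """Default deny: a flow is only permitted if it is explicitly listed."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

# A compromised web server trying to talk straight to the database is denied...
print(is_allowed("web_dmz", "db_dmz", 5432))   # False
# ...while the flow the application actually needs is allowed.
print(is_allowed("web_dmz", "app_dmz", 8080))  # True
```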
This approach obviously will increase management costs, for example requiring more management servers and potentially less automation of maintenance, so there is a trade-off between the desired level of security and the cost involved, but it’s something that should be considered rather than just going for the default one firewall approach.
I was presenting yesterday at the Scottish Ruby Conference, and given that the talk is relatively high-level as it covers a lot of ground, I thought it would be a good idea to do a series of blog posts to provide some more details and resources (link to the presentation here.)
The title of my talk was “Your framework will fail you”. I had the idea for it when reading about some of the security bugs in Rails that came up earlier in the year, which led me to think some more about defence in depth. Anyone in security will know this as one of those things that we think is a good idea but which can be a bit of a hard sell: when someone pays for a security control (e.g. anti-virus or a firewall) it can be tricky to say “yep, that will fail sometimes so we need to buy some other things as well”.
However if anything has been proven by the increase in public vulnerabilities, exploits and compromises, it is that all security controls fail and you will be well served by having a fall-back control or detective control to notice when the main one has failed.
The way I structured the presentation was in two halves. The first looks at the important topics of threat modelling (e.g. who’s going to attack you) and a bit about why defence in depth is important. After that it looks at various layers of a solution and talks about controls for a low-risk/budget scenario and a high-risk/budget one. The focus of the low-risk option was to look at controls which can be put in easily/cheaply. They may not be super-effective all the time, but they have their uses. On the high-risk end I looked at things which can provide more protection but will take more resource to manage; alternatively, in some cases it’s the same control as the low-risk version but with more time dedicated to managing it (e.g. a lot of the detective controls are only really good if well managed).
The blog posts will be coming out every other day or so looking at the solution layers, and hopefully I’ll get to the end of the series without interruption.
We had a great time doing our workshop at BSides London recently. In fact we had a great time in general – the conference was lots of fun.
This was the first long(ish) workshop I had ever prepared for a conference, and I was surprised at how much work was involved in it (compared to an ordinary presentation). We not only had to create the presentation, but build the infrastructure, create Kali builds on USB sticks, set up the demos, prepare a worksheet for the participants and prepare the two ‘test reports’ I had promised in the description of the workshop. Then we had to test, test, test in an attempt to appease the dark god of demos!
We were coming down from Scotland to London for the event and quickly discovered a major drawback – we had a lot of kit…. We needed two PC laptops to be the Nessus servers and host the VMs for the demos. We also had a Surface Pro for running the demo, a Surface RT (just for kicks) and a MacBook Air to run Rory’s presentation. Add in a switch and cables (because we didn’t like the idea of trying to run eight sets of Nessus scans over wireless), plus a four-way extension and sundry printed materials for the participants, and we had three huge rucksacks full of stuff. Going down the stairs at the station I was scared I was going to tip over backwards.
Anyway, after a brief panic when we thought the five hundred bottle openers we had ordered for the conference swag bags had not turned up, we found them and then got set up. We were a bit nervous that after all the effort, no one would be interested. When I checked the subscription sheet, we had ten signups for eight slots (the room was on the small side). And then people started to turn up – and there were loads more than on the sheet. Unfortunately, we did not have room to let everyone participate in the demos, but we managed to fit everyone (16 people!) into the room, although it was a bit on the hot and crowded side.
So the purpose of the workshop was to show people who were technical, but not professional testers, how to prepare for a review by eliminating all easily correctable faults in advance of the test. This would enable the tester to focus on serious issues rather than finding and documenting things such as missing patches and SSLv2. The example given was of the imaginary UWC company – and we showed off two mock reports, a ‘before’ with 58 vulnerabilities and an ‘after’ with 6. The ‘sting’ was supposed to be that amongst the six was a critical SQL vulnerability which the tester had not had time to investigate in the first scenario, but found in the second.
We did four demos: nmap, Nessus, and two Metasploit exercises. The second Metasploit demo was the classic which really impressed me when I started testing – using an unpatched workstation to steal an admin’s token and use it to add a user to the Domain Admins group on a fully patched DC. The dark god did not really visit us – and everyone seemed to get on well.
We hope everyone enjoyed the workshop and thank you for coming. Hopefully we will be able to reuse the materials in the future.
I’ve attached our presentation. Conducting a DIY Security Review – latest
I was reading the security page for another new product today and it struck me how amazingly disappointed I am that we’re still at the stage where the best that companies can say about their security is “Trust us, we hold all your data securely, and we use military grade SSL”, or words to that effect.
Not to say that SSL isn’t a good way of protecting data in transit, but this is the equivalent of someone building a bank and saying “trust us, this is secure, we use the same rivets as they do in battleships”.
It’s ridiculous to expect users to be able to make an informed decision about security with the amount of data provided.
So what would be a better option? Well, if you’re developing a product, how about putting up some information about the security steps in your development process (you do have those, right?). For example:
- We provide all our developers with secure development training (for optional bonus, here are the areas we covered and how we assessed our developers’ awareness of security topics)
- All our products have threat modelling and security architecture reviews (for optional bonus, here’s the output of our threat model and what controls we put in place)
- We have external consultants complete a security-focused code review before release (for optional bonus, here’s the report and what we did to address the findings)
- We complete security testing on all our products (for optional bonus, here’s the report and what we did to address the findings)
Now this is far from a comprehensive list, and it doesn’t address the problem of how to ensure it’s all true, but surely it’s better than just SSL!
We were at B-Sides London yesterday. It all went really well and had a great turnout. The new venue was good as well. We didn’t get to see too many of the talks unfortunately, as we were delivering a workshop in the morning and I had my talk in the afternoon.
As with most of the talks I do, I find the questions the most interesting piece, as you get feedback on what problems other people have had with the topic at hand and also new ideas for developing the presentation.
Some of the conversations after my talk centred around choosing a licence for code that you release. It’s an important area to consider if you’re releasing your own code, as you should always put some form of licence up there so people know how they can and cannot use your code. My feeling for most of the scripts I do is that a GPL-based licence is the best option, as it encourages other people who want to use your code to contribute back to the community.
Another point on licensing that was brought up is the flip side: you should always check the licence of any code that you make use of, to ensure that you’re within the terms specified.
We’ve decided that the results/recommendations coming out of most of the Internal Security Reviews we do can be summarised in three lines.
a) Patch everything. Not just Windows – everything.
b) Change default credentials. Don’t leave your main router with creds of admin/admin.
c) Get rid of clear text protocols. Ditch telnet for SSH and ftp for sftp.
It doesn’t require ninjas, Red Teams or zero-days to compromise most organisations, given access to their internal networks. In fact, why bother with anything fancy when the most basic of techniques uncovers such glaring faults?
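For point (c), even a few lines of Python will show you how widespread the problem is on a given network. This is just a rough sketch with placeholder addresses – nmap will do the same job far better, and obviously only point it at networks you’re authorised to scan:

```python
# Rough sweep for cleartext protocols (ftp/21, telnet/23) across a subnet.
# The address range is a placeholder - only point this at your own network.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"
CLEARTEXT_PORTS = {21: "ftp", 23: "telnet"}

for ip in ipaddress.ip_network(SUBNET).hosts():
    for port, name in CLEARTEXT_PORTS.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            if s.connect_ex((str(ip), port)) == 0:
                print(f"{ip}: {name} ({port}) is open")
        finally:
            s.close()
```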
As a bit of a tech geek I have a tendency to pick up a variety of pieces of hardware and software to see if they’ll be useful on tests. One of my more successful purchases has been a USB-powered Ethernet switch that handles PoE pass-through and has a couple of mirrored ports. It’s pretty compact, so it goes easily in a bag for on-site jobs, and it can be useful in a number of scenarios:
- Lack of Ethernet ports on-site. I’ve had this on more than one occasion: you get to site and there are no free ports, so a switch and a couple of patch cables can be very handy
- Monitoring thick client apps. Not all apps are particularly proxy-friendly, so having an easy way to see all the traffic on the switch is very handy (there’s a quick capture sketch after this list)
- VOIP assessments. A standard part of VOIP assessments is looking at the traffic between the phone and the management servers, so having something that supports PoE pass-through is handy as that’s how most VOIP phones operate.
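For the thick-client monitoring case, once your laptop is plugged into one of the mirrored ports, something as simple as the scapy sketch below (or just tcpdump/Wireshark) will show what the application is actually sending. The interface name and filter are assumptions you’d need to adjust for your own setup:

```python
# Minimal capture sketch using scapy on the laptop plugged into a mirror port.
# Assumes scapy is installed and you have sufficient privileges to sniff.
from scapy.all import sniff

def show(pkt):
    # One-line summary per packet; use pkt.show() if you want full detail.
    print(pkt.summary())

# "eth0" and the BPF filter are placeholders - adjust for your own setup.
sniff(iface="eth0", filter="tcp", prn=show, store=False)
```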
All in all, if you’re a tester I’d recommend getting something like this. The one I’ve got is the Dualcomm DCGS-2005L, although I’m sure there are others that fit the bill.
As well as Rory’s talk on pentest automation at BSides London, we will both be doing a workshop, “Performing a DIY Security Review”. It is aimed at IT professionals and shows the basics of how to prepare for a Security Review (“pentest”). This is something that is dear to our hearts, because writing about SSLv2 over and over again is not something which either excites us greatly or provides a great deal of value to customers. We think people should do a preparatory review themselves and let the tester concentrate on the specialised stuff – giving better value for money and a shorter, more focused report.
So the workshop is all about using free or low cost tools to look at a network and remove glaring faults from it prior to having a test done. We don’t cover web application testing – but if this one proves of interest we may do something along those lines in the future.
I’ll post the slides and documentation here after the event.