Thursday, December 31, 2009

Adobe's "0 Face"

As you may already know, Adobe acknowledged another public security vulnerability in their products on December 15, 2009. APSA09-07 affects all current and earlier versions of Adobe Acrobat and Reader with JavaScript enabled and is currently being exploited in the wild. There is no doubt that Adobe products have been in the crosshairs of attackers over the past two years, and Adobe's use of JavaScript seems to provide an easy opportunity for exploitation.

Upon reading the advisory, I was not surprised that disabling JavaScript was the suggested mitigation. Many users in my environment do not use this functionality, and it can easily be turned off via the Windows registry. The problem is that it does not stay off. When opening a JavaScript-enabled .pdf, the user is presented with a prompt to re-enable JavaScript. To date, Adobe does not provide any way to permanently disable JavaScript via the Adobe Reader preferences menu or the registry. We all know how useful warnings are for end users, right? <insert self-signed SSL certificate here> But I'll save the use of a warning as mitigation for badly thought-out functionality for a later blog post.
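
For reference, this is roughly the registry change I push out, shown here as a minimal sketch using reg add. The key path assumes Adobe Reader 9.x and the per-user JSPrefs branch, so verify it against your installed version before deploying anything:

REM disable Acrobat JavaScript for the current user (Adobe Reader 9.x path assumed)
reg add "HKCU\Software\Adobe\Acrobat Reader\9.0\JSPrefs" /v bEnableJS /t REG_DWORD /d 0 /f

Of course, as described above, this only holds until the user clicks through the prompt and turns it back on.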

<my rant>

So Adobe products are increasingly being targeted, and although Adobe seems to have picked up the pace with their security stance, I have often questioned whether they have enough internal resources to do anything but be reactive. Once again, a zero day leveraging JavaScript in an Adobe product is flying around, and the patch for this vulnerability will not be available until January 12, 2010. In my opinion, this is unacceptable. Adobe seems to be struggling to put out the fires and is not being preventative by fixing their code or providing systems administrators with the tools or patches they need to properly mitigate. I can personally tell you my corporate IDS and antivirus have been lighting up like a Christmas tree ('tis the season) with attacks using this exploit.

Soon after the advisory dropped, I listened to Dennis Fisher and Ryan Naraine interview Brad Arkin on the Digital Underground podcast. Brad Arkin is currently Director of Product Security and Privacy at Adobe and has held previous positions at Symantec and @stake. Now, Brad seems like an intelligent guy and I applaud him for taking on such a challenge. I became annoyed while listening to the interview, however. Ryan Naraine repeatedly pressed Brad during the podcast on something I have suspected for quite some time: does Adobe have enough resources in place to deal with the current trend of attacks targeting their products? Brad seemed to repeatedly sidestep the question, attempting instead to explain the complexity of dealing with such vulnerabilities across such a large and diverse install base.

<disclaimer> While I may have no experience dealing with what Brad has stepped up to do, I do have a lot of experience mitigating vulnerabilities in the corporate environment and my opinions here are based on that experience. </disclaimer>

Now, while I have no doubt that this is a challenge indeed, maybe Adobe needs to stop, glance around, and take a cue from the company with the largest and most diverse install base I know of. That company would be Microsoft. While far from perfect, Microsoft seems to have made some significant advances with their security program over the last 5-6 years. When MS08-067 dropped in October 2008 (for those not familiar, that's the vulnerability used by the Conficker variants), Microsoft did what any responsible software vendor should do. They released an out-of-band patch! So what gives, Adobe?

I almost jumped out of my skin when Brad stated Adobe often needs to shift resources off of other security projects and research to handle an exploit such as this. So to answer Ryan's question, I guess you do not have enough resources then? My point is that if you have to shift all your resources to handle each and every fire and it still takes you a month to put out the fire, then you will never be preventative. Maybe I am being naive here, but I don't believe so.

</my rant>

OK, so with my ranting out of the way, I did state that I thought Adobe was making improvements. One such improvement is their implementation of the JavaScript Blacklist Framework mentioned during the podcast. It is still reactive, but it is at least something. Thank you to Dennis, Ryan, and Brad for bringing this to my attention. To quote Adobe's tech note located here:

“The Adobe Reader and Acrobat JavaScript Blacklist Framework introduced in versions 9.2 and 8.1.7 provides granular control over the execution of specific JavaScript APIs. This mechanism allows selective blocking of vulnerable APIs so that you do not have to resort to disabling JavaScript altogether.”

Brad admitted during the interview that this is only effective for specific vulnerabilities and it may break legitimate uses of functionality in Adobe Acrobat and Reader. He further stated Adobe has many more improvements coming during 2010. I can only hope this includes some preventative improvements to their code base and internal resources dedicated to the current target on their back.

More can be found on using the blacklist framework to mitigate the vulnerability in APSA09-07 here.
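
As a rough sketch of what that mitigation looks like, the command below blacklists DocMedia.newPlayer, the JavaScript API implicated in APSA09-07. The key path assumes Adobe Reader 9.x; Acrobat uses the Adobe Acrobat branch shown in the last update of this post, so adjust accordingly and test before rolling anything out:

REM blacklist the JavaScript API implicated in APSA09-07 (Adobe Reader 9.x path assumed)
reg add "HKLM\SOFTWARE\Policies\Adobe\Acrobat Reader\9.0\FeatureLockDown\cJavaScriptPerms" /v tBlackList /t REG_SZ /d "DocMedia.newPlayer" /f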

For an entertaining and informative Adobe rant (that puts mine to shame), check out the latest post on the Sourcefire VRT Team blog, entitled Matt's Guide to Vendor Response.

Happy New Year to everyone!

Update:

More reports of sophisticated Adobe exploits have been appearing this week, some with little to no coverage by the antivirus vendors. I noted the following article describing Adobe's plans to begin testing a silent Adobe updater. Someone needs to tell Adobe that an updater only works if you actually provide the update, and explain to them the basics of enterprise change control.

Details of the attacks can be found here and here.

Another Update:

Adobe has released patches for the Acrobat/Reader vulnerability as well as another vulnerability in Illustrator. The advisories can be found here:

http://www.adobe.com/support/security/bulletins/apsb10-02.html
http://www.adobe.com/support/security/bulletins/apsb10-01.html

I also found a great ADM template for tuning Adobe Acrobat and Reader JavaScript settings on the Praetorian Prefect Blog. Again, just note that the user will be prompted with a warning when opening a .pdf containing JavaScript.

OK Last Update

The Sourcefire VRT team posted an excellent article this week on using the Acrobat JavaScript Blacklist Framework to block commonly exploited functions within Adobe Acrobat and Reader. An example taken from their post for Adobe Acrobat 9 would be as follows:

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Adobe Acrobat\9.0\FeatureLockDown\cJavaScriptPerms]
"tBlackList"="Collab.getIcon|DocMedia.newPlayer|Util.printf|Spell.customDictionaryOpen|Doc.syncAnnotScan|Doc.getAnnots"

Additionally, they provide benign Adobe Acrobat files that use each of these functions for testing.

Didier Stevens also pointed out during a recent interview on PaulDotCom Security Weekly that the new versions of Adobe Reader and Acrobat have changed the way they warn users that JavaScript is disabled. While not quite the administrative control I had hoped for, it is a slight improvement, as the .pdf is rendered regardless of the action taken by the user.

Tuesday, December 29, 2009

Yet Another Update on the Symantec Vulnerability

It looks like DSHIELD has picked up on an increase in probes for port 12174 associated with the Symantec advisory covered previously on this blog here and here. In some cases of upgrading from previous versions of Symantec Corporate Antivirus to 10.1 MR8, servers are still vulnerable to this exploit. So make sure AMS2 and the Intel File Transfer service (xfr.exe) are not running and listening on TCP port 12174.
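
A quick way to check a server from the command line is shown below. This is just a sketch, and the service hosting xfr.exe may be named differently in your environment, so confirm it in the Services console:

REM is anything listening on the AMS2 port?
netstat -ano | findstr ":12174"
REM which process/service is hosting xfr.exe?
tasklist /svc | findstr /i "xfr.exe"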

Thursday, December 10, 2009

Update on Symantec Vulnerability

So I wanted to give everyone an update on the Symantec Antivirus vulnerability I outlined in my previous post entitled Lessons Learned: Vulnerability and Expectations Management. It appears that the exploit code has been published to the Exploit Database and has also been added to the Metasploit Framework. If you have not read my previous article, please take note of the following: in some cases of upgrading from previous versions of Symantec Corporate Antivirus to 10.1 MR8, servers are still vulnerable to this exploit.

The problem is that AMS2 does not get removed in all cases when upgrading from version 9 to 10. If the Intel File Transfer service (xfr.exe) is running and listening on TCP port 12174, then you are still vulnerable. Disabling the service or completely uninstalling and reinstalling Symantec Antivirus were the two options given to me by support at Symantec. I use the term "support" loosely here, as I'm the one that told them disabling the service mitigates the issue.

I have attempted, without success, to get Symantec to update their advisory with this information. So make sure you verify your patches with the published exploit code or your favorite vulnerability scanner. Tenable Nessus does have a plugin available here.

Tuesday, November 24, 2009

CMD.EXE Incident Response Cheat Sheet

Recently, I have been putting together some incident response tools and documentation for our systems administrators and wanted to provide an easy-to-use reference of the Windows command-line tools at their disposal. There is a lot of great information and many resources available, but I could not find a single one-page cheat sheet of all the cmd.exe commands one might use during incident response. The closest thing I found to containing all the commands I wanted to cover was Russell Butturini's Hak5 U3 Switchblade, which is an awesome resource, but my aim was to teach what each command does. Consequently, I began creating a cheat sheet myself using Jeremy Stretch's popular PacketLife.net cheat sheet template, which he recently made available here.
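
To give a flavor of what the cheat sheet covers, here are a few of the built-in commands I included. This is only an illustrative sample; the cheat sheet itself has the full list with descriptions:

REM listening ports with owning process names (run from an elevated prompt)
netstat -anob
REM running processes and the services they host
tasklist /svc
REM local accounts and members of the local Administrators group
net user
net localgroup administrators
REM scheduled tasks and autorun entries
schtasks /query
wmic startup list full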

I have attached v1.0 and am hoping others can find some value in using it, or maybe make some suggestions or additions to it. I would love to do one for Linux and maybe a more detailed one on WMIC. Let me know what you think.

Monday, November 23, 2009

RBS Worldpay: It's Not Child's Play

I have found the RBS Worldpay ATM heist fascinating. Although the dollar amount stolen cannot compare to some larger compromises in recent history, the coordination the attackers and thieves displayed is unprecedented. Moreover, it appears the cooperation of law enforcement spanning three continents was able to bring an indictment on November 10, 2009. A copy of that document can be found here. Not much is known about the technical details of the compromise, but I recently decided to put together a diagram of what is known about the heist for a training session I am scheduled to give next month. I used the Crayon Network Visio Stencil found here to create it and thought some might find it amusing.


More articles and coverage on the compromise and arrests can be found here:

http://www.fbi.gov/page2/nov09/atm_111609.html
http://atlanta.fbi.gov/dojpressrel/pressrel09/atl111009.htm
http://www.veracode.com/blog/2009/11/we-need-to-learn-more-about-the-rbs-worldpay-atm-attack/

Monday, November 16, 2009

Only You Can Prevent Forest Fires - A Smokey The Bear Approach to Security

A few weeks back Larry Pesce from PaulDotCom posed the following question on Twitter:

"Hmm. If you had to deploy ONE security technology in your organization, what would it be? What is the risk reduction vs, total effort?"

Many people quickly replied. Some answers included: a comprehensive patch management solution (my pick), a Security Information Management (SIM) system, a network-based firewall, an Intrusion Prevention System (IPS), an incident response plan, and my personal favorite, "a very large dog...". Larry quickly followed up, asking what the second technology would be and why.

I struggled with that question. After all, it is a "no win" situation. A proper incident response plan would certainly be needed but is reactive. Network defenses would be beneficial but do not take into account a mobile workforce. I finally settled on some sort of central system that would facilitate the hardening of the end nodes. The reasoning for my answer is the result of experiences I had early in my information systems career.

During my time as a desktop support tech, I spent most days putting out fires. The lack of centralized patch management, host-based firewalls, build procedures, and asset management was the source of chaos for the desktop and systems administration teams. Worm outbreaks, improper configuration, and end users running with local administrator rights were the norm, not the exception. Consequently, the team was too busy chasing its tail to be proactive. Those experiences resonated heavily with me, and ever since I have insisted on being proactive whenever possible.

Would proper incident response or a SIM solution have helped my former employer? Maybe. Incident response procedures and SIMs are important parts of any defense infrastructure, but they are reactive, not preventative. Consequently, I would certainly place them in my top five, but only after implementing the basics of defense.

While Larry's hypothetical situation is enough to give any security practitioner nightmares, I found it to be a great source of self-reflection. Larry discusses the replies in more detail during Episode 172 of PaulDotCom Security Weekly, so check it out when you get a chance. I'm interested to know what you would choose, and how fast you would update your resume if you found yourself in the same situation.

Friday, November 13, 2009

DojoCon 2009

I have had several things I have been meaning to post but my day job has been keeping me crazy busy lately. However, I did manage to find a few hours to check out some of the talks streaming live from DojoCon 2009. For those not familiar with DojoCon, it was created by Marcus J. Carey this year and was held November 6-7, 2009 in Maryland. Marcus not only coordinated the conference but also donated a large amount of the proceeds to Hackers for Charity (HFC). I had the opportunity to watch several talks including the keynote from Marcus Ranum, a great talk by Matt Watchinski of Sourcefire VRT, and a fantastic breakdown on lock picking by Deviant.

I haven't had the opportunity to watch the remaining talks yet but I am looking forward to it. I recommend you check out some of the recordings, drop Marcus a thank you note, and donate to HFC. Marcus did a great job with the con and HFC is a great cause.

Thank you Marcus!

Monday, October 26, 2009

Don't be the Smelly Kid

Often I find security professionals and management treating security as a project or series of projects. While there may be security-related projects within an organization, I would argue security as a whole should not be treated as such. Securing one's environment does not have a defined start date, end date, or even budget. It needs to be part of every information system project and baked in from the beginning. Security should be part of your regularly scheduled maintenance and support structure. By treating security as one would treat personal hygiene, security becomes part of the daily routine. Lather, rinse, and repeat.

I have alluded in previous posts to the fact that security products, while sometimes helpful, can also cause more overhead and issues. Specifically, products designed to provide a "band aid" for improperly designed or implemented information systems are the equivalent of splashing on some cologne every day instead of taking a shower. Eventually, there will not be enough cologne in the world to hide the stench. So don't be the smelly kid! Lather, rinse, and repeat.

Tuesday, October 13, 2009

The Detrimental Effects of Compliance Auditing on the Security of Small Business

Many argue that regulatory compliance with PCI, SOX, MA 201 CMR 17.00, and others helps establish the minimum baseline for security in organizations. I think the point may be valid in organizations that initially had little to no security, but I would argue that it has the opposite effect on a company that has the basics and beyond covered. To be specific, smaller companies with one or two security professionals running the gamut from configuring Group Policy to writing policies and procedures are often already overwhelmed (note I fit into this category). Such professionals may quickly find themselves concentrating on outdated, incomplete regulations and laws rather than concentrating on reducing the risk of data loss by keeping up with current attack vectors, vulnerabilities, patches, and system logs.

I recently had a discussion with some colleagues on the subject of extending the compliance auditing of SaaS providers to include data beyond financial or personally identifiable information. Initially it sounds like a valid and justifiable cause. But what is the end game? If it is mountains of one-hundred-page SAS 70s with no regulation or law behind them, then it might be a worthy cause, but stacks of paper may show nothing about the security of the data being stored by the provider and will certainly distract from other effective methods of reducing risk. Honestly, if I could spend some time shooting the shit with the solution provider's security team about current security trends and attack vectors, I would probably have a more accurate assessment of their ability to secure the data.

I am not suggesting we ignore current laws or regulations. We have an obligation to follow them. I am also not suggesting we do not review the hosted solutions outside vendors are providing for non-regulated data. I do believe that the review process should not mimic compliance audits, however. The time spent during the review process should match the amount of risk involved and the assurance we gain from the security review. If the security of such data is absolutely crucial, one might consider not storing the data there in the first place.

Monday, October 12, 2009

Lessons Learned: Vulnerability and Expectations Management

As an information security professional, a large portion of my work day is spent on vulnerability and patch management. So when I saw a security advisory addressing multiple vulnerabilities in both Symantec's Corporate Antivirus and Endpoint Security Solution products last June, I took notice. You can read the security advisory here. I became concerned because other vendors also use the Intel File Transfer service, so I thought it prudent to investigate.

I began looking around and noted that Tenable Network Security had a Nessus plugin. You can find the plugin here. So like any true geek with nothing to do on a Saturday evening, I began scanning. I was surprised at what I found.

The systems running the Intel File Transfer service from other vendors were not vulnerable, but systems patched with Symantec 10.1 MR8 were still vulnerable. The solution table in Symantec's advisory states that the issue with AMS2 was fixed in this version.

I contacted someone I knew at Tenable and asked for assistance in verifying the vulnerability. The plugin actually contains remote execution code, but it is commented out by default. With instruction from Tenable, I uncommented the cmd = "calc"; line in the NASL script and ran nessusd -R to reload the Nessus plugin database. Sure enough, the next scan verified that cmd.exe would execute without authentication on the vulnerable machines.

So what gives? Is Symantec's advisory incorrect? Not entirely, although it may be misleading. This became a case of reading the fine print. Further down the advisory we find this information:

"AMS2 is installed by default with Symantec Antivirus Server 9.0. AMS2 is an optional component in Symantec Antivirus Server 10.0 or 10.1. These vulnerabilities will only impact systems if AMS has been installed."

And further down, under mitigation section:

"Reporting has replaced AMS2 as the recommended method of alerting. Symantec Endpoint Protection Central Quarantine Server 11.0 MR3 and later no longer include AMS2. Symantec recommends that customers who are still using AMS2 switch to Reporting to manage alerts in their environments. If the customer is unable to switch to reporting immediately then Symantec recommends that the customer either disables AMS2 as a temporary mitigation or completely uninstall AMS2."

All of the vulnerable systems had been upgraded from an earlier version of Symantec Antivirus Corporate Edition 9.x. During the remote upgrade process, there seemed to be no way to specify whether AMS2 was to be installed. Symantec support seemed unable to instruct me on how to remove or disable AMS2 from the affected systems, and I have spent the last several months trying to get them to change the advisory so that the solution table listed at the top of the document notes this tidbit from the bottom. To say the least, I have not been successful in this endeavor and feel a bit frustrated, although the sales executive has been nice enough to try to sell me their Endpoint Protection v11 product and recommend I start with a fresh install.

If you do want to mitigate the vulnerability, I determined that disabling the Intel File Transfer service works well and does not seem to interfere with my configuration. I recommend you test this in your own environment, however.
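
For what it is worth, the steps below are roughly what I used. "Intel File Transfer" is the display name as it appears on my systems, so double-check the name in services.msc before scripting anything:

REM stop the running service (display name assumed)
net stop "Intel File Transfer"
REM sc config needs the service key name rather than the display name; look it up first
sc getkeyname "Intel File Transfer"
REM then prevent the service from starting again (replace ServiceKeyName with the value returned above)
sc config ServiceKeyName start= disabled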

So Lessons Learned:

Read Security Advisories carefully.
Scanning is an important part of any vulnerability management plan.
Manage your expectations when dealing with vendors.

Updated December 29, 2009

Posted two updates on the release of the POC for this vulnerability and on a report from the SANS ISC of the exploit being used in the wild.

Friday, August 28, 2009

Holy Cheat Sheets Batman!

I found this gem of a blog post yesterday via Twitter. John from http://blog.securitymonks.com posted a massive list of Security Cheat Sheets that are available for free. Check out the post here. Thanks to John!

Thursday, August 20, 2009

All’s Fair in Love, War, and Hacking.

Last month, I had the opportunity to participate in the NYC InfraGard Capture the Flag event provided by WhiteWolf Security and sponsored by Tenable Network Security.

The Capture the Flag (CTF) was made up of two teams: the red team (attackers) and the blue team (defenders). The blue team was given an unprotected network with unpatched hosts and was asked to defend it to the best of their ability. To complicate matters, business injects were used to simulate the real world (e.g., the CEO wants a website up and running by the end of business). A mock FBI field office was available to report a compromise and loss of data. The blue team was not allowed to use commercial products during the event. The red team's goal was to gain access to those systems and steal the data. Points were given for each compromise and data theft. As you might expect, the odds are in the attackers' favor. One could argue that this is true in the real world too.

The winning blue team was organized, well versed, and remained calm. Each team member seemed to have expertise in a particular area or operating system. They coordinated their defense, and when they did get compromised they went into incident response mode, gathering the logs and proof they needed to report the compromise to the FBI field office. By the afternoon of the first day, they were completely locked out of their own systems. They chose to restore their systems from backup, and all of their systems were up and running again within an hour. Because of this, they won the competition.

It demonstrated the importance not only of defense in depth but also of having good incident response and disaster recovery plans in place. It is not a question of if the attackers get in, it is a question of when, so be ready!

It was a great experience and learning opportunity. If you have not had the opportunity to participate in a CTF, I fully recommend it!

Monday, June 29, 2009

Two Pounds of Crap in a One Pound Box.

When I was 16 years old, my Dad decided he wanted to purchase a Ford Mustang 5.0. It was his first new car purchase since the 1967 Mustang he bought after high school. I of course insisted he get the GT with all the options. I drove to the dealer with him and watched as he haggled with the salesman. Having worked in the automobile sales and service industry for 20 years, he knew what he wanted and seemed to know what it should cost. The dealer offered him option packages and upgrades, which my father promptly turned down. When I asked why, he said that the sports package and power windows were not going to make a muscle car perform better and were just something that would break. As a result, he purchased a stripped-down LX that was several thousand dollars less expensive than the GT and was 0.3 seconds faster from 0-60 mph. He had the car for almost 15 years before a tractor trailer took it out.

I recently realized how much that experience affected me. It's been over 20 years since that visit to the dealership, yet only now do I see how deeply the idea of simplicity has steered my decisions with technology. In my previous post, The Risk of Complexity, I wrote about the difficulties of securing complex technologies and mentioned the importance of the fundamentals of security. I wanted to expand on that thought here and outline some simple things that one should look for when evaluating technology solutions. Some fundamental features every solution should include are: reliability, detailed logging, ease of systems administration, complete and accessible documentation, and a proven support history. It is also important to research the software provider's track record on addressing functional bugs and security flaws.

It seems absurd, but many solution providers offering advanced technologies and features seem to fail terribly at basic functionality and stability. To summarize, it does not matter how sexy a security solution is if it fails open, crashes, or has unaddressed bugs in it. Moreover, if descriptive logs and documentation are not available and you cannot obtain an intelligent response from product support on an issue, then you have put the data you are assigned to protect at risk.

I recently had a conversation with the sales executive of a security solution about issues I have experienced with their product. His proposed solution was to purchase the new model with the extended warranty (also known as an upgrade with premium support). When I asked why I needed premium support to report an unpatched remote code execution vulnerability in a supported version, he attempted to sell me another solution his company offers.

So I wanted to offer this suggestion to those test driving solutions: the next time you are evaluating a product, ask some questions regarding the aforementioned matters. Kick the tires and listen to the sound the door makes when you slam it. Test drive the product and make sure the suspension is tight at high speeds. If Hyped Solution Inc. keeps pushing the limited edition report package or pie chart upgrade, then it may be time to drive up the street and find another dealer.

I would like to thank @Beaker for a recent tweet about his rental car and the recent blog post by @Jack_Daniel for jolting this memory out of my subconscious. Both individuals have remarkable ideas that they openly share with the security community and I fully recommend following their work.

Friday, June 26, 2009

The OWASP Podcast Series

While working on my next blog post, I happened upon episode 27 of the OWASP (Open Web Application Security Project) podcast, an interview with Rafal Los. If you have not subscribed to the OWASP podcast, let me recommend it now!

Rafal gets pretty fired up during the interview on the direction that web application development has headed. He notes the importance of simplicity when developing web applications and condemns complexity. His arguments are convincing and it is worth a listen. Unfortunately, I am not convinced that what needs to happen will ever happen but one can hope.

In episode 28, an interview with Ross John Anderson, Ross discusses the axiom of functionality, scalability, and security. He proposes that an information system cannot have more than two of these at any given time. Again, the interview is worth a listen.

Monday, June 15, 2009

Special Webcast: SANSFIRE 2009: Geekonomics

I recently discovered the book Geekonomics: The Real Cost of Insecure Software by David Rice after listening to his AusCERT 2009 talk on risky.biz. David is a fantastic speaker and makes some very convincing points about the role of economics, psychology, and sociology in the security inadequacies that plague software. I am still reading his book and hope to post a review once I am done, but I wanted to point out that SANS will be offering a special live webcast of David Rice's talk from SANSFIRE 2009 this Wednesday evening, June 17, 2009, at 7:00 PM EDT. If you have an hour to spare, I recommend checking it out! You can register for the webcast here: https://www.sans.org/webcasts/show.php?webcastid=92538

Sunday, June 14, 2009

The Risk of Complexity

It is human nature to desire a shiny new technology based on marketing claims and feature promises. But many times during my career in information technology and security I have really questioned the “value add” of a particular solution or system. Will it really lower costs, improve employee performance, and facilitate collaboration? Will it provide the seamless interoperability between complex systems as advertised? Will it do all this and still provide stability and security? Or are we just attempting to throw complex technology at managerial, organizational, and performance issues as a fix?

Often, adding more complexity to technology will only make the issues associated with that technology more complex. These issues include security. Generally speaking, with more complexity comes less security. This is not necessarily because the ability to secure the technology does not exist, but because it becomes out of reach due to resource limitations: limitations in finances, time, and expertise. Complexity can increase the attack surface area of a network, hence decreasing its security posture, unless the proper training, planning, and defensive resources have been budgeted and obtained. Unfortunately, this is often not the case. Moreover, much of the technology used to secure and defend such solutions can increase the complexity of one's information systems even further, potentially causing an endless loop of new features and defensive solutions.

Virtualization is a great example of this. The ability to virtualize operating systems, resources, and applications has many advantages in IT infrastructure and business. But the ease of virtualizing systems, combined with a lack of planning and available expertise in these products has the potential of creating an out of control scenario of misconfiguration and mismanagement. Proper change control, build procedures, code review, monitoring, disaster recovery planning, and documentation still need to be addressed. The security risk associated with virtualization needs to be assessed, managed, mitigated, and re-assessed on a regular basis. This can be a daunting task without the proper resources. Such resources may not have been factored in during the budgeting and planning process or may no longer exist during economic downturns.

I am not downplaying the incredible benefits of virtualization. I use virtualization too. However, much like any technology, it has its place, and I don't believe the "let's virtualize everything" mantra. The idiom "don't put all your eggs in one basket" comes to mind. Doing so can be a serious mistake with dire consequences for assuring the confidentiality, integrity, and availability of data. I only use virtualization as an example, due to its prevalence in our industry and the complex baggage that often comes with it. There are dozens of other examples that could be used, but like most, I cite examples that I am familiar and comfortable with.

The recent compromise of Vaserv.com, a UK ISP, has been reported to affect over 100,000 hosted web sites, some of which may never recover. Some have reported the attack was the result of a vulnerability in the virtualization technology the web hosts were running on, while others claim bad administrative practices are to blame. Some have questioned Vaserv's disaster recovery and incident response procedures, or lack thereof. Most likely, it is a combination of these factors that contributed to this colossal failure. Was the complexity of the technology to blame? Was Vaserv.com naïve to think they could increase their profit margin by decreasing engineering and administrative costs through the use of virtualization? Or was the company putting all its "eggs in one basket" and ignoring the fundamentals of security?

These are only speculations on my part, as I am, like most, not privy to the details of the compromise. The irony of this example is that Vaserv.com was marketed as a low-cost hosting solution. One may speculate that many companies and individuals chose their hosting services to save money, only to incur a substantial financial loss from the incident. Some may feel I am simplifying the issue at hand, but sometimes that is all that is needed.