Monday, June 29, 2009

When I was 16 years old, my dad decided he wanted to purchase a Ford Mustang 5.0. It was his first new car purchase since the 1967 Mustang he bought after high school. I, of course, insisted he get the GT with all the options. I drove to the dealer with him and watched as he haggled with the salesman. Having worked in the automobile sales and service industry for 20 years, he knew what he wanted and seemed to know what it should cost. The dealer offered him option packages and upgrades, which my father promptly turned down. When I asked why, he said that a sports package and power windows were not going to make a muscle car perform better; they were just more things that could break. As a result, he purchased a stripped-down LX that was several thousand dollars less expensive than the GT and 0.3 seconds faster from 0 to 60 mph. He had the car for almost 15 years before a tractor trailer took it out.
I recently realized how much that experience affected me. It has been over 20 years since that visit to the dealership, and I only now see how deeply the idea of simplicity has steered my decisions with technology. In my previous post, The Risk of Complexity, I wrote about the difficulties of securing complex technologies and mentioned the importance of the fundamentals of security. I want to expand on that thought here and outline some simple things to look for when evaluating technology solutions. Fundamental features every solution should include are: reliability, detailed logging, ease of systems administration, complete and accessible documentation, and a proven support history. It is also important to research the software provider's track record on addressing functional bugs and security flaws.
It seems absurd, but many solution providers offering advanced technologies and features fail terribly at basic functionality and stability. In short, it does not matter how sexy a security solution is if it fails open, crashes, or ships with unaddressed bugs. Moreover, if descriptive logs and documentation are not available, and you cannot get an intelligent response from product support on an issue, then you have put the data you are assigned to protect at risk.
I recently had a conversation with the sales executive of a security solution about issues I have experienced with their product. His proposed solution was to purchase the new model with the extended warranty (also known as an upgrade with premium support). When I asked why I needed premium support to report an unpatched remote code execution vulnerability in a supported version, he attempted to sell me another solution his company offers.
So I offer this suggestion to those test driving solutions: the next time you are evaluating a product, ask some questions about the matters above. Kick the tires and listen to the sound the door makes when you slam it. Test drive the product and make sure the suspension is tight at high speeds. If Hyped Solution Inc. keeps pushing the limited edition report package or the pie chart upgrade, it may be time to drive up the street and find another dealer.
I would like to thank @Beaker for a recent tweet about his rental car, and @Jack_Daniel for a recent blog post, for jolting this memory out of my subconscious. Both openly share remarkable ideas with the security community, and I highly recommend following their work.
Friday, June 26, 2009
The OWASP Podcast Series
While working on my next blog post, I happened upon episode 27 of the OWASP (Open Web Application Security Project) podcast, an interview with Rafal Los. If you have not subscribed to the OWASP podcast, let me recommend it now!
Rafal gets pretty fired up during the interview about the direction web application development has taken. He notes the importance of simplicity when developing web applications and condemns complexity. His arguments are convincing, and the episode is worth a listen. Unfortunately, I am not convinced that what needs to happen ever will, but one can hope.
In episode 28, an interview with Ross Anderson, Ross discusses the axiom of functionality, scalability, and security: he proposes that an information system can have no more than two of these at any given time. Again, the interview is worth a listen.
Monday, June 15, 2009
Special Webcast: SANSFIRE 2009: Geekonomics
I recently discovered the book Geekonomics: The Real Cost of Insecure Software by David Rice after listening to his AusCERT 2009 talk on risky.biz. David is a fantastic speaker and makes some very convincing points about the role of economics, psychology, and sociology in the security inadequacies that plague software. I am still reading the book and hope to post a review once I am done, but I wanted to point out that SANS will be offering a special live webcast of David Rice's talk from SANSFIRE 2009 this Wednesday evening, June 17, 2009, at 7:00 PM EDT. If you have an hour to spare, I recommend checking it out! You can register for the webcast here: https://www.sans.org/webcasts/show.php?webcastid=92538
Sunday, June 14, 2009
The Risk of Complexity
It is human nature to desire a shiny new technology based on marketing claims and feature promises. But many times during my career in information technology and security, I have questioned the "value add" of a particular solution or system. Will it really lower costs, improve employee performance, and facilitate collaboration? Will it provide the seamless interoperability between complex systems as advertised? Will it do all this and still provide stability and security? Or are we just throwing complex technology at managerial, organizational, and performance issues and calling it a fix?
Often, adding more complexity to a technology only makes the issues associated with it more complex, and those issues include security. Generally speaking, with more complexity comes less security. This is not necessarily because the ability to secure the technology does not exist, but because that ability is pushed out of reach by limited finances, time, and expertise. Complexity increases the attack surface of a network and thereby degrades its security posture, unless the proper training, planning, and defensive resources have been budgeted and obtained. Unfortunately, this is often not the case. Moreover, much of the technology used to secure and defend such solutions adds still more complexity to one's information systems, potentially creating an endless loop of new features and defensive solutions.
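To make that intuition concrete, here is a rough back-of-the-envelope sketch (my own illustration with invented numbers, not a formal model or measured data): if each component of a system independently carries some small chance of harboring a serious flaw, the odds that the system as a whole contains at least one flaw climb quickly as components are added.

# Rough illustration only. Assumes each component independently has a
# small probability p of containing a serious flaw; the 2% figure below
# is an invented example, not measured data.
def p_at_least_one_flaw(n_components, p_flaw):
    """Probability that at least one of n independent components is flawed."""
    return 1 - (1 - p_flaw) ** n_components

p = 0.02  # assumed 2% chance of a serious flaw per component
for n in (5, 20, 50, 100):
    print("%3d components -> %2.0f%% chance of at least one flaw"
          % (n, 100 * p_at_least_one_flaw(n, p)))

With those assumed numbers, five components carry roughly a 10% chance of at least one serious flaw, while a hundred components push it past 85%. The exact figures are beside the point; the direction of the trend is what matters.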
Virtualization is a great example. The ability to virtualize operating systems, resources, and applications has many advantages for IT infrastructure and business. But the ease of virtualizing systems, combined with a lack of planning and of available expertise in these products, can create an out-of-control scenario of misconfiguration and mismanagement. Proper change control, build procedures, code review, monitoring, disaster recovery planning, and documentation still need to be addressed. The security risk associated with virtualization needs to be assessed, managed, mitigated, and re-assessed on a regular basis. That can be a daunting task without the proper resources, and such resources may not have been factored into the budgeting and planning process, or may no longer exist during an economic downturn.
I am not downplaying the incredible benefits of virtualization; I use it too. However, like any technology, it has its place, and I don't believe the "let's virtualize everything" mantra. The idiom "don't put all your eggs in one basket" comes to mind: doing so can be a serious mistake, with dire consequences for the confidentiality, integrity, and availability of data. I use virtualization only as an example, because of its prevalence in our industry and the complex baggage that often comes with it. There are dozens of other examples, but like most people, I cite the ones I am familiar and comfortable with.
The recent compromise of Vaserv.com, a UK ISP, has been reported to affect over 100,000 hosted web sites, many of which may never recover. Some reports attribute the attack to a vulnerability in the virtualization technology the web hosts were running on, while others blame bad administrative practices. Still others have questioned Vaserv's disaster recovery and incident response procedures, or lack thereof. Most likely, a combination of these factors contributed to this colossal failure. Was the complexity of the technology to blame? Was Vaserv.com naïve to think it could increase its profit margin by cutting engineering and administrative costs through virtualization? Or was the company putting all its eggs in one basket and ignoring the fundamentals of security?
These are only speculations on my part, as I am, like most, not privy to the details of the compromise. The irony of this example is that Vaserv.com was marketed as a low-cost hosting solution. One may speculate that many companies and individuals chose its hosting services to save money, only to incur a substantial financial loss from the incident. Some may feel I am oversimplifying the issue at hand, but sometimes simple is all that is needed.