Defensive Cyber Security


This report is for anyone who enjoys a technical read.

by Sal Paladino


A great deal of attention has been paid recently to security through better software, and rightfully so.  As a result, organizations are becoming increasingly committed to security through sound software development practices.  The shift in thinking that has turned software security into a priority rather than an afterthought has done much to limit the buffer overflows, format string problems, cross-site scripting bugs, and race conditions that have made SQL injections, data validation attacks, stack smashing, and countless other attacks possible for far too long.  Software developers have adopted formal development models such as Capability Maturity Model Integration (CMMI) to refine processes and improve quality, and (ISC)² [1], a respected certification authority, has even announced a certification program called the Certified Secure Software Lifecycle Professional (CSSLP).

These changes all send a positive message; however, while many entities develop software, nearly all organizations acquire, implement, use, or otherwise rely on software in executing their critical operations.  As a result, the secure design of software has to be viewed as just one of many steps on the ladder of information security.

This article presents a collection of ten important security points for organizations that rely on software in any form or fashion.  It speaks to the fact that many must acquire their software from third parties and may have little control over its design.  It acknowledges that the insecure implementation of an application, whether inherently secure or not, will ultimately lead to compromise.  It concedes that people will make mistakes, and without proper training, will undoubtedly expose their employers to an array of security threats by using software in a manner for which it is not intended.

Establish an Information Security Awareness Program

User error and abuse is still a major cause of adverse incidents, and organizations are doing very little to educate employees about security.  According to the Computer Security Institute’s Computer Crime and Security Survey, 42% of respondent agencies reported that awareness and education comprise less than 1% of their security budget.  Further, the percentage of agencies reporting significant losses from insider abuse has hovered around 50% since 2004, and is much higher when insider error or negligence is factored in [2].

To respond effectively, organizations need to educate all of their employees in the proper use of information systems, software, and networks.  This program should not only raise awareness of contemporary threats, but also identify the location of relevant policies, discourage high-risk behaviors, and clearly define the consequences of non-compliance.  The best programs provide both general and role-based training (for software developers, system administrators, or anyone with elevated access rights) and deliver mandatory refresher sessions at least once per year.  Many organizations even encourage employees to pursue third-party security certification and education programs.  Those unsure which certifications pass muster can consult Department of Defense Directive (DoDD) 8570, which includes a list of certifications deemed appropriate for enhancing information assurance aptitude among government employees and contractors [3].

Virtualization Software is Not a Total Security Solution

Many professionals have looked to virtualization software to boost their security posture, claiming that while virtual environments are capable of emulating an array of hardware and software technologies, they do not carry with them the same set of vulnerabilities.  Others have even elected to use virtual machines as firewalls.  While it is true that virtual machines can offer cost savings, flexibility, and an added layer of security, these systems have recently come under attack by several creative analysts.  The primary focus of their testing is measuring detectability and escapability; that is, how easily attackers can determine which virtual technology is being used and whether or not they can effectively escape from it into another operating system.  The results of some of these assessments are not overly positive [4].  Further, implementing virtual systems can inhibit redundancy and availability if multiple virtual machines are running on the same physical hardware.  The bottom line is that while virtual machines can be secure, they should not be viewed as a security panacea or as a substitute for a layered security approach.
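To make the detectability concern concrete, here is a minimal Python sketch of the kind of check an analyst (or an attacker) might run: it scans DMI/SMBIOS hardware strings for common hypervisor signatures.  The signature list and function name are illustrative assumptions, not a complete detection method.

```python
# Minimal sketch: flag common hypervisor signatures in DMI/SMBIOS text.
# The signature list is illustrative, not exhaustive.
HYPERVISOR_SIGNATURES = ("vmware", "virtualbox", "kvm", "qemu", "xen", "hyper-v")

def looks_virtualized(dmi_text: str) -> bool:
    """Return True if any known hypervisor signature appears in the text."""
    text = dmi_text.lower()
    return any(sig in text for sig in HYPERVISOR_SIGNATURES)

# On Linux, the DMI product string can often be read from
# /sys/class/dmi/id/product_name and passed to this function.
```

Checks like this are exactly why "attackers cannot tell they are in a VM" is a weak assumption to build a security posture on.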

Beware of Transitive Trust Issues in Acquired Software

Regardless of whether you are developing software or purchasing it, proper steps must be taken to avoid transitive trust issues.  Transitive trust is a trust relationship that develops beyond the two parties that are directly interacting with one another.  It becomes problematic when one party obtains code from a second party, only to find out it was developed by a third party or perhaps multiple unknown parties [5].  Transitive trust first got the attention of the DoD in 2004 when the Government Accountability Office (GAO) published a report citing concerns that DoD was not doing enough to prevent the use of code acquired from unknown or foreign developers [6].  The primary audience was the software development community, but the lesson is relevant in the acquisition and use of all software applications.  From payroll systems to office suites to Web 2.0 technologies, employees may be using applications from untrusted sources or those from trusted sources that rely on questionable third parties.  This is likely to get worse as organizations rely more on Web 2.0 and social networking technologies, as these applications are riddled with mashups, widgets, advertisements, and other potentially unfamiliar components.  Figure 1 illustrates this chain of trust [7].
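One practical step when acquiring software, sketched below in Python, is to verify that a delivered package matches the digest its supplier published before installing it.  This does not resolve transitive trust by itself (the supplier's own dependencies remain opaque), but it does confirm the package was not altered in transit.  The function name and the assumption that the supplier publishes a hex SHA-256 digest are illustrative.

```python
import hashlib
import hmac

def verify_package(data: bytes, published_digest: str) -> bool:
    """Check acquired software against the digest its supplier published.

    `published_digest` is assumed to be a hex SHA-256 string obtained
    from the supplier through a separate, trusted channel.
    """
    computed = hashlib.sha256(data).hexdigest()
    # Constant-time comparison avoids leaking match position via timing.
    return hmac.compare_digest(computed, published_digest.lower())
```

A digest check belongs in the acceptance step of any acquisition process, before the package ever reaches production systems.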

Control Downloads and Filter Web Traffic

Organizations have generally become better about blocking access to websites believed to be obscene, offensive, or otherwise inappropriate for their employees.  The common belief is that sites containing such material are a source of malicious code.  While these sites certainly pose a threat, the fact remains that malicious code is frequently downloaded from sites believed to be wholly legitimate.  In fact, a 2009 Websense report indicated that 70% of the most popular websites hosted some form of malicious code or contained links to known malicious sites [8].  While commercial web filtering solutions offer the capability to block nearly all of these pages, blocking is typically scaled back to bare minimum levels.

The solution is for organizations to take a more aggressive approach, blocking any content deemed unrelated to normal business operations.  Further, policies should prohibit the download and/or installation of content not explicitly approved by management.  This can be enforced technically through desktop management systems, web proxies, and even anti-virus solutions.  Management can then periodically formulate and distribute lists of acceptable applications based on sound risk analysis.
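The allow-list approach described above can be sketched in a few lines of Python: a proxy or desktop agent permits a request only if its host appears on the management-approved list.  The host names here are placeholders, and a production filter would live in the web proxy itself.

```python
from urllib.parse import urlparse

# Illustrative allow-list; in practice this comes from management's
# periodically distributed list of approved sites.
ALLOWED_HOSTS = {"intranet.example.com", "vendor-support.example.net"}

def is_allowed(url: str) -> bool:
    """Permit a request only if its host is expressly on the allow-list."""
    host = urlparse(url).hostname or ""
    return host.lower() in ALLOWED_HOSTS
```

Note that the default answer is "deny": anything not expressly approved is blocked, which is the aggressive posture the text recommends.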

Test Early, Often, and Using Different Methods

Testing is a priority of software developers and engineers and has its rightful place in all phases of the development process, but what kind of testing are most organizations performing on third-party applications?  Acquisition personnel can make the mistake of placing too much faith in the quality control efforts and security-mindedness of vendors or suppliers.  The National Institute of Standards and Technology (NIST) tells us that third-party applications must be tested just as rigorously as (or more rigorously than) those developed in-house.  Applications should be tested for coding errors, malware insertion, and backdoors.  If the source code is not made available, then static binary analysis tools should be utilized to test the compiled code.  (Note: you may have to add a provision to contracts with suppliers in order to avoid violating the Digital Millennium Copyright Act.)  Automated, remote application scanners can be used to test web apps, and a thorough review of security capabilities should be executed against the original stated requirements [9].  Integration testing must be done to determine how the software will work in your environment.  Lastly, some form of acceptance testing should be done to determine how users will interact with the software.  This can go a long way in predicting otherwise unexpected or unintended behaviors that can lead to security vulnerabilities, and it can inform application training programs for end users.
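As an illustration of the very simplest form of static binary inspection, the Python sketch below extracts printable strings from a compiled binary (much like the Unix `strings` utility) and flags embedded URLs, a crude hint of undocumented network behavior.  Real static binary analysis tools go far deeper; this is only a sketch of the idea.

```python
import re

def printable_strings(blob: bytes, min_len: int = 6):
    """Extract ASCII runs of at least min_len characters from a binary."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

def suspicious_strings(blob: bytes):
    """Flag embedded URLs -- a crude hint of undocumented network calls."""
    return [s for s in printable_strings(blob) if "http://" in s or "https://" in s]
```

Even a check this naive has surfaced hardcoded callback addresses in practice; it is a reasonable first pass before committing to heavier tooling.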

Focus Patching on Applications as well as Operating Systems

The number of vulnerabilities associated with applications has grown substantially and now exceeds the number of network and operating system vulnerabilities.  This is, in part, attributed to the fact that while organizations have automated the process of patching operating systems, applications are often patched manually and sometimes overlooked entirely.  The result is that hackers are crafting more exploits designed to target applications.  Specifically, those with malicious intent are targeting browsers and other client-side applications [10].

Organizations can minimize the risk of running applications by instituting a formal patch management policy and developing manual patching strategies for applications that automated tools do not cover.  It is also prudent to install web application firewalls and other specialized application firewalls where possible.  Further, all in-house or third-party software should filter input based on white-lists.  White-lists offer greater security than black-lists because they define only the data and/or applications that are expressly permitted, rather than forcing the administrator to compile a list of everything that is not.  This can go a long way considering the number of attacks attributed to the passing of malformed data.  For an even higher level of assurance, the organization should employ penetration testers (or contract with third-party suppliers who do so) with specialized experience in testing web and other applications against the Common Weakness Enumeration (CWE) collection of vulnerabilities.  Particular attention should be paid to the "2010 CWE/SANS Top 25 Most Dangerous Programming Errors" list [11] [12].
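The white-list principle can be shown with a small Python sketch: input is accepted only if it matches an expressly permitted pattern (here, a hypothetical account ID format of two letters and six digits), so malformed data is rejected by default rather than by enumeration.

```python
import re

# Allow-list pattern for a hypothetical account ID format (two letters,
# six digits): only expressly permitted input is accepted.
ACCOUNT_ID = re.compile(r"[A-Z]{2}\d{6}")

def is_valid_account_id(value: str) -> bool:
    """Reject anything that does not match the permitted format exactly."""
    return ACCOUNT_ID.fullmatch(value) is not None
```

Because the validator never tries to list bad input, an injection payload fails the same way a typo does: it simply is not on the list of what is allowed.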

Use Security Templates to Configure the Environment

IT administrators struggle mightily with configuring the hundreds of settings that will ultimately have implications on their organization’s information security posture, and even more importantly, what employees can or cannot do on their systems.  In dynamic environments, administrators may quickly lose control of computer and network settings and find it daunting to harden systems to an acceptable level.

One solution is the use of security templates.  These templates allow administrators to configure settings such as password policies, event logging, user rights, file system and registry permissions, and services in a centralized manner.  Moreover, they provide the opportunity for technical staff to streamline the process of getting baselines approved by upper management and referring back to those baselines when users seek out exceptions.  Microsoft allows IT professionals to utilize Active Directory and Group Policy Objects (GPOs) to quickly and effectively deploy security templates.  It even offers a default set of templates to provide a logical starting point [13].
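The baseline idea can also be expressed in code.  The Python sketch below diffs a host's effective settings against an approved baseline, which is the core of what template-compliance tooling does; the setting names and values are illustrative, not a real template format.

```python
# Sketch: diff a host's effective settings against an approved baseline.
# Setting names and values are illustrative, not a real template format.
BASELINE = {
    "MinimumPasswordLength": 12,
    "LockoutThreshold": 5,
    "AuditLogonEvents": "Success,Failure",
}

def baseline_deviations(effective: dict) -> dict:
    """Return settings that differ from, or are missing from, the baseline."""
    return {
        name: effective.get(name)
        for name, required in BASELINE.items()
        if effective.get(name) != required
    }
```

Running a diff like this on a schedule gives administrators the audit trail they need when users request exceptions to the approved baseline.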

Enforce Least Privilege on the Desktop

IT security administrators will tell you that far too many users demand administrative privileges on their local machines, leaving those machines much more vulnerable to malware infection.  Requests for these privileges may be particularly common among highly technical people as they may need to run certain programs or install applications in the course of their duties.  Once given, it can be difficult to take these rights away, resulting in serious privilege creep.   IT professionals may also have difficulty in deciding who legitimately needs elevated rights and who is abusing the system.  An incorrect decision, they fear, could lead to a program not running properly or perhaps an angry call to the boss.  The solution is twofold.  First, a privilege audit should be conducted in order to ensure that only those who absolutely need administrative rights are granted them.  Second, the IT staff should provide employees with user accounts that are utilized for ordinary operations and local administrator accounts that can be utilized when elevated rights are necessary.  In Windows XP, this can be accomplished using the “Run as…” feature.  Windows 7 makes things a bit easier with User Account Control, and in nearly every Unix implementation, the “sudo” command negates the need for a second account altogether.

Control the Use of Mobile/Portable Devices for Storage and Transfer

With storage capacities getting larger and form factors getting smaller, flash drives and other portable media are becoming a favorite of users looking to store or transfer a lot of data, or even run portable software.  The problem lies in the fact that these devices are more susceptible to theft, easier to surreptitiously transport in and out of security zones, and can be used as an effective vehicle for propagating viruses and other forms of malware.  Policy should dictate the types of devices that are authorized as well as mandate encryption for data classified at appropriate levels.  Policies should cover issues such as portable software to ensure that users are not running unauthorized operating systems or applications from their mobile devices.  Users should be made aware of the implications of using the same devices on their work and home systems, as well as basic physical safeguards such as not leaving items unattended in public places.  Lastly, policy must instruct employees never to connect abandoned or lost devices in order to identify their contents.  Doing so could trigger an attack the hacker community calls the "USB Switchblade," whereby a storage device is intentionally left unattended in the hope that an employee will connect it.  This is fast becoming a common technique for reconnaissance and malware propagation [14].
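Device-control policies of this kind are commonly enforced by matching a USB device's vendor and product IDs against an approved list before the operating system mounts it.  A minimal Python sketch of that check, with hypothetical IDs:

```python
# Illustrative allow-list of USB (vendor_id, product_id) pairs approved
# by policy; the IDs below are hypothetical placeholders.
AUTHORIZED_DEVICES = {("0951", "1666"), ("0781", "5581")}

def device_authorized(vendor_id: str, product_id: str) -> bool:
    """Permit only devices whose IDs appear on the approved list."""
    return (vendor_id.lower(), product_id.lower()) in AUTHORIZED_DEVICES
```

As with web filtering, the safe default is deny: an unknown device, including one "found" in the parking lot, never mounts.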

Look Before You Dive into Cloud Computing and Web 2.0 Technology

In the interest of cutting costs and reducing the need for hardware, many organizations are purchasing services such as cloud computing, thereby entrusting some of their critical operations to third parties.  These decisions may be good for the immediate bottom line, but they bring with them some serious and potentially costly security vulnerabilities.  First, service providers may not guarantee customers an exact location where potentially sensitive data will be stored.  This presents jurisdictional issues in the event that a cybercrime is perpetrated against the service provider or the data owner.  Second, as Cisco CEO John Chambers admitted in 2009, cloud computing contradicts many proven security principles such as security perimeters, zones, and data compartmentalization.  This requires those offering cloud computing services to rethink their security design, something that may take time [15].

There is also a push among managers of information-based organizations to move to Web 2.0 and social networking sites for knowledge management and even collaboration on projects.  These technologies bring about an array of vulnerabilities to XML poisoning, cross-site request forgeries, and authentication attacks.  Worst of all, they afford users the opportunity to post sensitive or inappropriate content without proper review, leading to a series of problems with data aggregation or disclosure.  As such, a thorough review of Web 2.0 technologies should be executed prior to their implementation.  Considerations of who will be using the system need to be made and appropriate user access levels need to be decided upon.  Someone needs to be charged with managing the system and removing content that is not appropriate for consumption.  Presenting the system to users as a mechanism for less formal communication and project planning, as opposed to a platform for exchanging technical information, can go a long way in avoiding data management issues.

[1] (ISC)²: The International Information Systems Security Certification Consortium.
[2] Richardson, R. CSI Computer Crime and Security Survey. Computer Security Institute, 2008. http://www.cse.msstate.edu/~cse6243/readings/CSIsurvey2008.pdf
[3] Department of Defense. Information Assurance Workforce Improvement Program. April 10, 2010. DTIC. http://www.dtic.mil/whs/directives/corres/pdf/857001m.pdf
[4] Liston, T. and Skoudis, E. Thwarting VM Detection. 2006.
[5] Grimes, R. InfoWorld. April 25, 2008.
[6] United States General Accounting Office. Defense Acquisitions: Knowledge of Software Suppliers Needed to Manage Risk. 2004. http://www.gao.gov/new.items/d04678.pdf
[7] The figure appears on pages 1-2 of Polydys, M. L. and Wisseman, S., "Software Assurance in Acquisition: Mitigating Risks to the Enterprise," Information Resources Management College research paper, Washington, DC: National Defense University Press, February 2009. That figure is based on Walker, E., "Software Development Security: A Risk Management Perspective," The DoD Software Tech News–Secure Software Engineering, Vol. 8, No. 2, July 2005, Rome, NY: Data & Analysis Center for Software.
[8] Websense. 2009. http://investor.websense.com/releasedetail.cfm?ReleaseID=360276
[9] National Institute of Standards and Technology. Twenty Critical Controls for Effective Cyber Defense, Version 2.1. August 10, 2009. http://csis.org/files/publication/Twenty_Critical_Controls_for_Effective_Cyber_Defense_CAG.pdf
[10] SANS. The Top Cyber Security Risks. September 2009. http://www.sans.org/top-cyber-security-risks/
[11] National Institute of Standards and Technology. Twenty Critical Controls for Effective Cyber Defense, Version 2.1. August 10, 2009. http://csis.org/files/publication/Twenty_Critical_Controls_for_Effective_Cyber_Defense_CAG.pdf
[12] Department of Homeland Security, National Cyber Security Division. Common Weakness Enumeration Strategic Initiative. http://cwe.mitre.org/top25/
[13] Melber, D. Understanding Windows Security Templates. WindowsSecurity.com.
[14] Hak5.org. USB Switchblade. 2010.
[15] McMillan, R. Cloud Computing a "Security Nightmare," Says Cisco CEO. PCWorld. April 22, 2009. http://www.pcworld.com/businesscenter/article/163681/cloud_computing_a_security_nightmare_says_cisco_ceo.html

About the Author

Salvatore C. Paladino is a Cyber Security Analyst with ITT Corporation’s Information Protection and Sharing Department in support of both the Department of Homeland Security’s Science & Technology Directorate and ITT’s IT Security Operations Center.  His areas of expertise include technology evaluation, transition, and deployment, information security policy development, information assurance training and awareness, and the identification of emerging cyberthreats.  He has authored numerous technical papers and has testified before the New York State Commission of Investigation as an expert witness specializing in cybercrime.

Mr. Paladino holds a BS with a concentration in Computer Security from Utica College of Syracuse University and an MBA in Technology Management from the State University of New York.  In addition to being a Certified Information Systems Security Professional (CISSP), he is a Network+, Security+ and A+ Certified Professional.  He is also an Adjunct Lecturer in the School of Business and Justice Studies at Utica College.

Author Contact Information
Email: Sal.Paladino@itt.com
