Is it important to account for or acknowledge risks that may not apply to an organization or system? What if you identified a risk that you would typically assess but would set aside because of the context? Say, for example, your organization is not in a floodplain, but it is your usual practice to assess flood risk for all of your organization's locations. What if you have validated against FEMA 100-Year Flood Zones that the total flood risk facing the organization is very low, since no site is in a location that requires flood insurance? Do you still need to acknowledge the possibility of the threat occurring?
I believe it is essential to acknowledge the risk. As part of the risk assessment, we need to document it as very low risk requiring only minimal safeguards; the local building code would define those minimum safeguards. However, there could be situations where the asset value of the site is so high that you cannot ignore the risk altogether, say, when the location is the organization's primary data center. In such situations, the organization must implement all appropriate controls required to protect it from flooding. The assessment needs to be revisited periodically to determine whether the risk is still insignificant, and each evaluation must be based on the facts and figures current at that time. Continue reading “Acknowledging Non-Applicable Threats”
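The trade-off above, the same low flood likelihood but very different asset values, can be sketched with the standard annualized loss expectancy (ALE) formula from quantitative risk assessment. All figures below are hypothetical illustrations, not values from any real assessment:

```python
def ale(asset_value, exposure_factor, aro):
    """Annualized Loss Expectancy for one threat against one asset.

    SLE (Single Loss Expectancy) = asset_value * exposure_factor
    ALE = SLE * ARO (Annualized Rate of Occurrence)
    """
    sle = asset_value * exposure_factor
    return sle * aro

# A branch office outside the 100-year floodplain: low ARO, modest asset value.
branch = ale(asset_value=500_000, exposure_factor=0.4, aro=0.001)

# A primary data center facing the same low flood likelihood: the high asset
# value keeps the expected annual loss large enough to justify flood controls.
datacenter = ale(asset_value=50_000_000, exposure_factor=0.4, aro=0.001)

print(f"Branch office ALE: ${branch:,.0f}")    # $200
print(f"Data center ALE:   ${datacenter:,.0f}")  # $20,000
```

The point of the sketch is that the likelihood term alone never drives the decision; the same 1-in-1,000 annual occurrence rate yields a negligible expected loss for one site and a material one for another, which is why the risk should be acknowledged and revisited rather than dismissed.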
To identify emerging threats, we first need to know which technologies are emerging. According to the World Economic Forum, the following are emerging technologies:
- Nanosensors and the Internet of Nanothings
- Next Generation Batteries
- The Blockchain
- 2D Materials
- Autonomous Vehicles
- Perovskite Solar Cells
- Open AI Ecosystems
- Systems Metabolic Engineering
Correlating this with the 2016 Verizon Data Breach Investigations Report, I believe malware leading to command and control (C2) and then to compromised accounts and devices will be the top threat in the coming years, especially for emerging technologies like nanosensors and the Internet of Nanothings, the blockchain, autonomous vehicles, and open AI ecosystems. A compromise of these technologies would impact privacy, which is not to say the other technologies would be unaffected. The same techniques would be used for cyber espionage and intellectual property theft, and most of these attacks would occur over the Internet. Continue reading “Emerging Threats”
We cannot attribute the beginning of cloud computing to a particular person or time. It evolved along with the Internet and enterprise computing, and we may be able to trace its roots all the way back to 1969, when Dr. Larry Roberts developed the ARPANET. (Whitman & Mattord, 2016)
While ARPANET was evolving toward Ethernet and then the Internet, enterprises were discovering new ways to compute, moving from mainframes to multi-tier architectures. During the early stages of enterprise computing, enterprises purchased hardware and software to host internally. Though not in the form we see today, they had an early version of the cloud in networked mainframe systems with dumb terminals. They then slowly began to outsource their information systems to Internet Service Providers (ISPs) and Application Service Providers (ASPs).
The concept of using computing as a utility was probably first proposed by Professor Noah Prywes of the University of Pennsylvania in the fall of 1994, in a talk at Bell Labs: “All they need is just to plug in their terminals so that they receive IT services as a utility. They would pay anything to get rid of the headaches and costs of operating their own machines, upgrading software, and what not.” (Faynberg, Lu, & Skuler, 2016). The idea came to fruition when Amazon launched the limited beta of Elastic Compute Cloud (EC2) in 2006. Meanwhile, Salesforce.com had already mastered delivering an enterprise application through a simple website. Continue reading “Cloud Computing and Data Security”