Do organizations need hardware firewalls when the network already has host-based software firewalls? Wouldn't that just add cost and complexity to the network? Wouldn't a system protected by a host-based software firewall be just as secure as one behind a hardware firewall, provided the software firewall is implemented appropriately?
“Firewalls actually come in two distinct flavors: software applications that run in the background and hardware devices that plug in between your modem and one or more PCs. Both types hide your PC’s presence from other systems, prevent unauthorized access from external sources, and keep tabs on network traffic across the firewall.” (Desmond, 2004)
A host-based software firewall is good for the host, but not for the network that the host is connected to. A hardware-based firewall is required for:
Network address translation (NAT) to prevent exposure of internal IP addresses,
Port management to close unsolicited access to your host,
Stateful packet inspection (SPI) to inspect for unsolicited incoming traffic,
Virtual private network (VPN) support for remote connections into the network,
Activity logging and alerts, and
Content and URL filtering.
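Stateful packet inspection, the third item above, can be sketched in a few lines. This is a minimal illustration, not a real firewall implementation: a flow table records outbound connections, and inbound packets are admitted only when they match an established flow. All class and method names here are hypothetical.

```python
class StatefulFirewall:
    """Toy model of stateful packet inspection (illustrative only)."""

    def __init__(self):
        # Each flow is a (src_ip, src_port, dst_ip, dst_port) tuple.
        self.flows = set()

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        # Record the outbound flow so reply traffic can be matched later.
        self.flows.add((src_ip, src_port, dst_ip, dst_port))
        return True

    def inbound(self, src_ip, src_port, dst_ip, dst_port):
        # A legitimate reply is the mirror image of a recorded outbound flow;
        # anything else is unsolicited and is dropped.
        return (dst_ip, dst_port, src_ip, src_port) in self.flows


fw = StatefulFirewall()
fw.outbound("192.168.1.10", 50000, "93.184.216.34", 443)

print(fw.inbound("93.184.216.34", 443, "192.168.1.10", 50000))  # True: reply to an established flow
print(fw.inbound("203.0.113.5", 443, "192.168.1.10", 50000))    # False: unsolicited inbound traffic
```

Real firewalls track far more state (TCP flags, timeouts, related connections), but the principle is the same: unsolicited inbound traffic never reaches the host.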
The hardware-based firewall is easy to implement and saves computing resources on the host. Malware on the host can disable a software firewall running on that host, but it cannot bring down a separate hardware firewall.
While the hardware-based firewall can protect against threats from outside the network, the software-based firewall helps to protect against attacks originating within the system. Software-based firewalls can detect unauthorized outbound traffic from the host: a user can pick and choose which applications may talk to peer hosts and external systems, something a hardware-based firewall generally cannot do on a per-application basis.
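The per-application outbound control described above can be sketched as a simple allow-list lookup. This is an illustrative model, not a real host-firewall API; the application names and port sets are made up.

```python
# Hypothetical per-application egress policy: each known application
# is allowed to reach only its expected destination ports.
ALLOWED_APPS = {
    "browser.exe": {80, 443},      # web traffic
    "mailclient.exe": {587, 993},  # mail submission and IMAPS
}


def allow_outbound(app_name, dst_port):
    """Permit outbound traffic only for known apps on expected ports.

    Unknown applications get an empty allow-set, so everything they
    attempt is blocked by default.
    """
    return dst_port in ALLOWED_APPS.get(app_name, set())


print(allow_outbound("browser.exe", 443))  # True: allowed app on an allowed port
print(allow_outbound("malware.exe", 443))  # False: unknown application is blocked
```

Default-deny for unlisted applications is the key design choice: it is what lets a host-based firewall flag malware trying to phone home, which a perimeter device that only sees IP addresses and ports cannot attribute to a specific program.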
We cannot attribute the beginning of cloud computing to a particular person or time. It evolved along with the Internet and enterprise computing. We may be able to trace its roots all the way back to 1969, when Dr. Larry Roberts developed the ARPANET. (Whitman & Mattord, 2016)
As ARPANET evolved into Ethernet and then the Internet, enterprises were discovering new ways to compute, moving from mainframes to multi-tier architectures. During the early stages of enterprise computing, enterprises purchased hardware and software to host internally. Though not in the form we see today, enterprises had an early version of the cloud in networked mainframe systems with dumb terminals. They then slowly began to outsource their information systems to Internet Service Providers (ISPs) and Application Service Providers (ASPs).
The concept of using computing as a utility was probably first proposed by Professor Noah Prywes of the University of Pennsylvania in the Fall of 1994 at a talk at Bell Labs: “All they need is just to plug in their terminals so that they receive IT services as a utility. They would pay anything to get rid of the headaches and costs of operating their own machines, upgrading software, and what not.” (Faynberg, Lu, & Skuler, 2016). It came to fruition when Amazon launched the limited beta of its Elastic Compute Cloud (EC2) in 2006. Meanwhile, Salesforce.com had already mastered delivering an enterprise application through a simple website.
When first discovered in 2010, the Stuxnet computer worm posed a baffling puzzle. Beyond its unusually high level of sophistication loomed a more troubling mystery: its purpose. Ralph Langner and team helped crack the code that revealed this digital warhead’s final target — and its covert origins. In a fascinating look inside cyber-forensics, he explains how.
Ralph Langner is a German control system security consultant. He has received worldwide recognition for his analysis of the Stuxnet malware.