tag:phoneboysecurity.posthaven.com,2013:/posts PhoneBoy's Security Theater 2016-07-08T07:46:34Z Dameon D. Welch-Abernathy tag:phoneboysecurity.posthaven.com,2013:Post/790229 2015-01-02T07:09:59Z 2015-01-02T07:09:59Z Moving the Security Theater to 10 Centuries

If you're seeing this, the DNS hasn't updated yet.

Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/773995 2014-11-23T16:42:39Z 2014-11-23T16:42:39Z The Department of Yes

I was listening to Episode 395 of the Security Weekly podcast when I heard one of the guys say that Information Security is often The Department of No. As in, no, you can't use that device. No, you can't use this new technology. Why? Because we can't secure it or control it. Because the risk of data loss is too great. Because NSA. Or whatever boogeyman The Department of No can come up with.

There are points on both sides of this debate. Anytime a new device connects to a network, regardless of what it is, there is always additional risk. If that device is controlled by the company, the risk is significantly less. If they don't control it, and there are inadequate security measures built into the network they connect to, it's another potential member of a botnet or worse. 

Despite all these risks, people want to use whatever device they want wherever they want and still get to and work with corporate data. Most users have zero clue about the potential information security implications of this, and even those that do--often those folks who sign paychecks--don't care because they want what they want, implications be damned.

So what's a poor information security professional to do? How can they become The Department of Yes while minimizing the associated risks these new technologies present?

First, we should ask the question: what is it we're trying to protect, anyway? Certainly an unmanaged device presents a risk and should be connected to an untrusted network, such as a guest WLAN. You have your network properly segmented to allow this, right?

Next, you should ask the question: what is it that device needs access to? Certainly not the entire internal network, but some subset of it. Common items include Email, some intranet sites, their home fileshare, and maybe Salesforce. 

Of course, the minute you provide access to that data to an unmanaged device, there's a risk for that data to be lost somehow, which should raise the hackles of any information security professional. The common reactions to this phenomenon are to not allow access to any data, only allow access to very limited, harmless data, or to allow it only on condition of managing the device (i.e. with a Mobile Device Management solution). Users, on the whole, like none of these options.

Another option is available, and that's the concept of a secure container. This allows an unmanaged device to access sensitive data within a container that is secured and controlled by the business. The user can do what they need to do, the data only stays inside that container, that data is stored securely on the device, and the business can revoke access to that data at any time without managing the entire device. 

The challenge with any container-type solution, of course, is doing this without compromising the end user experience. The users still need to be able to access and potentially manipulate this data in an intuitive manner. If adding a container-type solution degrades the end user experience, end users won't accept it and will demand a less secure solution. 

There are a number of different solutions that operate on this principle. The ones I am most familiar with are the ones Check Point sells. Given I work there, that should be no shock to any of you. They include:

  • Mobile Access Blade, which provides a way for unmanaged desktop/laptop systems to access and manipulate corporate documents within a Secure Workspace. This workspace is encrypted and disappears from the local system when the end user disconnects from the gateway.
  • Capsule, which enables similar access on mobile devices, and also provides a clean pipe to mobile and desktop systems off your corporate premises by routing their traffic to Check Point's cloud, where it is inspected using Check Point's Software Blades with the same policy you've already defined for your enterprise environment.

Regardless of whose solutions you choose, they allow you to be The Department of Yes. Yes, you can access that data with your chosen device without the enterprise losing control of the data. Everyone wins.

Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/772843 2014-11-20T21:50:10Z 2014-11-20T21:50:10Z What is Palo Alto Networks Afraid Of?

While I don't typically read End User License Agreements, I was pointed to the following phrase that exists in the End User License Agreement for Palo Alto Networks equipment:

1.3 License Restrictions: End user shall maintain Products in strict confidence and shall not: [...] (d) disclose, publish or otherwise make publicly available any benchmark, performance or comparison tests that End User runs (or has run on its behalf by a third party) on the Products;

I trust you can think through the implications of this restriction on your own. It certainly adds additional color to the recent public spat between NSS and Palo Alto Networks.

Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/735700 2014-09-04T20:08:52Z 2014-09-06T16:00:02Z "Hacking" iCloud for Deleted Celebrity Nudes

A lot of noise has been made recently about some nude celebrity photos that have made their way around the Internet, such as those of Jennifer Lawrence. Initially it was thought to have been a compromise of Apple's iCloud service, but it was really because Apple failed to properly rate-limit an API, which allowed rapid brute-force guessing of passwords. Apple claims it was a targeted attack on these specific users and advises users to enable two-factor authentication and use a strong password.

Getting someone's iCloud credentials, either by brute force or guessing the answers to security questions, is all well and good. It certainly explains how these photos were acquired and posted. It doesn't, however, adequately explain how supposedly deleted photos from a celebrity's iPhone ended up online.

Moti Sagey, whom I work with at Check Point, actually came up with the idea of how this might have happened. He discovered a tool called dr.fone that can "recover data from iOS devices, iCloud backup, and iTunes backup." Curious, Moti and I downloaded and installed the tool, attempting to recover data from our own iCloud backups. Lo and behold: there they were. Up to 3 iCloud backups for each device we own.

A note about the way iCloud backups work: they will only happen from your iOS device if the device is on WiFi and plugged in--usually at night. They can also be triggered manually, but let's face it: how often do regular people do that? Almost never.

While the majority of my devices have 3 consecutive days of backups, one in particular had quite a gap in its backup dates. Which, for most celebrity-types, may be pretty common. Movie stars aren't like normal people: they're rarely at home on their own WiFi with their phone plugged in. Given how busy some celebrities are, it doesn't seem inconceivable that they could go months without being at home on their WiFi and charging their iPhone at the same time.

What's in an iCloud backup, you ask? Quite a lot, as it turns out, including the camera roll:

A popular celebrity with easy to guess iCloud username and password plus a rarely backed up iPhone plus a tool like dr.fone equals access to potentially naughty photos the celebrity thought were deleted months ago. Voila!

Of course, this is just the tip of the iceberg of what criminals do in order to get illicit photos and videos from celebrities, but let's be honest: why go through the trouble of social engineering, installing illicit remote access tools, and stealing authentication keys when you can take the much easier route of brute-force guessing a weak password and/or easily answering their password reset questions?

People are blaming Apple here, when, last we checked, you had to opt in to iCloud backups. Even if you choose to opt in, you can choose which elements of your phone to back up. The fact that Apple keeps a few backups on hand is, in fact, a good thing. Good backups are just as important as choosing a strong password or using two-factor authentication when it's available. While there is some inherent risk in backing stuff up to the cloud, doing so is a greater good than never backing up your device, which is what would happen for most people without iCloud doing it for them.

Since it appears iCloud only keeps the last 3 device backups, ensuring that iCloud backs up your device regularly will also reduce the risk someone might "recover" an old nude photo you thought you deleted. Of course, if you want to reduce the risk even further, don't take the photos with your mobile phone, or at least don't do it with the default camera application. Use an ephemeral messaging app like Wickr or Telegram, at least until phones get an incognito mode similar to browsers.

Authorship Note: If Posthaven would let me mark more than one author, I would credit Moti Sagey as one. Especially since this was his idea, which I put a few refinements on.

Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/735111 2014-09-02T16:00:11Z 2014-09-02T16:00:11Z Fun with Check Point Dynamic IP Gateways in R77.20 with Gaia

Back when I worked in the Nokia TAC supporting Check Point Security Gateways, I actually ran a Nokia appliance as the headend for my network. One could call it "keeping my skills sharp for my daily work." For a variety of reasons, I discontinued this practice and instead opted for more traditional home routers that I could install DD-WRT on.

When I started working for Check Point, I alternated between either running a DD-WRT router or using a UTM-1 EDGE appliance. When Check Point announced the SG80 (and later the 1100 series appliance), I moved to using those. 

And while I probably could continue to use the 1100 series appliance for some time, I finally got my hands on a 2200 Appliance and decided that it should be the headend to my home Internet connection. I wanted to be able to use all the latest Check Point features and get my hands a bit dirtier than I have in a while. What better way to do that than with my family, which is very good at generating lots of real-world, Internet-bound traffic :)

DHCP and DNS Server on the Gateway

Every gateway I've used in the last several years had a DHCP server. In most cases, it also included a DNS server. Gaia does have a built-in DHCP server, and should you use it, you should make sure to configure your rulebase to permit the traffic correctly as described in sk98839.

That said, it's not currently possible to make the Gaia DHCP server authoritative for a given subnet without manually hacking the dhcpd.conf file (see sk92436 and sk101525). I did this, which worked.
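For reference, the manual change boils down to adding ISC dhcpd's authoritative directive to the subnet you serve. A minimal sketch follows; the subnet values are placeholders, and the exact layout Gaia generates for dhcpd.conf may differ (see the SKs above):

```
# /etc/dhcpd.conf (Gaia regenerates parts of this file; back it up first)
authoritative;

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.1;
}
```

With authoritative set, the server will send DHCPNAKs to clients requesting leases it knows are wrong for the subnet, rather than staying silent.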

I still wanted a local DNS server too where I could define local DNS entries for items on my subnet. I could have easily used dnsmasq on one of the many DD-WRT routers I still have on my network as WiFi access points, but I wanted it on the Security Gateway, just like I had it on my 1100, UTM-1 EDGE, and others. I poked around a bit on my Gaia R77.20 installation and discovered that it too had dnsmasq.

I want to be clear: dnsmasq is not formally supported on Gaia. That said, it appears to work quite nicely. The configuration file /etc/dnsmasq.conf is not managed through Gaia and, as far as I know, it should survive upgrades and not be affected by anything you do in Gaia, except perhaps enabling the DHCP server in Gaia, which you obviously don't want to do if you're going to use dnsmasq for this.

The dnsmasq in Gaia appears to take most of the common configuration options, so I set about configuring it as described in the dnsmasq documentation. Starting it up is pretty simple: a service dnsmasq start from Expert mode gets it going.
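As an illustration, a minimal /etc/dnsmasq.conf along these lines covers local DNS entries plus DHCP. The domain, hostnames, and address ranges here are placeholders, not my actual configuration:

```
# Serve DHCP leases and be authoritative for them
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-authoritative

# Local domain: answer queries for it from /etc/hosts and entries below,
# never forwarding them upstream
domain=home.lan
local=/home.lan/
address=/nas.home.lan/192.168.1.10

# Don't forward plain (dotless) hostnames to upstream resolvers
domain-needed
```

All of these are standard dnsmasq options, so the same file works on a DD-WRT router if you ever move the service back there.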

But, of course, that only works until the next reboot. Now I could have hacked /etc/rc.local or something like that, but I know how to make Gaia's configuration system actually start this process for me, monitor it, and restart it if it dies.  

[Expert@animal:0]# dbset process:dnsmasq t
[Expert@animal:0]# dbset process:dnsmasq:path /usr/sbin
[Expert@animal:0]# dbset process:dnsmasq:runlevel 3
[Expert@animal:0]# dbset :save

I guess all those years of supporting IPSO paid off :) If you want to turn off dnsmasq in the future, issue the following commands from Expert mode:

[Expert@animal:0]# dbset process:dnsmasq
[Expert@animal:0]# dbset :save

One other thing you might want to do if you're running a local DNS server is have your Security Gateway actually use it. The problem is that dhclient (which obtains settings from a DHCP server) overwrites Gaia's configuration for the DNS domain and the DNS servers to use. You should be able to prevent this by changing the /etc/dhclient.conf configuration file so it no longer requests this information.

Specifically, it means changing this line:

request ntp-servers, subnet-mask, host-name, routers, domain-name-servers, domain-name;

to this:

request ntp-servers, subnet-mask, host-name, routers;

To be absolutely clear, Check Point does not support using dnsmasq at all for any purpose on Gaia. While I am using it successfully in R77.20, it could easily be removed in a future version at a future date.

DHCP and the Default Route on the Internet Interface

When I put the 2200 in my network, I noticed that Gaia was not accepting the default route from Comcast's DHCP server. To get my home network operational, I manually set the default route to my Internet-facing interface, which seemed to work for a time.

One persistent issue I was having, which I initially was unable to figure out, was that sites would sometimes take a long time to load. I wasn't quite sure why; it was only happening occasionally and wasn't something I could easily track down. I ignored it for a couple of weeks, but last night the problem really came to a head.

I thought Comcast was having a problem. I even went so far as to call Comcast, and they said there was some signal problem on their end. The problem still persisted. The only thing "different" was that we had a visitor who brought a laptop and was playing League of Legends with my son.

I tried a number of things and couldn't figure out what was going on. I did start seeing some peculiar traffic on my external interface, and decided on a fluke to check my ARP table. Rather than the 20-30 entries I'm used to seeing, I was seeing over 700 entries, for IPs on the Internet. That didn't look right. And sure enough, I was seeing ARP who-has requests for Internet IPs coming from my gateway.

Arping for every IP you access on the Internet is not terribly efficient. Comcast was, understandably, not responding to these ARP requests quickly, though they eventually did. Furthermore, due to the transient nature of ARP entries, they were timing out every minute or so, which meant that as long as I was actively passing traffic, connections to websites worked great. When I stopped and reconnected? It would take forever to connect.
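A quick way to spot this symptom on any Linux box (including Gaia's Expert mode) is to count the ARP cache entries. This is just a diagnostic sketch; the eth0 interface name in the tcpdump example is an assumption you'd replace with your external interface:

```shell
# Count entries in the kernel ARP cache (minus the header line).
# A home gateway normally has a few dozen; mine had over 700.
arp_count=$(( $(wc -l < /proc/net/arp) - 1 ))
echo "ARP cache entries: ${arp_count}"

# To watch the who-has requests themselves (requires root;
# arp[6:2] == 1 matches the ARP request opcode):
#   tcpdump -n -i eth0 'arp and arp[6:2] == 1'
```

A cache full of public Internet IPs, as opposed to addresses on your local subnets, is the giveaway that the gateway is ARPing for destinations it should be routing to a next hop instead.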

Which brings me back to the original problem I had: getting Gaia to take a default route from a DHCP server. Turns out, there's an option in Gaia called "kernel routes" that you need to enable to make this work, and enabling it is pretty straightforward. If I had searched SecureKnowledge earlier, I probably could have saved myself learning (and documenting) this lesson. It is documented in sk95869.

Now the Internet at home is working great once again. No delays in loading websites at all--at least none caused by configuration errors on my part :)

Dynamic IP Gateway Security Policy

One of the challenges of using a Dynamic IP Gateway is that you don't know what the external IP is of your security gateway. Not knowing it could make it difficult to, say, make "port forwarding" rules or rules that relate to allowing traffic to the Security Gateway itself.

Turns out this is not an issue. When you mark a gateway object as having a Dynamic IP, two Dynamic Objects come into play: LocalMachine (which always resolves to your gateway's external IP address) and LocalMachine_All_Interfaces (which resolves to all of the IP addresses associated with your gateway). These dynamic objects are simply placeholder objects whose actual definitions get updated on the local gateway. For these two objects in particular, this happens periodically without any action on the administrator's part.

Dynamic Objects have a couple of important limitations. Rules using Dynamic Objects must be installed on a specific installation target, i.e. you cannot simply leave the rule's Install On field at the default Policy Targets; doing so will cause a policy compilation error. The second issue is that rules with Dynamic Objects disable SecureXL templates at that rule. This isn't a huge deal for me, as I'm coming nowhere near the capacity of my 2200 and most of my traffic is hitting accelerated rules, but it is a consideration you must make when constructing your rulebase.

Another important limitation: Identity Awareness is not supported on Security Gateways that have a Dynamic IP. I do not believe this is an issue on the 600/1100 appliances, but it is for the 2200 and up. 

Final Thoughts

It's been a while since I posted something technical on my own blog about using Check Point. My role is such that I'm not always involved in the technical side of things like I was back when I wasn't quite Internet famous :)

In any case, it may be something I'll do again in the future. I'm sure that pieces of this article will also show up in SecureKnowledge--the supported parts of the above at least.

And, of course, these are my own thoughts and experiences. Corrections and feedback are welcome.

Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/734764 2014-09-01T17:47:15Z 2014-10-09T06:58:28Z A More Balanced View of Check Point ClusterXL Load Sharing

A blog post by Greg Ferro, a.k.a. EtherealMind, came to my attention: a review of ClusterXL in Check Point R77 based solely on the documentation. I'm familiar with Ferro from the Packet Pushers Podcast, where he and Ethan Banks talk nerdy about networking. They occasionally dip into the security arena, and when Check Point does come up, it's not in a positive light.

When I actually read Ferro's post about ClusterXL, I was not surprised that the end result was a negative review: not only of ClusterXL but of Check Point itself. What surprised me was how incomplete the review was, technically speaking, because I know how nerdy he can get.

Ferro only used one document for his paper review: the Check Point R77 Versions ClusterXL Admin Guide dated 28 July 2014. He could have easily asked the customer he supposedly did this review for to obtain additional documentation, such as the ClusterXL Advanced Technical Reference Guide in sk93306. But because, as an independent consultant, he didn't have direct access to said documentation, he chose not to use it.

What is ClusterXL anyway?

Even with the documentation Ferro chose to use, the more I read his Tech Notes Check Point Firewall ClusterXL in 2014 piece, the more I conclude his review was not very thorough. Clearly he read some of the documentation, as there are quotes from it throughout the piece, but it's obvious he missed the part that defines what ClusterXL actually is:

ClusterXL is a Check Point software-based cluster solution for Security Gateway redundancy and Load Sharing. A ClusterXL Security Cluster contains identical Check Point Security Gateways.

  • A High Availability Security Cluster ensures Security Gateway and VPN connection redundancy by providing transparent failover to a backup Security Gateway in the event of failure.
  • A Load Sharing Security Cluster provides reliability and also increases performance, as all members are active

Ferro's definition of ClusterXL?

There seems to be two modes of active/active High Availability for CheckPoint Firewall one. Both of which are called ClusterXL.

Ferro then talks about the requirements for ClusterXL, which he gets mostly right: sync is a Layer 2 protocol, ClusterXL requires appropriate licenses (which every current Check Point appliance has, as do many current software-only firewall SKUs), and you should use NTP between members. Actually NTP is a good idea for reasons other than state sync (e.g. VPNs and SSL).

State Synchronization

Then Ferro comments on State Synchronization.

  • by defaults all connection state is synchronised across the cluster.
  • you can decide not to synchronise. Why ? Does this imply a performance issue in the devices, network, speed or other issues ? What would I not sync all state ?
  • “Synchronization incurs a performance cost” – implies that CheckPoint performance remains a problem for “many customers

For the vast majority of Check Point customers, state sync does not have performance issues. Where issues have been observed is in environments where the connection rate is far higher than the throughput would suggest. You can reduce the load by disabling sync for services with connections that are transient in nature (e.g. DNS, short-lived HTTP connections) and can be done easily in SmartDashboard.

But that's not all he has to say about State Synchronization:

  • Synchronisation Network requires elevated security profile, suggests CCP is insecure protocol, isolated network.
  • Recommends using an isolated switch – this is stupidly impractical who the heck wants to waste money and power on a dedicated switch in the DMZ.
  • Using a crossover cable is equally stupid and impractical. What are they thinking ?
  • Appears that sync is ONLY supported on the lowest VLAN ID on an interface.

The Synchronization Network requires an elevated security profile: yes, this is stated in the documentation, and it is no different from every other major vendor. A dedicated switch or a crossover cable allows this elevated security profile to be maintained better than a dedicated VLAN--and if you do want to use a dedicated VLAN, yes, that is supported; it's a common configuration in customer networks. As to whether a dedicated switch or crossover cable is "stupidly impractical," customers have asked for all of these options, which is why they are tested and supported (and thus listed in the documentation as such).

Ferro's next complaint? You can't use more than one synchronization interface. This actually used to be a supported configuration once upon a time but in recent versions, the supported approach is to let the operating system handle interface redundancy via bonded interfaces (i.e. with LACP). I asked in the comments of the post why he felt using LACP was an inferior solution to multiple independent synchronization interfaces. No response.

ClusterXL Load Sharing

Ferro never once explicitly mentions the sort of configuration he was using as the basis for his paper review of ClusterXL. Based on the fact he only briefly mentions High Availability (or Active/Passive, as he said) and spends the majority of his time talking about Load Sharing (Active/Active), I can only assume that is the configuration he's most interested in, ignoring the fact that High Availability configurations are the far more common customer configuration.

Of the two ClusterXL modes that relate to Load Sharing, Ferro only talks about one: Load Sharing Multicast. This mode, according to Ferro, "should be avoided at all costs" because it involves implementing workarounds that are, in his words, "complex and hard to maintain/operate." These workarounds, static multicast mappings and/or disabling IGMP, are necessary because some networking equipment out there does not properly support RFCs or has issues with frame replication (needed for Multicast). 

ClusterXL has a mode that requires none of these workarounds: Load Sharing Unicast mode. This mode was only mentioned once, but no analysis of this mode was offered by Ferro.

Ferro's opinion about VPN and Load Sharing is that "there are many conditional statements about VPN connections" and "would have little confidence in running Load Sharing Multicast Mode and running VPN services on the same physical unit." I couldn't find the conditional statements he was referring to, but what I did find was one particular statement in relation to third party peers and load sharing (repeated in a couple of different ways):

The third-party peers, lacking the ability to store more than one set of SAs, cannot negotiate a VPN tunnel with multiple cluster members, and therefore the cluster member cannot complete the routing transaction.

The problem isn't Check Point per se; the problem is interoperating with non-Check Point devices. The workaround, using the Sticky Decision Function, is listed in the documentation right where this "conditional" statement appears. This, along with the other so-called conditional statements he refers to, is curiously absent from his paper review.

Performance and Cost

Ferro works in several digs at Check Point's allegedly "punishingly expensive" products. In most of the competitive situations I've been engaged in during my tenure at Check Point, pricing compared to the competition is rarely a factor. That said, I have seen numerous situations where competitors have purposely specified lower-end (and thus cheaper) solutions that, in reality, will not meet the customer's stated requirements while maintaining the same security posture.

Ferro also states "forwarding performance is and price/performance is very poor" which is provably false. Independent third parties have verified Check Point's performance and pricing and have found Check Point in-line with the competition. It's also something being validated internally at Check Point on an ongoing basis.

Ferro is correct that performance decreases when additional security features are enabled, however this phenomenon is by no means unique to Check Point products. The expected performance for a given Check Point appliance is listed right on the data sheet along with the unrealistic "lab" numbers every networking vendor since the dawn of time has included on their datasheets. Unlike Check Point's competitors, the "real world" performance is evaluated with real-world traffic flows, multiple software blades (security functions), a real-world rulebase, and logging/NAT enabled. Your local Check Point account team can provide more refined appliance recommendations based on your specific requirements.

In Conclusion

As noted before, it's not really clear what use cases Ferro was evaluating ClusterXL against. Even his conclusion is non-specific about this:

Check Point doesn’t have strong solution for clustering and the weak documentation suggests that it’s not fit for critical use cases.

Perhaps Ferro does have a use case that ClusterXL would not be a good fit for. I'd love to know what those "critical use cases" are in more detail so his conclusion could be evaluated more thoroughly. 

Ferro's conclusion is based on an incomplete review of the available information. Only one document was reviewed when others exist, and I question how thoroughly even the one document was reviewed. Only one of the load sharing modes of ClusterXL was reviewed in any detail and he completely ignores the other mode as well as high availability configurations. Ferro also ignores the fact that practically all of Check Point's 100,000+ customers use ClusterXL successfully in their networks, which includes 100% of the Fortune 100 and Global 100 customers across many different industry verticals.

In short: Ferro's review of ClusterXL is incomplete and his conclusion is not supported by the facts.

Disclaimer: I work for Check Point Software Technologies. That said, the above are my own thoughts.


Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/728022 2014-08-15T22:26:31Z 2014-08-15T22:26:31Z URL Change for PhoneBoy's Security Theater

I decided that I'm going to let the phoneboy.net URL expire when it comes up for renewal in a couple of months. As a result, the URL for this blog (such as it is) will change to http://securitytheater.phoneboy.com

Please update your RSS readers, bookmarks, and the like.

Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/698607 2014-06-01T05:52:36Z 2014-06-01T05:52:36Z Should You Fret About Mobile Security?

From Stop fretting about mobile security, says Palo Alto Networks founder:

“What I often hear from customers is that 'users have a mobile and they have corporate email and they have Dropbox and I'm afraid they will upload a PDF via Dropbox to their personal account'. Well, what about your Windows users? They've been doing that for the last ten years! Nobody stopped them using Dropbox on their browser for the last ten years.”

So says Nir Zuk, founder and CTO of Palo Alto Networks.

And you know what: he's right. Not necessarily about Dropbox, since Dropbox hasn't been around for ten years, but because if you've given people access to a web browser in your organization, you've had little to no control over the "applications" they can run. Even ten years ago, you could run a lot of the "applications" organizations so desperately want to control today.

Of course we had URL filtering ten years ago, which can be used to control what people can use with a web browser. But it wasn't as widely used and unless you were using explicit proxies, HTTPS was a pretty big blind spot. And, really, that's only a partial solution since you might want to allow some parts of a web-based application and not others. Doing that solely based on URLs might not always be possible.

But I disagree that you have no control over what end users do on their PCs. Things like the "dead but not going anywhere anytime soon" Anti-Virus/Anti-Malware, firewalls, Application Whitelisting, Media Encryption and Port Protection, and a host of other tools, if properly deployed and monitored, give you something to protect yourself from the malicious things your users inadvertently pick up from the Internet.

And, of course, segmentation helps too: not putting your user machines and servers on the same network, and using a firewall to mediate and control access by user, application, service and, yes, Nir Zuk, ports.

In fact, once you remember that the browser has made you liable to these kinds of threats for a long time, mobile devices start to look like an attractive option. Zuk claims “mobile devices open up a lot of opportunities for being more secure than today because they do allow the opportunity to control movement of data between devices, and because of the way they're built, the operating system and the controls – especially in iOS 7 and hopefully soon in Android.”

He's absolutely right here. Mobile operating systems are built to be more secure from the ground up. However, you're assuming the device is not rooted or jailbroken, which removes many of the protections these operating systems have in place.

And then there's the data these devices can access and use. What are you doing to ensure data remains protected on these devices? Nir's right that there is an opportunity to do this better on mobile devices, but right now it's an "all or nothing" approach. VDI, Mobile Device Management, and secure container technologies are all variations on this approach, and users are averse to all of them.

And then there's the whole lack of visibility into what's going on with the mobile device. At least with a PC you get some; on a mobile device? Not so much.

“You can have a firewall that denies all incoming traffic and bad things still come in,” [Zuk] points out, because Web apps and cloud services mean “the firewall doesn't control access into the network.” Even more bluntly, he's prone to suggesting that “I strongly recommend you take your firewall out and replace it with an Ethernet cable – it will improve the performance and improve the management. And no, I’m not joking.”

Again he's right insofar as replacing a firewall with an Ethernet cable will improve performance and management (if you consider removing something to manage an improvement). However, this advice is utterly clueless, as it ignores decades of evidence to the contrary, not to mention the fact that Nir Zuk's company, Palo Alto Networks, sells firewalls.

You know when Windows XP dramatically improved security? In Service Pack 2 when the built-in firewall was enabled by default. Yes, the attacks moved up the stack as a result but a properly configured firewall–even one that only blocks on ports and IPs–is better than no firewall at all.

So what should you do about mobile device usage in your enterprise? It depends on your policy and on what your critical assets are. Should you “fret” about it? No more than anything else. Just realize mobile devices present unique challenges--and opportunities.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/662101 2014-03-09T07:23:25Z 2014-08-15T16:03:52Z If Only App Control Were Around When Pointcast Was A Thing

Back when I first got into IT and just started working with FireWall-1, Pointcast was a thing. For those who weren't around back in the mid to late 1990s, Pointcast had a very popular screensaver that displayed news and other information delivered periodically over the Internet to PCs. The problem was: it used an excessive amount of bandwidth on corporate networks, especially if more than a couple of people used it.

The result was, of course, corporations wanted to block access to Pointcast. The problem: how to do it. All we had in the mid 1990s was the traditional firewall which could control access based on IP and port. So we should be able to block the port or IPs it communicates with, right?

Pointcast used good old HTTP. Even back then, no one in their right mind would block HTTP. Of course, everything uses HTTP or HTTPS to communicate these days, and with a traditional firewall with the ability to control traffic only by IP or port, leaving HTTP or HTTPS wide open is tantamount to leaving the barn door open. 
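A toy example of why that left admins stuck (the rulebase and addresses here are invented): a firewall that matches only on IP and port cannot tell Pointcast apart from ordinary browsing, because both are just traffic to destination port 80.

```python
# Hypothetical sketch: a mid-90s style rulebase matches on IP/port only,
# so any application riding HTTP matches the same "allow web" rule.

RULES = [
    {"dport": 80, "action": "accept"},   # allow web browsing
    {"dport": None, "action": "drop"},   # cleanup rule: deny everything else
]

def decide(packet):
    for rule in RULES:
        if rule["dport"] in (None, packet["dport"]):
            return rule["action"]

browser   = {"src": "10.0.0.5", "dst": "203.0.113.10", "dport": 80}
pointcast = {"src": "10.0.0.5", "dst": "198.51.100.7", "dport": 80}

print(decide(browser))    # accept
print(decide(pointcast))  # accept - indistinguishable at this layer
```

Your only lever is the destination IP, which is exactly why people resorted to maintaining lists of Pointcast's servers.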

Pointcast didn't exactly publish their list of servers, but users of the PhoneBoy FireWall-1 FAQ contributed a list of IPs plus a couple of other clever solutions to the problem, which I've made available after the break if you're curious.

Of course, with things like content delivery networks, Amazon Web Services, and a host of other ways to serve up an application to users that are available today, attempting to control access to these applications merely by port and IP address is crazy. 

Fortunately, there are a number of solutions to this problem. Check Point's solution is the Application Control Software Blade, which can allow/block access to an application regardless of the ports and destination IPs used, and can even limit the bandwidth these applications consume. New applications or changes to existing applications are made available to the gateway periodically, so you can see what your users are using and, when it kills your bandwidth or worse, you can block it. 

If only tools like App Control had been available back in the day, security admins could have spent their time on more important issues rather than figuring out how to block Pointcast and other applications, and I would have a few fewer FAQ entries on "how do I block X application."

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/655049 2014-02-16T03:26:07Z 2014-02-16T03:26:08Z No Good For Workloads? Depends on the Workload.

From Chris Hoff's (a.k.a. Beaker) NGFW = No Good For Workloads:

NGFW, as defined, is a campus and branch solution. Campus and Branch NGFW solves the “inside-out” problem — applying policy from a number of known/identified users on the “inside” to a potentially infinite number of applications and services “outside” the firewall, generally connected to the Internet. They function generally as forward proxies with various network insertion strategies.

If you look at the functionality Check Point and its various competitors provide, this is precisely what a large chunk of the "next generation" functionality is geared towards--protecting a number of known/identified users from the dangers they might encounter from a potentially infinite number of applications and services. There are differences in how the different security solutions perform this task, as well as how well they perform, but that's their overall goal.

That is, as Beaker continues, very different from what a Data Center firewall needs to do:

Data Center NGFW is the inverse of the “inside-out” problem.  They solve the “outside-in” problem; applying policy from a potentially infinite number of unknown (or potentially unknown) users/clients on the “outside” to a nominally diminutive number of well-known applications and services “inside” the firewall that are exposed generally to the Internet.  They function generally as reverse proxies with various network insertion strategies.

In other words, we're not always sure who is coming in, but we know what they are going to and (hopefully) what applications and services they are going to connect to. 

What kinds of protection do you need in these scenarios? Usually very different. Can every next generation firewall provide just the right protection? 

First, let's take a step back and realize that the Data Center itself is very different from what it used to be a decade or two ago. Whereas we started with a number of servers hosting resources in one or two physical locations with users mostly in known physical locations, we now potentially have services, data, and users all over the place, with a mix of physical and virtual servers where traditional methods of segmentation and protection are not practical. 

The "core" of the enterprise network--where all the necessary resources ultimately connect together--is quickly becoming the Internet itself. How do you protect your resources in this reality?

We go back to one of the fundamental tenets of information security, our old friend segmentation. This means grouping together resources with like function and like information confidentiality levels, placing an enforcement point at the ingress/egress point where you can enforce the appropriate access control policy. The goal for that enforcement point? Let the authorized stuff in and keep the unauthorized and bad stuff out. 
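As a rough sketch of what that enforcement point's job boils down to (the segment and service names are made up), the policy is a default-deny lookup: only explicitly allowed combinations of source segment, destination segment, and service get through.

```python
# Hypothetical sketch: per-segment default-deny policy at an enforcement
# point - allow only named (source segment, dest segment, service) tuples.

ALLOWED = {
    ("users",   "datacenter", "https"),
    ("users",   "internet",   "https"),
    ("dmz-web", "datacenter", "postgres"),
}

def permit(src_seg, dst_seg, service):
    # Default deny: anything not explicitly allowed is dropped.
    return (src_seg, dst_seg, service) in ALLOWED

print(permit("users", "datacenter", "https"))  # True
print(permit("users", "datacenter", "ssh"))    # False
```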

Of course with virtualization, end user PCs, and mobile devices, the boundaries become more difficult to apply but with virtualized security solutions, integrated endpoint security on the end user PCs, trusted channels (VPNs), and secure containers on mobile devices, more is possible than you think. Check Point and other companies have various solutions for this. 

Once the network is segmented and enforcement points are in place, then you can decide what protections and policies should be applied. In some cases, like on User Segments, you want lots of protection as users could go anywhere on the Internet and unknowingly bring in some malware to run amok in your network or send company secrets to their Gmail account. For your data center? Maybe you just want to make sure authorized users can reach specific applications and you want to sanity check the traffic to make sure it's not malicious. Or maybe you just need a simple port-based firewall with low latency for a given app.

The idea of putting a firewall at the core of your network--especially a next generation one--is silly, as Beaker rightfully points out. Really, your core should be a transit network with enforcement points--those things we typically call firewalls--at the ingress points of the various network segments. This way, just the right policy and just the right protections can be applied without applying them to traffic that doesn't need it. 

This is where I think Check Point's portfolio shines. In the Security Gateway space, the Software Blades architecture is flexible enough to allow you to be very granular about what protections are applied to a specific enforcement point, whether a physical gateway, or a virtual one either in a Check Point chassis (e.g. VSX) or in a VMware or Amazon Web Services environment. This means you can scan a random MS Word document from the Internet for malware on one gateway close to the users while not impeding the flow of traffic in and out of your Data Center that flows through a different Security Gateway. And yes, if you have a 5 microsecond transmission requirement, Check Point has a solution for that with the Security Acceleration Module in the 21000 series of appliances. 

Does an NGFW solve every problem? No, and anyone that tells you it will is flat out wrong. It's not always the right tool for the job, as Beaker points out:

Show me how a forward-proxy optimized [Campus & Branch] NGFW deals with a DDoS attack (assuming the pipe isn’t flooded in the first place.)  Show me how a forward-proxy optimized C&B NGFW deals with application level attacks manipulating business logic and webapp attack vectors across known-good or unknown inputs.

While an Enforcement Point needs to be hardened for DDoS--especially if it is exposed to the Internet--no Enforcement Point is going to completely mitigate a DDoS. There are a number of mitigation strategies that include on-premise DDoS-specific appliances as well as external services, which I know Check Point has advised customers to utilize in various scenarios as part of their Incident Response Services.

Likewise, business logic and webapp attack vectors are outside the wheelhouse of all NGFWs. You still need to properly secure your web applications, even with an NGFW in place. In addition, there are dedicated Web Application Firewalls for this purpose, and if you've properly segmented your network, you can make sure just those resources are protected by them.

At the end of the day, a Next Generation Firewall, whether it is from Check Point or someone else, is not a panacea. It can be a powerful tool, but like all tools, it needs to be applied properly as part of a comprehensive security strategy that begins with proper segmentation and a well-defined policy. From there, you can apply just the right protection to just the right resources.

Disclaimer: It should be obvious from my last post I work for Check Point, but this is my own opinion. 

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/621097 2014-02-14T18:00:00Z 2014-02-18T17:57:48Z Growing Up With Check Point

Note: I've released a podcast of this article if you prefer. 

The 20 Year Anniversary of Check Point's founding has a special place in my heart. Mostly because it is how I personally made my career. How I got involved in Information Security. How I, unbeknownst to me at the time, helped a lot of people get into Information Security.

18 years ago, I had no idea what Information Security was. I was a systems administrator working for a contracting agency fresh out of college. I did some odd programming jobs which, quite frankly, I was never that great at, and eventually, an interesting contract: doing tech support for a company out of San Mateo, CA.

The product: Qualix HA, a high availability product for Sun Workstations based on a Veritas product. One of the products we also sold along with it and provided high availability for was a product called Check Point FireWall-1.

That contract turned into a full-time job and eventually, as the other people in the group kept getting hired out to do "professional services" or whatever, I had to learn FireWall-1 the hard way: by supporting customers calling for help without much of a backstop.

Back in those days, Check Point did all of their support out of Israel. SecureKnowledge didn't exist. They had a mailing list, which had a lot of questions asked on it, but not a lot of answers.

On a hidden page on the Qualix website, there was a FireWall-1 FAQ started by one of the developers at Qualix. I started writing entries on it. Eventually, I got permission from Qualix to take the content and put it on my website--phoneboy.com.

Qualix became Fulltime Software and got bought by Legato Systems in 1999. Before that happened, I got a job at Nokia in their IP Routing Group--the guys who make the firewall appliances that ran Check Point's firewall. 

PhoneBoy's FireWall-1 FAQ existed for the better part of 8 years as a publicly available resource containing the knowledge I collected about the Check Point products from the mailing lists and my own work with the product as a technical support guy. Obviously a lot of that knowledge also migrated itself into Nokia's Knowledge Base, which I more or less maintained during my tenure there. It also made its way into two books that I published with Addison Wesley (now Pearson Education).

In parallel, I created a moderated mailing list on FireWall-1 in June of 2000, first called FireWall-1 Wizards, then renamed to FireWall-1 Gurus after the folks who own the Firewall Wizards trademark suggested I should change the name. The mailing list lasted for about 9 years.

Around 2003 or so, I started burning out. Technical Support is a difficult job to do long term in general and I had done more than my share. I ended up moving onto other things inside Nokia's Enterprise Solutions or whatever it was back at that time. In 2005, I agreed to let Barry Stiefel take the content on phoneboy.com and copy it onto cpug.org.

I kinda thought I was done with Check Point stuff by then, but I was wrong. I kept working with Nokia's Knowledgebase for the Enterprise Solutions group, which had a lot of Check Point content in it. This meant, for me, reading, writing, and re-writing this content. I kept mentoring folks in the TAC when they had issues with Check Point or just general network troubleshooting. I kept supporting other products that were somewhat Information Security related (VPN and Remote Access product as well as Sourcefire on Nokia).

When the Check Point acquisition of Nokia's Security Appliance business was announced, I wasn't sure what to expect, either for the platform I had spent 10 years of my life supporting or for my own career. When it became clearer that I had a home at Check Point, I began looking a bit more closely at the Check Point products again.

What I discovered was that the product hadn't changed all that much. Sure, there was NGX, the rise of Secure Platform and Check Point's own appliance offerings, and many refinements along the way, but the fundamentals of the product were basically the same.

But change was happening: I could see it before I was officially part of Check Point as I was told about the new IPS Software Blade in R70. As I started visiting the Check Point headquarters in Tel Aviv, I got to hear in more detail from the people who develop the product. I got to see the changes up close and personal. App Control, URL Filtering, Anti-Bot, the new (and old) SMB products, DLP, appliances, Gaia, I got to see it all before it was released.

Also, Check Point made a couple of key acquisitions prior to Nokia's Security Appliance business: Pointsec, which was a well-known disk encryption solution, and Zone Labs, which made the ZoneAlarm desktop firewall product. Both of which ultimately became part of Check Point's Endpoint Security offering along with the later acquired Liquid Machines to provide Document Security along with Dynasec to provide Compliance solutions to Check Point's overall product portfolio.

It's been a beautiful thing that I'm proud to say I've been a part of since nearly the beginning. And, of course, there is a lot more to come.

Let's face it: the threats to our networks have only gotten more complex, more dangerous. A lot of the fundamental issues in Information Security haven't changed, either. End Users still do unwise things. Companies don't invest enough time or money in doing the basics in security practices like segmentation, user education, changing default passwords, and a whole host of other practices.

The Information Security market has many players. Check Point plays in many spaces, with different competitors in different segments, but it continues to grow and innovate year over year, remaining independent and focused on the goal of securing the Internet amid a sea of acquisitions by larger, less security-focused companies. 

Here's to another 20 years, Check Point. 

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/615715 2013-11-04T07:54:12Z 2013-11-04T07:54:12Z Blast from the CHKP Past: Can't Talk to Translated IP from Internal Net

It's pretty obvious from looking through the number of 404s I'm seeing in Google's Webmaster tools that a lot of pages still link to old stuff I wrote about Check Point FireWall-1. I'm actually trying to "fix" these 404s now by resurrecting some of the old content.  Not updating it, of course, but at least making the links point to something semi-useful, if historical.

This is one of those articles, obviously not at its original URL, but the original URLs will point here. What amazes me about this particular article is that it's still relevant today, as NAT really hasn't fundamentally changed in the Check Point products for some time. The basic concepts are still the same, too, and other than the implementation details, it's probably relevant for other security products as well. 

Bottom line: NAT only works if the firewall is in the path of the communication. How do you know? Follow the bouncing packet, otherwise known as Troubleshooting 101. 
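Here's a hedged simulation of the classic symptom that old FAQ covered (all addresses are invented): when an internal client connects to a server's translated external IP but both machines sit on the same LAN, the server's reply goes straight back to the client without passing through the firewall, so the client sees a reply from an address it never contacted and drops it.

```python
# Hypothetical simulation: the firewall translates the SYN, but when
# client and server share a LAN the reply bypasses the firewall, so it
# is never un-translated and the client rejects it.

NAT = {"203.0.113.80": "192.168.1.80"}   # external -> internal server IP

def connect(client_ip, dest_ip, same_lan):
    translated = NAT.get(dest_ip, dest_ip)
    if same_lan:
        # Server replies directly to the client, bypassing the firewall:
        reply_src = translated            # internal IP, not the external one
    else:
        reply_src = dest_ip               # firewall un-translates the reply
    return reply_src == dest_ip           # client only accepts a matching src

print(connect("192.168.1.5", "203.0.113.80", same_lan=True))   # False: broken
print(connect("10.9.9.9",    "203.0.113.80", same_lan=False))  # True: works
```

Following the bouncing packet like this, hop by hop, is what reveals that the firewall was never in the return path.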

Hit the break to see this old FAQ in all its ASCII network mapped glory.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/610584 2013-10-19T08:40:52Z 2016-07-08T07:46:34Z The TSA and A Lesson in Data Loss Prevention

From PhoneBoy Speaks Ep 305: Threat? What Threat?

In sealed court documents accidentally leaked on a US Federal government website, the US government basically admits that there have been no attempted domestic hijackings of any kind in the 12 years since 9/11. Furthermore, at least as of mid-2011, terrorist threat groups present in the Homeland are not known to be actively plotting against civil aviation targets or airports.

It gets better. Jonathan Corbett, the guy who has been challenging the feds on the constitutionality and the effectiveness of the TSA in the courts, was told by the US Justice Department to take down his comments about a sealed document he wasn't allowed to talk about, but the government themselves posted the unredacted version on their site and other sites made it public.

The same is true for Corbett's comments about those documents, which the Justice Department asked him to remove from his site--but they apparently forgot to tell Google, as the comments were still in its cache!

This is a clear demonstration of what can happen if confidential data inadvertently leaves your organization. Clearly this unredacted, sealed legal brief should never have been made public. Thankfully, in this case, it was, but surely some folks in the US Government weren't happy with this lapse. Neither would your employer be if it were your company's secret plans for world domination that got leaked.

Likewise, if you inadvertently leak information on your own website, or someone else does, even if you can manage to get it taken down, Google takes a while to forget it saw it. The rest of the Internet won't necessarily forget, though. Ever.

So what are your plans for Data Loss Prevention in your organization? Are you even employing any DLP technology? Is it actually catching real incidents of data loss or is everyone going around it? 

I'm not saying DLP is a panacea or 100% effective, but if you're not doing it, then you don't have any idea where your corporate information might end up. 

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/592147 2013-08-04T05:32:20Z 2013-10-08T17:27:58Z FireWall-1 User Interface from 3.0
If you ran Check Point FireWall-1 on SunOS or Solaris, you probably remember this screen:
This is a screen capture from fwui, the management interface for FireWall-1. Actually FireWall-1 2.0 and 2.1 looked like this too.

Meanwhile if you ran the management interface on Windows, you were presented with something like this:
You could actually run the firewall on Windows NT as of 2.1, though it took until 2.1c until it was reasonably stable.

Actually you could run something that looked like the Windows UI on Solaris with Motif, which Check Point charged extra for. Eventually the Motif and OpenLook GUIs were deprecated, and what we now refer to as SmartDashboard only runs on Windows.

And, of course, there are far more GUI clients these days--about a dozen or so if I remember correctly. But then again, the product does so much more today than it did back in the mid 1990s.

And yes, I remember those days quite fondly.
]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/591550 2013-07-31T14:27:13Z 2013-10-08T17:27:52Z Blast from the CHKP Past: Can't Get Putkeys to Work

For those of you who never had to work with Check Point FireWall-1 4.1 and earlier, you're lucky you likely never EVER had to use putkeys.

The fw putkey command was used to establish authentication between the Management and Firewall modules. The problem was that the authentication would frequently break, especially in larger, distributed environments, often for reasons that no one outside of a few developers in Tel Aviv ever fully understood.

The eventual replacement for fw putkey was SIC, which was added in FireWall-1 NG (5.0). It is based on certificates and is way easier to set up. It's also far less prone to random breakage. 

The following classic article is presented for nostalgia purposes only. Hopefully no one in their right mind still needs this article, which was very popular on my FireWall-1 FAQ back in the day. It was the collective wisdom of my peers and my own experience as of around 2000 or so.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590989 2013-07-29T09:00:00Z 2013-10-08T17:27:45Z Moving The Security Theater Elsewhere

I decided to move PhoneBoy's Security Theater off of Wordpress onto Posthaven.

On the plus side, it's one less instance of Wordpress to maintain. On the minus side, I had to copy/paste the old articles in since there is no way to import Wordpress as of yet. But it was under 100 articles and it was kind of nice to see my thoughts on security from the last several years again.

The reality is, the threats haven't changed. We've had to evolve the tools in order to address increasingly complex threats. You can look at Check Point's own product evolution to get a sense of that.

Also, the basics haven't changed. A lot of security risks can be sufficiently mitigated by properly segmenting your networks, using sufficiently complex passwords that are unique, proper monitoring of the logs of your security devices, and keeping your software up-to-date with the latest security patches. 


]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590987 2013-06-22T00:03:00Z 2013-10-08T17:27:45Z Fun with Compliance

Earlier this week, I hung out with Jeremy Kaye, one of our in-house compliance experts at Check Point:

http://www.youtube.com/embed/uvL6HdlrW08

While I've been doing InfoSec for a while, or at least working in companies that sell InfoSec products, compliance isn't something I've had a ton of direct experience with. Sure, Check Point customers used our products to help meet various compliance regulations, but until Check Point acquired DynaSec in 2011, there wasn't a team inside Check Point dedicated to this topic.

While we had some technical challenges with the Google+ Hangout itself (and it was the first one we did at Check Point), I think the conversation with Jeremy went fairly well. The questions I asked were ones I've always wanted answers to. Like, what good is compliance? Why does it seem like compliance is in the eye of the auditor? Why so many regulations anyway?

The big takeaway for me from this conversation is that security should drive your compliance efforts, not the other way around. Because chances are, if you have a strong information security program in place already, compliance is pretty straightforward, no matter which regulations you have to comply with.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590986 2013-05-07T10:14:00Z 2013-10-08T17:27:45Z Check Point 600 Appliance: Big Security for Small Business

Ok, this is a completely shameless plug for my employer. But it's really big. And really small at the same time. And my take on it, which wasn't cleared with the marketing folks, and thus my, albeit biased, opinion.

The Check Point 600 Appliance, which was announced today at Interop, represents Check Point's refreshed entry into the SMB Security space. It provides the same security functionality you'd find in Check Point's larger appliances in something that fits into an SMB--both in terms of form factor and price. This includes Check Point's award-winning IPS, App Control, URL Filtering, Anti-Virus, Anti-Spam, VPN, oh and don't forget the firewall :)

If you're familiar with the SG80, which Check Point launched a couple years back, the new 600 Appliance looks a bit like that, though the internals are slightly different from the SG80. There are standard USB ports, Express Card and SD-card slots in the 600 as well as optional WiFi and ADSL ports. It also includes a revamped Web Interface that incorporates functionality from the UTM-1 EDGE and Safe@ appliances allowing full management of the security policy across all Software Blades.

Under the hood? It's nearly the same code that runs in the larger Check Point appliances--Check Point R75.20 running Embedded Gaia, to be exact. When you SSH or serial console into the appliance, you are presented with clish, which functions much as it does on one of the larger appliances. You can also drop into Expert mode for more advanced debugging, which, again, works very much like it does on the larger gateways. 

The main differences between the 600 and the Check Point 1100 Appliance, which was announced a few weeks ago, are:

  • Lower price: List price of a 600 is roughly $200 cheaper than the comparable 1100 model.
  • Chassis color: Bright orange, like the old Safe@ boxes.
  • Central Management: While the 1100 can be centrally managed with standard R75.46 or R76 management (standalone or Provider-1), the 600 can only be centrally managed by Check Point Cloud-Managed Security service.

In any case, I am truly excited about this, as SMBs can finally get the same Enterprise-grade security that the Fortune 100 relies on for a fraction of the cost--starting at $399.

Check Point's SMB Portal has information about the new appliances as well as how to acquire them.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590985 2013-04-03T07:10:00Z 2013-10-08T17:27:45Z Check Point 2013 Security Report

When I was in Israel at the end of 2012, I was talking with the folks putting the finishing touches on the Check Point 2013 Security Report. Of course, since then, the report has been formally released and you can now read it for yourself. Here's a video preview of what you'll find in it:

Some of the data gathered for this report was related to the 3D Security Reports Check Point generated for customers during 2012 where we took a Security Gateway and either ran it in-line (in bridge mode) or plugged it into a mirror port on a customer's switch. It's worth pointing out that, in many cases, a competitive security solution was already in place and the Check Point Security Gateways were seeing stuff the other solutions were missing.

Other data for this report also came from SensorNet, ThreatCloud, and results from our Endpoint Security Best Practices Report, which is a great way to find out if your Windows PC is configured according to our Best Practices.

The most surprising statistics?

  • 63% of the organizations surveyed had at least one malicious bot in their network. 
  • 43% of the organizations surveyed had traffic to/from an anonymizer service.

Of course, if you're knee deep in the security space, 0% of this is news to you.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590984 2013-02-19T12:50:00Z 2013-10-08T17:27:45Z How to Not Be Like Burger King. Or Jeep.

On today's episode of PhoneBoy Speaks, I discuss how to prevent your Twitter account from being hacked like Burger King's account was. And today (after I recorded this episode), Jeep's Twitter account was also hacked. Of course, I can only do so much in a 5 minute podcast, and the topic itself of choosing strong passwords--and getting users to actually do it--has been covered ad-infinitum elsewhere.

The fact is, passwords are not very secure. To be secure, they must be both long (number of characters) and high-entropy (the more random, the better). Humans, as a lot, are not able to remember passwords that meet both of these requirements, so they cheat: they write their passwords down, use password management tools like LastPass or 1Password, or just choose stupid passwords--usually the latter.

The best compromise I've seen is actually the Password Haystacks method that Steve Gibson came up with. All other things being equal, as long as you use all 4 different types of characters in your password, length wins. Because when it comes to guessing passwords, there is no such thing as "close."
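Some back-of-envelope arithmetic shows why length wins; the numbers below are illustrative and mine, not from Gibson's page.

```python
# Keyspace arithmetic: each added character multiplies the search space
# by the charset size, so length compounds much faster than adding
# more symbol classes does.

import math

def keyspace_bits(charset_size, length):
    # Size of the brute-force search space, expressed in bits.
    return length * math.log2(charset_size)

# 8 characters drawn from all 4 classes (~95 printable ASCII symbols)
# vs. 16 characters of lowercase-only (26 symbols):
print(round(keyspace_bits(95, 8)))   # 53 bits
print(round(keyspace_bits(26, 16)))  # 75 bits - longer but "simpler" wins
```

Twice the length beats four times the alphabet, which is the Haystacks insight in a nutshell: an attacker guessing passwords gets no partial credit for "close."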

Of course, if the password itself can't be guessed, surely you can compromise the password reset process, as was done with Mat Honan's widely publicized pwnage. Hopefully we can strengthen that too, but companies--especially ones that cater to non technical people--rarely err on the side of secure.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590983 2013-02-13T12:31:00Z 2013-10-08T17:27:45Z Securing Data in the Cloud

On my podcast PhoneBoy Speaks today, I discussed (very briefly) the idea of doing information security in the cloud. Surely, I could talk and write volumes on the subject. I've even given presentations on the subject.

The reality is, virtualization changes the game in so many ways that it's hard to know where to begin. That said, my view starts with the most basic question: what is it we're ultimately trying to protect?

The good news is that the answer is still the same, regardless of whether physical servers on your premises are involved or some cloud services provider is: it's the data. Your job in information security is to ensure the Confidentiality, Integrity, and Availability of data to prevent Disclosure, Alteration, or Destruction of said data.  

The bad news: the cloud makes this job a lot harder. The reality of bring your own device (BYOD) also makes this harder for much the same reason--fewer opportunities to inject the necessary controls to ensure data doesn't go where it's not supposed to.

Of course, it's not just about protecting the data. That part is actually pretty easy. Protecting it in a way that allows it to be used conveniently, now that's a lot harder.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590982 2013-01-16T07:29:00Z 2013-10-08T17:27:45Z How to Catch People Outsourcing Their Own Jobs

I've heard of companies outsourcing jobs to China. I used to joke with my remote co-workers that I had been replaced by a Perl script. That said, I never heard of an employee outsourcing his own job, going so far as to FedEx his RSA token to China so they could log into the VPN and do work on his behalf. While the real guy was in the office, working!

Regardless of what security or remote access solution you use, if you're not looking at your logs, you have no idea whether you have a problem! That's how "Bob" was able to get away with this for months: no one bothered to look at the VPN logs and notice an active remote access VPN connection from China during the workday!

Of course, with the sheer volume of logs that your various security and remote access devices generate, it's hard to know what to look for. This is why large companies in particular employ Security Information and Event Management (SIEM) systems, which gather and correlate data from disparate systems to help you find that problem needle in the haystack of security logs--the key events you need to focus on.

Check Point puts out a SIEM for its own product suite called SmartEvent, which works across all of our Software Blades and distills the hundreds of thousands of logs into useful and actionable data, telling you the things you need to know about what's going on through your Check Point infrastructure.

Regardless of whose security or remote access solutions you employ, if you're not looking at your logs, you have no idea what's going on!
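To make the idea concrete, here's a minimal sketch of the kind of check a SIEM correlates automatically. The log format and field names are made up for illustration--real VPN logs look nothing this tidy:

```python
# Hypothetical, simplified VPN log records; real logs are far messier.
logs = [
    {"user": "bob",   "src_country": "CN", "hour": 9},   # during the workday!
    {"user": "alice", "src_country": "US", "hour": 10},
]

def flag_suspicious_logins(events, expected_country="US"):
    """Flag remote-access VPN sessions originating somewhere unexpected."""
    return [e for e in events if e["src_country"] != expected_country]

suspicious = flag_suspicious_logins(logs)
```

Anyone actually reading this output would have caught "Bob" in short order.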

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590980 2013-01-15T17:47:00Z 2013-10-08T17:27:45Z Zen and the Art of Malware Detection

Detecting malicious code that enters your network is a challenging problem that traditional anti-virus and anti-malware tools can't keep up with. These tools use a series of heuristics and static signatures to try to detect malicious code.

Tomer Teller, a security researcher and evangelist at Check Point, told me about an incident at the 29th annual Chaos Communication Congress where a crowd of security professionals was asked whether anyone actually used AV. Not a single person raised their hand.

How much value does AV provide? An article put out by Imperva was misquoted and misrepresented in the media. The main message? AV only catches about 5% of malware. The real story is a bit better, but it's still not all that rosy.

To help you understand why this problem is particularly tricky, consider how many ways you can write a "Hello, World!" program--often one of the first programs you write when learning a computer language. It is so called because the program writes the phrase "Hello, World!" to the output device. It's often a simplistic program, such as the following C-based example:

#include <stdio.h>

int main() {
    printf("Hello World\n");
}

A more complex example (that I borrowed from here) might look something like this:

#include <stdio.h>
#define THIS printf(
#define IS "%s\n"
#define OBFUSCATION ,v);
double h[2];
int main(_, v) char *v; int _; {
    int a = 0;
    char f[32];
    h[2%2] = 2191444119706963415345639101882402617070952317017776099732075945943680039407307212501870429040900672146338833938303659439237740635160500855813030357492372682887858054616489605441589829740433065995076650229152079883597110973562880.000000;
    h[4%3] = 1867980801.569119;
    switch (_) {
        case 0: THIS IS OBFUSCATION break;
        default: main(0, (char *)h); break;
    }
}

Replace "print Hello World" with "inject code into target host using latest exploit" and you can begin to understand why it is so hard to detect malicious code by simple inspection.

That isn't to say that static analysis provides no value--it does. It catches the really obvious stuff. Unfortunately, it's not foolproof.
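A trivial illustration of why byte-level signatures are so brittle: two functionally identical snippets produce completely different fingerprints, so a signature matching one says nothing about the other. (Python and SHA-256 are my choices for the sketch, not anything a particular AV product actually uses.)

```python
import hashlib

# Two source fragments with identical behavior but different bytes.
plain      = b'printf("Hello World\\n");'
obfuscated = b'THIS IS OBFUSCATION'  # macro-expands to the same printf call

sig_plain = hashlib.sha256(plain).hexdigest()
sig_obf   = hashlib.sha256(obfuscated).hexdigest()

# A static signature built from sig_plain will never match sig_obf,
# even though both programs do exactly the same thing when run.
```

Attackers get to re-obfuscate as often as they like; the signature database is always a step behind.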

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590978 2013-01-11T13:55:00Z 2013-10-08T17:27:45Z In Information Security, Trust Matters

In a previous post, I asked if we could trust no one in Information Security. The reality is that, at some point, we have to trust. We have to trust that we have the right policy in place. We have to trust our people to do the right thing. We have to trust our tools will do their job.

Of course, we should not blindly trust. We need to verify that our tools are doing their job--keeping the bad stuff out and enforcing the policy we've specified. We need to verify that our people are doing the right thing. Your tools are both enforcing the policy and educating the users about what the policy is, right? And, of course, we need to evaluate the policy to ensure it is both effective and in line with business objectives.

If Information Security professionals spend all their time doubting everything, not only will they drive themselves crazy, but the real threats will get by.

Along these lines, the marketing folks at Check Point put together a video discussing trust and its very important role in Information Security.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590977 2013-01-10T15:28:00Z 2013-10-08T17:27:45Z Unsafe At Any Version?

It's funny: every time I read about yet another security vulnerability in Internet Explorer--such as the recent one involving Adobe Flash hosted on the Council on Foreign Relations website, which performs a heap spray against Internet Explorer 8--I am reminded of Unsafe at Any Speed, Ralph Nader's 1965 book about how unsafe the automobiles designed by the American auto industry were. Thus, the phrase "Unsafe at Any Version" comes to mind when I think of Internet Explorer. Likewise, I tend to think the same thing about Adobe Acrobat, Adobe Flash, and Adobe Shockwave (two-year-old vulnerability, anyone?)

Is it fair to say these products are unsafe at any version? While evidence seems to suggest that is probably true, I believe the security problems we see in these products are evidence of their success. Okay, maybe Internet Explorer was successful because of being illegally tied to Microsoft Windows, but I'm trying to remember the last time Internet Explorer, Adobe Flash, and Adobe Acrobat Reader were not considered "required items" for a PC.

Which is part of the problem of keeping these programs secure. There is a lot of legacy code in those apps. They were written well before Secure Coding Practices became the norm. Internet Explorer itself has a fundamental flaw by being so tightly tied into the operating system. Rewriting code is no fun and, unless there is a significant business reason to do so, doesn't happen.

Granted, Adobe did do this with Adobe Reader, but there are still a lot of older Adobe Reader versions out there, just waiting to be compromised. Just like there are millions of people still running XP and Internet Explorer 8, which Microsoft will eventually stop providing security patches for.

These applications aren't going anywhere anytime soon. Which means the bad guys are going to continue to find vulnerabilities in these applications for the foreseeable future. It certainly will keep us good guys busy for the foreseeable future, too.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590976 2012-12-13T17:57:00Z 2013-10-08T17:27:45Z Trust No One?

Trust. It's something I'm sure many security professionals think about in various contexts. However, I don't think anyone can fully appreciate the level of trust that we exercise on a daily basis without really thinking about it.

Just think about getting packets from point A to point B. There's an insane number of things we simply trust without really thinking about it. This includes:

  1. The program running on point A to generate traffic: who created that program? Will that program do something you don't expect?
  2. The OS running on point A: is that program running through an OS where key calls are "compromised" in the same way that the recent Linux Rootkit was?
  3. The various processors that run on point A: are those processors calculating true? Will they have a division bug like ye olde Pentium processors' FDIV flaw? Or did someone replace the processors in your device with ones that purposefully do what you don't expect?
  4. The transmission medium of those packets: how secure is that medium? Who (or what) can read those packets off the wire, or the air as appropriate?
  5. The routers and switches along the way between point A and point B. They, too, are computers running code, are they not? Are they configured correctly? Will they route the packets along the path you expect? Are they potentially compromised as a result of bad design or malicious intent?
  6. Point B, that receives the traffic: does it believe what it is reading off the wire? How does it know Point A sent it? Will Point B process it correctly?

And so on. Trying to account for all these possibilities to ensure absolute security is next to impossible and will surely drive you crazy. That said, the thought exercise is important if you're trying to design a secure system. All of your assumptions about various elements of that system must be examined on a regular basis to ensure that you don't miss when something transitions from a largely theoretical threat to a very real one.

Soon, I'll share some ideas on what a "trust no one" network might look like. Does such a thing exist today? Is maintaining such a thing even practical?

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590975 2012-12-05T00:12:00Z 2013-10-08T17:27:45Z How to COPE with BYOD

If you've been in the IT industry long enough, you'll start seeing the same concepts "reinvented" every few years or so.

The current panacea is so-called Bring Your Own Device--the idea that end users can use their own technology devices in a corporate setting while having some level of access to corporate data. We went through this with laptops and personal computers over the years; now the devices we're bringing are mobile phones and tablets.

Another acronym I've heard recently--Corporate Owned, Personally Enabled--describes a state of IT that has existed as long as I've been in it. Here, the idea is that a corporate-owned asset is used for an employee's personal needs. This has been the case with corporate-owned PCs for the last couple of decades, usually without any formal policy. Now we're starting to see it with mobile devices, either with or without the use of third-party tools.

The reality is that, regardless of whether companies adopt BYOD, COPE, something else, or neither, employees are going to use personal devices to do work--and, likewise, use corporate devices for "personal use." This has always been the case and always will be, regardless of any formal policies to the contrary.

From a security point of view, this creates some rather obvious issues. On corporate-owned devices, some sort of "device management" or "Endpoint Security" offering is installed, which users tolerate to varying degrees. (I happen to like Check Point's Endpoint Security offering, but I will admit, I'm biased.) This approach won't work for BYOD, where users are asked to submit to "device management" or an "Endpoint Security" installation just to use their own device on the corporate network.

But ask yourself: what is it that you're really trying to protect on that endpoint? Prevent malicious software? You have a properly segmented network, right? You have the technology to detect any malicious traffic from that segment, right? Good. That should take care of it.

But what if the software doesn't "phone home" while on the corporate network (or generate malicious traffic), but instead collects data and then sends it out over the mobile operator's network? Modern mobile operating systems have sandboxes that prevent one app from reading another app's data in the first place. Obviously, if the device is jailbroken or rooted, all bets are off.

And malicious apps, while not unheard of, are nearly non-existent in the official App Stores for iOS or Android. Same with potential privilege escalation-type attacks in iOS and Android. Not impossible, but a lot harder to pull off, given that Android and (more so) iOS are pretty secure out of the box.

Really, there's only one thing to worry about on these devices: the corporate data. This data needs to be protected. That is generally pretty easy to do, assuming only a trusted application is able to access the data and the regular OS protections are in place (i.e., the device isn't rooted or jailbroken). And, of course, the data has to come on and off the device in a secure manner (e.g., either with strong encryption or using a physical access mechanism).

Once you have the magic, trusted app (or suite of apps) to access, work with, and secure the small amount of corporate data the device works with, congratulations! You've eliminated the headache of managing potentially unknown devices in the hands of users who will do everything they can to thwart your security controls anyway. If users want to work with corporate data, they can use the "trusted" apps to do it. These apps should have appropriate hooks back to corporate to validate whether you are even allowed to use the data and, if you or your device goes rogue, wipe the corporate data from your device without wiping the entire device (which has personal data on it).
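The "wipe business data, keep personal data" idea boils down to keeping corporate data in its own container that the trusted app controls. A toy sketch--the container layout here is entirely hypothetical:

```python
# Hypothetical device storage: personal and corporate data live in
# separate containers, as a trusted corporate app would enforce.
device = {
    "personal":  {"photos": ["vacation.jpg"], "contacts": ["mom"]},
    "corporate": {"docs": ["q3-forecast.xlsx"], "vpn_token": "abc123"},
}

def selective_wipe(dev):
    """Revoke/erase only the corporate container; personal data is untouched."""
    dev["corporate"] = {}
    return dev

selective_wipe(device)
```

The user keeps their photos and contacts; the company's documents and credentials are gone.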

While I don't believe there are great solutions along these lines yet, this is the only kind of solution I believe makes any sense in the long term. People will be able to bring their own devices and access business data, while infosec will rest easy knowing business data is still accessed and stored safely.

It's a BYOD solution everyone can COPE with.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590973 2012-11-27T09:48:00Z 2013-10-08T17:27:45Z Denial of Service: An Old Classic Not Going Anywhere

The implementation details will vary depending on the attack target and the request type, but the basic concept of a denial of service is to overwhelm a target with a seemingly legitimate series of requests so that real requests are ignored and not processed. If the attack is launched from multiple points on the network, it is referred to as a distributed denial of service attack (a.k.a. DDoS).

I remember when I first started hearing about denial of service issues more than 10 years ago. I even played with a few tools to generate them on my own--in my own network, of course. Back then, defenses for this sort of thing simply didn't exist. Aiming a tool at a helpless server protected by an even more helpless firewall was a great way to wreak havoc.

In general, there are two ways to create a Denial of Service condition:

  • Cause the relevant service to crash. This is actually pretty difficult to do in a simple fashion given the resiliency of most public-facing services, though for lesser-used services, it can happen.
  • Exhaust a resource needed to provide the service (bandwidth, buffers, etc.) so that the service cannot respond to legitimate requests.

Over the years, every part of the service stack has gotten more resilient against the most basic form of these attacks. For example, a simple SYN flood won't take down most web servers these days as web servers are smart enough not to allocate resources to a connection until the three-way handshake completes. That said, if you can get a few million of your closest friends to run Low Orbit Ion Cannon at a server, or visit a particular webpage, you can still take down a target.
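Much of that resilience against SYN floods comes from tricks like SYN cookies: the server encodes the half-open connection into the initial sequence number itself, so it allocates no per-connection memory until the handshake actually completes. A simplified sketch of the idea (Python for illustration; real implementations live in kernel TCP code, and the secret and timestamp handling here are hypothetical):

```python
import hashlib

SECRET = b"per-boot-gateway-secret"  # hypothetical; real stacks use kernel entropy

def syn_cookie(src, dst, sport, dport, t):
    """Derive an initial sequence number that encodes the connection
    4-tuple and a coarse timestamp t, instead of allocating state."""
    msg = f"{src}:{sport}->{dst}:{dport}@{t}".encode()
    return int(hashlib.sha256(SECRET + msg).hexdigest()[:8], 16)

def handshake_completes(isn, src, dst, sport, dport, t):
    """On the final ACK, recompute the cookie; only then allocate state."""
    return isn == syn_cookie(src, dst, sport, dport, t)

cookie = syn_cookie("1.2.3.4", "5.6.7.8", 40000, 80, t=12345)
ok = handshake_completes(cookie, "1.2.3.4", "5.6.7.8", 40000, 80, t=12345)
```

A flood of bogus SYNs costs the server almost nothing, because nothing is remembered until a valid ACK comes back.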

To give you a real-world example of this, I'll talk briefly about a recent denial of service someone brought to Check Point's attention via a Pastebin posting. The attack, ironically, was not all that different from the first SYN floods I played with more than a decade ago.

According to the report on Pastebin, the security researcher was able to essentially cause a Check Point Security Gateway to stop processing packets by sending ~120k packets per second. That's easy to generate inside a VMware environment, or even on an isolated network segment. Over an Internet link? Depends on the size of the pipe.

Regardless of how realistic that particular incoming packet rate may be in a given environment, what happens when the gateway sees that much traffic? In many cases, it simply runs out of memory, causing traffic to be dropped. It also runs out of processor capacity to forward said traffic.

Since the only way to truly stop a denial of service attack is to disconnect from the Internet, your best hope is that the various components are optimally configured to operate in potential denial of service conditions. Check Point's official response to this issue mentions four specific configuration changes that can be made to their security gateways:

  1. Implementing the IPS protection "SYN Attack" feature in SYN cookie mode, which will reduce the memory footprint caused by these bogus connection attempts.
  2. A hotfix for SPLAT/GAiA systems that, among other things, optimizes how the SYN+ACK packets that come back from the server are processed (IPSO does not require this).
  3. Where applicable, use multiqueue to increase the number of processors that can handle traffic on a given interface.
  4. Increasing the receive buffers on the interface.

These settings have proven to be effective at mitigating a potentially large SYN flood. That said, there's only so much you can do. Your Internet link is still a potential bottleneck. Your Security Gateways, even optimally configured, can only process so much traffic.

Unfortunately, there is no silver-bullet solution to this problem other than bigger and better servers--or security gateways. This means that denial of service is something we're going to have to battle with for a long time.
]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590972 2012-11-20T17:47:00Z 2013-10-08T17:27:45Z Why not do a little Security Theater of my own?

Recently, I was approached by a few people inside Check Point about doing a security-focused blog. I could easily have done this on phoneboy.com, but despite its history as the definitive place to go for information related to Check Point FireWall-1 back in ye olden days, I decided to leave that site as-is.

My goal here on phoneboy.net is to share my own insights about things going on in the computer and network security space. I've done that in the past on phoneboy.com, but my goal is to do it here with a singular focus. I will also leverage some of the brightest minds at Check Point to explain things that, quite frankly, I don't understand, and share that insight with you.

Just to be clear: I work for Check Point, and I will surely share Check Point-related things here, but this blog will be my own thoughts.

]]>
Dameon Welch-Abernathy
tag:phoneboysecurity.posthaven.com,2013:Post/590970 2012-05-08T13:17:00Z 2013-10-08T17:27:45Z Parking Lots and PCI Compliance

As with many things in computer/network security, I've learned things as a result of my job--not because I necessarily wanted to learn them :)

PCI Compliance is one of those things I've encountered a handful of times during my tour of duty at Check Point. I don't even pretend to play an expert on the Internet about PCI, which stands for Payment Card Industry (i.e., companies that process credit cards). The goal of the various PCI standards is pretty simple: ensure the credit card data of customers remains protected as it is captured, stored, and transmitted on the various systems that process it.

What does this have to do with parking lots? Many parking lots, especially in big cities like Seattle, are self-service. You pre-pay with a credit card, get a ticket from the machine, and put it on your windshield. A minimum-wage lackey (hereafter referred to as the parking lackey) periodically checks the lot to make sure everyone parked there has paid, issuing parking tickets to those who have not.

I parked in one such lot recently in downtown Seattle. They issued me a receipt like this (except both halves were attached and the personally identifiable data was not blacked out).

What was on this stub was the type of card I have and the last four digits of said card. I was asked to place this on my windshield. In plain sight. For anyone to walk by and collect.

To comply with the posted signs, I did leave the ticket in plain view on my dash, but only the right (smaller) half, which had the least personally identifying information on it. Unfortunately, the parking lackey didn't think I had complied with the rules and issued me a parking violation, which I immediately contested.

PCI-DSS Requirement 7 is to restrict access to cardholder data by business need to know, where "access rights are granted to only the least amount of data and privileges needed to perform a job." Does the parking lackey need to know what credit card I used to pay my parking fee with? Does he need the last four digits of my credit card? And even if he does (and I'm not sure on what planet that information would be required by a parking lackey), why do I also have to expose this information to the general public?
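For what it's worth, masking card data even further is trivial to implement--a ticket printer could easily suppress it entirely. A quick sketch (Python; the routine and its parameters are purely illustrative, not anything a real payment terminal uses):

```python
def mask_pan(pan, keep_last=4):
    """Mask a card number, keeping at most the last few digits.
    PCI-DSS masking rules allow at most the first 6 and last 4 digits
    to be displayed; a parking stub arguably needs none of them."""
    digits = pan.replace(" ", "").replace("-", "")
    keep = digits[-keep_last:] if keep_last else ""
    return "*" * (len(digits) - len(keep)) + keep

masked = mask_pan("4111 1111 1111 1234")            # "************1234"
fully_hidden = mask_pan("4111 1111 1111 1234", 0)   # all asterisks
```

If the stub must carry something, a fully masked number (or none at all) satisfies the lackey's "did they pay?" check just as well.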

I realize that, in the grand scheme of things, this is not a huge data exposure. The number of people that likely saw the relatively small amount of data is pretty close to zero. That said, at least how I read the PCI-DSS 2.0 requirements, this is a clear-cut violation of the guidelines.

Clearly, I need to keep a sharpie in my car so I can comply with these parking lot rules yet maintain the confidentiality of my personal data.

Am I right? Is this a violation of PCI guidelines? Do other parking systems do stuff like this?

]]>
Dameon Welch-Abernathy