Automated Deployments of Palo Alto Firewalls in AWS

I’ve recently been working with a client on magically spinning up entire environments in AWS. This means I’ve learned a fair bit about AWS along the way!

Without going into too much detail (as it’s the client’s work), we have been bootstrapping Palo Alto firewalls. This lets you stand up a fully configured Palo Alto firewall from a CloudFormation script in AWS, in a matter of minutes. That’s pretty cool.

Palo Alto are pretty helpful with this – they provide a decent sample here: https://github.com/PaloAltoNetworks/aws

From this, you can amend the scripts as appropriate to fit your own environment. This method does rely on having a full configuration for the firewall available to bootstrap from in an S3 bucket. If that config is static, then easy. If not, you’ll have to do some magic elsewhere, before calling the CloudFormation script, to make sure the config you need is in the bucket.

One of the challenges we faced was the interface limit (which depends on the EC2 instance type you choose). This means the example from Palo Alto does not scale too well – if you have too many subnets, it becomes impossible to put a Palo interface in every subnet. To get around this, you can add routes in the routing tables pointing to the ENIs (Elastic Network Interfaces) of the Palo. This means you can have multiple subnets behind one interface.

eBGP – ECMP in depth!

My client recently did a fairly big change to the edge network in their data centre, including a migration to 4-byte AS numbers. This wasn’t without its challenges. So here is a (long) post about the challenges we faced, and explanations of some of the more advanced BGP features such as local-as no-prepend replace-as, and bestpath as-path multipath-relax.

Here is a very simplified version of the topology, post-change – everything is fictional. The two ISPs provide a private MPLS network.
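
For reference before diving in, this is the sort of configuration those features involve – an illustrative IOS-style snippet with made-up addresses and AS numbers, not the client’s actual config:

```
router bgp 4200000001
 bgp bestpath as-path multipath-relax
 maximum-paths 2
 neighbor 192.0.2.1 remote-as 65010
 neighbor 192.0.2.1 local-as 65001 no-prepend replace-as
```

In short: local-as lets the router present its old 2-byte AS number to a peer during a migration, and multipath-relax allows ECMP across paths learned from different neighbouring ASes. Both get a proper explanation in the post.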

Continue reading

Testing a 1 Gb Internet circuit

Have you ever needed to prove a gigabit Internet circuit? It’s more of a headache than you’d think. I had to prove one recently – we were seeing some errors which seemed to happen every time the bandwidth went over about 400 Mbps outbound, so we needed to prove we could push more. We could ask the ISP to run some tests – but I’m an untrusting kinda person. Plus, those tests wouldn’t include some of the internal infrastructure which we also wanted to prove.

Download is easy. Get a bunch of users to download the CentOS Everything ISO (or anything else that’s a few gig), and watch it get hammered.

Upload is trickier. It’s hard to push that much data without somewhere to push it to – you need to know the remote end can handle a gig, and throughput is also affected by latency (it is on the download too, but that’s easier to max out by just getting more users downloading).
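
To see why latency matters, it’s worth doing the bandwidth-delay product maths. A quick sketch (the 20 ms round trip is just an illustrative figure):

```python
# Bandwidth-delay product: how much data must be "in flight"
# to keep a link full.
link_bps = 1_000_000_000   # 1 Gbps circuit
rtt = 0.020                # assume a 20 ms round trip

bdp_bytes = link_bps * rtt / 8
print(f"{bdp_bytes / 1_000_000:.1f} MB in flight to fill the pipe")

# A single TCP stream limited to a classic 64 KB window can only manage
# window / RTT, regardless of how big the circuit is:
per_stream_bps = 65536 / rtt * 8
print(f"{per_stream_bps / 1_000_000:.1f} Mbps per 64 KB-window stream")
```

Which is why one upload on its own won’t max the circuit, and why window scaling and parallel streams matter.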

So, I stood up 10 servers on Digital Ocean. I installed CentOS and configured vsftpd – there are plenty of guides on the internet. I then downloaded the CentOS ISO to 10 laptops and servers in the organisation (see – download is easy :o)), shouted 3, 2, 1, GO, and everyone clicked upload in FileZilla.

It worked really well – we smashed the Internet, and whilst we didn’t quite hit a gig, we got close – close enough to be confident we could push it more than we do day to day.

And… it cost me… $0.10. Ten… cents… Even with the crappy exchange rate today, I can stretch to that. As soon as we finished, I destroyed the servers – they were up for less than an hour.

So – we can definitely get more throughput than we thought (sorry for the blame, ISP!). Best we go figure out what the real problem is then.

PS: I have no ties or links to Digital Ocean – I just picked them because they had a DC in the UK and it was simple and cheap. I also did check their ToS – I’m not a lawyer, but as I wasn’t out to break their service I don’t think I did anything wrong – if I did and anyone at Digital Ocean is upset then I’m very, very sorry and I’ll gladly delete my account and star out your name in this post! :o)

PPS: To be transparent…the two Digital Ocean links above are referral links, which give you ten dollars free credit, and earn me 25 dollars credit if you spend 25 dollars with them. If you don’t like referral stuff, here’s a plain link: https://www.digitalocean.com/

VCP 6 passed – I like the new Fault Tolerance features!


I recently updated my VMware certification from 5.5 to 6. My 5.5 was expiring so it made sense to do the delta exam and upgrade, rather than recertify the same level. I realise I’ve done this just as 6.5 is coming out, but I’ve been using 6 lately so it made sense to me.

A lot of the maximums in VMware have been increased, and a good summary of that is available here: http://www.virten.net/vmware/vsphere-5-5-vs-vsphere-6-0/

One of the areas of most interest to me was the big improvement in VMware Fault Tolerance (FT). A couple of years ago I was investigating options for a high availability (HA) VPN solution, and looked into using the CSR1000v to terminate the VPNs. The idea was to have one Cisco device and let VMware FT handle the resilience. The advantage would have been purchasing and licensing only one CSR, and not having to worry about any kind of stateful IPsec synchronisation between two devices. One of the main issues was that to get decent performance out of the CSR we needed multiple vCPUs, and FT wouldn’t support that. In version 6, FT is now capable of up to 4 vCPUs. That improvement has potentially made the solution worth exploring again, if I ever need to.

Obviously there are a whole host of other differences, but hundreds of other sites review them all, so use Google!

Thanks to Keith Barker (https://twitter.com/KeithBarkerCCIE) at CBT Nuggets (www.cbtnuggets.com) for the useful videos!

Off-site backups for Synology NAS – using two Raspberry Pis, behind dynamic NAT IPs

I recently bought a 4-bay Synology NAS (DS416 Play) to move away from Dropbox and OneDrive. The main issue I had before choosing to do this was off-site backups. It’s ok having 4 disks for resilience, but if my house burns down or gets burgled, I still lose everything.

So I started to think up ways of doing an off-site backup, without having to remember to do it or drive around with hard disks. I came up with the idea of putting a Raspberry Pi at a family member’s house with an external drive attached, and rsync’ing to it. If I do this in the middle of the night, it won’t noticeably interfere with anyone’s internet connection.

The main issue is that family members don’t have static IPs (they are behind typical ISP routers with dynamic IPs and NAT), and the Synology makes an outbound connection to do the rsync. So I decided to use an intermediate server which I already have on the internet, and tunnel the rsync over a reverse SSH tunnel. Another stumbling block was the Synology trying to be too clever – initially I tried to set up the reverse tunnel on the NAS itself, but the HyperBackup software won’t let you back up to a local IP. For this reason I ended up with another Raspberry Pi next to the NAS, though I suppose you could use any device that’s always on. I could have cut out this second Raspberry Pi by going straight via the external box, but I didn’t want to open the forwarded ports to the internet – using the second Pi, the forwarded port is only open to my LAN.

So the topology will end up looking like this:

NAS — LocalPi — InternetServer — RemotePi

Initially though, it’s best to set up this:

NAS — RemotePi

This lets you test that the rsync works, and also lets you do the initial sync (which might be a big one) on the LAN rather than uploading it to the internet over ADSL!

So, let’s get started. Continue reading

Python Scripting on a Cisco Nexus 7k

A few days ago I stumbled upon the python interpreter on the Nexus platform. It got me to tinkering.

In the past, I have had a requirement to grab a list of all of the interfaces on a box, their IPs, and the masks. The interfaces and IPs can easily be obtained from a show ip int br, using column select to grab the relevant columns (hold down Alt when selecting in PuTTY – if you didn’t know that before, go try it!). Getting the subnet masks is a little less trivial though.

As a side note, in the past I’ve used this:

sh ip int | i is up|Internet add

This works, but you have to mess a little to strip out just the bits you want (not a lot of work with a decent text editor though, I admit).

Anyway, more just to see if I could, I wrote a Python script to extract the structured dictionary response from a “show ip interface”, parse out the relevant pieces, and print them into a fixed-width table under the columns ‘Name’, ‘IP’, ‘CIDR’, ‘Mask’, ‘Admin’, ‘Link’, ‘Protocol’.
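
The script itself is behind the link, but the core of it looks something like this – a minimal sketch with the on-box API call stubbed out with sample data (the real field names come from the JSON the switch returns, so treat these keys as illustrative):

```python
def prefixlen_to_mask(plen):
    # Convert a CIDR prefix length to a dotted-decimal mask, e.g. 24 -> 255.255.255.0
    bits = (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
    return ".".join(str((bits >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def print_table(interfaces):
    fmt = "{:<14}{:<16}{:<6}{:<17}{:<7}{:<6}{:<9}"
    print(fmt.format("Name", "IP", "CIDR", "Mask", "Admin", "Link", "Protocol"))
    for intf in interfaces:
        print(fmt.format(intf["name"], intf["ip"], "/{}".format(intf["plen"]),
                         prefixlen_to_mask(intf["plen"]),
                         intf["admin"], intf["link"], intf["proto"]))

# On the Nexus itself, this list would be built from something like
# json.loads(cli("show ip interface | json")) rather than hard-coded.
sample = [
    {"name": "Eth1/1", "ip": "10.1.1.1", "plen": 24,
     "admin": "up", "link": "up", "proto": "up"},
    {"name": "Eth1/2", "ip": "10.1.2.1", "plen": 30,
     "admin": "up", "link": "up", "proto": "up"},
]
print_table(sample)
```
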
Continue reading

Check Point Certified System Administrator (CCSA) Study Notes – R77


I’m now a Check Point Certified System Administrator (CCSA)! I took the R77 exam and passed. I have to say I was a little disappointed with the exam – there were 100 questions in 90 minutes, but I found a lot of the questions were repeated – albeit with a slightly different phrasing.

Definitely lab it (download the ISO and set up some VMs – make use of the free eval mode!). There are a few questions that require familiarity with the UIs, and knowing where to find certain config options. I’ve been working with Check Point pretty extensively for a couple of years now, including some pretty big upgrade projects, but I’m definitely glad I put some hours into studying and labbing – my experience hadn’t exposed me to all of the features, and there is no way I would have passed without putting some time in with a virtual lab.

Below are my notes, taken whilst studying. I haven’t edited them – it’s a straight copy and paste from OneNote – so they may not be formatted perfectly, and some of them might only make sense to me! But here they are anyway; they may be useful to someone. Continue reading

Packet capture, built in to Windows


Sometimes when you are working in secure environments, you can’t just go installing software. But if you need a packet capture on a Windows server, what then? If you can’t install Wireshark, you can use Microsoft Network Monitor.

The capturing is done via a built-in command-line tool. Once you export the file, you use some Microsoft software to analyse it – it’s very similar to Wireshark in functionality, but uses a “.etl” file instead of a pcap.

To get the capture, launch a command prompt with admin rights, and enter the following sequence of commands:

netsh
trace
start scenario=LAN capture=yes

Do whatever you need to capture, and enter:

stop

It will give you the location of the .etl file. If you enter “show scenarios”, that will show you some other things you can trace against, but for everything I’ve ever needed, LAN has been sufficient.

Export the file over RDP shared folders or whatever means you like, and then open it on your machine using Microsoft Network Monitor – available at: http://www.microsoft.com/en-us/download/details.aspx?id=4865

When I first installed this program, I had to change a setting to make it work properly: Go to Tools / Options / Parser Profiles, right click on “Windows” and select “Set as Active”.

I’d still much prefer a pcap, but in a pinch this has helped.

Palo Alto scheduled backups – without Panorama

Recently we deployed a Palo Alto VM-200 firewall. It was a stand-alone deployment on a remote site. We were going to deploy a pair, but we didn’t see that it added much value, as the VM-series firewalls support HA but not stateful HA.

As it was stand-alone, it wasn’t managed by Panorama. And without Panorama management, it is seemingly not very straightforward to enable scheduled automated backups. This seems odd to me – in my paranoid world of engineering, I want things backed up somewhere regularly. Maybe it’s just something to make you buy Panorama.

Anyway, there is a way to do it. We used a general purpose management Linux box, and set up a cron job to download the config using the XML API. Here are the details.

First, if you don’t use the API already, you need to generate an API key. This is basically your “password” for using the API. Go to the following URL:

https://10.5.0.2/api/?type=keygen&user=admin&password=admin

Obviously, swap your own IP, username and password in.

That should give you an XML response like this:

<response status="success">
<result>
<key>
LUFRPT14MW5xOEo1R09KVlBZNnpnemh0VHRBOWl6TGM9bXcwM3JHUGVhRlNiY0dCR0srNERUQT09
</key>
</result>
</response>

Now you can get a full config backup via the API, by visiting the following URL:

https://10.5.0.2/api/?type=export&category=configuration&key=LUFRPT14MW5xOEo1R09KVlBZNnpnemh0VHRBOWl6TGM9bXcwM3JHUGVhRlNiY0dCR0srNERUQT09

This will dump out an XML configuration file.

So now we have a means to get the config file, we just need to schedule it. To do that, we set up a cron job on a Linux server to run the following command:

curl -o /backups/`date +%Y%m%d`-my_firewall_backup.xml  -k -H "Accept: application/xml" -H "Content-Type: application/xml" -X GET "https://10.5.0.2/api/?type=export&category=configuration&key=LUFRPT14MW5xOEo1R09KVlBZNnpnemh0VHRBOWl6TGM9bXcwM3JHUGVhRlNiY0dCR0srNERUQT09"

Set it to run whenever you like – I think we went for weekly, as we don’t change the config very much.
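
For completeness, a weekly 2am Sunday run would look something like this in the crontab (key truncated here – and note that % has to be escaped as \% inside a crontab entry, as an unescaped % is treated as a newline by cron):

```
# m h dom mon dow  command
0 2 * * 0 curl -o /backups/$(date +\%Y\%m\%d)-my_firewall_backup.xml -k "https://10.5.0.2/api/?type=export&category=configuration&key=LUFR...DTA9"
```
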

What is ARP?

A number of times in the last few weeks, I have been asked by different people:

What is ARP?

There is the simple answer, which is simply a definition:

Address Resolution Protocol (ARP) is a mechanism to resolve IP addresses into MAC addresses.

However…that doesn’t really explain a lot. It probably doesn’t explain anything you didn’t already know. To really understand ARP, you probably need to understand the following:

  • Why do we need to ARP? Why do we care what MAC address is associated with what IP?
  • When do we ARP, and what do we ARP for?
  • How does ARP work? What does the conversation look like?
  • How often do we perform ARP?

In this post I’ll aim to go over some low-level basic network stuff, to try and explain all of this. I’m going to generalise quite a lot – I’m specifically talking about ARP and IPv4. There are other protocols at the various layers, but I’m sticking to what’s simple and relevant.
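
As a taste of what the conversation looks like on the wire, here’s a little Python sketch that builds the fixed 28-byte ARP request payload from RFC 826 by hand (the addresses are made up):

```python
import struct

def build_arp_request(sender_mac, sender_ip, target_ip):
    # Fixed header for Ethernet/IPv4 ARP: hardware type 1 (Ethernet),
    # protocol type 0x0800 (IPv4), hardware/protocol address lengths 6 and 4,
    # opcode 1 (request).
    header = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
    target_mac = b"\x00" * 6  # all zeroes - this is the answer we're asking for
    return header + sender_mac + sender_ip + target_mac + target_ip

packet = build_arp_request(bytes.fromhex("deadbeef0001"),
                           bytes([192, 168, 1, 10]),   # who is asking
                           bytes([192, 168, 1, 1]))    # whose MAC do we want
print(len(packet))  # 28 bytes, carried in an Ethernet frame sent to ff:ff:ff:ff:ff:ff
```

The reply comes back with opcode 2 and the target’s MAC filled in.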

Why do we need ARP?

Right back at basics, we have the OSI 7 layer model (or the TCP/IP model – whichever you prefer).

Continue reading