
Tuesday, September 9, 2014

cbQosCMDropPkt demystified

It is very difficult to get stats on QOS drops from a Cisco router via SNMP. There are multiple nested OIDs that will eventually get you to the proper index and class-map, but following that bread-crumb trail of OIDs is not straightforward. Sure, Cisco documents it, but the trail is still very confusing.

My project required that I grab the QOS drops from the different class-maps on multiple routers and then feed them into a Splunk dashboard. Scripting the multiple snmpwalks and outputting CSV for ingestion by Splunk was the best way I had at my disposal.

Introducing get_qos-v3.sh

Usage : get_qos-v3.sh Customer Host_IP Host_Name Priv_Proto(DES/AES)

> get_qos-v3.sh Cust1 10.10.10.4 USA-Winslow "DES"

This script will output the following:

> cat /nsm/snmp/customer/Cust1/Cust1-10.10.10.4-qos.log

07-03-2014 17:00:01,Cust1,10.10.10.4,Winslow,class-default,sla,0
07-03-2014 17:00:01,Cust1,10.10.10.4,Winslow,class-default,class-default,0
07-03-2014 17:00:01,Cust1,10.10.10.4,Winslow,class-default,control,0
07-03-2014 17:00:01,Cust1,10.10.10.4,Winslow,class-default,media-ports,25
07-03-2014 17:00:01,Cust1,10.10.10.4,Winslow,MAP-QoSParentPolicy,class-default,25

My requirements dictated that I have customer separation as well as individual sites listed separately. Remove them if you don't need them! (Or ask me and I can edit them out of the code.)

Before you run the script you will need to edit the top part of it to match your SNMPv3 settings and log file locations. user, auth_passwd, priv_passwd and log_file are the only variables you should need to set. Please note that user and auth_passwd need single (') quotes around them, while priv_passwd does not. Troubleshooting those quotes was extremely frustrating.

This script assumes SNMPv3. If you need SNMPv2 functionality, write me and we can work up a new command line.
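If you want to adapt it yourself, the SNMPv2c equivalent only swaps the USM credentials for a community string. Here is a minimal, untested sketch; the community value is a placeholder, use your own read-only string:

```shell
#!/bin/sh
# SNMPv2c variant of the connection settings - a sketch, not tested against
# a live router here. "public" is a placeholder community string.
version=2c
community=public
mibs_dir=/usr/share/mibs/
cmd_variables="-v $version -M $mibs_dir -m ALL -c $community"

# The rest of the script can then call snmpwalk/snmpget unchanged, e.g.:
# snmpwalk $cmd_variables $host 1.3.6.1.4.1.9.9.166.1.15.1.1.13
echo "$cmd_variables"
```

All the auth/priv variables from the v3 version simply go away; nothing below the variable block needs to change.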

> cat get_qos-v3.sh 
#!/bin/sh

# http://www.activexperts.com/admin/mib/Cisco/CISCO-CLASS-BASED-QOS-MIB/
# ftp://ftp.cisco.com/pub/mibs/oid/CISCO-CLASS-BASED-QOS-MIB.oid

if [ -z "$4" ];
        then
        echo "Usage : $0 Customer Host_IP Host_Name Priv_Proto(DES/AES)"
        echo " "
        exit 1
fi

customer=$1
host=$2
host_name=$3
version=3
#add user here - keep the quotes
user='SNMPv3 user'
auth_mode=authPriv
auth_proto=SHA
#add auth_password - keep the quotes
auth_passwd='auth_password'
priv_proto=$4
#add priv_password here - no quotes this time
priv_passwd=priv_password 
mibs_dir=/usr/share/mibs/
cmd_variables="-v $version -M $mibs_dir -m ALL -u $user -l $auth_mode -a $auth_proto -A $auth_passwd -x $priv_proto -X $priv_passwd " 
#change log location - keep quotes
log_file="/nsm/snmp/customer/$customer/$customer-$host-qos.log"

##
#Should not have to edit below this line!!
##

#Save field separator
OldFS=$IFS

# Get cbQosCMDropPkt with snmpwalk
OID=1.3.6.1.4.1.9.9.166.1.15.1.1.13
timestamp=$(date +"%m-%d-%Y %T")
cbQosCMDropPkt_WALK=$(snmpwalk $cmd_variables $host $OID)

#New field separator to parse the snmpwalk output
IFS=$'\n'

#For each line in cbQosCMDropPkt_WALK determine index and QOS drops
i=1
for I in $cbQosCMDropPkt_WALK;
        do
cbQosCMDropPkt_index=`echo $I | awk 'BEGIN {FS="."} {print $16}' | awk 'BEGIN {FS="="} {print $1}'`
CM_parent_index=`echo $I | awk 'BEGIN {FS="."} {print $15}' | awk 'BEGIN {FS="="} {print $1}'`
cbQosCMDropPkt=`echo $I  | awk '{print $4}'`

#Set field separator back to original
IFS=$OldFS

### Get class-map name
#Match up the index from cbQosCMDropPkt to cbQosConfigIndex
   OID="1.3.6.1.4.1.9.9.166.1.5.1.1.2.$CM_parent_index.$cbQosCMDropPkt_index"
   cbQosConfigIndex=`snmpget $cmd_variables $host $OID | cut -d" " -f4`

   OID="1.3.6.1.4.1.9.9.166.1.7.1.1.1.$cbQosConfigIndex"
   cbQosCMName_classmap=`snmpget $cmd_variables $host $OID | sed 's/\"//g' | awk '{print $4}'`
### 

#Clear variables
cbQosParentObjectsIndex1=
cbQosParentObjectsIndex2=
cbQosConfigIndex2=
cbQosCMName_parent=

#What site does this class-map belong to (double query of cbQosParentObjectsIndex)
OID="1.3.6.1.4.1.9.9.166.1.5.1.1.4.$CM_parent_index.$cbQosCMDropPkt_index"
cbQosParentObjectsIndex1=`snmpget $cmd_variables $host $OID |awk '{print $4}'`

if [ $cbQosParentObjectsIndex1 -ne $CM_parent_index ]; then

   OID="1.3.6.1.4.1.9.9.166.1.5.1.1.4.$CM_parent_index.$cbQosParentObjectsIndex1"
   cbQosParentObjectsIndex2=`snmpget $cmd_variables $host $OID |awk '{print $4}'`

   OID="1.3.6.1.4.1.9.9.166.1.5.1.1.2.$CM_parent_index.$cbQosParentObjectsIndex2"
   cbQosConfigIndex2=`snmpget $cmd_variables $host $OID |awk '{print $4}'`

   OID="1.3.6.1.4.1.9.9.166.1.7.1.1.1.$cbQosConfigIndex2"
   cbQosCMName_parent=`snmpget $cmd_variables $host $OID |sed 's/\"//g' | awk '{print $4}'`

else

   OID="1.3.6.1.4.1.9.9.166.1.5.1.1.2.$CM_parent_index.$cbQosParentObjectsIndex1"
   cbQosParentObjectsIndex2=`snmpget $cmd_variables $host $OID |awk '{print $4}'`

   OID="1.3.6.1.4.1.9.9.166.1.6.1.1.1.$cbQosParentObjectsIndex2"
   cbQosCMName_parent=`snmpget $cmd_variables $host $OID | sed 's/\"//g' | awk '{print $4}'`

fi

        echo $timestamp,$customer,$host,$host_name,$cbQosCMName_parent,$cbQosCMName_classmap,$cbQosCMDropPkt >> $log_file

 i=`echo $i+1|bc`
done
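To keep the log file fed for Splunk, I run the script from cron. A sample crontab entry follows; the five-minute interval and the install path are assumptions, adjust for your environment:

```
# Hypothetical crontab entry: poll QOS drops every 5 minutes
*/5 * * * * /usr/local/bin/get_qos-v3.sh Cust1 10.10.10.4 USA-Winslow DES
```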

Tuesday, November 19, 2013

Code accepted into Splunk App!

Bill Matthews informed me that the script I wrote and referenced in a previous post has made it into the Hurricane Labs Vulnerability Management v 1.5 app for Splunk!

 http://apps.splunk.com/app/1093/

They cleaned it up and put it in /opt/splunk/etc/apps/HurricaneVulnerabilityManagement/bin/Nessus.sh
#!/bin/bash

#Variables
SPLUNK_NESSUS=/mnt/nessus
SERVER="x.x.x.x"

#Retrieve AUTH token
token="$(/usr/bin/wget -q --no-check-certificate --post-data 'login=USERNAME&password=PASSWORD' https://$SERVER:8834/login -O - | grep -Po '(?<=token\>)[^\<]+(?=\<\/token)')"

#Get list of reports
/usr/bin/wget -q --no-check-certificate --post-data "token=$token" https://$SERVER:8834/report/list -O - | grep -Po '(?<=name\>)[^\<]+(?=\<\/name)' > /tmp/reports

#Get Friendly Names
/usr/bin/wget -q --no-check-certificate --post-data "token=$token" https://$SERVER:8834/report/list -O - | grep -Po '(?<=readableName\>)[^\<]+(?=\<\/readableName)' > /tmp/names

#Merge two files
/usr/bin/pr -tmJ --sep-string=" " /tmp/reports /tmp/names > /tmp/named.reports

for i in $(cut -d' ' -f1 /tmp/named.reports);
do
#Get Filenames for reports
FILENAME=$(/usr/bin/wget -q --no-check-certificate --post-data 'token='$token'&report='$i'&xslt=csv.xsl' https://$SERVER:8834/file/xslt -O - | grep -Po '(?<=/file/xslt/download/\?fileName=)[^\"]+(?=\"\>)')

#Get files
#build Readable name to report number match
READABLENAME=$(grep $i /tmp/named.reports | cut -d' ' -f2- --output-delimiter='')
sleep 5
/usr/bin/wget -q --no-check-certificate --post-data 'token='$token'&fileName='$FILENAME'&step=2' https://$SERVER:8834/file/xslt/download -O $SPLUNK_NESSUS/$READABLENAME.csv;
done;

#Cleanup
#rm /tmp/reports
#rm /tmp/names
#rm /tmp/named.reports


Wednesday, October 9, 2013

Importing Nessus CSV reports to SPLUNK from the Command Line!

Problem solved! Hurricane Labs provides a nice Splunk app to consume Nessus CSV files. But I did not want to manually download a new CSV from the Nessus web interface and then move it to my Splunk server. I could have made a Samba share from my Splunk server to my PC and just saved the output from the Nessus web interface to the share... still too much manual work!

After a lot of searching I found some good information on the Nessus discussion pages:

https://discussions.nessus.org/message/17812#17812
cmerchant@responsys.com answers their own question:

#!/bin/bash

AUTH=$(wget --no-check-certificate --post-data 'login=nessus&password=password' https://server:8834/login -O -| grep -Po '(?<=token\>)[^\<]+(?=\<\/token)')
FILE=$(wget --no-check-certificate --post-data 'token='$AUTH'&report=XXXXXX&xslt=csv.xsl' https://server:8834/file/xslt -O - | grep -Po '(?<=/file/xslt/download/\?)[^\"]+(?=\"\>)')

wget --no-check-certificate --post-data 'token='$AUTH'&'$FILE'&step=2' https://server:8834/file/xslt/download -O test.csv

This got me moving toward a solution. I had never done any web-page parsing with wget and JavaScript, so it was about time to learn...

My requirements were:

  • No interaction - must be able to be run in cron
  • Grab all completed Nessus results
  • Save the file with the Friendly Report name so Splunk can use the file name as the Report Name

Here are the results. This needs some cleanup and more documentation, but it is completely usable as is, except that you will need to replace xxxxxx with your password and x.x.x.x with your Nessus server IP.
(Word wrap didn't play nice here, so be careful with your cut and paste.)


#!/bin/bash

#Variables
SPLUNK_NESSUS=/mnt/nessus

#Retrieve AUTH token
token="$(/usr/bin/wget -q --no-check-certificate --post-data 'login=nessus&password=xxxxxx' https://x.x.x.x:8834/login -O - | grep -Po '(?<=token\>)[^\<]+(?=\<\/token)')"

#Get list of reports
/usr/bin/wget -q --no-check-certificate --post-data "token=$token" https://x.x.x.x:8834/report/list -O - | grep -Po '(?<=name\>)[^\<]+(?=\<\/name)' > /tmp/reports

#Get Friendly Names
/usr/bin/wget -q --no-check-certificate --post-data "token=$token" https://x.x.x.x:8834/report/list -O - | grep -Po '(?<=readableName\>)[^\<]+(?=\<\/readableName)' > /tmp/names

#Merge two files
/usr/bin/pr -tmJ --sep-string=" " /tmp/reports /tmp/names > /tmp/named.reports

for i in $(cut -d' ' -f1 /tmp/named.reports);
do
#Get Filenames for reports
FILENAME=$(/usr/bin/wget -q --no-check-certificate --post-data 'token='$token'&report='$i'&xslt=csv.xsl' https://x.x.x.x:8834/file/xslt -O - | grep -Po '(?<=/file/xslt/download/\?fileName=)[^\"]+(?=\"\>)')

#Get files
#build Readable name to report number match
READABLENAME=$(grep $i /tmp/named.reports | cut -d' ' -f2- --output-delimiter='')
sleep 5
/usr/bin/wget -q --no-check-certificate --post-data 'token='$token'&fileName='$FILENAME'&step=2' https://x.x.x.x:8834/file/xslt/download -O $SPLUNK_NESSUS/$READABLENAME.csv;
done;

#Cleanup
rm /tmp/reports
rm /tmp/names
rm /tmp/named.reports

#note
# Remove files in /opt/nessus/var/nessus/users/nessus/files on nessus server

If you use this please send me an email rossw@woodhome.com



Wednesday, May 22, 2013

Splunk: Creating eventtypes from csv to name VLANS

Everyone got the VLAN name lookup working from the last post? You did? Really, someone is listening?

Next, let's use the information in the internal_networks.csv file to create event types and really change the way we search in Splunk.

While researching this topic, I learned that you CANNOT do subsearches in eventtypes.conf.
I was hoping to do something like this (don't try it, it doesn't work):

eventtypes.conf
[vlan:Guest] 
search src_ip=172.30.21.0/24 |lookup vlan network AS src_ip OUTPUT name AS Src_VLAN

So I had to find an easy way to parse our CSV. I love trying to do all my heavy lifting on one line in Linux, so I challenged myself: can it be done?

As a reminder, here is our internal_networks.csv. I have cleaned it up to conform to the Common Information Model as best I could: no spaces in the names and no capital letters.

network,name
"192.168.1.0/24","corporate"
"192.168.2.0/24","voice"
"192.168.3.0/24","nosc_tac"
"192.168.4.0/24","servers"
"192.168.5.0/24","engineering"
"192.168.6.0/24","security"
"192.168.7.0/24","unassigned"
"192.168.8.0/24","it_engineering"
"192.168.9.0/24","human_resources"
"192.168.10.0/24","call_manager"
"192.168.11.0/24","wireless"
"192.168.12.0/24","executive_office"
"192.168.13.0/24","unassigned"
"192.168.14.0/24","voip"
"192.168.15.0/24","finance"
"192.168.16.0/24","marketing"
"192.168.17.0/24","pm_users"
"192.168.18.0/24","sales"
"192.168.19.0/24","consultants"
"192.168.20.0/24","procurement"
"192.168.21.0/24","guest"
"192.168.255.0/24","255"

...and with a single beautiful line you can create a new eventtypes.conf from our internal_networks.csv.

sudo awk -F"\""  'NR!=1{ print "[vlan:"$4"]" "\n", "search = src_ip="$2"\n"}' /opt/splunk/etc/apps/search/lookups/internal_networks.csv >> /opt/splunk/etc/system/local/eventtypes.conf
You might have to change permissions to append to /opt/splunk/etc/system/local/eventtypes.conf.
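To see what the one-liner produces, here is a self-contained demo run against a two-row sample of the CSV (throwaway /tmp paths; the real command reads internal_networks.csv). Note the single leading space before "search" that awk's output-field separator adds:

```shell
#!/bin/sh
# Demo of the eventtype-generating awk line against a small sample CSV.
cat > /tmp/demo_networks.csv <<'EOF'
network,name
"192.168.1.0/24","corporate"
"192.168.21.0/24","guest"
EOF

awk -F"\""  'NR!=1{ print "[vlan:"$4"]" "\n", "search = src_ip="$2"\n"}' /tmp/demo_networks.csv
# Prints one stanza per CSV row:
# [vlan:corporate]
#  search = src_ip=192.168.1.0/24
#
# [vlan:guest]
#  search = src_ip=192.168.21.0/24
```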

And now look what we have: well-defined event types! Now you can stop guessing what 192.168.3.45 is. It is on the nosc_tac VLAN!


Additionally, you now have the ability to search by VLAN name:
eventtype=vlan:sales or even try eventtype=vlan:* | top eventtype

Remember: if your internal_networks.csv changes, you will have to regenerate your eventtypes.conf with the magical awk line from above.

Comments?

Ross Warren
Cyber Security, CISSP, GCIH, GSEC 



Tuesday, May 21, 2013

SPLUNK: Finding VLAN to VLAN traffic

We are most often concerned with traffic flowing out of or into the network. That is where the bad guys start from and most often show their intentions. But what about the embedded bad guy who is already in your network?

**For whatever reason** your IDS missed it, or they were already in before you deployed your IDS...

I am talking about internal VLAN traffic, from the Marketing VLAN to the Finance VLAN. That probably shouldn't be happening, and we need to watch for it.

Assumption: Your internal network numbering is based off of 192.168.0.0/16

So a simple Splunk search would reveal traffic being sourced from internal PC/Laptop hosts to internal PC/Laptop hosts
src_ip=192.168.0.0/16 AND dest_ip=192.168.0.0/16
but this ends up with a lot of events where it is hard to decipher what is going where, and we don't have any "nice" names to determine whether an infected Marketing PC is trying to get to the Finance VLAN.

In come Splunk lookups! Here is the reference doc at Splunk: Splunk Lookup Command, but I will break down the steps here.

1) First create the csv file where your VLAN to Name translation is:
sudo vi /opt/splunk/etc/apps/search/lookups/internal_networks.csv
network,name
"192.168.1.0/24","Corporate"
"192.168.2.0/24","Voice"
"192.168.3.0/24","Operations"
"192.168.4.0/24","Server VLAN"
"192.168.5.0/24","Engineering"
"192.168.6.0/24","Security"
"192.168.7.0/24","Unassigned"
"192.168.8.0/24","IT Engineering"
"192.168.9.0/24","Human Resources"
"192.168.10.0/24","Unassigned"
"192.168.11.0/24","Wireless"
"192.168.12.0/24","Executive Office"
"192.168.13.0/24","Unassigned"
"192.168.14.0/24","VoIP VLAN"
"192.168.15.0/24","Finance"
"192.168.16.0/24","Marketing"
"192.168.17.0/24","PM Users"
"192.168.18.0/24","Sales"
"192.168.19.0/24","Consultants"
"192.168.20.0/24","Procurement"
"192.168.21.0/24","Guest"

2) Then create the lookup:  Additional documentation at Splunk Docs - Configure field lookups

sudo vi /opt/splunk/etc/apps/search/local/props.conf
[*]
LOOKUP-vlan = vlan network OUTPUT name
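One gotcha I will flag as an assumption rather than gospel: the LOOKUP line above references a lookup table named vlan, and for matching a single IP against CIDR blocks your install may also need that table defined in transforms.conf with a CIDR match type, along these lines:

```
sudo vi /opt/splunk/etc/apps/search/local/transforms.conf
[vlan]
filename = internal_networks.csv
match_type = CIDR(network)
```

Check this against your Splunk version's documentation before relying on it.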


3) Test with a simple search:

(src_ip=192.168.0.0/16 AND dest_ip=192.168.0.0/16)| lookup vlan network AS src_ip OUTPUT name AS Src_VLAN 

We can now see a new field SRC_VLAN !

4) Finalize the search by removing the "Server VLAN (192.168.4.0/24)" and any broadcasts (255).

(src_ip=192.168.0.0/16 AND dest_ip=192.168.0.0/16) AND src_ip!=192.168.4.0/24 AND dest_ip!=192.168.4.0/24 NOT 255 
| lookup vlan network AS src_ip OUTPUT name AS Src_VLAN 
| lookup vlan network AS dest_ip OUTPUT name AS Dest_VLAN 
| where Src_VLAN != Dest_VLAN |chart count by Src_VLAN, Dest_VLAN

So now we know what VLANs are making connections to each other.
...but what is normal? That is up to you to decide. Should the Finance VLAN be making connections to the "Corporate VLAN"? And, more importantly, I should talk to IT about a better description than "Corporate VLAN"...

Next post: using the lookup we created to name the VLANs on the fly in any search we do.

Ross Warren
Cyber Security, CISSP, GCIH, GSEC