Would anyone happen to have documentation for configuring PF_RING ZC and Bro? I have ntop's PF_RING and ixgbe driver packages installed and the proper license in /etc/pf_ring, and I've compiled Bro against the ntop libraries. However, I'm seeing the kernel error below, along with a ton of "split routing" messages in weird.log, so I suspect the flows aren't being load-balanced correctly.
Jun 22 15:10:03 win-csignsm-01 kernel: [11060.244524] [PF_RING] Unable to activate two or more ZC sockets on the same interface eth6/link direction
The monitored NIC is an Intel X520-LR1.
Contents of /etc/pf_ring/zc/ixgbe/ixgbe.conf:
RSS=10 allow_unsupported_sfp=0
Contents of /etc/pf_ring/hugepages.conf:
node=1 hugepages=1024
And Bro is configured as:
[MID_INT]
type=worker
host=10.20.30.123
interface=zc:eth6
lb_method=pf_ring
lb_procs=10
pin_cpus=10,11,12,13,14,15,16,17,18,19
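From the PF_RING docs, my understanding is that ZC will not allow two sockets on the same interface/queue, so either each worker has to open its own RSS queue (zc:eth6@0 through zc:eth6@9) or the workers have to attach to a zbalance_ipc cluster. Is something like the following the intended setup? (Untested on my end; cluster id 99 is arbitrary.)

# Hypothetical: hash-balance eth6 into 10 ZC queues under cluster id 99
zbalance_ipc -i zc:eth6 -c 99 -n 10 -m 1 -g 0

[MID_INT]
type=worker
host=10.20.30.123
interface=zc:99
lb_method=pf_ring
lb_procs=10
pin_cpus=10,11,12,13,14,15,16,17,18,19

Or should the workers bind to the per-queue interfaces directly?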
Thanks!
-Dave
We are getting a spurious sourcetype when ingesting Bro JSON logs into Splunk.
Specifically, we are getting a sourcetype of bro_00. There is no log file with that name, and the splunkforwarder is just pushing the raw logs to Splunk for indexing; there is no massaging of the log data. Does anyone know why this sourcetype is popping up?
Does Bro have an event that fires for the first packet after the TCP
3-way handshake? Is there a way to get at that packet easily, or does
it require a lot of state to be maintained in the script?
I am trying to get at the first packet following the 3-way handshake
because that is where the client hello of the SSL handshake should be.
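For the SSL case specifically, my reading of the docs suggests the ssl_client_hello event fires on the client hello itself, so an untested sketch like the one below would avoid keeping any state (signature as I read it from the 2.4 docs):

@load base/protocols/ssl

# Untested: fires once per client hello, no handshake bookkeeping needed
event ssl_client_hello(c: connection, version: count, possible_ts: time,
                       client_random: string, session_id: string,
                       ciphers: index_vec)
    {
    print fmt("client hello %s -> %s:%s",
              c$id$orig_h, c$id$resp_h, c$id$resp_p);
    }

But I would still like to know whether there is a general hook for the first data packet after the handshake.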
--
Ben
> On Jul 13, 2016, at 4:08 PM, Obndnar smith <obdnanr(a)gmail.com> wrote:
>
> It's not using pf_ring! I recompiled it following the directions; any advice on getting it to work?
>
> libpcap.so.0.8 => /usr/lib/x86_64-linux-gnu/libpcap.so.0.8 (0x00007f90c13c7000)
Try deleting the build/ directory in the Bro source tree and then running ./configure again with the right options.
The configure output will say which libpcap it is using. We install PF_RING into /opt/pfring, so we build Bro using
--with-pcap=/opt/pfring
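After a clean rebuild, the configure summary and ldd should both point at the PF_RING libpcap. Roughly (paths per our layout; your install prefix may differ):

rm -rf build/
./configure --with-pcap=/opt/pfring
make && make install
ldd /usr/local/bro/bin/bro | grep pcap
# expect something like: libpcap.so.1 => /opt/pfring/lib/libpcap.so.1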
--
- Justin Azoff
We're getting around 30-160 Mbps of traffic on our sensor, and when we
use multiple workers they are all pegged at 96-99% CPU utilization. I've
tried disabling our custom scripts, and that wasn't the culprit. I've tried
4 workers and 8 workers, and I've tried pinning them to different CPUs, but
nothing has changed. We're using PF_RING. Any idea how I can
troubleshoot this issue? We're seeing packet loss, but with this low a
traffic rate I wouldn't expect any loss.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32027 root 20 0 1527600 1.430g 11696 R 99.9 4.7 10:37.82 bro
32018 root 20 0 1520192 1.423g 11792 R 99.6 4.6 10:38.83 bro
32019 root 20 0 1525308 1.428g 11780 R 99.6 4.7 10:39.06 bro
32032 root 20 0 1522564 1.425g 11712 R 99.6 4.6 10:39.20 bro
32002 root 20 0 1571588 1.472g 11716 R 98.9 4.8 10:36.93 bro
32029 root 20 0 1529472 1.432g 11788 R 98.9 4.7 10:38.50 bro
32016 root 20 0 1341716 1.252g 11712 R 97.9 4.1 10:33.21 bro
32006 root 20 0 1422616 1.330g 11712 R 96.0 4.3 10:20.19 bro
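Also, is broctl the right way to confirm that PF_RING is actually spreading flows across the workers? My assumption is that if the load balancing were broken, each worker would report receiving the full stream rather than a share of it:

broctl netstats
broctl capstats

Does that sound right?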
Thanks
Howdy all,
I recently began poking around Bro and made my first attempt at writing a
script. Its purpose is to detect whether or not traffic on port 443 uses
SSL. The moment I enabled the script, my CPU usage went to 100%, and the
same happened to my memory. Without the script enabled, the machine runs
fine and Bro only uses about 50-60% of the resources. I also tried turning
off everything else in local.bro, but the result was the same. The code is
as follows (be warned, it's pretty rough):
@load base/protocols/ssl
@load base/frameworks/notice
@load base/protocols/conn
@load base/utils/directions-and-hosts
module conn;

export {
    redef enum Notice::Type += {
        Unencrypted_Traffic
    };

    const List_of_Hosts = LOCAL_HOSTS &redef;
    const Encryption = "SSL" &redef;
}

event new_connection(c: connection) &priority=3
    {
    if ( ! addr_matches_host(c$id$resp_h, List_of_Hosts) )
        return;

    local port_number = c$conn$id$resp_p;
    local ip_address = c$conn$id$resp_h;
    local encrypted = c$conn$service;

    if ( port_number != 443/tcp )
        return;

    if ( encrypted != Encryption )
        NOTICE([$note=Unencrypted_Traffic,
                $msg=fmt("Unencrypted traffic"),
                $conn=c,
                $identifier=cat(c$id$resp_h, c$id$resp_p)]);
    }
It is probably something I am not catching in my code, or my limited
knowledge of Bro, but any help is much appreciated.
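One idea I had but have not tried: moving the check to connection_state_remove, at which point c$service should actually be populated, roughly like this (untested; I am assuming the detected protocol shows up in c$service as "SSL"):

event connection_state_remove(c: connection)
    {
    if ( c$id$resp_p != 443/tcp )
        return;

    if ( ! addr_matches_host(c$id$resp_h, List_of_Hosts) )
        return;

    # c$service holds the protocols identified on this connection
    if ( "SSL" !in c$service )
        NOTICE([$note=Unencrypted_Traffic,
                $msg="Unencrypted traffic on 443/tcp",
                $conn=c,
                $identifier=cat(c$id$resp_h, c$id$resp_p)]);
    }

Would that be a better approach?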
Thanks
Connor
I'm having an issue where Bro is not extracting the host field correctly
from captured HTTP traffic (in the form of a PCAP). I've verified it has
nothing to do with split routing, and when I manually examined the PCAP in
Wireshark the host field was present in every request. I am a bit puzzled.
This is significant for our use case because we will be using Bro to
monitor for malicious URLs and the like.
I have my http.log, weird.log, and the PCAP file itself. Unfortunately, I
cannot attach the PCAP due to its size and the mailing list rejecting the
message. Please reply and I will send the PCAP individually.
Any advice is appreciated.
Good day everyone,
I wanted to share what I have done to upload my Bro cluster's logs to
Azure HDInsight blob storage and then create Hive tables from the logs
each day. My Bro cluster averages around 55-60 Gbps, so sorting through
logs via zgrep, or even Elastic, is far from ideal. Looking for a specific
file ID used to take around 30 minutes across all of a day's logs, but
with HDInsight it took under a minute.
Now I am sure there are some big-data scientists on this forum, so forgive
my newbness with Hadoop (HDInsight is Hortonworks Hadoop); I am also a
basic Python programmer, so the script is simple.
It is my hope that this work helps someone else, or at least gets them
started. I will see if I can sanitize my Bro cluster build documents and
send those out in the hope that they also help.
Here we go:
First I rename each Bro log file to be Hadoop-naming-convention friendly
(Hadoop doesn't like colons), and then each hour I upload the log files
with the shell script below, which calls a Python upload script:
=================
/etc/BRO-Azure-Upload.sh
#!/bin/bash
# Runs each hour: renames the last hour's log files to be Azure Hadoop
# friendly, then calls a Python script to upload each file to Azure blob
# storage.

DIR=/DATA/bro-logs
DATE=`date +%Y-%m-%d "--date=last hour"`
TIME=`date +%H "--date=last hour"`

# Change to the working directory
cd $DIR/$DATE

# Replace every ":" in the file names with "_"
ls -1 *.${TIME}* | while read FILE ; do
    newfile="$(echo ${FILE} | sed -e 's/:/_/g')"
    mv "${FILE}" "${newfile}"
done

# Add the building name to each file by finding the _[0-9][0-9].log.gz
# files, in case the first building's logs have already been tagged
ls -1 *.${TIME}_*_[0-9][0-9].log.gz | while read FILE ; do
    newfile="$(echo ${FILE} | sed -e 's/\.log\.gz$/.B31.log.gz/')"
    mv "${FILE}" "${newfile}"
done

# Call the Python script to upload the last hour's logs (tagged B31 above)
for LOG in $(ls -1 *.${TIME}_*.B31.log.gz) ; do
    /DATA/Azure-Blob-PROMPT.py $LOG $DATE
done
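I kick the shell script off from cron shortly after each hour rolls over; the schedule below is only an illustration, adjust the timing and path to your setup:

# m h dom mon dow  command
5 * * * * /etc/BRO-Azure-Upload.sh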
Here is the Python script I use to upload the logs:
=======================
Azure-Blob-PROMPT.py
=======================
Requires Python 2.7 (and the azure-storage package) on CentOS/RHEL.
#!/usr/local/python2.7/bin/python2.7
import sys

from azure.storage.blob import BlockBlobService, ContentSettings

FILE = sys.argv[1]  # log file to upload
DIR = sys.argv[2]   # date directory, used as the blob name prefix

storage_account_name = '<NAME>'
# The storage key lives in a file in the parent directory named
# <storage_account_name>.storagekey
storage_account_key = '<KEY>'
storage_container_name = '<CONTAINER>'

block_blob_service = BlockBlobService(account_name=storage_account_name,
                                      account_key=storage_account_key)

# Create the container if it does not already exist
block_blob_service.create_container(storage_container_name)

# Container names cannot contain "/", so the date directory goes into the
# blob name instead; HDInsight treats "/" in blob names as directories
blob_name = DIR + "/" + FILE

block_blob_service.create_blob_from_path(
    storage_container_name,
    blob_name,
    FILE,
    content_settings=ContentSettings(content_type='application/octet-stream'),
    progress_callback=None,
    max_connections=5,
    max_retries=5,
    retry_wait=2
)
==============
Here are the commands I use to create the Hive tables. A cron job creates
a directory named for the date, and then I make one table holding the
entire day's logs.
=============
SSL:
=================
DROP TABLE IF EXISTS SSLDAILY;
CREATE EXTERNAL TABLE SSLDAILY (ts TIMESTAMP, uid STRING, orig_h STRING,
orig_p STRING, resp_h STRING, resp_p STRING, version STRING, cipher STRING,
curve STRING, server_name STRING, resumed STRING, last_alert STRING,
next_protocol STRING, established STRING, cert_chain_fuids STRING,
client_cert_chain_fuids STRING, subject STRING, issuer STRING,
client_subject STRING, client_issuer STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/ssl.*'
INTO TABLE SSLDAILY;
=================
CONN:
=================
DROP TABLE IF EXISTS CONNDAILY;
CREATE EXTERNAL TABLE CONNDAILY (ts TIMESTAMP, uid STRING, orig_h STRING,
orig_p STRING, resp_h STRING, resp_p STRING, proto STRING, service STRING,
duration DECIMAL, orig_bytes INT, resp_bytes INT, conn_state STRING,
local_orig STRING, local_resp STRING, missed_bytes INT, history STRING,
orig_pkts INT, orig_ip_bytes INT, resp_pkts INT, resp_ip_bytes INT,
tunnel_parents STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/conn.*'
INTO TABLE CONNDAILY;
==================
DNS:
==================
DROP TABLE IF EXISTS DNSDAILY;
CREATE EXTERNAL TABLE DNSDAILY (ts TIMESTAMP, uid STRING, orig_h STRING,
orig_p STRING, resp_h STRING, resp_p STRING, proto STRING, trans_id STRING,
query STRING, qclass STRING, qclass_name STRING, qtype STRING,
qtype_name STRING, rcode STRING, rcode_name STRING, AA STRING, TC STRING,
RD STRING, RA STRING, Z STRING, answers STRING, TTLs STRING, rejected STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/dns.*'
INTO TABLE DNSDAILY;
==================================
DPD (Dynamic Protocol Detection):
==================================
DROP TABLE IF EXISTS DPDDAILY;
CREATE EXTERNAL TABLE DPDDAILY (ts TIMESTAMP, uid STRING, orig_h STRING,
orig_p STRING, resp_h STRING, resp_p STRING, proto STRING, analyzer STRING,
failure_reason STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/dpd.*'
INTO TABLE DPDDAILY;
==================
FILES:
==================
DROP TABLE IF EXISTS FILESDAILY;
CREATE EXTERNAL TABLE FILESDAILY (ts TIMESTAMP, fuid STRING,
tx_hosts STRING, rx_hosts STRING, conn_uids STRING, source STRING,
depth STRING, analyzers STRING, mime_type STRING, filename STRING,
duration DECIMAL, local_orig STRING, is_orig STRING, seen_bytes INT,
total_bytes INT, missing_bytes INT, overflow_bytes INT, timedout STRING,
parent_fuid STRING, md5 STRING, sha1 STRING, sha256 STRING, extracted STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/files.*'
INTO TABLE FILESDAILY;
==================
FTP:
==================
DROP TABLE IF EXISTS FTPDAILY;
CREATE EXTERNAL TABLE FTPDAILY (ts TIMESTAMP, uid STRING, orig_h STRING,
orig_p STRING, resp_h STRING, resp_p STRING, `user` STRING, password STRING,
command STRING, arg STRING, mime_type STRING, file_size STRING,
reply_code STRING, reply_msg STRING, data_channel_passive STRING,
data_channel_orig_h STRING, data_channel_resp_h STRING,
data_channel_resp_p STRING, fuid STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/ftp.*'
INTO TABLE FTPDAILY;
==================
HTTP:
==================
DROP TABLE IF EXISTS HTTPDAILY;
CREATE EXTERNAL TABLE HTTPDAILY (ts TIMESTAMP, uid STRING, orig_h STRING,
orig_p STRING, resp_h STRING, resp_p STRING, trans_depth STRING,
method STRING, host STRING, uri STRING, referrer STRING, user_agent STRING,
request_body_len INT, status_code STRING, status_msg STRING,
info_code STRING, info_msg STRING, filename STRING, tags STRING,
username STRING, password STRING, proxied STRING, orig_fuids STRING,
orig_mime_types STRING, resp_fuids STRING, resp_mime_types STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/http.*'
INTO TABLE HTTPDAILY;
==================
INTEL:
==================
DROP TABLE IF EXISTS INTELDAILY;
CREATE EXTERNAL TABLE INTELDAILY (ts TIMESTAMP, uid STRING, orig_h STRING,
orig_p STRING, resp_h STRING, resp_p STRING, fuid STRING,
file_mime_type STRING, file_desc STRING, seen_indicator STRING,
seen_indicator_type STRING, seen_where STRING, seen_node STRING,
sources STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/intel.*'
INTO TABLE INTELDAILY;
==================
IRC:
==================
DROP TABLE IF EXISTS IRCDAILY;
CREATE EXTERNAL TABLE IRCDAILY (ts TIMESTAMP, uid STRING, orig_h STRING,
orig_p STRING, resp_h STRING, resp_p STRING, nick STRING, `user` STRING,
command STRING, value STRING, addl STRING, dcc_file_name STRING,
dcc_file_size INT, dcc_mime_type STRING, fuid STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/irc.*'
INTO TABLE IRCDAILY;
==================
KERBEROS:
==================
DROP TABLE IF EXISTS KERBEROSDAILY;
CREATE EXTERNAL TABLE KERBEROSDAILY (ts TIMESTAMP, uid STRING,
orig_h STRING, orig_p STRING, resp_h STRING, resp_p STRING,
request_type STRING, client STRING, service STRING, success STRING,
error_msg STRING, `from` STRING, till STRING, cipher STRING,
forwardable STRING, renewable STRING, client_cert_subject STRING,
client_cert_fuid STRING, server_cert_subject STRING, server_cert_fuid STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
LOAD DATA INPATH
'wasb://<CONTAINER>@<STORAGE_ACCT>.blob.core.windows.net/<DIR>/kerberos.*'
INTO TABLE KERBEROSDAILY;
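As an example of the payoff: once the day's tables exist, finding a specific file ID is a single query (the FUID below is just a placeholder):
==================
Example query:
==================
SELECT * FROM FILESDAILY WHERE fuid = '<FUID>';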
I welcome any feedback or questions.
=====================
Brandon Glaze
bglaze(a)gmail.com
"Lead me, follow me, or get the hell out of my way."
- General George Patton Jr