Recently I have been having some problems with Bro and PF_RING in a cluster.
On my server, when I have fewer than 32 worker threads (rings),
everything is okay, but when I use more than 32 worker threads, PF_RING
starts to receive duplicate packets. For example, with fewer than 32 rings, if I
send 400,000 packets to the server, the PF_RING info in /proc shows
400,000 packets in the rings; but with more than 32 rings, I get 800,000
packets with 33 rings, 1,200,000 packets with 34 rings, and so on.
I suspect there may be a rule that PF_RING or a Bro cluster can only
support up to 32 rings or worker threads on a single server, or some other
limit I'm hitting.
Any insight would be helpful.
Just joined the list and had a question … that I apparently sent to customer support instead. Oops.
Anyway, I'm building a FreeBSD server and was wondering what the best practice / placement for Bro would be.
Essentially it's a forward-facing firewall based on FreeBSD. So I was wondering: is it best to deploy Bro on the host OS, or to create a jail or two and funnel traffic through that? I also wanted to know if there are any special considerations with jails / setup.
Some options I came up with:
internet > firewall > lan/dmz
internet > firewall > nginx proxy > lan/dmz
internet > firewall > dmz jail > NO lan
internet > firewall > bro jail > proxy jail > lan/dmz
We are still having a problem with our Bro cluster and logging. During
peak times the manager will slowly consume all available memory while the
logs sent to disk are delayed by an hour or more.
Does anyone know the official bug ID for this within
I've tracked this problem for a while now and tried all variations of the
proposed fixes: the flare patch, the no-flare patch, a segmented cluster with
one manager per box, and an architecture change from Linux+PF_RING to
FreeBSD+Myricom. Currently we are using a standard build of bro-2.5-beta
in a cluster configuration with one dedicated manager and three dedicated
sensors, each using both ports of a Myricom card with 22 workers attached
to each port (1 manager, 1 logger, 12 proxies, 6 worker nodes with 22 procs
each, 132 total).
Restarting the cluster on a regular basis is much easier without PF_RING,
but that only partially treats the symptom. In that regard the last
proposed solution is the most expensive: using faster CPUs, which would
reduce the worker count. But will that really solve the problem? I'm
more interested in defining what the problem actually is.
FWIW, there's some text below to illustrate; the dates are somewhat old but
it's still a representative example.
- Manager node is nearly out of memory (2,800 MB left)
- Workers have moderate CPU usage (60%)
- Logs on the manager node are 25 minutes behind (21:05 vs 20:40)
- Initiated cluster restart at 21:06, completed at 21:11
- Workers have moderate CPU usage
- Logs are 16 minutes behind
Earlier the logs were roughly two hours behind.
[bro@mgr /opt/bro]$ date -r 1471373408 (most recent conn.log timestamp)
Tue Aug 16 18:50:08 UTC 2016
[bro@mgr /opt/bro]$ date
Tue Aug 16 20:43:45 UTC 2016
Bro manager process is using 70G of memory and the system is swapping:
last pid: 96557; load averages: 46.37, 53.09,
55 processes: 8 running, 47 sleeping
CPU: 7.7% user, 2.1% nice, 68.1% system, 0.2% interrupt, 21.9% idle
Mem: 103G Active, 2412M Inact, 19G Wired, 549M Cache, 331M Free
ARC: 15G Total, 89M MFU, 15G MRU, 29M Anon, 68M Header, 211M Other
Swap: 12G Total, 12G Used, 85M Free, 99% Inuse, 9248K In
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU
7305 bro 34 20 0 40121M 39498M uwait 10 31.7H 280.27% bro
7337 bro 1 96 5 70653M 61577M CPU36 36 868:45 59.96% bro
Currently, in this state, the logs are over two hours behind the current time.
bro@mgr:~ % date -r 1471374952 (most recent conn.log timestamp)
Tue Aug 16 19:15:52 UTC 2016
bro@mgr:~ % date
Tue Aug 16 21:27:04 UTC 2016
Memory usage over the past week:
Has anyone given any thought to the possibility of using a compressed-file
analyzer to open docx files and detect embedded Flash files, or
macros, in them? I realize that means we need such a file analyzer
first, but I have been thinking about alternate use cases for the analyzer,
and this one sprang to mind...
I've recently come to be responsible for a Bro server and am doing my best
to keep everything running smoothly at the moment.
We are running a cluster configuration on a single physical machine.
Recently we updated to Bro 2.5 from 2.4. Additionally, we modified our
cluster configuration to run 1 logger process alongside 16 workers, 3
proxies, and 1 manager process (previously we were running without the logger
and were seeing the manager crash regularly due to memory constraints).
The output log file structure had strange file naming for a short
period of time around 2am last night. It seems that the incorrect file
naming may correspond to the logger having crashed: when the logger
process is brought back online by the broctl cron task, it logs to a
strangely named directory for a short period of time.
Strange log directory naming:
[/bro/logs]$ du -h 20*
How can I ensure that when the logger comes back online after a crash, it
won't use a strange directory name?
Thanks for any thoughts or help!
I have two questions about the following pcap.
Bro reports the MIME type as "text/plain" for the response to the first HTTP GET.
However, Wireshark (and also CapTipper) says it is "text/html",
and text/html is clearly the correct one.
I think Bro does not rely solely on the Content-Type header (perhaps to guard
against malicious manipulation) but applies some heuristics; there seem to be
issues with those heuristics in this case.
The other question: there are 3 binary files in this pcap.
Bro extracts them just fine.
However, again there are some issues with the content type:
while their content type is application/x-msdownload, http.log and
files.log show a dash (not found).
Related to this issue, I have a local file-extraction Bro script; although
I have a definition mapping application/x-msdownload to a file extension, I am
not able to set the extension to exe, since meta$mime_type comes back empty.
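In case it helps, here is a minimal sketch of the guard I'd expect this script to need, assuming the standard file-analysis framework: the mime_type field of fa_metadata is declared &optional, so when no signature matches it is unset (not an empty string) and must be tested with ?$ before use. The extraction filename scheme here is purely illustrative.

```bro
event file_sniff(f: fa_file, meta: fa_metadata)
	{
	# meta$mime_type is &optional: an unmatched signature leaves it
	# unset rather than "", so guard before dereferencing.
	if ( ! meta?$mime_type )
		return;

	if ( meta$mime_type == "application/x-msdownload" )
		# Extract with an .exe extension (filename is illustrative).
		Files::add_analyzer(f, Files::ANALYZER_EXTRACT,
		                    [$extract_filename=fmt("%s.exe", f$id)]);
	}
```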
I guess I'm in this boat as well. Since my upgrade, Bro will stop
rotating logs at some point. I'm not running Bro via broctl. Here's
my process for log rotation:
redef Log::default_rotation_interval = 86400 secs;
redef Log::default_rotation_postprocessor_cmd = "archive-log";
LogRotationInterval = 86400
sudo /usr/local/bro/bin/broctl install
sudo ln -s /usr/local/bro/share/broctl/scripts/archive-log
sudo ln -s /usr/local/bro/share/broctl/scripts/broctl-config.sh
sudo ln -s /usr/local/bro/share/broctl/scripts/make-archive-name
sudo ln -s /usr/local/bro/share/broctl/scripts/expire-logs
sudo ln -s /usr/local/bro/share/broctl/scripts/delete-log
sudo ln -s /usr/local/bro/share/broctl/scripts/cflow-stats
sudo ln -s /usr/local/bro/share/broctl/scripts/stats-to-csv
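For comparison, rotation can also be handled entirely in-script instead of going through the archive-log symlinks. A rough sketch, assuming the default ASCII writer and using gzip as a stand-in postprocessor (the function name and gzip choice are mine, not from broctl):

```bro
redef Log::default_rotation_interval = 86400 secs;

# Illustrative postprocessor: compress the rotated file in place.
# info$fname is the rotated file's path on disk.
function gzip_rotated_log(info: Log::RotationInfo): bool
	{
	# No shell quoting here, for brevity; harden before real use.
	system(fmt("gzip %s", info$fname));
	return T;
	}

redef Log::default_rotation_postprocessors += {
	[Log::WRITER_ASCII] = gzip_rotated_log
};
```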
This works for a while, but at some point it stops. At the core I
believe it's because Bro, after some time, won't respond to a "normal"
kill: a "sudo killall bro" does nothing. Normally I'll run "sudo killall
bro", wait a minute, and then my spool directory will be empty, I'll
have an email with stats, and I'll have my new archive directory. Once
the problem hits, though, I have to kill -9 it in order to get it to
stop. I restarted this morning and will see how many days it goes.
Thank you.
If I have a cluster that contains 2 workers along with a proxy, logger, etc.,
and worker 1 watches and logs everything, is there a way I can tell worker 2
to log only a specific protocol rather than watch everything worker 1 does?
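One possible approach (an untested sketch): give worker 2 a node-specific BPF restriction keyed on its cluster node name. This assumes the restrict_filters table from the packet-filter framework; "worker-2" and the DNS-only filter are placeholders for your actual node name and protocol.

```bro
# Loaded on all nodes, but takes effect only on the node named
# "worker-2" in cluster-layout; other workers keep the full filter.
@if ( Cluster::is_enabled() && Cluster::node == "worker-2" )
redef restrict_filters += { ["dns-only"] = "port 53" };
@endif
```

Note this restricts what worker 2 captures, not just what it logs, so protocols outside the filter are invisible to that worker entirely.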
I was wondering if anyone can tell me why the sha256 hash functionality
isn't turned on by default for the files log.
I am working on something and needed to turn it on. I normally only use Bro
to process pcap files offline and have never used it on a live network.
Does it cause performance issues?
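For reference, turning it on amounts to attaching the SHA256 analyzer to each file yourself; this mirrors what the shipped hash-all-files policy script does, which by default only adds MD5 and SHA1 (presumably to keep per-file hashing cost down on live traffic):

```bro
# Attach a SHA256 analyzer to every file Bro sees; the hash then
# appears in files.log.  Analogous to
# policy/frameworks/files/hash-all-files.bro, extended to SHA256.
event file_new(f: fa_file)
	{
	Files::add_analyzer(f, Files::ANALYZER_SHA256);
	}
```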