Hi all,
Recently I have been having some problems with Bro and PF_RING in a cluster.
On my server, when I have fewer than 32 worker threads (rings), everything
is okay, but when I use more than 32 worker threads, PF_RING starts to
receive duplicate packets. For example, with fewer than 32 rings, if I send
400000 packets to the server, the PF_RING info in /proc shows 400000 packets
in the rings; but with more than 32 rings I get 800000 packets with 33 rings,
1200000 packets with 34 rings, and so on.
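For context, the workers come from broctl's PF_RING load balancing; the
relevant part of my node.cfg looks roughly like this (illustrative values;
lb_procs is what sets the number of rings per interface):

[worker-1]
type=worker
host=localhost
interface=eth0
lb_method=pf_ring
# duplicates appear once this goes past 32
lb_procs=33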
Is there some rule that PF_RING or a Bro cluster can only support 32 rings
or worker threads on one server, or is there some other explanation?
Any insight would be helpful.
Holla!
notice.log grows extremely large before it rotates, sometimes to 140G+. At
times it rotates to another log with a timestamp added to its name. This
started happening after turning on other analyzers.
Is there a way to suppress notice.log, or at least minimize the events
written to it? The events in the other logs are more important.
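The kind of thing I'm hoping for is something along these lines (just a
sketch based on my reading of the Notice framework; Weird::Activity here
stands in for whichever notice types are actually flooding the log):

# Drop the noisy notice types before they ever reach notice.log.
redef Notice::ignored_types += {
    Weird::Activity,  # substitute the actual chatty types
};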
There are also other logs that are extremely large, and I'm trying to
balance processing and disk space against visibility.
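For those, I'm wondering whether disabling whole streams is the intended
approach (again just a sketch; Weird::LOG as an example stream ID):

# Turn off log streams we can live without entirely.
event bro_init()
    {
    Log::disable_stream(Weird::LOG);
    }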
Any advice appreciated.
Merci!
Hello,
I've got a VMware instance of Ubuntu running Bro 2.6-beta2. I want Bro to
monitor the eth0 interface, which is directly receiving ERSPAN (GRE-tunneled)
data from a Cisco switch. I've tried a few different scenarios. I thought
Bro could, by default, recognize and decapsulate the real traffic from the
GRE tunnel (according to the Bro release notes it should be able to do
this), but so far when Bro runs it just records the GRE traffic in its
weird.log. I've also tried creating another tunnel interface, tun0, set up
as GRE on the Ubuntu instance, forwarding the traffic from eth0 to tun0 and
having Linux decapsulate it. That is not working either.
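In case it's relevant, the next thing I plan to check is the decapsulation
options, along these lines (assuming the Tunnel:: redefs are the right
knobs; as far as I can tell they already default to true):

# These should already be the defaults, but making them explicit:
redef Tunnel::enable_gre = T;
redef Tunnel::enable_ip = T;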
Has anyone gotten something similar to work, reading Cisco ERSPAN traffic
into Bro?
Adam Slagell decided earlier this year that it was time to step down from
his service as the chair of the Bro/Zeek project's Leadership Team[*].
(He'll be continuing as a member of the LT.) The LT has enthusiastically
selected Keith Lehigh of Indiana University to serve as its new chair.
Many of you will know Keith from his service as this year's program committee
chair of BroCon 2018.
Our thanks to Adam for his industrious and invaluable service!
Vern
[*] https://www.bro.org/team.html
FYI if you have an existing Bro git repo clone and want to update/pull
the master branch and all submodules recursively, it will require
syncing one time:
git submodule sync --recursive
That will update the URL for a git submodule that just changed.
Afterwards, the usual update command will work:
git submodule update --recursive --init
New git clones Just Work.
- Jon
Hi, I'm writing a Bro script to generate a telnet.log, as shown in the
attachment. But when I simulate several remote telnet login sessions, Bro
can't record all of the login actions; it loses nearly half of them. If I
run the same script against PCAPs, everything turns out normal. Thanks for
reading my letter.
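P.S. The core of the approach is roughly the following (a simplified sketch,
not the attached script itself; it assumes the stock login analyzer's
login_success event fires for the telnet sessions):

module Telnet;

export {
    redef enum Log::ID += { LOG };

    type Info: record {
        ts:   time    &log;
        id:   conn_id &log;
        user: string  &log;
    };
}

event bro_init()
    {
    # Create the telnet.log stream with the columns above.
    Log::create_stream(Telnet::LOG, [$columns=Info, $path="telnet"]);
    }

event login_success(c: connection, user: string, client_user: string,
                    password: string, line: string)
    {
    # One line per successful login.
    Log::write(Telnet::LOG, [$ts=network_time(), $id=c$id, $user=user]);
    }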
Hello Bro Members,
I apologize if this is not the right mailing list... I do not wish to spam
everyone. I just picked up Bro a few days ago, so I'm still learning the ropes.
I have a syntax question that I can't seem to find answered anywhere. How do
you do a nested switch case inside a record? I have some data,
0xAABBCCDD01020304 or 0xAABBCCDD01020405, where I need to verify that the
header is 0xAABBCCDD and then switch on the last two bytes, either 0x0304
or 0x0405. Is a switch inside a record good practice here, given that the
data length will change based on the command? Nesting one case directly
inside another, as I first tried, made the binpac compile step crash
("make[3]: *** [test_pac.h] Segmentation fault (core dumped)"), so below
I've pulled the inner case out into its own parameterized type.
Currently, I have:
enum cmd_codes {
    NOP           = 0x00000000,
    DEVICE_HEADER = 0xAABBCCDD,
    DEVICE_CMD2_1 = 0x0304,
    DEVICE_CMD2_2 = 0x0405
};

type Device_Header = record {
    header: uint32;  # 0xAABBCCDD
    cmd1:   uint16;  # 0x0102
    cmd2:   uint16;  # 0x0304 or 0x0405
} &byteorder=bigendian;

type Device_Response = record {
    header: Device_Header;
    data: case (header.header) of {
        DEVICE_HEADER -> payload: Device_Payload(header.cmd2);
        # All the rest
        default       -> unknown: bytestring &restofdata;
    };
} &byteorder=bigendian;

# The inner switch, pulled out into its own parameterized type,
# since nesting a case directly inside another case crashed binpac.
type Device_Payload(cmd2: uint16) = case cmd2 of {
    DEVICE_CMD2_1 -> info1: Record_A;
    DEVICE_CMD2_2 -> info2: Record_B;
};

type Record_A = record {
    # some data goes here
};

type Record_B = record {
    # some data goes here
};
Thanks!
Hello. I have a script which defines a custom handler for the mime_all_data event:
event mime_all_data(c: connection, length: count, data: string)
    {
    # do stuff
    }
When this script runs while capturing traffic in PF_RING mode with lb_procs=2, the Bro processes consume 100% of both pinned CPU cores. This is not the case when capturing in single-process mode without PF_RING. What are possible reasons for this? Can it be optimized on the script side? What can be done to lower the CPU usage?
Does anyone know why Bro would be using resources when no traffic is flowing
to the sensor?
Recently we were having some ECC errors on one of our sensors and turned
off the traffic to that sensor for troubleshooting purposes.
We noticed that the load was pretty high (~7) on that sensor and wondered
what Bro must be doing to cause that load. Shouldn't it just be waiting for
packets, without using much CPU or memory on the box?
Stats when no traffic is flowing to the sensor (the Bro processes are
running because cron on the manager keeps restarting the Bro processes on
the workers):
$ top
top - 12:18:17 up 13 days, 19:12,  2 users,  load average: 6.72, 7.05, 7.34
Tasks: 555 total,   9 running, 546 sleeping,   0 stopped,   0 zombie
%Cpu(s):  9.7 us,  5.7 sy,  0.0 ni, 84.5 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem : 13191564+total, 95957600 free, 32708392 used,  3249652 buff/cache
KiB Swap:  8388600 total,  8388600 free,        0 used. 98285016 avail Mem
When the traffic was turned back on, the load average looked like this:
$ top
top - 10:39:52 up 1 day, 19:02,  2 users,  load average: 12.89, 12.89, 12.82
Tasks: 551 total,  11 running, 540 sleeping,   0 stopped,   0 zombie
%Cpu(s): 20.9 us,  6.1 sy,  0.1 ni, 72.4 id,  0.0 wa,  0.0 hi,  0.5 si,  0.0 st
KiB Mem : 11540057+total, 59135456 free, 52346920 used,  3918204 buff/cache
KiB Swap:  8388600 total,  8388600 free,        0 used. 62253548 avail Mem
Any thoughts? :)
Thanks,
Fatema