Hey there,
I'm writing an analyzer as a plugin, and I would like to create some new
Bro data types (record types, to be exact) to hand over protocol data
in a compact form as parameters to the event functions.
For now I have declared the new types in types.bif and defined them in
init-bare.bro, but I don't think that is the right way, because I have
to manually modify the Bro source files.
Is there a way to declare and define the new types inside the plugin's
source files, so that the types are available in Bro once the plugin is
installed?
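(For illustration, what I'm hoping for is to ship something like this inside
the plugin tree itself; all names below are hypothetical:)

# scripts/types.bro, pulled in by the plugin's scripts/__load__.bro
module MyAnalyzer;

export {
    ## Compact container for the parsed protocol data
    ## (fields are illustrative only).
    type Info: record {
        version: count;
        payload: string;
    };
}

The plugin's own .bif event declarations would then refer to
MyAnalyzer::Info, without touching types.bif or init-bare.bro in the Bro
tree.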
Greetings,
Dane
Hey everyone,
I have a few questions on behavioral issues with the intel framework and SMB / SMB file logging:
1. I'm not sure if this is expected behavior or not, but it doesn't look like filenames parsed into smb_files.log are properly being logged in files.log. We had a red team exercise recently where our red team successfully retrieved the ntds.dit file off of one of our domain controllers. This transfer occurred over SMB, so I figured we could add ntds.dit to the Intel framework so that next time we won't have to dig through logs to find out that our domain is owned; we'll have a handy alert to tell us :) I did some testing with this, though, and while I see 'ntds.dit' logged clearly in the name field in smb_files.log, I don't have a corresponding entry in files.log for this file transfer, and therefore no Intel match. What makes this weirder is that other, irrelevant files from this connection are logged in files.log, files that I didn't actually touch or move during this connection:
bro@SObro:/nsm/bro/logs/current$ cat /opt/bro/share/bro/intel/intel.dat | grep ntds.dit
ntds.dit Intel::FILE_NAME domain ownage - update your resume! F
bro@SObro:/nsm/bro/logs/2018-03-31$ zcat smb_files.16\:00\:00-17\:00\:00.log.gz | bro-cut uid id.orig_h id.resp_h id.resp_p action name | grep ntds.dit
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN share path\\and more\\more\\my testing directory\\ntds.dit
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN share path\\and more\\more\\my testing directory \\ntds.dit
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN share path\\and more\\more\\my testing directory \\ntds.dit
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN share path\\and more\\more\\my testing directory \\ntds.dit
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN ntds.dit
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN ntds.dit
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN ntds.dit
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 445 SMB::FILE_OPEN ntds.dit
If I search for “ntds.dit” in files log, I get nothing. If I search for the connection UID in files.log, there are some files logged – but not the only file I actually transferred over this connection!
bro@SObro:/nsm/bro/logs/2018-03-31$ zcat files.16\:00\:00-17\:00\:00.log.gz | bro-cut conn_uids tx_hosts rx_hosts source filename | grep C35jBF1HlcrVNLiXW2
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB desktop.ini
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB share path\\and more\\more\\not my testing directory!? \\desktop.ini
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB share path\\and more\\more\\my testing directory \\random <> file that lives at this path.exe
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB desktop.ini
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB favorites\\desktop.ini
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB Random excel file that lives in my testing directory.xls
C35jBF1HlcrVNLiXW2 1.1.1.1 2.2.2.2 SMB random executable that lives in my testing directory.exe
Is there something wrong with my Bro instance? I feel like filenames from the smb_files 'name' field should *all* be fed into files.log. I tested this with two different share paths and got similar results: everything gets logged as I would expect in smb_files.log, but the filename in question never shows up in files.log. How can I reliably alert on file names transferred over SMB?
2. As part of the above red team exercise, I found what I suspect are some instances of Meterpreter being transferred from popped hosts back to the adversary system over SMB. These were logged in smb_files.log with names like "Temp\\PBetVKZU.tmp" and "Temp\\FapcPatS.tmp". I don't think the Intel framework supports wildcards; is there a way to alert on transferred files that match a regex such as "Temp\\[a-zA-Z]{8}\.tmp", or even "Temp\\*.tmp"? (A sketch of the kind of thing I'm after follows.)
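(Here is what I'm imagining; just a sketch, assuming the stock
SMB::FILES_LOG stream and SMB::FileInfo record that ship with Bro 2.5. It
would cover both the ntds.dit case from question 1 and the regex case here:)

# Watched name patterns; a single backslash is written as \\ in a pattern.
redef enum Notice::Type += { SMB_Watched_Name };
const smb_watched_names = /Temp\\[a-zA-Z]{8}\.tmp/ | /ntds\.dit/ &redef;

function smb_name_watch(rec: SMB::FileInfo): bool
    {
    if ( rec?$name && smb_watched_names in rec$name )
        NOTICE([$note=SMB_Watched_Name,
                $msg=fmt("watched SMB filename: %s", rec$name),
                $uid=rec$uid, $id=rec$id]);
    return T;   # still write every record to smb_files.log
    }

event bro_init()
    {
    local f = Log::get_filter(SMB::FILES_LOG, "default");
    f$pred = smb_name_watch;
    Log::add_filter(SMB::FILES_LOG, f);
    }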
3. Unrelated to the Intel framework: smb_files.log seems super noisy. If I browse to a share drive, a massive amount of the share's contents gets enumerated into smb_files.log without my taking any action (with the 'action' field indicating SMB::FILE_OPEN). This feels like expected SMB behavior, but is there any way to filter the log so it only records files that are actually opened, written to, moved, deleted, or had some real operation performed on them? (Again, a sketch follows.)
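(Same log-filter predicate trick as above, but dropping the noise instead of
alerting; if both sketches were used together, the two checks would have to
be merged into a single predicate:)

function smb_real_ops_only(rec: SMB::FileInfo): bool
    {
    # Share browsing shows up as bare FILE_OPEN actions; drop those and
    # keep reads, writes, deletes, renames, etc.
    return ! ( rec?$action && rec$action == SMB::FILE_OPEN );
    }

event bro_init()
    {
    local f = Log::get_filter(SMB::FILES_LOG, "default");
    f$pred = smb_real_ops_only;
    Log::add_filter(SMB::FILES_LOG, f);
    }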
We’re running Bro 2.5.3 in Security Onion (Ubuntu 14.04). The intel framework is loaded and successfully fires on other indicators we have running.
Thanks!
James Gordon
Hi all,
I'm using Bro 2.5.1 for network security monitoring, with Kafka as the
message queue (bro-to-kafka plugin v0.5.0, librdkafka v0.9.5).
I have now hit an error once network traffic reaches about 1.6 Gbps: Bro
crashes with a segmentation fault at `src/Event.cc:90`.
Our test environment:
CPU: 32-core Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
Memory: 64 GB
NIC: 10 Gb/s
Storage: 2 TB SATA, 100 GB SSD
Below is the backtrace from the core dump (more on gist:
<https://gist.github.com/MythRen/b55220647ca28654c6f7e1db12ee6036>):
#0  SetNext (this=0x0, n=0x7fe292ebd490) at /opt/download/bro/src/Event.h:21
#1  EventMgr::QueueEvent (this=0xc302c0 <mgr>, event=event@entry=0x7fe292ebd490) at /opt/download/bro/src/Event.cc:90
#2  0x00000000005fe6a7 in QueueEvent (obj=0x0, mgr=0x0, aid=0, src=0, vl=0x7fe2e2bedb80, h=..., this=<optimized out>) at /opt/download/bro/src/Event.h:88
#3  Reporter::DoLog (this=0x29aabb0, prefix=prefix@entry=0x908cd7 "error", event=..., out=0x0, conn=conn@entry=0x0, addl=addl@entry=0x0, location=location@entry=true, time=time@entry=true, postfix=postfix@entry=0x0, fmt=fmt@entry=0x7fe36c719d70 "Kafka send failed: %s", ap=ap@entry=0x7fe36aa3eaf8) at /opt/download/bro/src/Reporter.cc:350
#4  0x00000000005fee8f in Reporter::Error (this=<optimized out>, fmt=fmt@entry=0x7fe36c719d70 "Kafka send failed: %s") at /opt/download/bro/src/Reporter.cc:76
#5  0x00007fe36c717fa9 in logging::writer::KafkaWriter::DoWrite (this=0x6369270, num_fields=<optimized out>, fields=<optimized out>, vals=0x69d2080) at /opt/download/bro/aux/plugins/kafka/src/KafkaWriter.cc:156
#6  0x000000000089e495 in logging::WriterBackend::Write (this=0x6369270, arg_num_fields=<optimized out>, num_writes=1000, vals=0x6dc7bf0) at /opt/download/bro/src/logging/WriterBackend.cc:301
#7  0x0000000000662180 in threading::MsgThread::Run (this=0x6369270) at /opt/download/bro/src/threading/MsgThread.cc:371
#8  0x000000000065eaa8 in threading::BasicThread::launcher (arg=0x6369270) at /opt/download/bro/src/threading/BasicThread.cc:205
#9  0x00007fe36e8ce2b0 in ?? () from /lib64/libstdc++.so.6
#10 0x00007fe36ed2ce25 in start_thread () from /lib64/libpthread.so.0
#11 0x00007fe36e03634d in clone () from /lib64/libc.so.6
Variables in frame 1:
(gdb) f 1
#1  EventMgr::QueueEvent (this=0xc302c0 <mgr>, event=event@entry=0x7fe292ebd490) at /opt/download/bro/src/Event.cc:90
90          tail->SetNext(event);
(gdb) info args
this = 0xc302c0 <mgr>
event = 0x7fe292ebd490
(gdb) info locals
done = <optimized out>
(gdb) p head
$1 = (Event *) 0x7fe3540c81c0
(gdb) p tail
$2 = (Event *) 0x0
(gdb) p event
$3 = (Event *) 0x7fe292ebd490
During testing, whenever Bro crashed the variable `tail` was always a NULL
pointer, while `head` was sometimes NULL and sometimes not.
From my research: in the heavy-traffic scenario, KafkaWriter writes log
entries to Kafka faster than librdkafka's configured limits allow,
exceeding `queue.buffering.max.messages` (default 100000) or
`queue.buffering.max.kbytes` (default 4000000, i.e. 4 GB). librdkafka then
raises a QUEUE_FULL error, and KafkaWriter calls Reporter::Error to report
the runtime error, so KafkaWriter::DoWrite ends up triggering a very large
number of Reporter::Error calls.
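As a stopgap I am considering raising those librdkafka limits from the Bro
side. A sketch, assuming the plugin exposes librdkafka settings through a
kafka_conf-style table redef (the exact module name varies across plugin
versions, so check the plugin's own scripts):

# local.bro; "Kafka" is a hypothetical module name here
redef Kafka::kafka_conf = table(
    ["metadata.broker.list"]         = "localhost:9092",
    ["queue.buffering.max.messages"] = "500000",    # librdkafka default: 100000
    ["queue.buffering.max.kbytes"]   = "8000000"    # librdkafka default: 4000000
);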
My guess is that the issue is caused by concurrent, unlocked access to the
variable `tail`, which leaves it a NULL pointer, though I don't know
exactly why; calling `SetNext` through that NULL pointer then raises the
segmentation fault. Note that frames 5-8 show Reporter::Error being
invoked on the logging writer's own thread, while the main thread queues
events into the same EventMgr, so the unsynchronized `head`/`tail`
pointers could plausibly be corrupted by such a race.
The above is just my guesswork; maybe there is another cause. I hope
someone can help.
Best regards,
Myth
I am trying to ingest Bro 2.5 JSON logs into an ELK stack, using filebeat
to push the logs. Is that even the best way to do this? I have found MUCH
outdated material on ingesting Bro logs into an ELK stack, but very little
that is current, and some of what looks current still targets older
versions of the elastic.co software. If anyone has a modern Bro/ELK
integration document they use(d) to set up their environment, it would be
greatly appreciated if you could share. Thanks!
Erik
Erik,
We are using filebeat to feed our Bro 2.5.3 logs into Logstash for a
small 5-node Elastic Stack cluster. We're running Elastic 6.0.x
currently and are in the process of upgrading to 6.2. This is just a
starting point for us, and it seems to be working well. We're not doing
any JSON output from Bro; the native file format with Logstash-side
processing is working fine. Below are the files I'm currently feeding
into Elastic.
/<bro-path>/logs/current/capture_loss.log
/<bro-path>/logs/current/conn.log
/<bro-path>/logs/current/dns.log
/<bro-path>/logs/current/files.log
/<bro-path>/logs/current/ftp.log
/<bro-path>/logs/current/http.log
/<bro-path>/logs/current/intel.log
/<bro-path>/logs/current/notice.log
/<bro-path>/logs/current/radius.log
/<bro-path>/logs/current/smb_files.log
/<bro-path>/logs/current/smb_mapping.log
/<bro-path>/logs/current/smtp.log
/<bro-path>/logs/current/software.log
/<bro-path>/logs/current/ssh.log
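(If we do flip to JSON output on the Bro side at some point, my
understanding is that it's a one-liner in local.bro; stock Bro 2.5 ships a
tuning script for it:)

# local.bro
@load tuning/json-logs
# ...or, equivalently:
# redef LogAscii::use_json = T;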
--
Philip Romero, CISSP, CISA
Sr. Information Security Analyst
CENIC
promero(a)cenic.org
Phone: (714) 220-3430
Mobile: (562) 237-9290
Hi everyone,
I have recently been working with Bro as an IDS: I want to intercept REST messages from a network interface using signatures, but I found that I can only catch a subset of the messages. For example, tshark can capture, say, 100 messages, while Bro reports only 50. I have read in the official documentation that "Each signature is reported at most once for every connection; further matches of the same signature are ignored." Is there any way to change this behavior, or did I configure something wrong? (A sketch of what I'm trying to count follows.)
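(What I am ultimately after is a per-message count. For REST traffic
carried over HTTP, the per-request protocol events give exactly that; a
sketch, where the URI pattern is only an illustration:)

global rest_requests = 0;

event http_request(c: connection, method: string, original_URI: string,
                   unescaped_URI: string, version: string)
    {
    # http_request fires for every request on a connection, so it is not
    # subject to the once-per-connection signature limit.
    if ( /\/api\// in original_URI )
        ++rest_requests;
    }

event bro_done()
    {
    print fmt("saw %d REST-looking requests", rest_requests);
    }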
Regards,
Sherry from China
I've got some custom log names happening, and it's causing a memory leak.
Bro never closes the file descriptors or releases the objects. This is
causing the manager to crash over a period of time.
I'm running my cluster with broctl, and rotation is turned off because I'm
naming files with a timestamp to begin with.
Any suggestions on how to perform a periodic "clean up"?
function datepath(id: Log::ID, path: string, rec: any) : string
    {
    # Append an hourly timestamp to the log's path, e.g. "conn" becomes
    # "conn_2018-03-31_16". Use the path argument handed in by the
    # logging framework rather than re-fetching the filter; it already
    # reflects the filter's $path and avoids a lookup on every call.
    return string_cat(path, strftime("_%F_%H", current_time()));
    }
event bro_init() {
    Log::disable_stream(Syslog::LOG);
    # Install the timestamping path function on every active stream's
    # default filter.
    for ( id in Log::active_streams ) {
        local filter = Log::get_filter(id, "default");
        filter$path_func = datepath;
        Log::add_filter(id, filter);
    }
}
Thanks,
Hi,
I have certain OSINT feeds and want to convert them to intel.dat, to be
consumed later by our ELK stack. Can someone guide me on how to convert
those IP addresses into intel.dat? (A guess at the format follows.)
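(From the Intel framework docs, I believe intel.dat is a tab-separated
file with a #fields header that Bro reads via Intel::read_files; is
something like the following right? Columns must be separated by literal
TAB characters, not spaces:)

#fields	indicator	indicator_type	meta.source	meta.desc	meta.do_notice
1.2.3.4	Intel::ADDR	my-osint-feed	example bad host	T
5.6.7.8	Intel::ADDR	my-osint-feed	another bad host	T

# and in local.bro:
@load frameworks/intel/seen          # feeds observed indicators to the framework
@load frameworks/intel/do_notice     # raises a notice when meta.do_notice is T
redef Intel::read_files += { "/opt/bro/share/bro/intel/intel.dat" };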