Hi,
Following the instructions for the intelligence framework, I wrote an intel text file, myintel.txt, with the following content:
#fields indicator indicator_type meta.source meta.desc meta.url
14.215.177.39 Intel::ADDR baidu use baidu search -
Very simple. I also wrote a simple Bro script, mytest.bro, with the following content:
@load policy/frameworks/intel/seen
@load policy/frameworks/intel/do_notice
redef Intel::read_files += { "./myintel.txt" };
When I run this script with the command "./bro -i eth3 mytest" in one shell and run "ping 14.215.177.39" in another, I get the following warning:
warning: ./myintel.txt/Input::READER_ASCII: Did not find requested field indicator in input data file ./myintel.txt.
There seems to be no error in the myintel.txt file, so what is causing this warning?
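One detail worth checking first: the Input framework's ASCII reader expects literal tab characters between columns, and a space-separated file produces exactly this "Did not find requested field indicator" warning. Recreating the file with tabs can be sketched as:

```shell
# Rebuild myintel.txt with literal tab separators between the columns.
# The header names five fields; the data line must supply five tab-separated
# values (a multi-word description like "use baidu search" is one field).
printf '#fields\tindicator\tindicator_type\tmeta.source\tmeta.desc\tmeta.url\n' > myintel.txt
printf '14.215.177.39\tIntel::ADDR\tbaidu\tuse baidu search\t-\n' >> myintel.txt
```

If the file was created in an editor that converts tabs to spaces, that alone would explain the warning even though the file looks correct.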
Best Regards
DeJin Wang
We announce the release of Bro v2.5.5. The new version is now available
for download at:
https://bro.org/download/index.html
or directly at:
https://www.bro.org/downloads/bro-2.5.5.tar.gz
Binary packages for the new version are currently building and will be
available within the next few hours at:
https://bro.org/download/packages.html
This release has the following security fixes:
* Fix array bounds checking in BinPAC: for arrays that are fields
within a record, the bounds check was based on a pointer to the start
of the record rather than the start of the array field, potentially
resulting in a buffer over-read.
* Fix SMTP command string comparisons: the number of bytes compared was
based on the user-supplied string length, which could lead to incorrect
matches; e.g., a command of "X" incorrectly matched "X-ANONYMOUSTLS"
(and empty commands matched anything).
The following changes address potential vectors for Denial of Service
reported by Christian Titze & Jan Grashöfer of Karlsruhe Institute of
Technology:
* "Weird" events are now generally suppressed/sampled by default
according to some tunable parameters (see the changelog for more
details). These changes help improve performance issues resulting
from excessive numbers of weird events.
* Improved handling of empty lines in several text protocol analyzers
that can cause performance issues when seen in long sequences.
* Add an 'smtp_excessive_pending_cmds' weird, which serves as a
notification that the "pending command" queue has reached an upper
limit and been cleared, to prevent an attacker from slowly exhausting
memory.
Please update your Bro installations as soon as possible.
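For reference, the new weird suppression/sampling behavior is controlled by tunables in the Weird module; a sketch of adjusting them follows (the option names and defaults should be confirmed against the 2.5.5 changelog for your installation):

```bro
# Hedged sketch: tunables for the new weird sampling behavior;
# verify the exact names and defaults in the 2.5.5 changelog.
redef Weird::sampling_threshold = 25;    # log every instance of a given weird up to this count
redef Weird::sampling_rate = 1000;       # after the threshold, log roughly 1 in N
redef Weird::sampling_duration = 10min;  # how long sampling state is retained
```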
Hello all!
I have a lot of network traffic saved in pcaps and need to parse it
very fast.
I tried using broctl, but unfortunately it could not make use of all the
machine's hardware.
So, I am running a script which invokes many bro instances, one for each
pcap.
I am using my own bro script, which depends on Bro's http and conn log
files.
Because each instance writes its logs to the same folder, they run over
each other.
Creating a directory for each instance is too complex and not dynamic
enough.
Can you offer something better?
Maybe there is a way to make each instance save its logs to a different
directory?
My invocation looks something like this:
" bro -C -r pcap_path bro_script_path "
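One workaround, since Bro writes its logs to the current working directory: give each instance its own working directory and run Bro from there. A sketch, assuming the pcaps live in ./pcaps (the directory layout and script path here are illustrative):

```
# Run one bro instance per pcap, each in its own output directory,
# so the log files never collide.
for p in pcaps/*.pcap; do
    d="out/$(basename "$p" .pcap)"
    mkdir -p "$d"
    ( cd "$d" && bro -C -r "../../$p" bro_script_path ) &
done
wait
```

Throttling the number of concurrent background jobs (or driving the loop with xargs -P) would keep the machine from being oversubscribed.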
Hi
Why doesn't Bro extract the filename when I download a file over HTTP?
Here are the http and files logs:
srv@srv:/nsm/bro/logs/current$ tail -f http_br0.log | grep 192.168.1.1
1534860833.865081 CxLm9G4WxaJ6Z0zqIh 192.168.1.1 31451 77.138.188.44 8080 1 GET 77.138.188.44 http://77.138.188.44/Browsing.exehttp://77.138.188.44/ 1.1 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36 0 506576 200 OK - - (empty) - - PROXY-CONNECTION -> keep-alive - - - FI7yey3gl5U0JXLnji - application/x-dosexec
srv@srv:/nsm/bro/logs/current$ tail -f files.log | grep 192.168.1.1
1534860834.713869 FI7yey3gl5U0JXLnji 77.138.188.44 192.168.1.1 CxLm9G4WxaJ6Z0zqIh HTTP 0 PE,SHA1,MD5 application/x-dosexec - 0.189665 F F 506576 506576 0 0 F -ea845778462ef5bd2bbf68381df324ca 4af433d0c22067d921c912deae87619b262262f3 - -
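For context, Bro fills the filename field in files.log only when the server supplies one (e.g. in a Content-Disposition header); the files.log entry above shows "-" there. A hedged sketch of deriving a name from the request URI instead, using the stock files-framework records (treat this as an assumption-laden sketch, not default behavior):

```bro
# Sketch: when the server sent no filename, fall back to the last
# path component of the HTTP request URI. Field names per the base
# files/http frameworks; verify against your Bro version.
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool)
	{
	if ( f?$info && ! f$info?$filename && c?$http && c$http?$uri )
		{
		local parts = split_string(c$http$uri, /\//);
		f$info$filename = parts[|parts| - 1];
		}
	}
```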
Does anyone know if there is a Bro plugin for CoAP traffic detection and
identification? We have the MQTT plugin for IoT bits, but we would like to
support CoAP as well. Thanks!
How can I rename a field based on the analyzer? For example:
smtp.log:path -> smtp.log:smtp_path
smb_files.log:path -> smb_files.log:smb_path
Currently I am using the default field name map, but this renames the field for all log streams:
redef Log::default_field_name_map = { ["path"] = "smb_path",
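Depending on your Bro version, log filters may carry a per-filter field_name_map, which scopes the rename to a single stream. A sketch (the stream ID and the availability of field_name_map on Log::Filter are assumptions to verify for your version):

```bro
# Hedged sketch: rename "path" to "smb_path" only for the smb_files log,
# by editing that stream's default filter.
event bro_init()
	{
	local f = Log::get_filter(SMB::FILES_LOG, "default");
	f$field_name_map = table(["path"] = "smb_path");
	Log::add_filter(SMB::FILES_LOG, f);
	}
```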
Hi everyone,
I am trying to search for bugs in the Bro file parsers using libFuzzer. I
found the old branch where you tried to fuzz the HTTP protocol, so I have
implemented everything on top of this branch. I was able to make it work
easily for the DNP3 protocol, but file analyzers are different. The problem
is that I can't reach the actual PE parser code from my LLVMFuzzerTestOneInput.
The actual code is:
analyzer::file::File_Analyzer* filea =
    new analyzer::file::File_Analyzer("TCP", conn);
filea->DeliverStream(DataSize, Data, true);
I had problems with the file handler, which I solved by adding the following line to Manager.cc:
--- a/src/file_analysis/Manager.cc
+++ b/src/file_analysis/Manager.cc
@@ -427,6 +427,7 @@ string Manager::GetFileID(analyzer::Tag tag, Connection* c, bool is_orig)
 	mgr.QueueEvent(get_file_handle, vl);
 	mgr.Drain(); // need file handle immediately so we don't have to buffer data
+	file_mgr->SetHandle("random_str");
 	return current_file_id;
 	}
In this case, Bro will call the "hash" analyzers and ignore PE. It looks
like the PE file analyzer is never loaded/initialized; the debug log
reports only the following analyzers being added:
[FjjsZfY8GArx2E0Ih] Add analyzer MD5
[FjjsZfY8GArx2E0Ih] Add analyzer SHA1
I am probably going about this completely the wrong way. It would be great
if you could suggest other ways to make it work without significant
modification of the Bro source code.
Thank you in advance.
----------------------
Best regards,
Maksim Shudrak.
tel. +1-415-793-0894
skype: vitality_3
All,
Having an issue with the Bro logger crashing due to large volumes of DNS log traffic, 20-30 GB an hour. This is a completely local configuration, on a system with super-fast flash storage, 64 cores, and 256 GB RAM running Bro 2.5.4. If I disable DNS logging, everything works fine without issue. When I enable it, I get the results below. I thought it might be an issue with gzipping the old logs, so I replaced the standard gzip with pigz; I can manually compress the 30+ GB files in seconds, so I don't think that is the issue. I also tried pinning dedicated cores to the logger, currently 6 cores, which should be more than enough. Any thoughts or suggestions?
Thanks,
Ron
current]# ll -h
total 43G
-rw-r--r--. 1 root root 3.2K Aug 18 12:00 capture_loss-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 3.2K Aug 18 12:18 capture_loss-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 2.3M Aug 18 12:00 communication-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 1.4M Aug 18 12:18 communication-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 4.8K Aug 18 12:18 communication.log
-rw-r--r--. 1 root root 19G Aug 18 11:39 dns-18-08-18_10.11.22.log
-rw-r--r--. 1 root root 16G Aug 18 12:26 dns-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 12M Aug 18 12:00 files-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 5.2M Aug 18 12:18 files-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 15K Aug 18 12:00 known_certs-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 15K Aug 18 12:18 known_certs-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 98K Aug 18 12:00 known_hosts-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 24K Aug 18 12:18 known_hosts-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 71K Aug 18 12:00 known_services-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 5.2K Aug 18 12:18 known_services-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 1.6K Aug 18 12:00 notice-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 954 Aug 18 12:18 notice-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 262 Aug 18 12:18 reporter-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 23M Aug 18 12:00 smtp-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 9.2M Aug 18 12:18 smtp-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 1.2M Aug 18 12:00 snmp-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 415K Aug 18 12:18 snmp-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 81K Aug 18 12:00 software-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 8.4K Aug 18 12:18 software-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 30K Aug 18 12:00 ssh-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 13K Aug 18 12:18 ssh-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 217K Aug 18 12:00 ssl-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 78K Aug 18 12:18 ssl-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 37K Aug 18 12:00 stats-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 16K Aug 18 12:18 stats-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 28 Aug 18 12:18 stderr.log
-rw-r--r--. 1 root root 188 Aug 18 10:11 stdout.log
-rw-r--r--. 1 root root 6.8G Aug 18 12:00 weird-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 2.5G Aug 18 12:18 weird-18-08-18_12.00.00.log
-rw-r--r--. 1 root root 178K Aug 18 12:00 x509-18-08-18_11.00.00.log
-rw-r--r--. 1 root root 80K Aug 18 12:18 x509-18-08-18_12.00.00.log
# /usr/local/bro/bin/bro --version
/usr/local/bro/bin/bro version 2.5.4
Hi All,
Recently I was troubleshooting some fragmentation occurring in UDP DNS
responses from our DNS servers, caused by packet sizes > 1500 bytes.
The responses are completely valid, including 13 additional RRs and 7
authoritative records, hence exceeding the normal packet size and
getting fragmented into two packets.
When I grepped for the connection in Bro's dns.log, it appeared that Bro
logged two entries for the single fragmented DNS response:
2018-08-13T10:16:40-0400  C42pXn2GRPxmh8JRBd  74.220.198.174  19401  128.175.13.16  53  udp  34754  -  upenn.edu  1  C_INTERNET  15  MX  -  -  F  F  F  F  1  -  -  F  -  -
2018-08-13T10:16:40-0400  CsFVfL2czxAmhLprqj  74.220.198.174  19401  128.175.13.16  53  udp  34754  -  upenn.edu  -  -  -  -  0  NOERROR  T  F  F  F  0  cluster5a.us.messagelabs.com,cluster5.us.messagelabs.com,<unknown type=46>  900.000000,900.000000,900.000000  F  dns1.udel.edu,dns2.udel.edu,adns1.upenn.edu,sns-pb.isc.org,<unknown type=46>,adns3.upenn.edu,adns2.upenn.edu  128.91.254.22,2607:f470:1002::2:3,2607:f470:1003::3:c,<unknown type=46>,128.91.251.33,2607:f470:1001::1:a,128.91.3.128
I verified the transaction ID (34754) against the one in the pcap capture
of the same traffic from the firewall, and was curious to know how Bro
handles fragment reassembly and logging.
Any thoughts?
Thanks!
Fatema.
P.S.: I can provide the pcap capture for the connection mentioned above.