I have been thinking about and trying different things, but for now it appears that if we want to share policies around, there is no easy way to distribute input files along with policy files.
Basically, right now I use
redef Scan::whitelist_ip_file = "/usr/local/bro/feeds/ip-whitelist.scan" ;
and then expect everyone to edit the path as their setup demands and to place the accompanying sample file in that directory, or create one for themselves. This introduces errors as well as slowing down deployment.
Is there a way I can use relative paths instead of absolute paths for input-framework ingestion? At present a new-heuristics dir can have a __load__.bro with all the policies, but the input framework won't read files relative to that directory or to wherever it is placed.
redef Scan::whitelist_ip_file = "../feeds/ip-whitelist.scan" ;
Something similar to the __load__.bro model would help.
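For example, something like this is what I'd like to be able to write (purely illustrative; I'm assuming the @DIR directive, which expands to the directory of the currently loading script, can be used to build the path):

redef Scan::whitelist_ip_file = @DIR + "/../feeds/ip-whitelist.scan";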
Also, one question I have: should all input files go into a 'standard' feeds/input dir in Bro, or be scattered around alongside the Bro policies they accompany (i.e. in individual directories)?
Something to think about: with more and more reliance on the input framework, I think there is a need for 'standardization' of where to put input files and how to easily find and read them.
Aashish
Could folks take a look at NEWS and see what's missing?
Couple of things I think we should add at least:
- Document the cluster framework's new logger node, with an
example of how to use it.
- Document the recent intel framework updates.
- Add BroControl news/changes.
Any takers for these?
Robin
--
Robin Sommer * ICSI/LBNL * robin(a)icir.org * www.icir.org/robin
Hello,
Because I lose so much processing power when manually converting Bro output
logs from Epoch to EST using bro-cut, can I have a feature that
outputs the Bro logs in EST instead of Epoch automatically,
while Bro is timestamping the logs as it sees the traffic?
I'm not sure if using the Epoch format makes Bro much faster while it's
processing, but I would like a more integrated solution aside from using
the bro-cut utility.
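For reference, the closest existing option I know of (and I may be misremembering the exact names) is switching the ASCII writer to JSON output with ISO 8601 timestamps, which is still not EST and not really what I'm after:

redef LogAscii::use_json = T;
redef LogAscii::json_timestamps = JSON::TS_ISO8601;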
Thank you for your time,
Feedback from the previous bro-dev thread is now addressed and I’d consider this version of the package manager ready for people to start trying out. Documentation:
http://bro-package-manager.readthedocs.io/
Notes/questions:
* Docs are now hosted at Read the Docs instead of GitHub
I found it more straightforward to set up, use, and maintain docs on RtD. It automatically builds docs upon committing to master or making a release tag. See [1] for directions on how the docs could be made reachable at a bro.org domain while still hosted at RtD.
* A package submission process is found in the README of https://github.com/bro/packages
Which is just my idea of what the process will look like, so please comment if it differs from anyone’s expectations.
* I’ve put it on PyPI [2] and it’s installable via `pip`
What are thoughts on making this the standard installation process for bro-pkg instead of bundling it into the Bro distribution? Benefits:
1) Decoupling from Bro’s release cycle. Seems like the kind of tool where it’s important to have quick/lightweight releases/patches that are easy for users to upgrade to. Would bundling in with Bro make for a more cumbersome upgrade path?
2) Installing via pip will automatically install dependencies (gitpython, semantic-version). Installing those automatically via Bro’s CMake, while probably possible, seems unusual/hacky.
A downside would be that the first-time configuration is currently easier if done via Bro’s CMake, since it can fill in all the paths of bro-pkg.config automatically. But I also see potential for addressing that in a way that doesn’t involve putting bro-pkg into Bro’s CMake. E.g. if there were `bro --config site_dir` and `bro --config plugin_dir` commands that could be executed from the command line, bro-pkg could just use those to autogenerate an appropriate config.
- Jon
[1] http://read-the-docs.readthedocs.io/en/latest/alternate_domains.html
[2] https://pypi.python.org/pypi/bro-pkg
Building current master I see some warnings that seem new to me (the
only one I remember is the one in Hasher.cc):
[ 1%] Building CXX object
aux/binpac/src/CMakeFiles/binpac.dir/pac_scan.cc.o
/home/jgras/devel/bro/build/aux/binpac/src/pac_scan.cc: In function
‘yy_buffer_state* yy_scan_bytes(const char*, yy_size_t)’:
/home/jgras/devel/bro/build/aux/binpac/src/pac_scan.cc:2290:19: warning:
comparison between signed and unsigned integer expressions [-Wsign-compare]
for ( i = 0; i < _yybytes_len; ++i )
^
--
[ 5%] Building CXX object src/CMakeFiles/bifcl.dir/bif_lex.cc.o
/home/jgras/devel/bro/build/src/bif_lex.cc: In function
‘yy_buffer_state* yy_scan_bytes(const char*, yy_size_t)’:
/home/jgras/devel/bro/build/src/bif_lex.cc:1867:19: warning: comparison
between signed and unsigned integer expressions [-Wsign-compare]
for ( i = 0; i < _yybytes_len; ++i )
^
--
[ 77%] Building CXX object src/input/CMakeFiles/bro_input.dir/Manager.cc.o
/home/jgras/devel/bro/src/input/Manager.cc: In member function ‘void
input::Manager::ErrorHandler(const input::Manager::Stream*,
input::Manager::ErrorType, bool, const char*, __va_list_tag*)’:
/home/jgras/devel/bro/src/input/Manager.cc:2537:67: warning: ‘ev’ may be
used uninitialized in this function [-Wmaybe-uninitialized]
SendEvent(i->error_event, 3, i->description->Ref(), message, ev);
^
--
[ 80%] Building CXX object
src/probabilistic/CMakeFiles/bro_probabilistic.dir/Hasher.cc.o
/home/jgras/devel/bro/src/probabilistic/Hasher.cc: In member function
‘probabilistic::Hasher::digest probabilistic::UHF::hash(const void*,
size_t) const’:
/home/jgras/devel/bro/src/probabilistic/Hasher.cc:130:51: warning:
dereferencing type-punned pointer will break strict-aliasing rules
[-Wstrict-aliasing]
return *reinterpret_cast<const Hasher::digest*>(d);
^
--
[ 82%] Building CXX object src/CMakeFiles/bro.dir/rule-scan.cc.o
/home/jgras/devel/bro/build/src/rule-scan.cc: In function
‘yy_buffer_state* rules__scan_bytes(const char*, yy_size_t)’:
/home/jgras/devel/bro/build/src/rule-scan.cc:2303:19: warning:
comparison between signed and unsigned integer expressions [-Wsign-compare]
for ( i = 0; i < _yybytes_len; ++i )
^
[ 82%] Building CXX object src/CMakeFiles/bro.dir/re-scan.cc.o
/home/jgras/devel/bro/build/src/re-scan.cc: In function
‘yy_buffer_state* re__scan_bytes(const char*, yy_size_t)’:
/home/jgras/devel/bro/build/src/re-scan.cc:2083:19: warning: comparison
between signed and unsigned integer expressions [-Wsign-compare]
for ( i = 0; i < _yybytes_len; ++i )
^
[ 82%] Building CXX object src/CMakeFiles/bro.dir/scan.cc.o
/home/jgras/devel/bro/build/src/scan.cc: In function ‘yy_buffer_state*
bro_scan_bytes(const char*, yy_size_t)’:
/home/jgras/devel/bro/build/src/scan.cc:3286:19: warning: comparison
between signed and unsigned integer expressions [-Wsign-compare]
for ( i = 0; i < _yybytes_len; ++i )
After having a bit of trouble last week with merging SMB into master and having Robin revert my commit, I finally got it merged in again today. If anyone has any trouble with master now, please say something. There is always the possibility that we messed up something.
BTW... this does mean that SMB is now in master. :)
If you want to run it, make sure you load policy/protocols/smb because it isn't loaded by default.
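For example, in your local.bro:

@load policy/protocols/smb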
.Seth
--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro.org/
I just pushed some changes to the GSSAPI and Kerberos analyzers in the master branch. Bro is now handling Kerberos carried over GSSAPI, although I don't think there is any script support for how it's used yet, so it's likely you won't see any change in your krb.log. If anyone sees any trouble, let me know.
Thanks,
.Seth
--
Seth Hall
International Computer Science Institute
(Bro) because everyone has a network
http://www.bro.org/
(Moving to bro-dev).
On Sun, Aug 07, 2016 at 09:59 -0700, Johanna Amann wrote:
> Yup, that is exactly it. There currently is a rewrite of Broker
> underway, which will use the newer library versions, but it is not
> quite done yet.
I'm wondering if we should add a version check to Broker that clearly
says what is needed if it cannot find the right CAF version? People
have run into this a few times now both ways (Broker wants a newer/older
CAF version). That seems like something worth adding for 2.5 still.
Robin
--
Robin Sommer * ICSI/LBNL * robin(a)icir.org * www.icir.org/robin
The package manager client is at a point now where I think it would be usable. Documentation is here:
https://bro.github.io/package-manager/
There is a branch in the ‘bro’ repo called ‘package-manager’ that simply changes CMake scripts to install ‘bro-pkg’ along with bro. Here’s an example usage/session:
$ git clone --recursive --branch=package-manager git://bro.org/bro
...
$ cd bro && ./configure && make install
...
$ /usr/local/bro/bin/bro-pkg list all
default/jsiwek/bro-test-package
$ /usr/local/bro/bin/bro-pkg install bro-test-package
installed "bro-test-package"
loaded "bro-test-package"
$ /usr/local/bro/bin/bro packages
loaded bro-test-package plugin
loaded bro-test-package scripts
$ /usr/local/bro/bin/broctl
Test package: initialized
…
That test package shows that bro-pkg was able to install a package containing Bro scripts, a Bro plugin, and a BroControl plugin, and everything should “just work” without needing any configuration.
Roadmap/TODO/Questions:
* Add a way for packages to define “discoverability metadata”.
E.g. following the original plan for this would involve putting something like a “tags” field in each package’s pkg.meta file, but the problem with this is that the client would need to either download every package to be able to search this data or have a third party periodically aggregate it.
My current idea is that instead of putting this type of data inside the package’s metadata, the user puts it in the package source’s metadata. They do this on first registration and may update it whenever they like. That way, bro-pkg always has access to the latest discoverability metadata, with no need for a separate aggregation process. It’s also something that will rarely change, so it’s not a problem for that data to live in a repo not owned by the package author, and not much of an increased burden for the Bro Team to accept pull requests updating this data. Thoughts?
* Automatic inter-package dependency analysis
Simply a TODO. I put it at lower priority since I don’t think it will be common right off the bat to have complex package dependencies, and users can always manually resolve dependencies at the moment.
* Is it acceptable to depend on GitPython and semantic_version python packages?
Both are replaceable implementation details; I just didn’t want to write something myself if not necessary, in the interest of time.
* Documentation is hosted on GitHub at the moment, move to bro.org?
Mostly just on GitHub now to be able to show something without having to touch any of the master bro/www doc generation processes, but maybe it’s a nice thing to start keeping docs more compartmentalized? The current doc/www setup feels like it’s getting rather large/monolithic, and maybe that contributes to the difficulty of approaching/understanding it when there are breakages. Just an idea.
* Thoughts on when to merge ‘package-manager’ branch in ‘bro’ ?
IMO, it can be done now or soon after I address responses/feedback to this email.
- Jon
> On Aug 2, 2016, at 6:03 PM, Slagell, Adam J <slagell(a)ILLINOIS.EDU> wrote:
>
> Wow. Big difference
Indeed :-) I realized one of the bigger issues in the sumstats-based code is not really the detection of scans, but what happens AFTER the detection. After detection it keeps accumulating data, or, possibly only slightly better, keeps trying to accumulate data.
Connection events are used to generate sumstats observations,
which feed into the sumstats framework,
which may cross a threshold,
which generates notices that are fed into the notice framework,
which are often suppressed for at least 1hr by default.
However, sumstats has no idea that the only reason it is collecting observations is to raise a notice that could currently be suppressed for an entire day. The observations don't stop once the attacker has already triggered a notice; the whole machine keeps running even though nothing visible will ever come out of it.
I managed to fix this in the sumstats-based unified scan.bro, but it is only a partial fix.
The code that does this is in this version:
https://github.com/bro/bro/blob/b12153653dddef0c5b2f9fa190cd3ab62c0652a4/sc…
Basically this part:
global known_scanners: table[addr] of interval &create_expire=10secs &expire_func=adjust_known_scanner_expiration;

# There's no way to set a key to expire at a specific time, so we
# first set the key's value to the duration we want, and then
# use the expire_func to return the desired time.
event Notice::begin_suppression(n: Notice::Info)
	{
	if ( n$note == Port_Scan || n$note == Address_Scan || n$note == Random_Scan )
		known_scanners[n$src] = n$suppress_for;
	}

function adjust_known_scanner_expiration(s: table[addr] of interval, idx: addr): interval
	{
	local duration = s[idx];
	s[idx] = 0secs;
	return duration;
	}
and then later, the checks are aborted early with:
if ( scanner in known_scanners )
	return;
This works, but the reason it is incomplete is that if the notice was triggered by an intermediate update, sumstats still contains data for this attacker. This data will be sent over to the manager at the end of the epoch, even though by that time it isn't needed anymore, since there was only one threshold and it was already crossed. The fix for that would be some more changes inside of sumstats:
if (intermediate update crossed a threshold && the number of thresholds is 1)
	instruct all worker nodes to purge any data associated with this key
My non-sumstats-based scan.bro and Aashish's scan-NG both handle 'known scanner' suppression directly with a cluster event. Because there's no middleman (sumstats), it ends up being a bit simpler.
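For illustration, here's a rough sketch of that approach (the module, event, and variable names are made up and simplified, not the actual scan.bro or scan-NG code):

module Scan;

export {
	## Raised on a worker once it decides an address is a scanner.
	global scanner_detected: event(scanner: addr);
	## Raised on the manager to tell every node to stop tracking it.
	global add_known_scanner: event(scanner: addr);
	global known_scanners: set[addr] &create_expire=1hr;
}

@if ( Cluster::is_enabled() )
redef Cluster::worker2manager_events += /Scan::scanner_detected/;
redef Cluster::manager2worker_events += /Scan::add_known_scanner/;
@endif

@if ( ! Cluster::is_enabled() || Cluster::local_node_type() == Cluster::MANAGER )
event Scan::scanner_detected(scanner: addr)
	{
	# Rebroadcast so every node stops generating further observations/checks.
	event Scan::add_known_scanner(scanner);
	}
@endif

event Scan::add_known_scanner(scanner: addr)
	{
	add known_scanners[scanner];
	}

The per-connection checks can then bail out early with the same kind of "if ( scanner in known_scanners ) return;" test as above.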
This Notice::begin_suppression trick itself is nice though, since it lets you configure suppression intervals in one place, instead of potentially having to configure policy-specific suppression and keep them in sync.
--
- Justin Azoff