I am getting a "Dependency is not satisfiable: libc6 (< 2.12)" error message.
I have installed the required dependencies:
sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev libmagic-dev
But my installed libc6 is version 2.17.
Can someone assist me with this?
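If this is simply because the .deb I grabbed was built against an older glibc (< 2.12), I assume the fallback is to build from source with the dependencies above; here is roughly what I was planning to try (the tarball name is just a placeholder, not the exact file I downloaded):

ldd --version | head -n 1        # shows the installed glibc (2.17 in my case)
tar xzf bro-X.Y.tar.gz           # placeholder name for the source tarball
cd bro-X.Y
./configure
make && sudo make install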
VR
Jerry Champion
Information Security Engineer
Synovus Financial Corp
706-644-4589
I'm trying to write a bro script that pulls out authoritative nameservers
and additional records from DNS.
I think I need the dns_EDNS_addl event to get at that part of a DNS
reply, since the dns_edns_additional structure seems like it has the
information I'm looking for:
http://trac.bro-ids.org/sphinx-git/scripts/base/init-bare.html#type-dns_edn…
Unfortunately, it looks like dns_EDNS_addl isn't implemented yet:
# scripts/base/protocols/dns/main.bro
# TODO: figure out how to handle these
#event dns_EDNS_addl(c: connection, msg: dns_msg, ans: dns_edns_additional)
Has anyone worked out a way to grab this information from a DNS reply?
If not, could anyone point me in the right direction so that I can roll my
own solution?
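To make the question concrete, this is the kind of handler I'm imagining, based only on the commented-out prototype above, so it's purely a sketch and obviously won't fire if the core never generates the event:

# Sketch only: shows the shape of what I'm after, not working code.
event dns_EDNS_addl(c: connection, msg: dns_msg, ans: dns_edns_additional)
	{
	print fmt("EDNS additional record on %s: %s", c$id$resp_h, ans);
	}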
-Chris
Hello,
We recently lit up a 100G link and are attempting to tackle migrating
our IDS and monitoring infrastructure from 10G to 100G capabilities. We
have an existing set of servers that we are using to evaluate SNORT,
Suricata and Bro on with a 100G Gigamon upstream. For purposes of a Bro
proof of concept I have two of the following Dell 720s to start from:
Dell 720XD
64 GB RAM (1600 MHz RDIMMs)
30 TB (usable) RAID 6, 7.2K RPM SAS 6 Gbps
2 x 146 GB 15K RPM SAS 6 Gbps
2 x Intel Xeon E5-2670 2.60 GHz, 20M cache, 8.0 GT/s QPI, Turbo, 8C
3 x Intel X520 DP 10Gb DA/SFP+
I'm starting from build 2.2-beta-114 and looking at using it and PF_RING
with the DNA drivers for the Intel cards for now as some of the other
popular cards are "complicated" for us to get approval to purchase. I
haven't found much info on running Bro this way other than issue ID 845
<https://bro-tracker.atlassian.net/browse/BIT-845> and even that only
suggests that there is a BroControl plugin in the works for this, but
that it may not be fully tested yet. Has anyone tried the plugin yet or
have any experience configuring Bro and PF_RING/DNA to work together?
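For what it's worth, my understanding (please correct me if this is off) is that recent BroControl can drive PF_RING load balancing directly from node.cfg via lb_method/lb_procs; whether that also covers the DNA variant is exactly what BIT-845 seems to be about. This is the worker entry I was planning to try, with the address, interface name, and process count made up for illustration:

# Hypothetical worker entry; host, DNA interface name, and lb_procs
# are placeholders, not tested values.
[worker-1]
type=worker
host=10.0.0.1
interface=dna0
lb_method=pf_ring
lb_procs=8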
Regards,
--
Gary Faulkner
UW Madison
Office of Campus Information Security
608-262-8591
I've set up a Bro 2.1 instance with a network tap, but keep getting notice
log entries of "PacketFilter::Dropped_Packets". I'm assuming this is
because Bro is single threaded and it needs more workers to keep up with
the traffic, so I'm trying to implement pf_ring to distribute the traffic
across multiple workers. I've installed the pf_ring RPM package from ntop (
http://www.nmon.net/packages/rpm/x86_64/PF_RING/) and that gets the kernel
module loaded but seems to be lacking something still - probably linking
libpcap to pf_ring? That's what I'm not sure about. After installing
pf_ring from the RPM package and configuring Bro for multiple workers it
starts up ok but is still dropping packets (all of the workers, per the
notice log) and pf_ring doesn't appear to be used:
# cat /proc/net/pf_ring/info
PF_RING Version : 5.6.2 ($Revision: 6910$)
Total rings : 0
Standard (non DNA) Options
Ring slots : 4096
Slot version : 15
Capture TX : No [RX only]
IP Defragment : No
Socket Mode : Standard
Transparent mode : Yes [mode 0]
Total plugins : 0
Cluster Fragment Queue : 0
Cluster Fragment Discard : 0
Has anyone had any success with clustered Bro with pf_ring on RHEL/CENTOS,
and did you have to compile it from source and re-compile libpcap? I'd
prefer to stick with the RPM packages since it tends to make updating less
problematic. I installed Bro 2.1 as an RPM package as well.
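If it turns out the RPMs aren't enough, my understanding (possibly wrong) is that I'd have to build the PF_RING-aware libpcap from the PF_RING source tree and then rebuild Bro against it rather than use the Bro RPM, roughly like the sketch below; the paths and directory names are from memory, so treat them as guesses on my part:

# Build PF_RING's userland library and its patched libpcap
cd PF_RING/userland/lib && make && sudo make install
cd ../libpcap && ./configure && make && sudo make install

# Rebuild Bro against the PF_RING-aware libpcap instead of the system one
cd /path/to/bro-source
./configure --with-pcap=/usr/local
make && sudo make install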
Thanks,
Matt
Hello,
I am a pretty new user of Bro and use it as part of the Security Onion
distribution. I recently came across a problem which I was hoping one of
you might be able to help with.
When looking at some telnet connections on a non-standard TCP port I noticed
that some data flows are reported in the wrong direction. When I checked the
conn.log files, all the entries in question had the same characteristics
below:
1. They would only appear in the archived (gzip) conn.*.log.gz files, not
the current conn.log file.
2. Entries would always be at the beginning of the zipped conn.*.log.gz
file.
3. The conn_state field would say RSTR.
4. The history field would be DaFr (on most of them).
Below are some examples. As you can see, the file name reflects the from/to
date/time, and the entries where the flow direction is reversed all match the
characteristics above:
zcat conn.16:27:17-17:00:00.log.gz | bro-cut -d ts proto conn_state history
| grep RSTR
2013-10-25T16:27:12+0000 tcp RSTR DaFr
2013-10-25T16:27:12+0000 tcp RSTR DaFr
2013-10-25T16:27:12+0000 tcp RSTR DaFr
<snip>
zcat conn.18:36:28-19:00:00.log.gz | bro-cut -d ts proto conn_state history
| grep RSTR
2013-10-25T18:36:23+0000 tcp RSTR DaFr
2013-10-25T18:36:23+0000 tcp RSTR DaFr
2013-10-25T18:36:23+0000 tcp RSTR DaFr
<snip>
It almost seems that this happens when the conn.log file is being rotated
and zipped.
Just to give some context, we have a script running which telnets to
multiple devices on a non-standard telnet port, polls certain variables,
and exits.
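One thing I was going to try, purely as a guess, is telling Bro that our non-standard telnet port is a likely server port, so it doesn't flip the originator/responder when it picks a connection up mid-stream; something like this in local.bro, where the port number below is made up rather than our real one:

# Hypothetical port; replace with the actual non-standard telnet port.
redef likely_server_ports += { 2323/tcp };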
Thanks,
Konrad
Hi All,
Here is what I am trying to achieve: Incoming traffic on Host-A should
be sent to worker Host-B (and to more workers in future).
Here is what my config looks like in node.cfg:
Manager: Host-A
Proxy: Host-A
Worker1: Host-B (which is 10.73.149.31)
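In node.cfg terms, that is roughly the following (Host-A and the interface name are placeholders for the real values):

[manager]
type=manager
host=Host-A

[proxy-1]
type=proxy
host=Host-A

[worker-1]
type=worker
host=10.73.149.31
interface=eth0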
I have bro installed on all machines. Now, I start broctl on Host-A:
-bash-4.2$ sudo broctl
Password:
Welcome to BroControl 1.1
Type "help" for help.
[BroControl] > install
removing old policies in
/usr/local/spool/installed-scripts-do-not-touch/site ... done.
removing old policies in
/usr/local/spool/installed-scripts-do-not-touch/auto ... done.
creating policy directories ... done.
installing site policies ... done.
generating cluster-layout.bro ... done.
generating local-networks.bro ... done.
generating broctl-config.bro ... done.
updating nodes ... warning: host 10.73.149.31 is not alive
done.
[BroControl] > install
removing old policies in
/usr/local/spool/installed-scripts-do-not-touch/site ... done.
removing old policies in
/usr/local/spool/installed-scripts-do-not-touch/auto ... done.
creating policy directories ... done.
installing site policies ... done.
generating cluster-layout.bro ... done.
generating local-networks.bro ... done.
generating broctl-config.bro ... done.
updating nodes ... done.
[BroControl] > start
starting manager ...
starting proxy-1 ...
starting worker-1 ...
cannot create working directory for worker-1 <<-- not sure why I get
this message.
[BroControl] >
Do I need to do anything on Worker-1?? Do I need to put it in some special
mode?
Any help/pointers would be appreciated.
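The only other thing I could think of checking is whether the manager can actually reach the worker's install over SSH and write under the same prefix; something like the command below, assuming Bro is installed under the same /usr/local prefix on Host-B and the broctl user has passwordless SSH to it:

ssh 10.73.149.31 'ls -ld /usr/local/spool && touch /usr/local/spool/.broctl-test && rm /usr/local/spool/.broctl-test'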
Cheers,
Hiren
Hello all,
We recently did a fresh install of Bro 2.1 on a new machine as per the
quick start guide. This machine has been watching traffic for about a week
now and all of the logs seem to be fine except for the SSH logs, which have
the following problems.
1) These logs are not adding geo-location information. The MaxMind
databases were installed and put in the proper location, and a quick bro
script that called the lookup_location() function seems to be working fine
in retrieving this information. However, none of this information is
logged, even for heuristically successful connections.
2) About half of the entries in the SSH log have a status of
"undetermined". This is not something we saw before on our older machine,
where every entry was listed as either a 'success' or 'failure' in the
status column.
3) The "resp_size" field of *every* entry is 0. Once again, this is not
something that we have seen before.
I should also mention that we have an older machine watching exactly the
same network as this one (though with a smaller network card), and that one
seems to be picking up on SSH traffic fine. Any idea what's going on here?
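For reference, the quick test script I mentioned in (1) was essentially just the following (the address is a placeholder, not one of ours):

# Sanity check that the MaxMind databases are found and usable.
event bro_init()
	{
	print lookup_location(8.8.8.8);
	}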
Thank you,
N. Siow
Does anyone know if it is possible to dump all the packets associated
with a specific connection to a file? It would be nice to be able to
check during connection_state_remove if a connection was interesting
or not and write it to disk.
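The closest thing I've found so far (and I'd welcome corrections) is that the decision seems to have to be made while packets are still arriving rather than at connection_state_remove, e.g. running Bro with -w and toggling per-connection packet recording early on; a rough sketch of what I mean, with the "interesting" test obviously being a placeholder:

# Assumes bro is started with -w dump.pcap so there is a trace file to write to.
event new_connection(c: connection)
	{
	# Placeholder criterion for "interesting".
	if ( c$id$resp_p == 23/tcp )
		set_record_packets(c$id, T);
	else
		set_record_packets(c$id, F);
	}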
-AK
On Wed, Oct 30, 2013 at 08:08:58PM +0000, Rosebraugh, Connar wrote:
> type ConnDelta: record{
> #time connection was last seen.
> #consider updating to a set, and taking the stdev of the times
> lastSeen: time;
> delta: interval;
> };
>
> function update_time(t: time, c: ConnDelta): interval
> {
> c$delta = c$lastSeen - t;
> c$lastSeen = t;
> }
>
> #event bro_init()
> #{
> # local t1 = current_time();
> # local t2 = network_time();
> # local t3: interval = t1 - t2;
> #
> # print t1;
> # print t2;
> # print t3;
> #}
>
> local connection_deltas: table[addr, addr, port] of ConnDelta;
This needs to be 'global' not 'local'. Possibly a parser bug that it
doesn't return a more helpful error message since it clearly gets
confused.
> @load base/protocols/conn
>
> #event new_connection(c: connection)
> event new_connection(c: connection)
> {
> update_time(c$start_time, connection_deltas[c$id$orig_h, c$id$resp_h, c$id$resp_p]);
> }
The next problem you will run into is that you need to set &default for
connection_deltas that returns a default ConnDelta object for the first
time that this tuple is seen.
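Something along these lines (untested, so treat the exact constructor syntax as a sketch) is what I mean; keep in mind the default value is only returned, not inserted, so you may still need to assign it back into the table when a tuple is first seen:

global connection_deltas: table[addr, addr, port] of ConnDelta
	&default = function(o: addr, r: addr, p: port): ConnDelta
		{
		return [$lastSeen = network_time(), $delta = 0 secs];
		};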
> event bro_done()
> {
> #print connection_deltas;
> }
>
--
-- Justin Azoff
-- Network Security & Performance Analyst