I was wondering if anyone can tell me why the sha256 hash functionality
isn't turned on by default for files.log.
I am working on something and needed to turn it on. I normally only use Bro
to process pcap files offline and have never used it on a live network.
Does it cause performance issues?
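For reference, turning it on for every file is a short snippet in local.bro (a sketch using the files framework's SHA256 analyzer; not the only way to do it):

```bro
# local.bro sketch: attach the SHA256 analyzer to every file Bro sees,
# so the sha256 column in files.log gets populated.
event file_new(f: fa_file)
    {
    Files::add_analyzer(f, Files::ANALYZER_SHA256);
    }
```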
Thanks,
Shawn
Dear list,
I'm having trouble understanding some log entries from my conn.log. I
already learned from this mailing list that Bro cannot reliably determine who
initiated a connection if it does not see the initial connection setup,
which seems logical to me.
But if I look at my conn.log file, I find entries like these:
1446190221.687738  Cbu3fj3FYdODxvLF1h  87.152.221.xxx  50993  192.168.100.yyy  36709  tcp  ssl  122.745965  1238  5340  S1  F  T  0  ShADad  20  2050  19  6112  (empty)
1446190138.746769  CykNrp4VEfzbrJ2vm6  87.152.221.xxx  50993  192.168.100.yyy  36679  tcp  ssl  223.406750  1384  18908  S1  F  T  0  ShADad  39  2956  36  20360  (empty)
It looks like our IMAP server (87.152.221.xxx, running on port 50993)
initiated a connection to my notebook (192.168.100.yyy). That should not
be possible, since there is no port forwarding for this connection.
So my first guess is that bro didn't see the initial connection setup
(midstream traffic, OTH state). But I took a look into the documentation
on https://www.bro.org/sphinx/scripts/base/protocols/conn/main.bro.html
regarding the reported states (S1), which says:
S1 Connection established, not terminated.
This looks to me like bro saw the connection setup. Or did I get
something wrong here?
Oh and by the way: the next paragraph reads:
SF Normal establishment and termination. Note that this is the same
symbol as for state S1. You can tell the two apart because for S1 there
will not be any byte counts in the summary, while for SF there will be.
I don't understand this. Do S1 and SF really only differ in byte count
zero or non-zero? It seems to me that they also differ in "connection
still alive" and "connection was terminated".
Looking further through the logs, I also find entries with the "SF" flag in
which source and destination seem swapped:
1445338094.186121  C9uuKp4dE9nrHo46bd  87.152.220.xxx  50993  192.168.100.yyy  20108  tcp  -  462.348551  401  754  SF  F  T  0  DdAfFa  13  921  12  1234  (empty)
Does anybody have a hint? Did I misunderstand something?
I'm running bro 2.4.1.
Thanks a lot!
Sven
Hi all,
I'm using Bro in Security Onion with Critical Stack for intel feeds; we've also enabled email notices for Bro, which are working well (as per https://github.com/Security-Onion-Solutions/security-onion/wiki/Email).
The email notices generated though just contain something like:
Message: Intel hit on 'some.domain' at 'DNS::IN_REQUEST'
Sub-Message: some.domain
Connection: x.x.x.x -> x.x.x.x Connection uid: aaaaa
Email Extensions
-----
orig/src hostname: box.internal
resp/dst hostname: some.domain
I then have to go grep the Critical Stack intel file for the description related to the particular hit to see what's up.
I've tried, but can't figure out how to add $sources from the intel log into, say, $sub in /opt/bro/share/bro/policy/intel/do_notice.bro.
I'm missing something small - can anyone help me out?
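In case a sketch helps: the intel framework's Intel::match event carries the matched items, and each item's metadata includes the feed name, so the sources can be collected and handed to a custom notice (untested; event and field names are taken from the 2.4 intel framework):

```bro
event Intel::match(s: Intel::Seen, items: set[Intel::Item])
    {
    # Collect the feed names ($sources) from the matched items ...
    local srcs: set[string];
    for ( item in items )
        add srcs[item$meta$source];

    # ... and pass them to your own NOTICE() call, e.g. as $sub.
    print fmt("intel sources: %s", srcs);
    }
```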
Cheers,
Scotty
Dear list,
I'm using logs-to-elasticsearch.bro to send logs to ES, but it is not working.
ES error logs:
[2016-03-25 17:30:52,957][DEBUG][action.bulk] [node-1] [whbro-201603251500][1] failed to execute bulk item (index) index {[whbro-201603251500][dns][AVOtHLQHooGOx5uLgLSQ], source[{"_timestamp":1458898236411,"ts":1458898206267,"uid":"ClbNI74bIcRQ8Gs6Wc","id.orig_h":"10.100.78.88","id.orig_p":137,"id.resp_h":"10.100.79.255","id.resp_p":137,"proto":"udp","trans_id":47282,"query":"ISATAP","qclass":1,"qclass_name":"C_INTERNET","qtype":32,"qtype_name":"NB","AA":false,"TC":false,"RD":true,"RA":false,"Z":1,"rejected":false}]}
MapperParsingException[Field [_timestamp] is a metadata field and cannot be added inside a document. Use the index API request parameters.]
    at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:213)
    at org.elasticsearch.index.mapper.DocumentParser.innerParseDocument(DocumentParser.java:131)
    at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:79)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:304)
    at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:500)
    at org.elasticsearch.index.shard.IndexShard.prepareCreateOnPrimary(IndexShard.java:481)
    at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:214)
    at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:326)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:119)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:595)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:263)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Bro config file:
/usr/local/bro/lib/bro/plugins/Bro_ElasticSearch/scripts/init.bro
module LogElasticSearch;

export {
    ## Destination for the ES logs. Valid options are
    ## "direct" to directly connect to ES and "nsq" to
    ## transfer the logs into an nsqd instance.
    const destination = "direct" &redef;

    ## Name of the ES cluster.
    const cluster_name = "my-application" &redef;

    ## ES server.
    const server_host = "10.100.79.10" &redef;

    ## ES port.
    const server_port = 9200 &redef;

    ## Name of the ES index.
    const index_prefix = "testooo" &redef;

    ## Should the index names be in UTC or in local time?
    ## Setting this to true would be more compatible with Kibana and other tools.
    const index_name_in_utc = F &redef;

    ## Format for the index names.
    ## Setting this to "%Y.%m.%d-%H" would be more compatible with Kibana and other tools.
    #const index_name_fmt = "%Y%m%d" &redef;
    const index_name_fmt = "%Y%m%d%H%M" &redef;

    ## The ES type prefix comes before the name of the related log.
    ## e.g. prefix = "bro\_" would create types of bro_dns, bro_software, etc.
    const type_prefix = "" &redef;

    ## The time before an ElasticSearch transfer will timeout. Note that
    ## the fractional part of the timeout will be ignored. In particular,
    ## time specifications less than a second result in a timeout value of
    ## 0, which means "no timeout."
    const transfer_timeout = 2secs;

    ## The batch size is the number of messages that will be queued up before
    ## they are sent to be bulk indexed.
    const max_batch_size = 1000 &redef;

    ## The maximum amount of wall-clock time that is allowed to pass without
    ## finishing a bulk log send. This represents the maximum delay you
    ## would like to have with your logs before they are sent to ElasticSearch.
    const max_batch_interval = 1min &redef;

    ## The maximum byte size for a buffered JSON string to send to the bulk
    ## insert API.
    const max_byte_size = 1024 * 1024 &redef;

    ## If the "nsq" destination is given, this is the topic
    ## that Bro will push logs into.
    const nsq_topic = "bro_logs" &redef;
}
While parsing SMTP logs, I noticed a bunch of strange data contained in my from/to/subject fields.
Example:
"subject":"=?utf-8?q?CBO_drops_the_March_base=E2=80=A6line?="
"subject":"=?Windows-1252?Q?Automatic_reply:_CBO_drops_the_March_base=85line?=",
"from":"\u0022NAMEOFPERSON\u0022 <first.middle.last(a)something.com>"
Why am I getting all of this extra info in these fields?
I am printing logs as JSON, not CSV.
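For reference, those look like RFC 2047 "encoded-words", the standard way mail clients put non-ASCII text into headers; Bro logs the raw header value as it appeared on the wire. A quick way to decode one (Python stdlib, just for illustration):

```python
# Decode an RFC 2047 encoded-word as it appears in smtp.log.
from email.header import decode_header, make_header

raw = "=?utf-8?q?CBO_drops_the_March_base=E2=80=A6line?="
print(str(make_header(decode_header(raw))))
# -> CBO drops the March base…line
```

The `\u0022` in the from field is just a JSON-escaped double quote, which the ASCII writer escapes when printing logs as JSON.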
Thanks in advance
Hi,
I am a newbie at Bro and wanted to change the log format of the http.log file to JSON. I have made changes in ascii.bro and now all logs are in JSON format, but what do I have to do if I only want http.log in that format while the others keep the default?
Also, my log files still have the .log extension; how can I change it to .json?
Thanks, Regards, Aneela Safdar
Hi all,
I've set up a Bro instance to test out URL extraction from SMTP, using the smtp-embedded-url-bloom.bro scripts. For the most part the extraction/logging is working, but many times I'll find that the host and URL logged are truncated. As an example, I'd see one email listed with 20 links extracted, but one log entry would have the host name as "award" with the URL as "http://award". The remaining URLs for that email look to be extracted correctly.
Has anyone else noticed this issue?
Thanks, Steve
Hi
I have set up Bro 2.4.1 to monitor a bridge interface (br0) with PF_RING.
In the conn.log history field, a lot of entries are missing 'S'.
I did not have this problem with Bro 2.3 and the same setup.
With Bro 2.4.1, if I change br0 to the physical NIC eth4, the problem goes away.
Is there a way to work around this?
Has anyone encountered this kind of problem?
my setup is
cpu: AMD Opteron 6376 32core
ram: 64G
ubuntu 12.04.5
bro 2.4.1
PF_RING version 5.6.1, mode 0, RX+TX
Intel 4-port NIC (igb driver)
*conn.log history count TOP 25 :*
15265 Dd
8796 D
7267 hadfF
6558 hadf
2629 FRa
2294 Fr
1938 hadFf
1883 Fa
1298 S
1245 hadfFR
1134 hf
1067 d
1043 -
1001 F
984 R
858 hdf
700 hdaFf
667 FRr
643 hdfFa
608 ShADadFr
568 ShADfFa
517 r
474 hadR
416 hdafF
393 hFf
363 hdaf
360 hadFR
*bro node.cfg*
[manager]
type=manager
host=localhost
[proxy-1]
type=proxy
host=localhost
[proxy-2]
type=proxy
host=localhost
[br0]
type=worker
host=localhost
interface=br0
lb_method=pf_ring
lb_procs=8
pin_cpus=2,3,4,5,6,7,8,9
*bro networks.cfg*
192.168.0.0/16
*network NIC and bridge setup:*
rmmod igb && modprobe igb
modprobe pf_ring transparent_mode=0 enable_tx_capture=1
ifconfig eth4 down
ethtool -K eth4 rx off
ethtool -K eth4 tx off
ethtool -K eth4 sg off
ethtool -K eth4 tso off
ethtool -K eth4 gso off
ethtool -K eth4 gro off
ifconfig eth4 mtu 1514
ifconfig eth5 down
ethtool -K eth5 rx off
ethtool -K eth5 tx off
ethtool -K eth5 sg off
ethtool -K eth5 tso off
ethtool -K eth5 gso off
ethtool -K eth5 gro off
ifconfig eth5 mtu 1514
brctl addbr br0
brctl addif br0 eth4
ifconfig eth4 promisc up -multicast
brctl addif br0 eth5
ifconfig eth5 promisc up -multicast
brctl stp br0 on
ethtool -K br0 sg off
ethtool -K br0 tso off
ethtool -K br0 gso off
ethtool -K br0 gro off
ethtool -K br0 lro off
ethtool -K br0 rxvlan off
ethtool -K br0 txvlan off
ifconfig br0 mtu 1514
ifconfig br0 promisc up -multicast
--
------------------------------------------------------
Hichul Kim 김희철 선임 연구원
All,
Hoping not to reinvent the wheel or duplicate work, but didn't find much
via google or in git.
Any info/pointers/code to the following areas would be greatly appreciated:
1. DDoS detection script
This was a list topic back in September 2014.
The thread hinted that a script was sent, but I didn't see one in
email or the list archive.
2. bro log reporting
Back in the 2.0/BroLite days, there was 'site-report.pl' that could
generate daily summaries from bro logs.
Anyone using anything similar for recent releases?
3. bro 'configurator'
The dream is for a web interface you could use to configure and
inspect a bro deployment.
Something that scans all the code(base/policy/site) for
modules/vars/notices and generates HTML.
thanks in advance!
Shane