Hello All,
Is there a way to add the Bro server hostname as a field in all of the Bro log types? We have 5 Bro servers capturing traffic on different network nodes, and we are trying to add each server/sensor hostname to every log type so analysts can identify where the logs are coming from.
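One approach (a sketch, assuming Bro/Zeek 2.6+ and that each sensor has a distinct `peer_description`) is the logging framework's default extension function, which stamps an extra field onto every log stream:

```
# Sketch: add a "_system_name" field to every log via the logging
# framework's default extension function (available since 2.6).
type Extension: record {
    # Appears in logs as "_system_name" (Log::default_ext_prefix is "_").
    system_name: string &log;
};

function add_sensor_name(path: string): Extension
    {
    # peer_description is set per node in cluster deployments; any
    # per-sensor string would work here.
    return Extension($system_name = peer_description);
    }

redef Log::default_ext_func = add_sensor_name;
```

Placed in local.bro/local.zeek on each sensor, this should tag every log line without touching the individual log record types.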
v/r
Jawad Rajput
We have about 10 tasks left for the renaming from Bro to Zeek before
the next release. Any help addressing those is appreciated, see this
board: https://github.com/zeek/zeek/projects/2
We're hoping to get these in place within the next 4 weeks. If you can
work on any of these, please assign the ticket to yourself. It's best
to start with a short proposal on what you plan to do. You can also
use the ticket discussion for any further clarification you might
need.
Thanks!
Robin
--
Robin Sommer * Corelight, Inc. * robin(a)corelight.com * www.corelight.com
Dear Zeek-devs,
I think it is a common experience to need to evaluate Zeek's performance under different script-level customizations, e.g. when using one event rather than another, or a new plugin. Obviously this heavily depends on the traffic Zeek is analyzing, but it would be of great help if there were a tool that could measure the amount of time Zeek has spent on certain events/plugins while analyzing traffic. Is something like this available?
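Not a full per-event profiler, but as a possible starting point: the stock misc/profiling policy script periodically dumps interpreter and memory statistics to prof.log. A sketch (the interval values here are illustrative, not recommendations):

```
# Sketch: enable Zeek's built-in profiling output (prof.log).
@load misc/profiling

# How often to write a profile snapshot (illustrative value).
redef profiling_interval = 15 secs;

# Collect the more detailed (and more expensive) statistics only on
# every Nth profiling round.
redef expensive_profiling_multiple = 20;
```

Whether prof.log's granularity is fine enough for comparing individual events/plugins is another question; for plugin-level timing, external profilers (perf, gprof) on the zeek process may still be needed.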
Thanks in advance.
Mauro
Hello Zeek Devs,
I would like to write a protocol analyzer and need some direction. I would
like to write something simple which works on TCP, similar to the ConnSize
analyzer. I would like my analyzer to be distributed as a plugin, similar
to MITRE's HTTP2 analyzer, so I am following the docs here:
https://docs.zeek.org/en/stable/devel/plugins.html
However, the docs don't detail much beyond creating a built-in function. A
colleague pointed me at this quickstart script for binpac:
https://github.com/grigorescu/binpac_quickstart
The quickstart script seems to be intended for writing a protocol analyzer
that gets merged into the Zeek source tree, which is not how plugins operate.
I'm looking for some guidance on how to proceed. Thanks in advance.
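One script-level piece that often goes unmentioned in the plugin docs: once the plugin registers its analyzer component, the analyzer still has to be attached to traffic. A minimal sketch (Analyzer::ANALYZER_FOO and the port are hypothetical; the real tag name is derived from the component name the plugin registers via AddComponent()):

```
# Sketch: attach a plugin-provided analyzer to a TCP port.
event zeek_init()
    {
    Analyzer::register_for_port(Analyzer::ANALYZER_FOO, 12345/tcp);
    }
```

This goes in the plugin's scripts/ directory (or a test script) so the analyzer actually receives connections during development.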
-AK
Hello Zeek Devs,
I am working with the intel framework, using intel data from a file
which is updated periodically. As far as I have seen in the
documentation, it should be possible to update this file with new data
and Zeek can adjust its behavior accordingly without restarting. The
intel file must be loaded with mode=REREAD to achieve this.
However, I noticed that this works fine if new entries are added to the
intel data file, but NOT if some entries are removed (for example, if an
IP address previously believed to be malicious is removed from the intel
file because it was later found to be safe). At the script level, the
intel framework stores its data in
global data_store: DataStore &redef;
and there are functions for removing items from the store (
remove(item: Item, purge_indicator: bool) and remove_meta_data(item: Item):
bool ), but I am not sure they are actually called anywhere.
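For reference, a sketch of invoking the script-level removal function directly, assuming a Zeek version whose intel framework exports Intel::remove (the indicator and source values are illustrative):

```
@load base/frameworks/intel

# Sketch: explicitly retract an indicator that turned out to be benign.
event zeek_init()
    {
    local item = Intel::Item($indicator = "192.0.2.10",
                             $indicator_type = Intel::ADDR,
                             $meta = Intel::MetaData($source = "my-feed"));
    # T = purge the indicator even if other sources still reference it.
    Intel::remove(item, T);
    }
```

The open question in this thread is whether the input framework's REREAD mode ever calls this path automatically when a line disappears from the feed file.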
Is anyone aware of this issue? Is it a work in progress? Thanks in advance.
Mauro
I am working on improving the btests for the kafka writer plugin with the
goal of validating some logic in KafkaWriter::DoInit. The best approach
that I have so far is to write a BiF and use it in both DoInit and the
btest via Zeek policy, but I have only been able to find limited
documentation[1][2] on the topic.
I've looked around for examples of this approach without success, including
in the past few years of the Zeek-dev mailing list archives. I explicitly
want to stay away from the assumption that the Manager/Logger has a kafka
broker available to it at the time of testing.
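To make the "via Zeek policy" part concrete, a sketch of the btest-side script I have in mind (Kafka::SelectTopicName assumes the BiF is declared inside a `module Kafka;` block in the .bif file; the argument values are illustrative):

```
# Sketch: a btest zeek script exercising the plugin's BiF directly,
# with no Kafka broker required.
print Kafka::SelectTopicName("override-topic", "default-topic", "fallback-topic");
print Kafka::SelectTopicName("", "default-topic", "fallback-topic");
```

The btest baseline would then just capture the printed topic names.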
My BiF is fairly simple:
```
function SelectTopicName%(override: string, default: string, fallback: string%) : string
%{
// Things
%}
```
bifcl appears to be generating the following:
```
namespace BifFunc { namespace Kafka {
extern Val* bro_SelectTopicName(Frame* frame, val_list*);
} }
```
At this point I'm just randomly poking around in Zeek/src trying to find my
way - any pointers regarding how to use this function in C++ (or another
approach altogether) would be appreciated. Thanks,
1: https://www.zeek.org/development/howtos/bif-doc/index.html#functions
2: https://www.zeek.org/development/howtos/bif-doc/example.html
Jon
--
Jon Zeolla
Hi,
I'm trying to write an analyzer with Kerberos GSSAPI decryption support. Currently krb5_init_context() is called to get a krb5_context inside the KRB
analyzer. I think it would be good to share that context among all the components that need to call the KRB5 API.
Is there any mechanism to do so? Or should I just call krb5_init_context() in main.cc and export the context via a new .h file?
Best regards,
Song
Hi Ronka,
Did you mean doing everything in a single analyzer? That would make things complicated. As I said, the clear text extracted from
a single Wrap Token may be just a fragment of an RPC PDU, so we need to reassemble those fragments into a complete RPC PDU and
then feed the resulting RPC PDU to an RPC type.
The simplest solution for the reassembly I can think of is to delegate the work to a dedicated RPC flowunit analyzer. Please
note that this is a completely separate analyzer; I have 2 analyzers in 1 plugin (two AddComponent() calls in Plugin.cc).
The other solution I can think of is to do the reassembly inside the flow or connection, maybe implemented with FlowBuffer. But
I think the code would not be trivial (more state to keep, more boundary checks to do, buffer management, ...) and I'm too lazy ...
Song
------------------ Original ------------------
From: "ronka_mata"<ronka_mata(a)centrum.cz>;
Date: Sat, Mar 9, 2019 06:18 PM
To: "Song"<oldpopsong(a)qq.com>;"zeek-dev(a)zeek.org"<zeek-dev(a)zeek.org>;
Cc: "zeek-dev"<zeek-dev(a)zeek.org>;
Subject: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered by exportsourcedata
Did you think about keeping a member variable inside dce_Conn to use as a switch? Once you have established whether you are getting plain-text or encrypted RPC, you can set the variable appropriately and make your PDU look like...
type test() = record {
    len  : ...;
    body : case $context.connection.my_switch_var() of {
        PLAIN -> plain : test_abc(len);
        ENCR  -> encr  : ...;
    };
};
...and then test_abc can define the RPC request/response PDU, or forward to GSSAPI.
You would have to reset the variable on every gap/connection restart/..., but you would not need to feed the data back again.
Sorry, I am currently on phone, so proof of concept coming later.
Ronka
____________________________________________________________
> Od: "Song" <oldpopsong(a)qq.com>
> Komu: "zeek-dev(a)zeek.org" <>
> Datum: 09.03.2019 10:41
> Předmět: Re: [Zeek-Dev] Fwd: Re: Fwd: binpac crash triggered by exportsourcedata
>
I tried to set the flow type of the controlling analyzer to datagram and used &exportsourcedata. Although the resulting
analyzer works great against my test pcap file, after checking the code binpac generated I think the datagram analyzer
is not suited to TCP-based protocols. Below is the generated code:

2308 void TEST_Flow::NewData(const_byteptr t_begin_of_data, const_byteptr t_end_of_data)
2309 {
2310     try
2311     {
2312         dataunit_ = new TEST_PDU(is_orig());
2313         context_ = new ContextTEST(connection(), this);
2314         int t_dataunit__size;
2315         t_dataunit__size = dataunit_->Parse(t_begin_of_data, t_end_of_data, context_);
2316         // Evaluate 'let' and 'withinput' fields
2317         delete dataunit_;
2318         dataunit_ = 0;
2319         delete context_;
2320         context_ = 0;
2321     }
2322     catch ( binpac::Exception const &e )
2323     {
2324         delete dataunit_;
2325         dataunit_ = 0;
2326         delete context_;
2327         context_ = 0;
2328         throw;
2329     }
2330 }

Notice that in line #2312 every piece of data will be treated as a new PDU, which obviously is not good for a TCP
data stream. I think now the only option I have is to build a new bytestring from the length and data fields and feed
it to the RPC analyzer. This solution is bad from a performance point of view, since we have to do 2 extra memory
copies: first to generate the data field, second to regenerate the original whole PDU.
Hi Ronka,
The protocol I'm trying to analyze supports multiple authentication methods, including SASL Kerberos GSSAPI.
After authentication, according to the authentication method chosen and security layer negotiated, the RPC
requests/responses followed could be in plain text, signed or encrypted.
In the plain text form, the PDU is like:
<4 bytes length field>
<request/response data with length indicated by the 4 bytes length field>
While in signed or encrypted form, the outermost layer of the PDU is like:
<4 bytes length field>
<Kerberos 5 GSSAPI Wrap Token with length indicated by the 4 bytes length field>
In the latter case, the RPC request/response PDU (including the 4-byte length field indicating the length of the
request/response data) is encapsulated in the Wrap Tokens. It is possible that a big RPC request/response will
be carried by multiple Wrap Token PDUs.
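The plain-text framing described above, as a binpac sketch (flowunit flow assumed; all names are illustrative, mirroring the test_pdu example earlier in this thread):

```
# Sketch: 4-byte length prefix followed by `len` bytes of payload.
type sasl_pdu = record {
    len  : uint32;
    body : bytestring &length = len;
} &length = len + 4;
```

In the signed/encrypted case, `body` would instead hold a Wrap Token that must be decrypted and reassembled before the inner RPC PDU can be parsed.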
So I have two analyzers:
- controlling analyzer: deals with authentication and decryption, forwards decrypted RPC PDU data to the RPC analyzer
- RPC analyzer: decodes RPC requests/responses
I need the &exportsourcedata for the plain text case in which the whole controlling analyzer PDU should be forwarded
to the RPC analyzer.
Today I will try to change the type of controlling analyzer to datagram.
Best regards,
Song
------------------ Original ------------------
From: "ronka_mata"<ronka_mata(a)centrum.cz>;
Date: Fri, Mar 8, 2019 10:08 PM
To: "Song"<oldpopsong(a)qq.com>;
Cc: "zeek-dev(a)zeek.org"<zeek-dev(a)zeek.org>;
Subject: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata
Hi Song,
Could you explain a bit more what you are trying to achieve? Do you want to deliver the same data to two analyzers? Or just part of it? Or deliver to one if a condition is met and to the second one otherwise?
Do you have to wait until the data has been processed by the first analyzer before invoking the second, or can you call the second one in the DeliverStream function? Could you make the second one a child analyzer and then call the ForwardStream function, or some similar approach?
I understand where the problem is with your current code. I was not able to get around the len problem yet, but I will give it a go a bit later today, unless someone else knows the solution first-hand.
For delivering parts of the stream data, defined as a bytestring, you can take as an example the forward_dce_rpc function in smb-pipe.pac.
Hope this helps a bit.
Ronka
__________________________________________________________
> Od: "Song" <oldpopsong(a)qq.com>
> Komu: "ronka_mata" <ronka_mata(a)centrum.cz>
> Datum: 08.03.2019 05:01
> Předmět: Re: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata
>
> CC: <zeek-dev(a)zeek.org>
Thank you Ronka.

It is a flowunit analyzer. I checked the zeek source tree and found that there is only 1 flowunit analyzer (tls-handshake) that uses the exportsourcedata directive. I guess that exportsourcedata only applies to non-incremental types. Maybe these are true:
- all types in a datagram analyzer can use the exportsourcedata directive
- only non-incremental types in a flowunit analyzer can use exportsourcedata

But I'm not sure what a non-incremental type is; I have to check the generated code.

The reason I want the sourcedata field is that I want to feed the whole test_pdu to another analyzer. Now, as a workaround, I have to do something like this:

test_rpc->DeliverStream(${data}.length() + 4, ${data}.begin() - 4, is_orig);

to bring back the first 4 bytes to form the original whole PDU. Maybe I should try a datagram analyzer.

Song

------------------ Original ------------------
From: "ronka_mata"<ronka_mata(a)centrum.cz>;
Date: Thu, Mar 7, 2019 10:05 PM
To: "Song"<oldpopsong(a)qq.com>;
Cc: "zeek-dev"<zeek-dev(a)zeek.org>;
Subject: Fwd: Re: Fwd: [Zeek-Dev] binpac crash triggered by exportsourcedata

Hi,

What might help is checking how you defined the PDU in the .pac file: whether it is datagram, mostly used for DNS-type traffic, or flowunit. You can read more on it here: https://github.com/zeek/binpac/blob/master/README.rst#flow
You do not need to define a length for datagrams. Look at other protocols for examples of the differences, e.g. radius for datagrams and smb for flows.

Ronka

---------- Forwarded message ---------
From: Song <oldpopsong(a)qq.com>
Date: Thu, Mar 7, 2019, 13:40
Subject: [Zeek-Dev] binpac crash triggered by exportsourcedata
To: zeek-dev <zeek-dev(a)zeek.org>

Hi,

I define a PDU like below:

type test_pdu = record {
    lenAB : uint32;
    pduAB : test_pdu_ab(lenAB);
} &length=(lenAB + 4), &exportsourcedata; # fails to compile without &length; &exportsourcedata will cause a binpac crash

type test_pdu_ab(len: uint32) = record {
    lenA  : uint16;
    dataA : bytestring &length = lenA;
    dataB : bytestring &length = (len - 2 - lenA);
} &exportsourcedata; # &exportsourcedata here is OK

The error message is:

binpac: /home/grid/git/zeek/aux/binpac/src/pac_type.cc:857: std::__cxx11::string Type::EvalLengthExpr(Output*, Env*): Assertion `!incremental_input()' failed.
Aborted (core dumped)
I have a customer that will be storing PBs of data, and they will be using Zeek to analyze it (not all of it at once). They would like to use a NAS (network-attached storage) and have asked me to validate that it will work. I have gone through the documentation but do not see any references to NAS or external storage.
Any assistance would be greatly appreciated.
Thank You,
John W. O'Dell