What tool is using Zeek in this manner? The way that tool uses Zeek might
affect the answers.
You might want to ask this on the Slack channel.
I've cc'd the mailing list, so you might want to join that and reply there.
On Thu, Mar 24, 2022 at 8:31 AM Noel Rodriguez Gutierrez <mail(a)zeek.org>
> Hello Zeek Team,
> Our current security tool uses Zeek as a NIDS, and I have some
> questions that I hope I can get answers to. The first one is about the
> address scan (not port scan) notice. The rule is firing on a bunch of
> endpoints, all over port 443. The amount of traffic seems a little
> excessive in my opinion, and I was wondering whether this is behavior
> that has been seen from Windows computers. I also don't get the IPs
> that they try to scan; it just gives me the originating IP and that it
> has scanned x number of unique hosts. Is there a way that we can make
> it give us the IPs that it tried to "scan"?
> Thank you very much
> team mailing list -- team(a)lists.zeek.org
> To unsubscribe send an email to team-leave(a)lists.zeek.org
Strategist and Author in Residence, Corelight
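Regarding getting the scanned IPs out of the address-scan detection: the stock scan policy only reports a count of unique hosts, but a site script can record the targets itself and attach them to the notice. A minimal sketch, assuming the stock misc/scan policy is loaded; the table name scan_targets and the 15-minute expiry are illustrative choices, not part of Zeek:

```zeek
# Illustrative sketch (not part of the stock scan policy): keep a
# short-lived table of which responder addresses each originator has
# touched, and attach that list to the Address_Scan notice.
@load misc/scan

global scan_targets: table[addr] of set[addr] &create_expire=15min;

event connection_attempt(c: connection)
	{
	local orig = c$id$orig_h;
	if ( orig !in scan_targets )
		scan_targets[orig] = set();
	add scan_targets[orig][c$id$resp_h];
	}

hook Notice::policy(n: Notice::Info)
	{
	# Enrich the notice's "sub" field with the addresses we recorded.
	if ( n$note == Scan::Address_Scan && n?$src && n$src in scan_targets )
		n$sub = fmt("scanned: %s", scan_targets[n$src]);
	}
```

Tracking every originator this way costs memory on busy networks, which is presumably why the stock policy keeps only counters; the expiry interval bounds that cost.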
You might have heard that OpenSSL announced a new security update
release yesterday (https://www.openssl.org/news/vulnerabilities.html).
The highest-severity bug they fixed is CVE-2022-0778, which can trigger
an infinite loop while parsing certificates.
As Zeek uses OpenSSL for certificate parsing, this bug directly
affects Zeek. A Zeek process will hang when parsing traffic that contains
a malicious certificate crafted to exploit this vulnerability.
Updating to patched versions of OpenSSL, and restarting Zeek, will fix
this issue. We urge everyone to update to a patched version of OpenSSL
as soon as possible.
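One way to check whether a host is on a fixed OpenSSL build; this is a sketch, and the assumption that Zeek is dynamically linked against the system OpenSSL may not hold for your install:

```shell
# Check the OpenSSL version in use; per the OpenSSL advisory,
# CVE-2022-0778 is fixed in 1.0.2zd, 1.1.1n, and 3.0.2.
openssl version

# If Zeek is dynamically linked, see which libssl/libcrypto it loads.
# The zeek binary location is an assumption; adjust for your install.
command -v zeek >/dev/null && ldd "$(command -v zeek)" | grep -iE 'ssl|crypto' || true
```

If Zeek was built against a bundled or statically linked OpenSSL, updating the system library is not sufficient; the Zeek binary itself has to be rebuilt or upgraded.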
Currently we are using file_sniff events plus some conditional logic to selectively invoke Files::ANALYZER_EXTRACT on a subset of files for further analysis. However, this approach leads to a lot of duplication, and we would like to narrow it down - specifically, to exclude files based on known hashes. So I tried:
file_sniff event -> sometimes invoke Files::ANALYZER_MD5
file_hash event -> after some logic to confirm this is the hash produced by the previous step, invoke the full Files::ANALYZER_EXTRACT
but this approach results in
"Reporter::WARNING","message":"Analyzer Files::ANALYZER_EXTRACT not added successfully to file
which, based on these threads I found...
...what I got appears to be the expected result - the ANALYZER_EXTRACT call comes "too late" in the file's event lifecycle - presumably because by the time file_hash fires, the file's data has already been delivered, so an extraction analyzer can no longer attach - but I'm still not clear on exactly why.
Is there documentation somewhere on the file analysis / event lifecycle that documents when and how file analysis can be triggered, and the limitations that appear to be implicit?
It also seems like a common enough use case that someone must have solved this problem in a more elegant way than the threads I found propose (extract everything, then delete some).
Hoping someone has some insights here they would be willing to share?
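For reference, the two-step flow described above looks roughly like this in Zeek script; the known_hashes set and its contents are made-up placeholders. As written, the second add_analyzer call reproduces the "not added successfully" warning, because file_hash is raised only once hashing completes - i.e., at the end of the file - by which point there is no remaining data for an extraction analyzer to consume:

```zeek
# Sketch of the two-step approach from the message above; the
# known_hashes set is a placeholder, not a stock Zeek construct.
global known_hashes: set[string] = { "d41d8cd98f00b204e9800998ecf8427e" };

event file_sniff(f: fa_file, meta: fa_metadata)
	{
	# Step 1: hash every candidate file first.
	Files::add_analyzer(f, Files::ANALYZER_MD5);
	}

event file_hash(f: fa_file, kind: string, hash: string)
	{
	if ( kind == "md5" && hash !in known_hashes )
		# Step 2: this is the call that fails with
		# "Analyzer Files::ANALYZER_EXTRACT not added successfully",
		# since the file's data has already streamed past.
		Files::add_analyzer(f, Files::ANALYZER_EXTRACT);
	}
```

This timing constraint is why the threads end up at "extract everything, then delete by hash": extraction has to be attached while data is still arriving, but the hash is only known after it has all arrived.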
I apologize if this is the wrong list to ask this question. I'm working on creating a Wazuh cluster. Can I install Bro on that cluster, or do I need to create a separate server cluster and ship the logs to the Wazuh cluster?
My apologies if this is a remedial question.
I have noticed, while viewing the conn.log file, that some of the timestamp
fields in the entries are not in chronological order. For example, viewing
a section of this file I see:
Is this normal?
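If the out-of-order entries stem from the fact that a conn.log line is written when the connection ends or times out, while its ts field records when the connection began, then sorting numerically on ts restores chronological order. A tiny self-contained demonstration; the sample data below is made up, not a real conn.log:

```shell
# Sample data only: two fabricated log lines where the later-written
# entry has the earlier start timestamp.
printf '1648100000.2\tCabc\t10.0.0.2\n1648099990.1\tCdef\t10.0.0.3\n' > sample.log

# Numeric sort on the first (ts) column puts them in chronological order.
sort -n sample.log
```

On a real conn.log, the same idea works with zeek-cut, e.g. `zeek-cut ts uid < conn.log | sort -n`.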