Thanks for the details. I am aware of MarkII and am
reading through it.
How, as a community, can we update that clustering documentation? If it’s
not accurate, it could very easily turn people away.
On Sun, Apr 28, 2019 at 6:29 PM Michał Purzyński <
These rules aren't current anymore and,
frankly, were never accurate.
How fast Zeek runs depends on the traffic you have, whether you have
elephant flows (and how you deal with them), which scripts you run, etc. I
remember pushing between 5-10 Gbit/sec through a server with 24 cores (not
threads), with room to spare.
You will also need memory, and depending on the scripts you intend to
write, that might be quite a lot. We run with 192 GB per server.
Do you have 100Gbit of traffic or 100Gbit interfaces?
Either way, you're gonna build yourself a cluster with a packet broker in
front of it. Arista works well; other people use different brands,
depending on your needs and your budget.
While you're at it, give those tuning guides I wrote with the Suricata
developers a read; they apply to Zeek as well. Of course, Suricata can
process far more traffic per core than Zeek, because the processing it does
is much simpler.
On Sun, Apr 28, 2019 at 11:35 AM Woot4moo <tscheponik(a)gmail.com> wrote:
My understanding is that 4,000+ CPU cores would
be necessary to support
this throughput. At the recent CERN meeting, I recall someone describing
200 Gbps, which would imply 8,000+ CPU cores. Is this accurate, or
am I doing a conversion incorrectly?
I am basing this purely on this quote, from
“The rule of thumb we have followed recently is to allocate
approximately 1 core for every 250Mbps of traffic that is being analyzed.
However, this estimate could be extremely traffic mix-specific. It has
generally worked for mixed traffic with many users and servers. For
example, if your traffic peaks around 2Gbps (combined) and you want to
handle traffic at peak load, you may want to have 8 cores available (2048 /
250 == 8.2). ”
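For what it's worth, applying that quoted rule of thumb is straightforward arithmetic. The sketch below is a minimal illustration, assuming the quote's own 1 Gbps == 1024 Mbps conversion (that's how it turns 2 Gbps into 2048 Mbps); the function name is made up for this example, and per the reply above the 250 Mbps/core figure itself shouldn't be trusted:

```python
def estimated_cores(traffic_gbps: float, mbps_per_core: float = 250.0) -> float:
    """Cores suggested by the quoted rule of thumb (1 core per 250 Mbps).

    Uses the quote's own conversion of 1 Gbps == 1024 Mbps
    (it treats 2 Gbps as 2048 Mbps).
    """
    return traffic_gbps * 1024 / mbps_per_core

print(estimated_cores(2))    # 8.192  -> the quote's "8 cores"
print(estimated_cores(100))  # 409.6  -> roughly 400 cores for 100 Gbps
print(estimated_cores(200))  # 819.2  -> roughly 800 cores for 200 Gbps
```

Taken at face value, the rule puts 100 Gbps at roughly 400 cores rather than 4,000+, so the conversion in the question appears off by a factor of ten; either way, the real answer depends on traffic mix and scripts, as noted above.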