Okay, even if this works, I don't think you'll see a gain in performance by itself.
To actually get a performance gain in this case, let's say you have 4 cores, with one Bro process pinned to each (as sketched after this list):
Core 0 running Bro Manager,
Core 1 running Bro Proxy,
Core 2 running Bro Worker1,
Core 3 running Bro Worker2.
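A minimal sketch of that pinning on Linux, assuming the processes are
already running and you've looked up their PIDs yourself (the PIDs below
are placeholders):

  taskset -cp 0 <manager-pid>   # manager -> Core 0
  taskset -cp 1 <proxy-pid>    # proxy   -> Core 1
  taskset -cp 2 <worker1-pid>  # worker1 -> Core 2
  taskset -cp 3 <worker2-pid>  # worker2 -> Core 3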
For max performance (and really any performance gain) through cache
localization, you'd want all traffic entering Bro to go to either Core 2
or Core 3, with each of these cores coupled to its own RX queue. You
would somehow need to split the traffic coming off the wire between
these two queues at the driver layer, and do it intelligently, so that
packets sharing state land in the same RX queue. This has to be done via
RSS, and I have no idea how to do that on my network card, an Intel
82598EB. (You couldn't use Click for this, because it has to happen at
the driver level.)
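For what it's worth, on Linux something along these lines might get
partway there with ethtool and IRQ affinity (untested; whether the ixgbe
driver for the 82598EB and your ethtool version support these options is
exactly what I don't know):

  # hash TCP flows on src/dst IP and ports, so each flow sticks to one queue
  ethtool -N eth0 rx-flow-hash tcp4 sdfn
  # spread the RSS indirection table evenly across the first 2 queues
  ethtool -X eth0 equal 2
  # pin each queue's IRQ to its worker core (IRQ numbers from /proc/interrupts)
  echo 4 > /proc/irq/<irq-queue0>/smp_affinity   # mask 0x4 = Core 2
  echo 8 > /proc/irq/<irq-queue1>/smp_affinity   # mask 0x8 = Core 3

One caveat: the standard RSS Toeplitz hash is not symmetric, so the two
directions of a connection can land in different queues. That is the
"packets sharing state" problem you'd still have to solve, e.g. with a
symmetric hash key.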
What do you think?
Sunjeet
On 10-12-15 7:14 PM, Justin Azoff wrote:
On Wed, Dec 15, 2010 at 05:11:18PM -0500, Sunjeet Singh wrote:
Yes, that's a great idea. But I'm not sure how Bro would handle
manager-proxy-worker communication between different RX queues instead
of different interfaces. Can't be as simple as writing eth0@1, etc. in
the cluster's node.cfg file. Maybe some changes to Bro code?
Sunjeet
Putting eth0@1,2,3,4 etc. in node.cfg should work just fine.
No changes to Bro are needed, but you may have to rebuild Bro with
./configure --enable-cluster...
The config I use with Click just has:
[manager]
type=manager
host=10.10.1.12
[proxy-1]
type=proxy
host=10.10.1.12
[worker-1]
type=worker
host=10.10.1.12
interface=tap0
[worker-2]
type=worker
host=10.10.1.12
interface=tap1
[worker-3]
type=worker
host=10.10.1.12
interface=tap2
[worker-4]
type=worker
host=10.10.1.12
interface=tap3
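The per-queue version would presumably just swap the taps for queue
names, assuming the libpcap Bro links against (e.g. PF_RING's)
understands the eth0@N syntax:
[worker-1]
type=worker
host=10.10.1.12
interface=eth0@1
[worker-2]
type=worker
host=10.10.1.12
interface=eth0@2
[worker-3]
type=worker
host=10.10.1.12
interface=eth0@3
[worker-4]
type=worker
host=10.10.1.12
interface=eth0@4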