This section helps you determine whether your environment meets the requirements for a medium workload. It provides guidance on hardware requirements and on tuning the workload's performance. You can compare this information with the guidance for a small workload.
The following table provides an example of how event ingestion activities might occur in a medium workload:
| Application | Category | Expected Workload |
|---|---|---|
| Fortinet FortiGate | Events per second | 7600 |
| Microsoft Windows | Events per second | 6000 |
| Infoblox NIOS | Events per second | 4000 |
| Blue Coat, Check Point, Cisco | Events per second | 1900 |
| ArcSight Recon | Events per second | 19500 |
| ArcSight Recon | Searches (concurrent) | 3 |
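The per-source rates in the table sum to the total ingestion figure shown for ArcSight Recon, which you can confirm with a short calculation (a sketch; the 86,400-seconds-per-day conversion is the only value not taken from the table):

```python
# Per-source event rates from the table above (events per second).
eps = {
    "Fortinet FortiGate": 7600,
    "Microsoft Windows": 6000,
    "Infoblox NIOS": 4000,
    "Blue Coat, Check Point, Cisco": 1900,
}

total_eps = sum(eps.values())
events_per_day = total_eps * 86_400  # seconds in a day

print(total_eps)       # 19500, matching the ArcSight Recon row
print(events_per_day)  # 1684800000 events ingested per day
```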
The following table lists the hardware requirements for a medium workload:

| Category | Requirement |
|---|---|
| Single node (master and worker) | 1 (G10-L7700) |
| CPU cores (per node) | 48 |
| RAM (per node) | 192 GB |
| Disks (per node) | 4 (7500 rpm) |
| Storage per day (1x) | 0.9 TB |
| Total disk space (12 billion events) | 10.8 TB |
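The storage figures above also imply an average on-disk footprint per event, obtained by dividing the total disk space by the stated event capacity (a sketch using decimal terabytes; it uses only the table's own numbers):

```python
# Figures from the table above (decimal units: 1 TB = 10**12 bytes).
total_disk_bytes = 10.8e12        # Total disk space: 10.8 TB
event_capacity = 12_000_000_000   # 12 billion events
daily_storage_bytes = 0.9e12      # Storage per day: 0.9 TB

bytes_per_event = total_disk_bytes / event_capacity
print(bytes_per_event)  # 900.0 bytes per event on disk

# At that footprint, one day of storage holds about a billion events.
print(daily_storage_bytes / bytes_per_event)  # 1000000000.0
```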
The following table lists the Vertica tuning settings for a medium workload:

| Category | Property | Value |
|---|---|---|
| Vertica | `active_partitions` | 8 |
| Vertica | `tm_concurrency` | 5 |
| Vertica | `tm_memory` | 6,000 |
| Resource pools | `ingest_pool_memory_size` | 30% |
| Resource pools | `ingest_pool_planned_concurrency` | 12 |
| Schedule | `plannedconcurrency` | 5 |
| Schedule | `tm_memory_usage` | 10,000 |
| Schedule | `maxconcurrency` | 7 |
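For reference, resource-pool values like those above are typically applied in Vertica with `ALTER RESOURCE POOL`. The sketch below is illustrative only: the pool names and the mapping of the table's property names onto Vertica parameters are assumptions to confirm against your Vertica tuning documentation, not a procedure from this document.

```sql
-- Illustrative sketch only: pool names and the exact parameter mapping are
-- assumptions; confirm syntax and units in the Vertica documentation.
ALTER RESOURCE POOL ingest_pool MEMORYSIZE '30%' PLANNEDCONCURRENCY 12;
ALTER RESOURCE POOL scheduler_pool PLANNEDCONCURRENCY 5 MAXCONCURRENCY 7;
```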
The following table lists the Kafka cluster properties for a medium workload:

| Property | Quantity |
|---|---|
| # of Kafka broker nodes in the Kafka cluster | 1 |
| # of ZooKeeper nodes in the ZooKeeper cluster | 1 |
| # of partitions assigned to each Kafka Topic | 12 |
| # of replicas assigned to each Kafka Topic | 1 |
| # of message replicas for the `__consumer_offsets` Topic | 1 |
| Schema Registry nodes in the cluster | 1 |
| Kafka nodes required to run Schema Registry | 1 |
| # of CEF-to-Avro Stream Processor instances to start | 2 |
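The partition and replica counts above correspond to the `--partitions` and `--replication-factor` options of Kafka's `kafka-topics.sh` tool. The sketch below is illustrative only: the topic name and broker address are placeholders, not values from this document.

```shell
# Illustrative sketch: "example-topic" and the broker address are placeholders.
# 12 partitions and a replication factor of 1 match the table above.
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic example-topic \
  --partitions 12 \
  --replication-factor 1
```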