This section helps you determine whether your environment meets the requirements for a small workload. It provides guidance on hardware requirements and on tuning the performance of the workload. You can compare this information with the guidance for medium workloads.
The following table provides an example of how event ingestion activities might occur in a small workload:
| Application | Category | Expected Workload |
|---|---|---|
| Microsoft Windows | Events per second | 375 |
| Fortinet Fortigate | Events per second | 375 |
| Infoblox NIOS | Events per second | 375 |
| Blue Coat, Check Point, Cisco | Events per second | 375 |
| ArcSight Recon | Events per second | 1500 |
| ArcSight Recon | Searches (concurrent) | 3 |
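As a quick sanity check, the per-source rates in the table sum to the 1500 events per second that ArcSight Recon is expected to ingest. A minimal sketch of that arithmetic:

```python
# Per-source ingestion rates from the table above (events per second).
source_eps = {
    "Microsoft Windows": 375,
    "Fortinet Fortigate": 375,
    "Infoblox NIOS": 375,
    "Blue Coat, Check Point, Cisco": 375,
}

# Total ingest rate the Recon node must sustain.
total_eps = sum(source_eps.values())
print(total_eps)  # 1500
```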
| Category | Requirement |
|---|---|
| Single node (master and worker) | 1 |
| CPU cores (per node) | 8 |
| RAM (per node) | 32 GB |
| Disks (per node) | 1 |
| Storage per day (1x) | 15 GB |
| Total disk space (1.5 billion events) | 500 GB |
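Dividing the provisioned disk space by the daily ingest volume shows the approximate retention window these figures imply (a rough estimate; it ignores compression changes and overhead):

```python
# Sizing figures from the table above.
storage_per_day_gb = 15   # daily ingested volume (1x)
total_disk_gb = 500       # total provisioned disk space

# Approximate number of days of data the disk can hold.
retention_days = total_disk_gb / storage_per_day_gb
print(round(retention_days))  # 33
```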
| Category | Property | Value |
|---|---|---|
| Vertica | active_partitions | 8 |
| Vertica | tm_concurrency | 5 |
| Vertica | tm_memory | 6,000 |
| Resource pools | ingest_pool_memory_size | 30% |
| Resource pools | ingest_pool_planned_concurrency | 12 |
| Schedule | plannedconcurrency | 5 |
| Schedule | tm_memory_usage | 10,000 |
| Schedule | maxconcurrency | 7 |
| Property | Quantity |
|---|---|
| # of Kafka broker nodes in the Kafka cluster | 1 |
| # of ZooKeeper nodes in the ZooKeeper cluster | 1 |
| # of partitions assigned to each Kafka topic | 12 |
| # of replicas assigned to each Kafka topic | 1 |
| # of message replicas for the __consumer_offsets topic | 1 |
| # of Schema Registry nodes in the cluster | 1 |
| # of Kafka nodes required to run Schema Registry | 1 |
| # of CEF-to-Avro Stream Processor instances to start | 2 |