Summary
This is not a recommendation of particular machines or vendors. Because machine configurations vary so widely, there is no fixed formula for determining the ideal size or configuration for a particular setup. Please treat these as guidelines.
Solution
Cluster host machines (Electric Agent/EFS)
The most important attribute of the cluster host machines is processing power: buy the most cost-effective CPU power you can. Right now, that tends to be dual-core or quad-core boxes. It is also important to have sufficient memory to run parallel commands. As a rule of thumb, Electric Cloud recommends a minimum of 2 GB per agent; for very large builds, 3 to 4 GB per agent may be better. Disk space is the least important consideration for host machines, if only because the smallest disks vendors sell (typically 65 GB) are more than sufficient, although this may vary for disk space-intensive builds.
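To make the memory rule of thumb concrete, here is a minimal Python sketch that estimates per-host memory from the number of agents on the host. The 2 GB and 3 to 4 GB per-agent figures come from this article; the function name and structure are purely illustrative.

    def recommended_host_memory_gb(agents_per_host, very_large_builds=False):
        """Estimate memory for a cluster host (Electric Agent/EFS) machine.

        Rule of thumb from this article: at least 2 GB per agent, or
        3 to 4 GB per agent for very large builds (4 GB used here).
        """
        gb_per_agent = 4 if very_large_builds else 2
        return agents_per_host * gb_per_agent

    # Example: a quad-core host running 4 agents
    print(recommended_host_memory_gb(4))                          # 8 GB minimum
    print(recommended_host_memory_gb(4, very_large_builds=True))  # 16 GB for very large builds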
Cluster Manager
For most deployments, an ordinary single-CPU machine is plenty for the Cluster Manager. If you are running 100 or more host machines in the cluster, you may want a dual-CPU machine for the Cluster Manager. Memory should be between 2 GB and 4 GB, and disk space depends on the amount of historical data you want to keep. If you do not intend to delete old build records frequently, you may want a larger disk (more than 100 GB).
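As an illustration only, the following sketch maps the guidelines above to a suggested Cluster Manager configuration. The 100-host threshold, the 2 to 4 GB memory range, and the greater-than-100 GB disk figure come from this article; pairing the 4 GB figure with the larger clusters is an assumption.

    def cluster_manager_sizing(num_hosts, keep_old_build_records=False):
        """Suggest a Cluster Manager configuration from the guidelines above."""
        cpus = 2 if num_hosts >= 100 else 1
        # Assumption: give larger clusters the top of the 2-4 GB range.
        memory_gb = 4 if num_hosts >= 100 else 2
        disk = "more than 100 GB" if keep_old_build_records else "standard"
        return {"cpus": cpus, "memory_gb": memory_gb, "disk": disk}

    print(cluster_manager_sizing(40))
    print(cluster_manager_sizing(150, keep_old_build_records=True))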
Build machines (Electric Make)
Running Electric Make (eMake) turns a machine into a "file server" that sends source files across the network and writes back output files, often with many I/O transactions in flight simultaneously due to the parallel nature of the system. That makes disk performance (both read and write) important. Do not spend money on SCSI, though; modern serial ATA drives are plenty fast. If you are building from IBM Rational ClearCase dynamic views, local disk speed becomes almost irrelevant. Make sure the eMake machines have enough memory: eMake uses 500 MB to 1.5 GB depending on how many agents it supports, so if all build machines have 2 to 3 GB, that should be fine. For processor power, any modern CPU should be sufficient (multi-core is preferable).
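The short sketch below checks whether a build machine meets the memory guideline above. The 1.5 GB eMake upper bound and the 2 GB minimum come from this article; reading /proc/meminfo is Linux-specific and is an assumption of this example.

    EMAKE_MAX_GB = 1.5   # upper end of eMake's memory use, per this article
    GUIDELINE_GB = 2.0   # lower end of the recommended build-machine memory

    def total_memory_gb():
        """Return total system memory in GB (Linux: parses /proc/meminfo)."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / (1024 * 1024)  # kB -> GB
        raise RuntimeError("MemTotal not found")

    mem = total_memory_gb()
    print("Total memory: %.1f GB (eMake alone may use up to %.1f GB)" % (mem, EMAKE_MAX_GB))
    print("Meets guideline" if mem >= GUIDELINE_GB else "Below the 2 GB guideline")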
Network
For all machines, a 100 Mbps network interface is sufficient (Gigabit is better). It is best to place the cluster hosts on the same switched LAN so they can make maximum use of cache-sharing protocols. It is also preferable to keep the cluster close to end users in terms of network topology so that there is high bandwidth and low latency from end to end.
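To verify the low-latency recommendation, you can measure round-trip times from a build machine to each cluster host, for example with a sketch like the one below. The host names are hypothetical placeholders, and the ping flags shown are Linux-style.

    import subprocess

    cluster_hosts = ["agent-host-1", "agent-host-2"]  # hypothetical host names

    for host in cluster_hosts:
        # -c 3: send three probes; -q: print only the summary (Linux ping)
        result = subprocess.run(["ping", "-c", "3", "-q", host],
                                capture_output=True, text=True)
        summary = result.stdout.strip().splitlines()[-1] if result.stdout else result.stderr.strip()
        print(host, "->", summary)  # rtt min/avg/max/mdev line on success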