Infolab cluster
Beta warning
If Google can keep things in Beta, why can't we? So... beware: things might break. Please join the mailing list and report any glitches you come across.
It was recently decided that a split personality is not the best thing for a cluster to have, as two different resource managers start to compete without being aware of each other. That is why we have separated our cluster into a Compute cluster and a Hadoop cluster. Read on for more about the two.
Mailing list
There is a mailing list for all those interested in what is currently happening with the cluster and the configuration of the cluster:
List address: ilcluster@lists.stanford.edu
Manage your subscription: https://mailman.stanford.edu/mailman/listinfo/ilcluster
Compute cluster
The compute cluster comes in handy whenever you need a lot of cores to get your job done. It works just like looking at the CPU and memory load of the other servers and then deciding which one to use for your job, except that the job scheduler takes care of checking the load for you and allocates resources on a first-come, first-served basis (at least for the time being; queue priorities may change in the future).
Hardware
- 1 head node: iln1
- 2 development nodes: ild1, ild2
- 28 compute nodes: iln1 - iln28
- 896 CPU cores
- 1792 GB RAM
Software
- Torque resource manager
- MAUI job scheduler
- CentOS 6.3
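As an illustration, a minimal Torque job script might look like the sketch below. The queue defaults, resource limits, and program name are assumptions, not the cluster's actual configuration; adjust them to your job.

```shell
#!/bin/bash
# Illustrative Torque/PBS job script. Resource requests below are
# assumptions; check the cluster's actual queue limits before submitting.
#PBS -N example-job          # job name shown in qstat
#PBS -l nodes=1:ppn=4        # request 1 node with 4 cores
#PBS -l walltime=01:00:00    # maximum run time (1 hour)
#PBS -j oe                   # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"          # start in the directory qsub was called from
./my_program                 # hypothetical command; replace with your own
```

Submit the script from the head node with `qsub job.sh`, monitor your jobs with `qstat -u $USER`, and cancel one with `qdel <jobid>`.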
Resources
- Using the compute cluster
Hadoop cluster
If you want to run MapReduce jobs, then this is the cluster for you.
Hardware
- 1 head node: iln29
- 7 compute nodes: iln30 - iln36
- 224 CPU cores
- 448 GB RAM
Software
- Apache Hadoop 1.0.3
- Pig
- Rhipe
- CentOS 6.3
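As a sketch of a typical workflow, the commands below stage input into HDFS and run the stock wordcount example that ships with Hadoop 1.0.3. The HDFS paths and the `HADOOP_HOME` location of the examples jar are assumptions; they depend on the cluster setup.

```shell
# Copy input data into HDFS (paths are illustrative assumptions)
hadoop fs -mkdir /user/$USER/input
hadoop fs -put local-data.txt /user/$USER/input/

# Run the wordcount example bundled with Hadoop 1.0.3
hadoop jar $HADOOP_HOME/hadoop-examples-1.0.3.jar wordcount \
    /user/$USER/input /user/$USER/output

# Inspect the results
hadoop fs -cat /user/$USER/output/part-r-00000 | head
```

Note that the output directory must not exist before the job runs; Hadoop refuses to overwrite it.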