
Configuration files for a Bramble (RPi3 cluster)


tkphd/bramble-config

      .                               .      .;
    .'                              .'      .;'
   ;-.    .;.::..-.    . ,';.,';.  ;-.     .;  .-.
  ;   ;   .;   ;   :   ;;  ;;  ;; ;   ;   :: .;.-'
.'`::'`-.;'    `:::'-'';  ;;  ';.'`::'`-_;;_.-`:::'
                     _;        `-'

Configuration files to generate a bramble: a cluster of Raspberry Pi 3 single-board computers running Raspbian, managed using Ansible.

Caveat Emptor

Each computer in the bramble has a 4-core CPU at 1.2 GHz, 1 GB RAM, a microSD slot for boot media, four USB 2.0 ports, 100 Mbit Ethernet, and WiFi. Building such a cluster is instructive and useful for IT infrastructure testing, especially research on queuing systems, file systems, and containers.

It is not intended to perform useful computational tasks. There are better tools for that.

Layout

The files contained in this repository will be most useful to you if your bramble contains at least three nodes, for example:

head

The primary point of contact between your bramble and the outside network, this node serves administrative duties only. It is configured with an 8 GB microSD card and a 16 GB USB stick.

data

The primary point of contact for network data stores, this node serves data and database duties only. It is configured with an 8 GB microSD card and a 64 GB USB stick.

r1n1

The first node (n1) on the first rack (r1) serves computational duties, as well as light data service for the other nodes on its rack when a distributed filesystem (e.g., Lustre) is installed. It is configured with an 8 GB microSD card and a 16 GB USB stick.

r1nX

The remaining nodes on the first rack (r1) serve computational duties only. Each is configured with an 8 GB microSD card and a 16 GB USB stick.
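
As a sketch of how Ansible might address this layout, the inventory below groups the nodes by role. The file name (hosts), the group names, and the four-node rack are assumptions for illustration, not a description of this repository's actual inventory; it also assumes each name resolves to the node's address (see Interconnect).

    # hosts -- hypothetical Ansible inventory for the layout above
    [head]
    head

    [data]
    data

    [compute]
    r1n1
    r1n2
    r1n3
    r1n4

With this inventory in place, ansible -i hosts all -m ping gives a quick connectivity check across the bramble.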

Interconnect

To reduce latency, the nodes have wired Ethernet connections to a Gigabit switch. This switch is not connected to a DHCP server, so static IP addresses are assigned on 192.168.3.0/24, with the last octet given by 100 + 10*rack + node. For example, r1n1 is 192.168.3.111 and r1n4 is 192.168.3.114, while head (effectively r0n1) is 192.168.3.101 and data is 192.168.3.102. For convenience, and for software updates, WiFi is also enabled on all nodes. A more representative configuration for an HPC cluster would funnel traffic through head, or through a router at 192.168.3.1, but that introduces unnecessary complications to the setup. This is, after all, meant to be fun :-)
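
Raspbian manages addressing with dhcpcd, so one way to realize this scheme is a static entry in /etc/dhcpcd.conf on each node, plus a shared /etc/hosts table so names resolve without DNS on the isolated switch. The snippet below is a minimal sketch for r1n1; leaving eth0 without a default route (so internet traffic stays on wlan0) is an assumption about this setup, not taken from the repository.

    # /etc/dhcpcd.conf (excerpt) on r1n1 -- hypothetical values
    # following the 100 + 10*rack + node scheme
    interface eth0
    static ip_address=192.168.3.111/24
    # no "static routers=" line: the switch leads nowhere, so the
    # default route stays on wlan0, which keeps DHCP for updates

    # /etc/hosts entries shared by all nodes
    192.168.3.101   head
    192.168.3.102   data
    192.168.3.111   r1n1
    192.168.3.112   r1n2
    192.168.3.113   r1n3
    192.168.3.114   r1n4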

Public Domain