The Virtual Wall hardware (over 550 servers) can be used as bare metal (operating system running directly on the machine) or virtualized through Xen virtualization or Docker containers (e.g. using Kubernetes to scale up). You have root permissions and full control over the nodes through SSH.
An overview of the current hardware can be found here: https://doc.ilabt.imec.be/ilabt/virtualwall/hardware.html
Multiple operating systems are supported, e.g. Linux distributions (Ubuntu, CentOS, Fedora) and FreeBSD, and you can also create your own images.
Network impairment (delay, packet loss, bandwidth limitation) is possible on links between nodes and is implemented in software. All nodes have a public IPv6 address by default, and through the tools you can request (pools of) public IPv4 addresses to experiment with fully connected machines, build your own cloud, or do big data analysis.
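Since you have root access on Linux nodes, software impairment of this kind can typically be applied with the standard Linux `tc` tool and its `netem` queueing discipline (the testbed tools may also configure this for you; `tc` is shown here as an illustration, not as the testbed's exact mechanism). A minimal sketch, assuming the experiment link uses an interface named `eth1` (the actual interface name on your node may differ; check with `ip link`):

```shell
# Add 50 ms delay, 1% packet loss, and a 100 Mbit/s rate limit
# to outgoing traffic on the experiment interface.
sudo tc qdisc add dev eth1 root netem delay 50ms loss 1% rate 100mbit

# Inspect the currently installed queueing discipline.
tc qdisc show dev eth1

# Remove the impairment again.
sudo tc qdisc del dev eth1 root
```

Note that `netem` shapes egress traffic only, so to impair a link in both directions you apply it on the interfaces at both ends.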
Some of the nodes are connected to an OpenFlow switch, so you can run OpenFlow experiments combining servers, software OpenFlow switches, and real OpenFlow switches. Some of the nodes also contain GPUs that you can use on bare metal. For easier GPU usage, see GPULab or JupyterHub.
You can check the number of free resources on each testbed (Virtual Wall 1 and Virtual Wall 2) at the Fed4FIRE+ Federation Monitor. For more information and documentation, see doc.ilabt.imec.be