Virtual Wall

The Virtual Wall is an emulation environment that consists of 100 nodes (dual processor, dual core servers) interconnected via a non-blocking 1.5 Tb/s Ethernet switch, and a display wall (20 monitors) for experiment visualization. Each server is connected to the switch with 4 or 6 gigabit Ethernet links. The experimental setup is configured through Emulab, which allows any network topology to be created between the nodes by means of VLANs on the switch. On each of these links, impairments (delay, packet loss, bandwidth limitations) can be configured. Virtual Wall nodes can be assigned different functionalities, ranging from terminal, server and network node to impairment node. The nodes can also be connected to test boxes for wireless terminals, generic test equipment, simulation nodes (for combined emulation and simulation), etc. The Virtual Wall features Full Automatic Install for fast context switching (e.g. 1-week experiments), as well as remote access.
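
As an illustration, the sketch below shows how such a topology could be requested with geni-lib, the Python library commonly used to describe experiments on Emulab-based testbeds. It is a minimal, hedged example: the node names, the shaping attribute names and their units are assumptions based on typical geni-lib profiles, not taken from this page.

    # Minimal geni-lib sketch (assumed API): two bare-metal nodes joined by an
    # emulated link with bandwidth, delay and packet loss configured.
    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    # Two servers; the names are illustrative only.
    node1 = request.RawPC("node1")
    node2 = request.RawPC("node2")

    # A point-to-point link; the testbed maps this onto VLANs on the switch and,
    # when shaping is requested, transparently inserts an impairment node.
    link = request.Link("link0")
    link.addInterface(node1.addInterface("if1"))
    link.addInterface(node2.addInterface("if1"))
    link.bandwidth = 100000   # assumed unit: kbps (100 Mb/s)
    link.latency = 20         # assumed unit: ms
    link.plr = 0.01           # assumed: packet loss ratio (1 %)

    pc.printRequestRSpec(request)

The resulting RSpec can then be submitted to the testbed, for instance through the jFed experimenter tool.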

Being an Emulab testbed at its core, the Virtual Wall can create any desired network topology and add any desired clients and servers, which allows it to support a wide range of Future Internet experiments. These can be networking-oriented, e.g. emulating a large multi-hop topology in which advanced distributed load balancing algorithms are tested. The Virtual Wall can just as well support experiments focusing on the application layer, e.g. performance testing of a novel web indexing service on a few server nodes, with a large number of clients feeding it very large amounts of data during the experiment. An illustration of this flexibility is that the Virtual Wall is also used for OpenFlow experimentation in its role as an OFELIA island, and for cloud experimentation in its role as a BonFIRE island.

Figure: Virtual Wall topology

Virtual Wall deployments

There are currently 2 deployments of the Virtual Wall at iMinds: Virtual Wall 1 with 190 nodes and Virtual Wall 2 with 134 nodes. In Fed4FIRE the initial target is to provide access to the newest instance, Virtual Wall 2. An important characteristic of this instance is that all of its nodes are also publicly reachable over the public Internet using the IPv6 protocol.
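
Since Virtual Wall 2 nodes carry public IPv6 addresses, an experimenter can reach them directly from outside, e.g. over SSH. The small sketch below checks this with the Python standard library; the hostname is a hypothetical placeholder, to be replaced by the fully qualified name the testbed assigns to your own node.

    # Probe whether a node is reachable over IPv6 by opening a TCP connection
    # to its SSH port. The hostname used below is a hypothetical placeholder.
    import socket

    def reachable_over_ipv6(host, port=22, timeout=5.0):
        """Return True if a TCP connection to host:port succeeds over IPv6."""
        try:
            infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        except socket.gaierror:
            return False  # name does not resolve to an IPv6 address
        for family, socktype, proto, _, sockaddr in infos:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                try:
                    s.connect(sockaddr)
                    return True
                except OSError:
                    continue
        return False

    if __name__ == "__main__":
        print(reachable_over_ipv6("node0.myexp.myproject.wall2.ilabt.iminds.be"))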

Figure: Virtual Wall deployments

Properties

Compared to the summary figure presented earlier, the Virtual Wall testbed has the following properties:

  • Only Zabbix is supported as a monitoring framework (not Collectd or Nagios).
  • The depicted resources are Virtual Wall-specific: 100 dual processor, dual core servers, each equipped with multiple gigabit Ethernet interfaces, all connected to a single non-blocking 1.5 Tb/s Ethernet switch. The Virtual Wall system automatically configures the corresponding VLAN settings in order to build any topology the experimenter desires. In the example in the figure, the requested topology is that the first and last node appear directly connected to each other on a LAN with a certain bandwidth and delay configuration. The system has therefore automatically placed an impairment node in between. From the perspective of the first and last node, however, this impairment node is invisible: they believe they are directly connected at layer 2 (a small verification sketch follows this list).
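
A minimal way to check this behaviour from inside a running experiment is sketched below, assuming a Linux node image with the usual ping and traceroute utilities installed; the peer address is a hypothetical experiment-internal IP, and the configured link delay is assumed to be applied per direction.

    # Sketch: verify that the shaped link behaves as requested and that the
    # impairment node stays invisible at the IP layer.
    import subprocess

    PEER = "10.1.1.2"  # hypothetical address of the node at the far end of the link

    # The round-trip time should be roughly twice the configured one-way delay.
    ping = subprocess.run(["ping", "-c", "5", PEER], capture_output=True, text=True)
    print(ping.stdout.splitlines()[-1])   # "rtt min/avg/max/mdev = ..." on Linux

    # A traceroute to the peer should report a single hop: the impairment node
    # bridges the link at layer 2 and never shows up as an IP hop.
    trace = subprocess.run(["traceroute", PEER], capture_output=True, text=True)
    hops = [line for line in trace.stdout.splitlines()[1:] if line.strip()]
    print("IP hops to peer:", len(hops))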

An explanation of the supported tools illustrated in the figure below is provided in the overview of tools.

Figure: Virtual Wall properties

Contact

You can check the number of free resources on each testbed at https://flsmonitor.fed4fire.eu.

More details can be found at https://doc.ilabt.imec.be/ilabt-documentation/virtualwallfacility.html.
