Summary of past efforts

White paper

Context

The benchmarking concept is not new in the field of computing or computer networking. The term “benchmarking tools” usually refers to a program or set of programs used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Benchmarking techniques have been used to measure the performance of computers and computer networks since the 1970s. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Service (IaaS) context is being researched in a BonFIRE experiment, and the benchmarking of wired and wireless computer networks is put forward as a research topic in the research projects CREW and OneLab2. In this paper, we elaborate on the interpretation of the term “benchmarking” in these projects and explain why research on benchmarking is still relevant today. After presenting a high-level generic benchmarking architecture, the possibilities of benchmarking are illustrated through two examples: benchmarking cloud services and benchmarking cognitive radio solutions.

Reference & download

Stefan Bouckaert, Jono Vanhie-Van Gerwen, Ingrid Moerman, Stephen C. Philips, Jerker Wilander, Shafqat Ur Rehman, Walid Dabbous, Thierry Turletti, “Benchmarking computers and computer networks”, joint white paper, August 2011. The white paper may be downloaded here.

FIRE research workshop Budapest

On May 16th 2011, a FIRE research workshop on measurements and benchmarking was jointly organized in Budapest by members of CREW, OneLab2 and BonFIRE. For more information, the agenda, and the presentations, please refer to the page dedicated to this FIRE workshop.

FIRE conference Poznan

On October 27th 2011, as part of a session on "The use of formal testing methods in FIRE", Ingrid Moerman (CREW) gave a presentation on "The need for benchmarking". The presentation and video are available from the FI Poznan website.


Possible collaboration topics - rough ideas

  • Central repository for "benchmarking scenarios" => comparable to a "projects" portal, but for measurements (a sketch of a possible scenario entry follows this list)
    • across different projects: INPUT / methodologies
      • example: "testing routing performance, general"
        • BM1: "measure packet loss" at "testbed Y" while injecting "reference traffic A"
        • BM2: "measure packet loss" at "testbed Z" { same topology as Y, different nodes } while injecting "reference traffic A"
      • example 2: a similar benchmark in a wireless set-up, possibly including reference background interference
    • portal can also contain traces (e.g. packet traces, interference traces, …) offered online in a common data format
    • could be demand-driven (experimenter needs to measure something and asks support from the community)
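
To make the "benchmarking scenarios" idea more concrete, the Python sketch below shows one possible shape for a repository entry. The Scenario class, its field names, and the testbed labels are hypothetical illustrations, not an agreed FIRE data format.

  # A minimal sketch of a machine-readable repository entry for a
  # benchmarking scenario. All names here are hypothetical.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Scenario:
      """One entry in a shared repository of benchmarking scenarios."""
      name: str                # human-readable identifier
      metric: str              # what is measured, e.g. "packet loss"
      testbed: str             # where it is measured
      reference_traffic: str   # the injected reference load

  # The two routing-performance benchmarks from the example above:
  BM1 = Scenario("routing/BM1", "packet loss", "testbed Y", "reference traffic A")
  BM2 = Scenario("routing/BM2", "packet loss", "testbed Z", "reference traffic A")
  # testbed Z: same topology as Y, different nodes

  # A shared metric and reference traffic make results directly
  # comparable across testbeds:
  for bm in (BM1, BM2):
      print(f"{bm.name}: measure {bm.metric} at {bm.testbed} "
            f"while injecting {bm.reference_traffic}")

Pinning the metric and the reference traffic inside the scenario itself is what makes BM1 and BM2 comparable: the testbed is the only variable, so any difference in measured packet loss can be attributed to the facility rather than to the methodology.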


  • "Benchmarking facilities"
    • apply the benchmarking idea to the facilities themselves: is there a neutral way to judge the possibilities/readiness/… of the FIRE facilities?