The objective of the IoT Benchmark consortium is to raise the bar for the quality of experimental data and to provide researchers and engineers in both academia and industry with an objective view of the strengths and weaknesses of existing protocols.
The challenge
Evaluation and comparison of low-power wireless protocols is a complex endeavor.
- There is a wide variety of experimental settings: physical setup, definitions of metrics, use of different sets of metrics, different traffic patterns, etc. Consequently, results are often not comparable, even when they appear to be.
- Comparing against baseline protocols is challenging, as existing implementations are not always available.
- The literature mixes comparisons between protocols alone (software) and comparisons between complete solutions (platform and protocol, i.e., hardware + software), which are not equivalent.
Our vision
The IoT Benchmark consortium was built bottom-up, driven by the low-power wireless networking academic community. Our objective is to design a comprehensive benchmark that not only consists of problem sets but also provides tools and methodologies for the performance evaluation of low-power networking solutions.
To feed the process, we have been interacting with other research communities that already use benchmarking (robotics, databases, etc.). Together, we co-organized CPSBench 2018, the 1st Workshop on Benchmarking Cyber-Physical Networks and Systems (a satellite workshop of CPSWeek).
Discussions with various IoT companies have also revealed strong interest: they face similar problems when evaluating their products and comparing them with competitors'. A standardized benchmark is thus also called for by industry.
It is not yet clear, however, how strictly the benchmark problems should be defined: there is a fundamental trade-off between accuracy and generality in the benchmark design space.
The more precisely the benchmark problems are defined, the fairer the comparisons, but the less practical and usable the benchmark becomes. It is therefore paramount to balance the benchmark design carefully to ensure its usability and, ultimately, its adoption by the community.
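As a purely illustrative sketch of this trade-off, consider how a benchmark problem could be specified in code. All names and values below are our own assumptions, not part of any actual IoTBench specification: pinning down more parameters makes comparisons fairer, while leaving parameters open keeps the problem applicable to more setups.

```python
# Hypothetical sketch of a benchmark problem specification. Field names
# and values are illustrative assumptions, not an IoTBench format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BenchmarkProblem:
    name: str
    traffic_pattern: str                 # e.g. "periodic", "event-driven"
    packet_size_bytes: int
    metrics: tuple = ("reliability", "latency", "energy")
    # Parameters left as None stay open, trading accuracy for generality.
    testbed: Optional[str] = None        # None = any testbed allowed
    node_count: Optional[int] = None     # None = scenario-dependent

# Tightly specified: fair, repeatable comparisons, but low generality.
strict = BenchmarkProblem(
    name="data-collection-strict",
    traffic_pattern="periodic",
    packet_size_bytes=64,
    testbed="FlockLab",   # an example testbed; any could be named here
    node_count=30,
)

# Loosely specified: broadly applicable, but results are harder to compare.
loose = BenchmarkProblem(
    name="data-collection-loose",
    traffic_pattern="periodic",
    packet_size_bytes=64,
)
```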
Thus, an ideal benchmark would:
- Provide a set of tools and practices for performance evaluation
- Enable fair comparisons between new and existing approaches, even when code is not openly available
- Enable repeatability of experimental results
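To make the first point concrete, here is a minimal sketch of what a shared evaluation tool could look like: it computes two metrics that are standard in low-power wireless evaluations, reliability (packet reception ratio) and end-to-end latency, from a packet-event log. The log format and function names are our own assumptions for illustration, not an IoTBench interface.

```python
# Minimal sketch of a shared metric tool. The log format
# (seqno, t_sent, t_received-or-None) is a hypothetical assumption.

def reliability(log):
    """Packet reception ratio: fraction of sent packets that were received."""
    received = sum(1 for _, _, t_rx in log if t_rx is not None)
    return received / len(log) if log else 0.0

def mean_latency(log):
    """Mean end-to-end latency (seconds) over received packets."""
    delays = [t_rx - t_tx for _, t_tx, t_rx in log if t_rx is not None]
    return sum(delays) / len(delays) if delays else float("nan")

# Example: three packets sent; the third one is lost (no receive time).
log = [(1, 0.00, 0.12), (2, 1.00, 1.09), (3, 2.00, None)]
print(f"reliability = {reliability(log):.2f}")    # 0.67
print(f"latency     = {mean_latency(log):.3f} s") # 0.105 s
```

Agreeing on such metric definitions once, and sharing the code that computes them, is precisely what makes results repeatable and comparable across papers.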
Ultimately, the benchmark would serve as a reference for evaluating not only academic research works but also existing and future products from the IoT industry.
History
- 2021, May
- 4th CPS-IoTBench workshop in conjunction with CPS-IoT Week
- 2020, September
- 3rd CPS-IoTBench workshop in conjunction with MobiCom
- 2020, February
- 2019, April
- CPS-IoTBench workshop at CPS-IoT Week, Montréal
- Invited talk presents the work-in-progress towards the IoTBench vision
- 2019, February
- EWSN Dependability Competition, getting closer than ever to an actual benchmark
- 2018, April
- CPSBench workshop at CPSWeek, Porto
- Invited paper describes the vision and roadmap of IoTBench
- 2018, February
- Presentation and Poster at EWSN, Madrid
- 2017, December-onwards
- Bi-monthly teleconferences
- 2017, October
- Plenary meeting in Stockholm
- Group expands
- 2017, May
- Plenary meeting in Milan
- Group expands
- 2017, February
- Ad-hoc meeting at EWSN, Uppsala
- Group expands
- 2016, August
- Poster at SenSys (11 unique affiliations)
- Draft of goals and challenges
- 2016, June
- Small group discusses the idea of a benchmark
IoTBench - Past, present, and future of a community-driven benchmarking initiative (Presentation, CPS-IoTBench 2019)
Towards a Benchmark for Low-power Wireless (Presentation, EWSN 2018)
Benchmarking Low-power Wireless Networking (Poster, EWSN 2018)