Evaluation Opportunities ‐ Invitation

Track 2 ‐ RF-based Indoor Localization Algorithms

This track, organized by the EVARILOS consortium, is an online opportunity to evaluate RF-based indoor localization algorithms. Participating teams can also evaluate the precision of their localization under the influence of different RF interference scenarios. The localization algorithms have to be implemented, via remote access, on top of the existing hardware resources available at the EVARILOS experimental facilities at TU Berlin and iMinds, including a large number of IEEE 802.11g/n access points and IEEE 802.15.4 sensor nodes deployed in a typical multi-floor office space. As in Track 1, the localization algorithms in Track 2 will be evaluated following the EVARILOS benchmarking methodology, aligned with the upcoming ISO/IEC 18305 standard “Test and evaluation of localization and tracking systems”.

Please note that access to the testbed resources in Track 2 is currently limited due to building reconstruction work. Please contact Vlado Handziski (handziski@tkn.tu-berlin.de) or Filip Lemić (lemic@tkn.tu-berlin.de) for more details.
This online track will provide participants with a unique opportunity to:

  • Test and compare the performance of their RF-based indoor localization algorithms following a standardized evaluation methodology;
  • Test the sensitivity of their algorithms to radio interference under controlled conditions;
  • Disseminate information about their localization solution as part of the website and contribute towards the establishment of a public repository of experimental traces from indoor localization solutions.

Important: this version of the annex will be refined based on the feedback of the participants. Refined versions will be distributed in a timely manner via the EvarilosCompetition@tkn.tu-berlin.de mailing list.

Participation Details
Participants are invited to submit an extended abstract describing their system, where a "participant" can be any individual or group of individuals working as a single team, associated with one or more organisations. The extended abstract should not exceed four pages, including figures and tables, and should include a description of the deployment and the algorithms that will be used, as well as a description of the internal data produced and processed by the system.

Contacts
Vlado Handziski (handziski@tkn.tu-berlin.de)
Filip Lemić (lemic@tkn.tu-berlin.de)

Table of Contents

TKN Testbed Environment
TKN Hardware Overview
Instantiation of TKN Infrastructure for Indoor Localization Experiments
Instructions for Participants
Evaluation Scenarios
Evaluation Procedure
References

TKN Testbed Environment

The TKN testbed is located on the 2nd, 3rd and 4th floors of the Telecommunication Networks Group building in Berlin. According to the EVARILOS Benchmarking Handbook [14], the TKN testbed environment can be characterised as “Big” with “Brick walls”, i.e. an area of more than 1500 m2 with more than 50 rooms. The TKN testbed is an office environment mostly comprising three types of rooms, namely small offices (14 m2), big offices (28 m2) and laboratories (42 m2), as shown in Figure 2.2. Furthermore, the TKN testbed is a dynamic environment: a number of people move around the premises, and constant opening of doors or slight movement of furniture (chairs, tables, etc.) is expected and usual. In addition, uncontrolled wireless interference typical for office environments can be expected, as presented in Figure 2.1.

Figure 2.1: Sources of uncontrolled interference

Figure 2.2: Footprints of the TKN testbed environment: (a) 2nd floor, (b) 3rd floor, (c) 4th floor

TKN Hardware Overview

This chapter gives a short, generic description of the different types of hardware that are part of the TKN infrastructure and are currently available for experimentation in the TKN testbed, namely:

  • TKN Wireless Indoor Sensor Network Testbed;
  • Turtlebot II Robotic Platform;
  • WLAN Access Points;
  • WMP on Alix2D2 Embedded PCs;
  • Low-Cost Spectrum Analysers (WiSpys);
  • R&S FSV7 Spectrum Analyser;
  • R&S SMBV100A Signal Generator.

TKN Wireless Indoor Sensor Network Testbed (TWIST)

The TKN Wireless Indoor Sensor Network Testbed (TWIST) is a multiplatform, hierarchical testbed architecture developed at TKN. Its self-configuration capability, the use of hardware with standardized interfaces and open-source software make the TWIST architecture scalable, affordable, and easily replicable (Figure 3.2). The TWIST instance in the TKN office building is one of the largest remotely accessible testbeds, with 204 sockets currently populated with 102 eyesIFX and 102 Tmote Sky nodes (Figure 3.1). The nodes are deployed in a 3D grid spanning three floors of an office building on the Technische Universität Berlin (TUB) campus, resulting in more than 1500 m2 of instrumented office space. In small rooms, two nodes of each platform are deployed, while the larger rooms have four nodes. This setup results in a fairly regular grid deployment pattern with an inter-node distance of 3 m, as shown in Figure 3.3. Within the rooms, the sensor nodes are attached to the ceiling.

Figure 3.1: Tmote Sky, eyesIFXv2, NSLU2 supernode / USB hub

Figure 3.2: Hardware components of TWIST testbed

Figure 3.3: Locations of sensor nodes in the 2nd floor of TWIST testbed

Turtlebot II Robotic Platform

The Turtlebot II robotic platform [8] comprises a mobile base called Kobuki, a laptop, a router and a Microsoft Kinect 3D camera sensor (Figure 3.4). On the software side we use the Robot Operating System (ROS) [12], an open-source robotics framework. ROS comes with numerous drivers and libraries that cover everything from low-level communication with hardware to higher-layer tasks such as mapping, navigation and obstacle avoidance. ROS also acts as a communication middleware that transports information between components. The dominating scheme is topic-oriented asynchronous message exchange following the publish-subscribe pattern, but it also provides means for synchronous communication. It is easily extensible, either by publishing or subscribing to existing topics or by creating new ones, and can therefore also be used to transport arbitrary data. This allows ROS to be used for controlling the robot and to be extended with additional components on top of that.

We have set up an autonomous testbed environment in which we use the Turtlebot to position the SUT at different locations (Figure 3.5). To do so, we leverage the navigation capabilities of ROS, which also include obstacle avoidance. ROS uses an a priori given map and localizes the robot by matching the depth information of the Kinect 3D camera against the outline of the known map. ROS provides a simple interface to request the robot to drive autonomously to a given coordinate, a so-called goal, and a path planner calculates the best path towards it. We have embedded these calls to move the robot to the next location into a higher-level schedule: first we define a set of waypoints that have to be covered in the survey, then the robot iterates autonomously over each of them. The whole procedure takes place in an unstructured office environment with dynamic obstacles, such as humans and opening/closing doors.
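
As an illustration of this goal-based interface, the following minimal sketch iterates over a set of waypoints using the standard ROS navigation stack (move_base via actionlib); the waypoint coordinates and frame name are placeholders, and the actual scheduling component used in the testbed may differ.

# Minimal waypoint-iteration sketch using the standard ROS navigation stack.
# Waypoint coordinates are placeholders; the testbed scheduler may differ.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

WAYPOINTS = [(1.0, 2.0), (3.5, 2.0), (3.5, 6.0)]  # (x, y) in the map frame, placeholders

def goto(client, x, y):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # fixed heading
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("waypoint_survey")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    for x, y in WAYPOINTS:
        state = goto(client, x, y)
        rospy.loginfo("Reached waypoint (%.1f, %.1f), state %d", x, y, state)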

For communication with the rest of the infrastructure, the mobile platform is equipped with a WLAN access point that operates in client mode and connects to one of the six APs deployed on every floor of our building. The robot's AP is controlled by a location-aware ROS component that selects the most appropriate AP in the different parts of the floor.

Figure 3.4: Turtlebot Robotic Platform

Figure 3.5: Robotic Platform Design

WLAN Access Points

The TKN testbed is equipped with 18 dualband TP-Link N750 APs (model TL-WDR4300) [4] (Figure 3.6). They run OpenWrt as the operating system and use the cOntrol and Management Framework (OMF) as the control and measurement plane. The positions of the WLAN APs on the 2nd floor of the TKN testbed are given in Figure 3.7.

Figure 3.6: TL-WDR4300 WLAN Access Point

Figure 3.7: Locations of WLAN routers in the 2nd floor of TKN testbed

WMP on Alix2D2 Embedded PCs

The Wireless MAC Processor (WMP) is a customizable WLAN 802.11b/g MAC [13]. It runs on Alix2D2 embedded PCs [1] equipped with Broadcom WL5011S 802.11b/g cards, shown in Figure 3.8. Three Alix2D2 devices exist in our infrastructure. The MAC protocol is customized via a Java GUI, and the firmware is loaded onto the Broadcom cards by the so-called “bytecode-manager” running on the Alix2D2.

Figure 3.8: Alix2D2 Embedded PC

Low-Cost Spectrum Analysers

The TKN infrastructure also comprises several WiSpy sensing devices (Figure 3.9). These are low-cost spectrum scanners that monitor activity in the 868 MHz, 2.4 GHz and 5 GHz bands and output the measured RF energy and the quality of the received signals.

Figure 3.9: WiSpy USB dongle

R&S FSV7 Spectrum Analyser

The Rohde & Schwarz FSV7 signal and spectrum analyser [6] (Figure 3.10) is a very flexible and fast instrument covering the frequency range between 9 kHz and 7 GHz. It can easily be extended with several measurement applications and toolboxes. Furthermore, by purchasing the appropriate licenses, complete receiver chains such as Bluetooth, LTE, WiMAX or WLAN can be added.

Figure 3.10: R&S FSV7 spectrum analyser

R&S SMBV100A Signal Generator

The Rohde & Schwarz SMBV100A is a very flexible signal generator [7] (Figure 3.11). It transmits baseband signals in the range of 9 kHz to 6 GHz and can send any generated or stored signal with up to 120 MHz bandwidth. With additional toolboxes, the SMBV100A can generate standard-conformant signals such as WiMAX, WLAN or LTE. Together with the R&S FSV7 spectrum analyser, complete transmission chains can be set up.

Figure 3.11: R&S SMBV100A Signal Generator

Instantiation of TKN Infrastructure for Indoor Localization Experiments

We have instantiated our testbed infrastructure for the specific purpose of evaluating and benchmarking different indoor localization solutions and algorithms. An overview of the capabilities of the TKN testbed for this purpose is given in Figure 4.1 [10].

The TKN infrastructure leverages a robotic mobility platform, which enables autonomous, accurate and repeatable positioning of localization devices in the environment. Furthermore, it integrates devices for generating controlled RF interference, which can be used to evaluate the influence of RF interference on the performance of the localization system under test (SUT). With the TKN infrastructure it is currently possible to create different types of wireless interference, such as IEEE 802.11 (WiFi), IEEE 802.15.4 (ZigBee) or IEEE 802.15.1 (Bluetooth). For validation of the resulting RF context, the infrastructure features devices that monitor the RF spectrum at different measurement points in order to guarantee equal conditions for all SUTs. Finally, the testbed enables the deployment of the indoor localization algorithms or solutions to be tested onto the already existing infrastructural components.

Figure 4.1: Overview of TKN testbed for evaluation and benchmarking of different indoor localization SUT

Autonomous Mobility Platform

For automatic transportation of the localized device of the SUT to different evaluation points, without the presence of a human test person and in a repeatable way, we use the Turtlebot II robotic platform. The Turtlebot provides an interface to request the robot to drive autonomously to a given coordinate. For an experiment, one can define a set of measurement points and the robot iterates over each of them. When a measurement point is reached, this event, together with the current location and orientation of the robot, is reported back to the experimenter. The location estimates provided by the robot are highly accurate, with a mean localization error of around 20 cm, so they can be considered ground truth for indoor localization experiments.

Interference Generation

The impact of external interference is usually not considered in the evaluation of indoor localization solutions, although it can influence the performance of the indoor localization SUT. For this reason we have developed means to generate various types of interference scenarios, as presented in Figure 4.2. The most common type of wireless activity in the 2.4 GHz band is WiFi traffic. We have adapted the interference scenarios from [11, 2] and, using OMF [5], we are able to recreate in our testbed the interference context of typical home or office environments. This type of interference can be created using the TKN WLAN routers and Alix PC devices (Figure 4.3), but additional devices such as laptops can also be used to extend the infrastructure. Furthermore, it is possible to use the R&S signal generator to create different interference patterns, such as jamming on a WiFi channel, WiFi traffic, Bluetooth traffic, etc. For generating IEEE 802.15.4 interference patterns, some TWIST sensor nodes can be used; the locations of these nodes are given in Figure 4.4.

Figure 4.2: Interference generation

Figure 4.3: Locations of Alix Embedded PCs (green dots) and TPLINK WLAN routers (purple dots) that can be used for interference generation in the 2nd floor of TKN testbed

Figure 4.4: Locations of sensor nodes that can be used for interference generation in the 2nd floor of TKN testbed

Interference Monitoring

As described in the previous section, we can generate different interference scenarios based on the needs of an experiment. Still, the spectrum in the 2.4 GHz ISM band is free for use and we do not have full control over all devices operating in those frequencies. We have disabled the university's infrastructure WiFi network in the 2.4 GHz band in the building, but signals from the surroundings can still be received, as summarized in Figure 4.5. It is therefore necessary to monitor the spectral environment to verify that it looks as expected. We use OMF to orchestrate WiSpy [3] devices to perform spectrum sensing. One of them is connected to the robot to make sure that the measured interference does not exceed the planned level. Furthermore, the R&S spectrum analyser can be used to monitor the wireless spectrum with much finer granularity; like the WiSpy devices, it is controlled using OMF. Finally, some low-power sensor nodes of the TWIST testbed can be used as a distributed spectrum analyser; the locations of these nodes are given in Figure 4.6.

Figure 4.5: Interference monitoring

Figure 4.6: Locations of sensor nodes that can be used as distributed spectrum analyser in the 2nd floor of TKN testbed

System Under Test

Parts of the TKN testbed infrastructure can be used for deploying the SUT devices to be evaluated. The mobile part of the SUT can be mounted on the Turtlebot robotic platform, and the devices that we can provide as the mobile part of the SUT are the following:

  • TmoteSky low power sensor node;
  • EyesIFX low power sensor node;
  • Shimmer low power sensor node;
  • Smartphone Nexus S (GT-I9023);
  • Tablet Nexus 7, 2013 version;
  • Notebook Lenovo ThinkPad;
  • Notebook Asus 1215n.

Furthermore, four TP-LINK WLAN access points can be used as static parts (anchor nodes) of the SUT; their locations are given in Figure 4.7. Finally, some TWIST sensor nodes (Tmote Sky nodes in the 2.4 GHz ISM band or eyesIFX nodes in the 868 MHz ISM band) can be used as anchor points for the deployment of different SUTs, with locations given in Figure 4.8.

Figure 4.7: Locations of TP LINK WLAN routers that can be used as a part of SUTs in the 2nd floor of TKN testbed

Figure 4.8: Locations of sensor nodes that can be used as a part of SUTs in the 2nd floor of TKN testbed

Instructions for Participants

This chapter gives instructions for participants on adapting their algorithms and using the hardware provided in the TKN testbed.

Interfacing with the SUT

All participants have to deploy their algorithm on one of the devices intended for deploying SUTs, described previously. Furthermore, participants have to provide an HTTP URI on which their algorithm listens for location estimation requests. Upon request, the algorithm must be able to provide the location estimate as a JSON response in the following format:

{
    "coordinate_x": "Estimated location: coordinate x",
    "coordinate_y": "Estimated location: coordinate y",
    "coordinate_z": "Estimated location: coordinate z",
    "room_label":   "Estimated location: room"
}

The JSON parameters coordinate_x and coordinate_y are required and must be reported upon every request. The parameter coordinate_z is optional, due to the 2D evaluation environment; if it is provided by a SUT, the evaluation team will also calculate the 3D localization error, although this information will not be used in the final scoring. Finally, the parameter room_label is optional; if it is not provided, the EVARILOS Benchmarking Platform will automatically map the room estimate from the estimated x and y coordinates.

The coordinates (x, y) or (x, y, z) of the location estimates have to be calculated relative to the predefined zero point (x, y, z) = (0, 0, 0), given in Figure 5.1. Furthermore, the room labels have to match the ones indicated in the figure. The technical team will provide a footprint of the environment in vector format to all participants.

Figure 5.1: Zero-point location and room labels
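
As an illustration of this interface, a minimal SUT-side endpoint could look as follows. This sketch uses Python's standard http.server, responds to any GET request, and returns hard-coded placeholder values (including a hypothetical room label) instead of calling a real localization algorithm; the port and the values are only examples.

# Minimal sketch of an SUT HTTP endpoint returning the location estimate as JSON.
# The port, coordinates and room label are placeholders; a real SUT would call its
# localization algorithm here.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class LocationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        estimate = {
            "coordinate_x": 12.3,   # metres, relative to the zero point in Figure 5.1
            "coordinate_y": 4.56,
            "room_label": "FT226",  # hypothetical room label
        }
        body = json.dumps(estimate).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LocationHandler).serve_forever()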

The technical team will support the participants in deploying their algorithms on the desired hardware, interfacing the SUT with the EVARILOS Benchmarking Platform (EBP), controlling the robotic mobility platform, generating different interference scenarios and monitoring interference. Furthermore, each participant will be given 4 hours to train their algorithms in the TKN testbed environment before the evaluation process starts. During that time participants will also be supported by the technical team.

Usage of Different Hardware Components

This section briefly describes how TKN testbed users can use the hardware for their experiments and for fine-tuning their indoor localization algorithms. All participants will be provided with Virtual Private Network (VPN) access to the testbed's network. For the deployment and training part, each participant will be able to generate the desired interference scenarios, navigate the robotic platform, etc., as described below. For the evaluation part, on the other hand, the evaluation committee will decide on the locations of the interferers and the evaluation points.

SUT Mobile Nodes

Different devices can be used as the mobile parts of SUTs. Users will be able to open Secure Shell (SSH) tunnels to the desired devices and deploy their algorithms on them. Furthermore, for some devices (mostly the low-power sensor nodes) participants will be able to completely replace the firmware. All additional requests have to be communicated in advance, and the technical committee will decide whether to allow them.

SUT Infrastructure Nodes

As mentioned before, the wireless sensor network or the WLAN routers can be used as infrastructural parts of SUTs, depending on the requirements of the algorithms. The locations of the infrastructural nodes will be known to the participants in advance. The TKN Wireless Indoor Sensor Network Testbed (TWIST) is accessible through a web interface. More details and instructions on how to use the testbed can be found at the following URL:

http://www.twist.tu-berlin.de/wiki

Different possibilities exist for using the WLAN routers as SUT infrastructural nodes. All routers run OpenWrt, a Linux distribution for embedded devices [9]. The first possibility is to use them in access point mode, where parameters of an AP, such as transmission power or wireless channel, can be set using a script that will be provided. Furthermore, participants will be given SSH access to all WLAN routers, where they will be able to deploy their own scripts or anything else necessary for running their SUT. Finally, participants will be able to change the entire firmware of the WLAN routers if needed.
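
Purely as an illustration of what such a configuration step could look like, and assuming root SSH access to a router running OpenWrt with the standard UCI configuration (the script actually provided may work differently), channel and transmission power could be set roughly as in the following sketch; the hostname and radio name are placeholders.

# Illustrative sketch: set WiFi channel and transmit power on an OpenWrt AP via SSH/UCI.
# Hostname and radio name are placeholders; the provided script may use a different mechanism.
import subprocess

def configure_ap(host, channel, txpower_dbm, radio="radio0"):
    commands = (
        "uci set wireless.{r}.channel={c}; "
        "uci set wireless.{r}.txpower={p}; "
        "uci commit wireless; wifi"
    ).format(r=radio, c=channel, p=txpower_dbm)
    subprocess.run(["ssh", "root@" + host, commands], check=True)

configure_ap("ap-2-01.example", channel=6, txpower_dbm=20)  # placeholder host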

Autonomous Mobility

The autonomous mobility platform will be accessible through a web interface, where participants can click on the location to which they want to move the robotic platform and their SUT. Alternatively, participants will be able to send the robot to a location by specifying the coordinates of the desired location. Finally, it will be possible to provide a set of waypoints to the robotic platform in order to fully automate even the training phase of different indoor localization algorithms. The robot will report its current location, or an appropriate message if the desired location is currently not reachable.

Interference Generation

Participants will, to a certain degree, be able to generate interference scenarios using the devices described above. The code for generating the three interference scenarios described below will be provided by the technical team, and users will be able to select the nodes on which the code should run.

Interference Monitoring

Participants will be able to use different devices for monitoring interference levels. They will be able to obtain dumps of the wireless spectrum from the WiSpy device on the robot or from WiSpy devices at multiple other locations. Furthermore, participants will be able to use TWIST low-power sensor nodes for distributed spectrum sensing. Finally, participants will also be able to use the spectrum analyser to collect spectrum traces with much finer granularity than that achieved with the low-cost WiSpy devices. The code for spectrum monitoring will also be provided by the technical team.

Evaluation Scenarios

This chapter presents the interference scenarios that will be artificially generated in the TKN testbed in order to evaluate different indoor localization algorithms. The goal is to determine if, and to which extent, different types and amounts of RF interference influence indoor localization performance. The text below presents the reference scenario and describes the three interference scenarios that will be used for the evaluation in the TKN testbed.

Reference Scenario

The reference scenario is instantiated on the 2nd floor of the TKN testbed in Berlin. It is called “Reference Scenario” because no artificial interference is generated and the presence of uncontrolled interference is minimized. According to the EVARILOS Benchmarking Handbook (EBH), this scenario is an instance of the “Small office” type of scenario. In this scenario, 20 evaluation points are defined (example locations are given in Figure 6.1; different evaluation points will be used in the actual evaluation).

At each evaluation point, the indoor localization SUT will be requested to estimate its location. The SUT device will be carried to each evaluation point using the robotic platform. The navigation stack of the robotic platform provides a location estimate that is an order of magnitude more accurate than that of typical SUTs, so the location obtained from the robotic platform will be considered the ground truth for the evaluation.

The experiments will be performed in the afternoons, so that the influence of uncontrolled interference is minimized. Furthermore, the wireless spectrum will be monitored using the WiSpy device attached to the robotic platform, and all measurements in which the interference exceeds a certain threshold will be repeated. Finally, during each experiment a measurement of the wireless spectrum will be taken with the spectrum analyser at a predefined location.

Figure 6.1: Example set of evaluation points

Interference Scenario 1

In this interference scenario, instantiated in the TKN testbed, interference is created using IEEE 802.15.4 Tmote Sky nodes. The interference type is jamming on one IEEE 802.15.4 channel with a constant transmit power of 0 dBm. Five of these jamming nodes will be present in the testbed environment. A summary of this interference scenario is given in Table 6.1.

Table 6.1: Interference scenario 1 summary


Interference Scenario 2

The second interference scenario instantiated in the TKN testbed defines interference that is typical for office or home environments. Interference is emulated using four WiFi embedded personal computers (PCs): a server, an access point, a data client and a video client. The server acts as a gateway for the emulated services. The data client is emulated as a Transmission Control Protocol (TCP) client continuously sending data over the AP to the server. Similarly, the video client is emulated as a continuous User Datagram Protocol (UDP) stream source of 500 kbps with a bandwidth of 50 Mbps. The AP operates on a WiFi channel overlapping with the SUT's channel, with the transmission power set to 20 dBm (100 mW). A summary of the described interference scenario is given in Table 6.2.

Table 6.2: Interference scenario 2 summary

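The emulation code for this scenario will be provided by the technical team. Purely as an illustration of the traffic pattern, the data and video clients could be approximated with iperf, as in the following sketch; the server address and duration are placeholders.

# Illustrative sketch of the scenario-2 traffic pattern using iperf.
# Server address and duration are placeholders; the actual emulation code is provided.
import subprocess

SERVER = "10.0.0.1"   # placeholder address of the emulated server behind the AP
DURATION = "600"      # seconds

# Data client: continuous TCP upload over the AP to the server.
data_client = subprocess.Popen(["iperf", "-c", SERVER, "-t", DURATION])

# Video client: continuous 500 kbps UDP stream to the server.
video_client = subprocess.Popen(["iperf", "-c", SERVER, "-u", "-b", "500k", "-t", DURATION])

data_client.wait()
video_client.wait()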

Interference Scenario 3

For the third interference scenario instantiated in the TKN testbed, the signal generator will be used to generate synthetic interference. The generated synthetic interference will have the envelope of a characteristic WiFi signal, but without any Carrier Sensing (CS). A summary of interference scenario 3 is given in Table 6.3.

Table 6.3: Interference scenario 3 summary


Evaluation Procedure

Track 2 evaluates RF-based indoor localization algorithms deployed on top of the existing hardware in the TKN testbed. This chapter describes the evaluation procedure that will be followed in Track 2.

Evaluation Process

Indoor localization algorithms will be evaluated at 20 different evaluation points under four interference scenarios. The evaluation points will be selected by the evaluation team and will be the same for all evaluated algorithms. In the first run, all algorithms will be evaluated in the environment without controlled interference and the metrics will be calculated. The following three evaluation runs will be done in the environment with the three different interference scenarios described before. The locations of the interference sources will be selected by the evaluation team and will be the same for all evaluated algorithms. At each evaluation point, the EVARILOS Benchmarking Platform will request a location estimate from the SUT. The evaluation data at each point will be automatically stored, and the metrics will be calculated and presented in real time.
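
A simplified sketch of this per-point loop, seen from the benchmarking side, is given below; the SUT URI and the ground-truth coordinates are placeholders, and the actual EVARILOS Benchmarking Platform implements the procedure in a more complete form.

# Simplified sketch of the per-point evaluation loop. The SUT URI and ground-truth
# values are placeholders; the EVARILOS Benchmarking Platform implements the full procedure.
import json
import math
import time
import urllib.request

SUT_URI = "http://sut.example:8080/location"  # placeholder, provided by the participant

def evaluate_point(ground_truth_x, ground_truth_y):
    t_request = time.time()
    with urllib.request.urlopen(SUT_URI, timeout=30) as response:
        estimate = json.loads(response.read().decode("utf-8"))
    t_response = time.time()
    point_error = math.hypot(ground_truth_x - estimate["coordinate_x"],
                             ground_truth_y - estimate["coordinate_y"])
    return {"point_error_m": point_error,
            "latency_s": t_response - t_request,
            "room_estimate": estimate.get("room_label")}

print(evaluate_point(12.0, 4.5))  # hypothetical ground-truth point reported by the robot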

Evaluation Metrics

For Track 2, the following metrics will be calculated:

  • Performance metrics - obtained from the experiment:
    • Geometric or point level accuracy of location estimation;
    • Room level accuracy of location estimation;
    • Latency or delay of location estimation.
  • Derived metric - calculated from the performance metrics:
    • Interference robustness of indoor localization algorithm.

Point Level Accuracy

Point level accuracy at one evaluation point is defined as the Euclidean distance between the ground truth provided by the robotic mobility platform (x_{GT}, y_{GT}) and the location estimated by the indoor localization algorithm (x_{EST}, y_{EST}), given by the following equation:

A_{point} = \sqrt{(x_{GT} - x_{EST})^2 + (y_{GT} - y_{EST})^2}         (7.1)

Room Level Accuracy

Room level accuracy of location estimation is a binary metric stating the correctness of the estimated room, given by the following equation:

A_{room} = 1 if the estimated room equals the ground-truth room, and A_{room} = 0 otherwise         (7.2)

Latency of Location Estimation

Latency or delay of location estimation is the time that the SUT needs to report a location estimate when requested. The time measured in the evaluation is the difference between the moment when the request for indoor localization is sent to the SUT (t_{request}) and the moment when the response arrives (t_{response}), given by the following equation:

L = t_{response} - t_{request}         (7.3)

Interference Robustness

Interference robustness of an indoor localization algorithm is a metric that reflects the influence of different interference types on the performance of the indoor localization algorithm. In this evaluation, interference robustness will be expressed as the percentage of change of the other metrics in the scenarios with interference compared to the performance in the scenario without interference (reference scenario). For a generalized metric M, the interference robustness is given by the following equation:

R_M = \frac{M_{interference} - M_{reference}}{M_{reference}} \cdot 100\%         (7.4)

where M_{reference} is the value of metric M in the reference scenario and M_{interference} is the value of metric M in the scenario with interference. Note that if the performance of an algorithm for the performance metric M is better in the scenario with interference than in the reference scenario, the interference robustness metric will be set to 0%.
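
As a purely illustrative example with hypothetical numbers: if the 75th percentile of the point level accuracy is 2.0 m in the reference scenario and 3.0 m in an interference scenario, the interference robustness for this metric is (3.0 - 2.0) / 2.0 · 100% = 50%; if the error were instead smaller under interference, the robustness would be set to 0%.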

Capturing the Evaluation Metrics

The evaluation procedure will be carried out in four steps, namely in the four scenarios described above (the reference scenario and the three interference scenarios). In each step, for each of the 20 evaluation points, the set of metrics (point accuracy, room accuracy, latency) will be obtained. For each set, the 75th percentile of point level accuracy and latency will be calculated, together with the percentage of correctly estimated rooms, as shown in Figure 7.1.

Figure 7.1: Procedure of capturing performance metrics
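
A minimal sketch of this aggregation step, assuming the per-point results of one scenario are already available as plain Python lists (the values shown are hypothetical), is given below.

# Sketch of aggregating per-point results for one scenario (hypothetical values).
import numpy as np

point_errors_m = [1.2, 0.8, 2.5, 1.9, 3.1]        # point level accuracy per evaluation point
latencies_s = [0.4, 0.5, 0.3, 0.6, 0.5]           # latency per evaluation point
rooms_correct = [True, True, False, True, True]   # room level accuracy per evaluation point

error_75th = np.percentile(point_errors_m, 75)
latency_75th = np.percentile(latencies_s, 75)
room_accuracy_pct = 100.0 * sum(rooms_correct) / len(rooms_correct)

print(error_75th, latency_75th, room_accuracy_pct)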

Interference robustness is calculated following the principle shown in Figure 7.2. For each interference scenario, the interference robustness is calculated for each performance metric using Equation 7.4. The overall interference robustness is the interference robustness averaged over all interference scenarios and all performance metrics, given by the following equation:

R_{overall} = \frac{1}{9} \sum_{i=1}^{3} \left( M_1(i) + M_2(i) + M_3(i) \right)         (7.5)

In the equation, the sum goes over the three interference scenarios (i = 1, 2, 3), and M_1(i), M_2(i) and M_3(i) are the interference robustness of the 75th percentile of point accuracy, the interference robustness of the percentage of room level accuracy, and the interference robustness of the 75th percentile of latency for scenario i, respectively.

Figure 7.2: Procedure of capturing derived metric - interference robustness

Calculation of Final Score

The final scores will be calculated according to the approach described in the EVARILOS Benchmarking Handbook (EBH) and presented in Figure 7.3. The EBH proposes calculating the score for each metric according to a linear function defined by specifying the minimal and maximal acceptable values for the metric. Furthermore, the EBH proposes the use of weighting factors for defining the importance of each metric for a given use-case.

Figure 7.3: Calculation of the final score

In general, the linear translation function for calculating the score of each particular metric is given in Equation 7.6, where the score can vary from 0 to 10. The minimal and maximal acceptable values are defined by M_{min} and M_{max}, respectively. Note that M_{min} can be bigger than M_{max}: for example, when defining acceptable point accuracy values one reasons in terms of acceptable localization error margins, so M_{min} is the largest acceptable error, while M_{max} is the desired average localization error.

Score(M) = 10 \cdot \frac{M - M_{min}}{M_{max} - M_{min}}, bounded to the interval [0, 10]         (7.6)

Figure 7.4: Linear translation function for each metric in case when M_{min} < M_{max}
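
A small sketch of Equation 7.6, handling both orderings of the marginal values and clipping the score to the range 0 to 10, is given below; the example marginal values are placeholders, not the ones from Tables 7.1 to 7.3.

# Sketch of the linear translation function (Equation 7.6) with clipping to [0, 10].
# The example marginal values are placeholders, not those from Tables 7.1-7.3.
def metric_score(value, m_min, m_max):
    score = 10.0 * (value - m_min) / (m_max - m_min)
    return max(0.0, min(10.0, score))

# Example: point accuracy where a 5 m error is the worst acceptable value (M_min)
# and a 0.5 m error is the desired value (M_max), i.e. M_min > M_max.
print(metric_score(2.0, m_min=5.0, m_max=0.5))  # approximately 6.67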

The final scores in Track 2 will be calculated for three different categories, i.e. for different sets of marginal values and weights. The sets that will be used for the different metrics in each category are given in Table 7.1, Table 7.2 and Table 7.3.


Table 7.1: Marginal values and weights for the first category (best accuracy)


Table 7.2: Marginal values and weights for the second category (best latency)


Table 7.3: Marginal values and weights for the third category (best interference robustness)


References

[1]   Alix2D2, 2013. Available at: http://www.pcengines.ch/alix2d2.htm.

[2]   CREW Repository, 2013. Available at: http://www.crew-project.eu/repository.

[3]   Metageek Wi-Spy, 2013. Available at: http://www.metageek.net/products/wi-spy.

[4]   N750 Dualband WLAN Router, 2013. Available at: http://www.tp-link.com.de/products/?categoryid=2166.

[5]   OMF 6 Documentation, 2013. Available at: http://omf.mytestbed.net/projects/omf6/wiki/Wiki.

[6]   R&S FSV7 Signal Analyzer, 2013. Available at: http://www.rohde-schwarz.de/de/Produkte/messtechnik-testsysteme/signal-und-spektrumanalysatoren/FSV-%7C-%C3%9Cberblick-%7C-100-%7C-6392.html.

[7]   R&S SMBV100A Vector Signal Generator, 2013. Available at: http://www.rohde-schwarz.de/de/Produkte/messtechnik-testsysteme/signalgeneratoren/SMBV100A.html.

[8]   Turtlebot II Documentation, 2013. Available at: http://wiki.ros.org/Robots/TurtleBot.

[9]   OpenWrt - Wireless Freedom, 2014. Available at: https://www.openwrt.org/.

[10] Filip Lemic, Jasper Büsch, Mikolaj Chwalisz, Vlado Handziski, and Adam Wolisz. Demo Abstract: Testbed Infrastructure for Benchmarking RF-based Indoor Localization Solutions under Controlled Interference. In EWSN'14, University of Oxford, Oxford, UK, 2014.

[11] Wei Liu, Michael Mehari, Stefan Bouckaert, Lieven Tytgat, Ingrid Moerman, and Piet Demeester. Demo Abstract: A Proof of Concept Implementation for Cognitive Wireless Sensor Network on a Large-scale Wireless Testbed. In EWSN'13, 2013.

[12] Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y Ng. ROS: an open-source Robot Operating System. In ICRA workshop, 2009.

[13] Ilenia Tinnirello, Giuseppe Bianchi, Pierluigi Gallo, Domenico Garlisi, Francesco Giuliano, and Francesco Gringoli. Wireless mac processors: programming mac protocols on commodity hardware. In INFOCOM, 2012 Proceedings IEEE, pages 1269-1277. IEEE, 2012.

[14] Tom Van Haute et al. D2.1 Initial Version of the EVARILOS Benchmarking Handbook. 2013.

