Evarilos


Evaluation Opportunities ‐ Invitation

Track 1 ‐ Turn-key RF-based Indoor Localization Solutions in Public Spaces

This document gives a short overview of the iMinds wireless testbed facilities and describes how these testbeds can be used for the evaluation of indoor localization solutions.

The goal is to evaluate localization solutions in a realistic environment. Track 1 focuses on benchmarking complete localization solutions from participants that deploy their own hardware or want to install custom software on the testbed. As such, this track will evaluate a full (commercial or non-commercial) localization system, including aspects such as installation time, energy consumption, etc.

Participants can either:

  • Deploy their own localization hardware in the w-iLab.t office environment;
  • Deploy their own localization software on the sensor or Wi-Fi nodes of the w-iLab.t testbed.

EVARILOS will provide the participants with opportunities to:

  • Test and analyze the performance of their RF-based indoor localization solutions following a standardized evaluation methodology;
  • Test the sensitivity of their approach to radio interference under controlled conditions;
  • Disseminate information about their localization solution as part of the EVARILOS website;
  • Win awards in multiple categories (best accuracy, best price efficiency, etc.) that can be used for promotional purposes.

Table of Contents

1. Testbed Description
1.1 w-iLab.t Office
1.2 w-iLab.t II Zwijnaarde

2. Track 1 technical annexes
2.1 Deployment process and technical requirements
2.2 Interfacing with the experimentation facilities
2.3 Evaluation scenarios
2.4 Evaluation procedure
2.5 Ranking process and confidentiality

3. Participation

1. Testbed Description

The solutions can be tested on one of two available test facilities. Full technical information is available on the w-iLab.t website.

1.1 w-iLab.t Office

The office environment testbed is deployed in the iMinds office spaces, meeting rooms, student lab rooms, corridors, etc. The office testbed spans three floors which are actively used during the day. The testbed is a dynamic environment: a number of people move around the premises, and the constant opening of doors or slight movement of infrastructure (chairs, tables, etc.) is expected and usual. Furthermore, uncontrolled wireless interference typical for office environments can be expected, including Wi-Fi, DECT, etc.

Devices from the participants can be installed in the office testbed, or custom software can be deployed on approximately 200 wireless node locations.

Each node location is equipped with the following hardware:

  • Tmote Sky sensor node (programmable with TinyOS or Contiki);
  • Two IEEE 802.11a/b/g Wi-Fi interfaces (programmable with Linux or Windows);
  • USB slots to install additional hardware;
  • Power plugs to install additional hardware (limited to several per room).

More information is available at the w-iLab.t office configuration web site.

A floorplan of the third office floor (90m by 18m) is shown below (floorplans of the other two floors are very similar and are available on request).


Figure 1.1: Floorplan of the third floor of the w-iLab.t testbed

A figure of the available node locations (on all three floors) is shown below.


Figure 1.2: All available nodes in the w-iLab.t testbed

1.2 w-iLab.t Zwijnaarde

The Zwijnaarde testbed is located in an unmanned utility room above a cleanroom.  Very little outside interference is present in this testbed. Due to the presence of many metal objects, the environment resembles certain manufacturing environments. No persons are present in the environment, and as such the environment is very stable.

Devices from the participants can be installed in the Zwijnaarde testbed, or custom software can be deployed on the available wireless nodes.

The w-iLab.t Zwijnaarde hosts 60 fixed wireless nodes, each equipped with:

  • RMoni sensor node (programmable with TinyOS or Contiki);
  • Two IEEE 802.11a/b/g/n Wi-Fi interfaces (programmable with Linux or Windows);
  • USB slots to install additional hardware;
  • Power plugs to install additional hardware (limited to several locations).

Mobile wireless nodes are available for repeated remote testing. More information is available on the iLab.t website.


Figure 1.3: The w-iLab.t testbed in Zwijnaarde

A floorplan of the Zwijnaarde testbed (60m by 20m) is shown below:


Figure 1.4: Floorplan of the w-iLab.t II testbed in Zwijnaarde

Finally, a figure of the available node locations is shown below:


Figure 1.5: Overview of the nodes in w-iLab.t II

2. Track 1 technical annexes

2.1 Deployment process and technical requirements

Participants are allowed to deploy their own hardware and/or software. All systems have to be either RF-based or, in the case of multimodal systems, possess a strong RF-based component. Hybrid systems that also use other technologies, such as infrared, ultrasound, or RFID, are allowed.

Software can be sensor firmware (TinyOS, Contiki) or PC software (preferably Linux, although Windows is also available). This software can be installed remotely on the available hardware platforms.

If hardware is installed, the participants are invited to visit the test facility and perform the deployment with the support of the EVARILOS consortium. In this way correct installation can be ensured, while it remains the responsibility of the participant. The deployment process is limited to at most two days. A separate time slot will be allocated to each participant to preserve confidentiality. For installing additional hardware, the available power sources are Power over Ethernet (max. 30 W, 12 V), USB or power plugs. A wireless/wired backbone is available. Location information should be available through an easily accessible software interface.


Figure 2.1: The EVARILOS suite: overview

During the experiments, the performance of the localization solution will be evaluated at multiple locations and under different interference conditions (without interference, with ZigBee interference, with Wi-Fi interference, etc.).

To test the performance of the localized device of the system under test at the different evaluation points, the device will be carried either by a test person (w-iLab.t Ghent) or by a remotely controlled robot (w-iLab.t Zwijnaarde).

No action will be required from the participants to choose the evaluation points or to create interference. The only requirement from the system under test is that it can provide location estimates on request (see next section).

2.2 Interfacing with the experimentation facilities

This section gives instructions to participants on how to interact with the benchmarking framework.

Participants have to provide an HTTP Uniform Resource Identifier (URI) on which their algorithm listens for location estimation requests. Upon request, the algorithm must provide the location estimate as a JSON response in the following format:

{
        "coordinate_x":   "Estimated location: coordinate x",
        "coordinate_y":   "Estimated location: coordinate y",
        "coordinate_z":   "Estimated location: coordinate z",
        "room_label":     "Estimated location: room"
}

Figure 2.2: The JSON format

The JSON parameters coordinate_x and coordinate_y are required and must be reported upon request. The parameter coordinate_z is optional, due to the 2D evaluation environment; if it is provided by a SUT, the evaluation team will also calculate the 3D localization error, although this information will not be used in the final scoring. Finally, the parameter room_label is optional; if it is not provided, the EVARILOS Benchmarking Platform will automatically derive the room estimate from the estimated x and y coordinates.
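
As an illustration, a minimal sketch of such an endpoint is given below, using only the Python standard library. The port number and the get_current_estimate() helper are hypothetical placeholders; a participant would replace them with the actual query into their own localization engine.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_current_estimate():
    # Hypothetical placeholder: query the SUT's localization engine here.
    return {
        "coordinate_x": 12.3,   # required
        "coordinate_y": 4.5,    # required
        "coordinate_z": 1.2,    # optional (2D evaluation environment)
        "room_label": "2.01",   # optional; derived from x/y if omitted
    }

class LocationHandler(BaseHTTPRequestHandler):
    # Answers every GET request with the current location estimate as JSON.
    def do_GET(self):
        body = json.dumps(get_current_estimate()).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LocationHandler).serve_forever()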

The technical team will support the participants in deploying their algorithms on the desired hardware, interfacing the SUT with the EVARILOS Benchmarking Platform, controlling the robotic mobility platform, generating the different interference scenarios, and monitoring interference. Furthermore, each participant will be given 4 hours to train their algorithms in the testbed environment before the evaluation process starts. During that time, participants will also be supported by the technical team.

2.3 Evaluation scenarios

This chapter presents the interference scenarios that will be artificially generated in the w-iLab.t testbeds in order to evaluate the different indoor localization algorithms. The goal is to determine if, and to which extent, different types and amounts of RF interference can influence indoor localization performance. The text below presents the reference scenario and describes three interference scenarios that will be used for the evaluation in the testbeds.

2.3.1 Reference Scenario

This reference scenario is instantiated either on the 3rd floor of the w-iLab.t testbed in Ghent, or on the w-iLab.t Zwijnaarde testbed. It is called “Reference scenario”, since no artificial interference is generated and the presence of uncontrolled interference is monitored and minimized.

At multiple evaluation points, the indoor localization SUT will be requested to estimate its location. The SUT device will be carried to each evaluation point by the robotic platform (Zwijnaarde) or by a person (Ghent). The location estimates provided by the robot are highly accurate, with mean localization errors of only a few centimeters, so these estimates can be considered as the ground truth for the indoor localization experiments.

The experiments will be performed in the afternoons, so that the influence of uncontrolled interference is minimized. Furthermore, the wireless spectrum will be monitored using Wi-Spy devices, and all measurements with interference above a certain threshold will be repeated. Finally, during each experiment a measurement of the wireless spectrum will be taken with a spectrum analyser at a predefined location.

2.3.2 Interference Scenario 1

In this scenario, interference is created using IEEE 802.15.4 Tmote Sky nodes. The interference type is jamming on one IEEE 802.15.4 channel with a constant transmit power of 0 dBm. Five of these jamming nodes will be present in the testbed environment. A summary of this interference scenario is given below.


Table 2.1: Interference scenario 1 summary

2.3.3 Interference Scenario 2

The second interference scenario utilizes interference types that are typical for office or home environments. Interference is emulated using four Wi-Fi embedded Personal Computers (PCs): a server, an access point (AP), a data client, and a video client. The server acts as a gateway for the emulated services. The data client is emulated as a TCP client continuously sending data via the AP to the server. Similarly, the video client is emulated as a continuous UDP stream source of 500 kbps with a bandwidth of 50 Mbps. The AP operates on a Wi-Fi channel overlapping with the SUT's channel, with the transmission power set to 20 dBm (100 mW). A summary of the described interference scenario is given below.


Table 2.2: Interference scenario 2 summary

2.3.4 Interference Scenario 3

For the third interference scenario, a signal generator will be used to generate synthetic interference. The generated signal will have the envelope of a characteristic Wi-Fi signal, but without any Carrier Sensing (CS). A summary of interference scenario 3 is given below.


Table 2.3: Interference scenario 3 summary

2.4 Evaluation Procedure

This chapter describes the evaluation procedure that will be followed for Track 1. In order to objectively compare and evaluate the different solutions, the following methodology will be applied:

  • Different metrics are calculated (see list below);
  • Metrics are converted to scores using translation functions, i.e. parameterized functions that convert the physical unit into a score from 0 to 10 for each metric;
  • Scores per metric are combined using weights, resulting in a final overall score;
  • These weights depend on the specific category the solution is evaluated for.


Figure 2.3: The evaluation procedure

2.4.1 Evaluation points

The indoor localization algorithms will be evaluated at 10 different evaluation points (w-iLab.t Ghent) or 25 evaluation points (w-iLab.t Zwijnaarde) under four scenarios. The evaluation points will be selected by the evaluation team and will be the same for all evaluated algorithms. In the first run, all algorithms will be evaluated in the environment without controlled interference and the metrics will be calculated. The following three evaluation runs will be performed in the environment with the three different interference scenarios described before. The locations of the interference sources will be selected by the evaluation team and will be the same for all evaluated algorithms. At each evaluation point, the EVARILOS Benchmarking Platform will request a location estimate from the SUT. The data collected at each point will be automatically stored, and the metrics will be calculated and presented in real time.

2.4.2 Evaluation Metrics

For Track 1, the following metrics will be calculated:

Performance metrics - obtained from the experiment:

  • Geometric or point level accuracy of location estimation;
  • Room level accuracy of location estimation;
  • Latency or delay of location estimation;
  • Energy consumption of the localized node.

Derived metric - calculated from the performance metrics:

  • Interference robustness of indoor localization algorithm.

Deployment metrics - obtained during the experiment:

  • Set-up overhead;
  • Physical installation time;
  • Configuration complexity (based on questionnaire).

Point accuracy

Point level accuracy at one evaluation point is defined as the Euclidean distance between the ground truth provided by the robotic platform, $(x_{GT}, y_{GT})$, and the location estimated by the indoor localization algorithm, $(x_{EST}, y_{EST})$, given by the following equation:

$$ d = \sqrt{(x_{GT} - x_{EST})^2 + (y_{GT} - y_{EST})^2} $$

Room Level Accuracy

Room level accuracy of location estimation is a binary metric stating the correctness of the estimated room, given by the following equation:

$$ A_{room} = \begin{cases} 1, & \text{if the estimated room equals the ground truth room} \\ 0, & \text{otherwise} \end{cases} $$

Latency of Location Estimation

Latency or delay of location estimation is the time that the SUT needs to report the location estimate when requested. The time measured in the evaluation is the difference between the moment when the request for indoor localization is sent to the SUT ($t_{request}$) and the moment when the response arrives ($t_{response}$), given by the equation:

$$ t_{latency} = t_{response} - t_{request} $$
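
As a simple illustration of how these three performance metrics can be computed for a single evaluation point, consider the following sketch (the variable names are hypothetical):

import math

def point_error(x_gt, y_gt, x_est, y_est):
    # Euclidean distance between ground truth and estimated position
    return math.hypot(x_gt - x_est, y_gt - y_est)

def room_correct(room_gt, room_est):
    # Binary room-level accuracy: 1 if the estimated room is correct, else 0
    return 1 if room_est == room_gt else 0

def latency(t_request, t_response):
    # Time between sending the localization request and receiving the response
    return t_response - t_request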

Energy efficiency of the localized node

This information needs to be provided by the participant. The energy efficiency is expressed in watts, based on the available datasheets.

Interference Robustness

Interference robustness of an indoor localization algorithm is a metric that reflects the influence of different interference types on the performance of the indoor localization algorithm. In this evaluation, interference robustness will be expressed as the percentage of change of the other metrics in the scenarios with interference compared to the performance in the scenario without interference (the reference scenario). For a generalized metric $M$, the interference robustness is given by the following equation:

$$ R_M = \frac{M_{interference} - M_{reference}}{M_{reference}} \cdot 100\% $$

where $M_{reference}$ is the value of metric $M$ in the reference scenario, while $M_{interference}$ is the value of metric $M$ in the scenario with interference. Note that if the performance of an algorithm for the performance metric $M$ is better in the scenario with interference than in the reference scenario, the interference robustness metric will be set to 0 %.
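
A minimal sketch of this per-metric computation, assuming that $M$ is a metric for which a larger value means worse performance (e.g. localization error or latency) and that an improvement under interference is reported as 0 %, could look as follows:

def interference_robustness(m_reference, m_interference):
    # Percentage change of metric M under interference relative to the
    # reference scenario; improvements are clipped to 0 %.
    change = (m_interference - m_reference) / m_reference * 100.0
    return max(change, 0.0)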

Setup overhead - physical installation time

This metric measures the time needed to install the complete system. The time is measured from the moment the installation of the SUT starts until all physical components are installed correctly. The number of people involved in the installation will also be taken into account. This includes infrastructure, software, set-up and configuration time.

Setup overhead - configuration complexity

A questionnaire will be used to capture relevant configuration data and to evaluate the complexity of the configuration of the system. Example questions include:

  • Is reconfiguration required if the environment changes?
  • Is fingerprinting required or not?
  • Do you need to manually enter coordinates?
  • Can you place your anchor points anywhere?

2.4.3 Capturing the Evaluation Metrics

The evaluation procedure will be performed in four steps, one for each scenario (the reference scenario and the three interference scenarios). In each step, for each of the evaluation points, the set of metrics (point accuracy, room accuracy, latency) will be obtained. For each set, the 75th percentile of the point level accuracy and of the latency will be calculated, together with the percentage of correctly estimated rooms, as shown in the figure below.


Figure 2.4: Capturing the evaluation metrics
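
A possible sketch of this per-scenario aggregation step is shown below, using Python's statistics module; the exact percentile estimator may differ from the one used by the EVARILOS Benchmarking Platform.

import statistics

def aggregate_scenario(point_errors, latencies, rooms_correct):
    # Per-scenario aggregation: 75th percentile of point error and latency,
    # plus the percentage of correctly estimated rooms.
    return {
        "point_error_p75": statistics.quantiles(point_errors, n=4)[2],
        "latency_p75": statistics.quantiles(latencies, n=4)[2],
        "room_accuracy_pct": 100.0 * sum(rooms_correct) / len(rooms_correct),
    }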

Interference robustness is calculated as follows. For each interference scenario, the interference robustness is calculated for each performance metric. The overall interference robustness is the interference robustness averaged over all interference scenarios and all performance metrics, given by the following equation:

$$ R = \frac{1}{9} \sum_{i=1}^{3} \left( M_1(i) + M_2(i) + M_3(i) \right) $$

In the equation the sum runs over the three interference scenarios (i = 1, 2, 3), and $M_1(i)$, $M_2(i)$ and $M_3(i)$ are the interference robustness of the 75th percentile of point accuracy, of the percentage of room level accuracy, and of the 75th percentile of latency for scenario $i$, respectively. This process is illustrated in the figure below.


Figure 2.5: Final estimation of interference robustness
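
Continuing the earlier sketch, the overall interference robustness could be computed as the mean of the nine per-scenario, per-metric robustness values:

def overall_robustness(per_scenario_robustness):
    # per_scenario_robustness: list of three (M1, M2, M3) tuples, one per
    # interference scenario, holding the robustness of the 75th-percentile
    # point error, the room-level accuracy and the 75th-percentile latency.
    values = [m for scenario in per_scenario_robustness for m in scenario]
    return sum(values) / len(values)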

2.4.4 Calculation of Final Score

Final scores will be calculated according to the approach described in the EVARILOS Benchmarking Handbook (EBH) and presented in the figure below.


Figure 2.6: Final score calculation

The EBH proposes calculating the score for each metric according to a linear function that is defined by specifying the minimal and maximal acceptable values for that metric. Furthermore, the EBH proposes the use of weighting factors to define the importance of each metric for a given use case.

In general, the linear translation function for calculating the score of each particular metric is given in the figure below, where the score can vary from 0 to 10.


Figure 2.7: Linear translation function for each metric in the case when $M_{min} < M_{max}$

The minimal and maximal acceptable values are defined by $M_{min}$ and $M_{max}$, respectively. Note that $M_{min}$ can be bigger than $M_{max}$: for example, when defining acceptable point accuracy values one reasons in terms of acceptable localization error margins, so $M_{min}$ is the largest acceptable error, while $M_{max}$ is the desired average localization error.
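
As an illustration, the translation function could be implemented as in the following sketch, which handles both orientations ($M_{min} < M_{max}$ and $M_{min} > M_{max}$) and clips the result to the 0 to 10 range; the actual marginal values per metric are those listed in Table 2.4.

def metric_score(m, m_min, m_max):
    # Linear translation of a metric value into a score between 0 and 10:
    # m_min maps to score 0 and m_max maps to score 10. The orientation may
    # be reversed (e.g. for localization error, m_min is the largest
    # acceptable error and m_max the desired error).
    score = 10.0 * (m - m_min) / (m_max - m_min)
    return min(10.0, max(0.0, score))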

The following marginal values will be used for the different metrics.


Table 2.4: Marginal values and weights

For the calculation of the final score, the intermediate scores will be weighted. The individual weights depend on the actual use case one is interested in. Therefore different categories will be introduced (see next section).
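
As a sketch of this weighted combination (the metric names and weight values below are purely illustrative; the actual weights per category are given in Table 2.5):

def final_score(scores, weights):
    # Weighted sum of per-metric scores; the weights of the chosen category
    # are assumed to sum to 1.
    return sum(weights[name] * scores[name] for name in weights)

# Illustrative example for a point-accuracy oriented category:
scores  = {"point_accuracy": 7.5, "room_accuracy": 9.0,
           "latency": 6.0, "robustness": 8.0, "setup": 5.0}
weights = {"point_accuracy": 0.5, "room_accuracy": 0.1,
           "latency": 0.1, "robustness": 0.2, "setup": 0.1}
print(final_score(scores, weights))  # approximately 7.35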

2.4.5 Evaluation in different categories

Depending on the application, different metrics will have a different level of importance. Therefore, different categories are introduced. Based on a single measurement set, a final score will be calculated for each category.

  • Most point accurate: targets track and trace applications;
  • Most room level accurate: the absolute error is not so important as long as the correct side of the wall is chosen;
  • Most interference robust;
  • Most installation friendly: targets the ease of deployment.

For each of the categories, the following values will be used for the weights:


Table 2.5: Overview of the weight factors used in the different evaluation categories

Winners will be declared for a category provided there is a sufficient number of candidates. The prize money will be divided equally amongst the winners of all categories.

2.5 Ranking process and confidentiality

By varying the acceptable and desired values, as well as the weight factors, a number of categories will be defined (most accurate, best room accuracy, etc.). Only the top three commercial solutions will be mentioned by name in each category. After the experiments, participants will have the opportunity to remain anonymous or to have their name indicated in the rankings.

3. Participation

To participate in the experiments, an extended abstract (no longer than 4 pages, including figures and tables) should be submitted.

The application details reported in the abstract should at least include:

  • General description of the localization solution;
  • Description of the used technologies;
  • Deployment requirements;
  • A contact person.

Additional technical information is available upon request.

Contact:
Eli De Poorter (eli.depoorter@intec.ugent.be)

