Project acronym: OVERSEE

Project title: Open Vehicular Secure Platform

Project ID: 248333

Call ID: FP7-ICT-2009-4

Programme: 7th Framework Programme for Research and Technological Development

Objective: ICT-2009.6.1: ICT for Safety and Energy Efficiency in Mobility

Contract type: Collaborative project

Duration: 01-01-2010 to 30-06-2012 (30 months)

Deliverable D3.2:

Resource Management Layer Implementation

And Implementation Description



Abstract

This document provides a detailed description of the OVERSEE platform resource management layer. It presents the set of facilities that will serve as the foundation for the future development of OVERSEE applications: the virtualization layer itself, the supported run-time environments, the tools provided to work with XtratuM and a short description of the system configuration.

This version of the document is a preliminary draft and has not yet undergone a full review.

Contents

Abstract

Contents

List of Figures

List of Abbreviations

Document History

1 Introduction

1.1 Scope and Objective of the Document

1.2 Document Outline

2 Separation Layer

2.1 System Resources Management

2.1.1 Time Isolation: Scheduling

2.1.2 Space Isolation: Memory Protection

2.1.3 Audit and Attestation

2.1.4 Partition Update

2.2 XtratuM 2.3 Development Kit

3 XtratuM Guest Support for x86 architecture

3.1 XtratuM Abstraction Layer

3.2 Linux

3.2.1 Linux kernel paravirtualization

3.2.2 Distribution support

3.3 OSEK

3.4 Resident software

4 XtratuM device virtualization engine

4.1 Resources and Virtual Devices

4.2 System Architecture

4.3 Virtual Device Model

4.4 Request Model and Synchronization Mechanism

4.5 Device Publishing and Removal Protocol

4.6 Interrupt Handling

4.7 Bandwidth Reservation

4.8 XMIO memory allocator

4.8.1 Block sizes

4.8.2 Pool performance

4.9 XMIO Throughput

5 Secure I/O Partition

5.1 Generic Components of the Secure I/O Partition

5.1.1 IP Communication

5.1.2 Positioning Service

5.1.3 SVAS

5.1.4 USB Memory Access

5.2 Proof of Concept Components of the Secure I/O Partition

5.2.1 Bluetooth Forwarding

5.2.2 Audio Sharing

5.2.3 HMI Sharing

5.3 Basic Configuration of the Secure I/O Partition

5.3.1 OVERSEE configuration files

6 Conclusion

7 References

List of Figures

Figure 1: OVERSEE scheduling management

Figure 2: State machine for the management of spare slots.

Figure 3: Example of a scheduling sequence with the implemented dynamic scheduler

Figure 4: Performance loss due to the execution burden of context switches. The smaller the slot size, the greater the impact of the context switches.

Figure 5: Virtualization of available resources.

Figure 6: Disk virtualization example.

Figure 7: XtratuM I/O virtualization architecture.

Figure 8: Shared memory distribution. In this example, two devices are offered, the first one with one request queue and the second one with two.

Figure 9: XMIO block diagram. Virtual devices are offered to the clients by means of connections.

Figure 10: XMIO request model.

Figure 11: Buffer chain. The buffer chain lies on the request queue space. The data buffers are located in the buffer pool space at the top of the shared memory (see Figure 8).

Figure 12: Request queue indexes after initialization.

Figure 13: The client inserts requests on the available ring.

Figure 14: State machine for the I/O server for device offering and detachment.

Figure 15: State machine for the I/O client.

Figure 16: XMIO allocator memory distribution

Figure 17: Page allocation and buffer release operations, both involving the LUT.

Figure 18: Generic components of OVERSEE secure I/O Partition

Figure 19: TCP/IP Configuration

Figure 20: PoC components of OVERSEE secure I/O partition

Figure 21: HMI sharing components

Figure 22: Overview of OVERSEE configuration files

List of Abbreviations

CAM Co-operative Awareness Message

CU Communication Unit

ECU Electronic Control Unit

EV Emergency vehicle

GWN Global Wireless Networks

HM Health Monitor

HMI Human Machine Interface

ITS Intelligent Transport Systems

IVN In-Vehicle Network

LWN Local wireless network

OEM Original Equipment Manufacturer

PKI Public Key Infrastructure

PoC Proof of Concept

PS Positioning service

RE Runtime environment

SKPP Separation Kernel Protection Profile

SM Security Module

SVAS Secure Vehicle Access Service

UN User networks

VNC Virtual Network Computing

V2V Vehicle-to-vehicle

V2X Vehicle-to-vehicle or Vehicle-to-Infrastructure

Document History

Version / Date / Changes
0.1 / 23-06-2011 / Draft Version
0.2 / 28-11-2011 / Draft Version
0.3 / 13-02-2012 / Draft Version



1  Introduction

This deliverable has been produced within the Open Vehicular Secure Platform (OVERSEE) project and therefore contains contributions from all partners; the main contributors are UPVLC, the University of Siegen and OpenTech.

The present document describes the implementation of the resource management layer on the OVERSEE platform. This deliverable is the result of Task 3.2 (Resource Management Layer and Implementation Description), the second task of Workpackage 3.

1.1  Scope and Objective of the Document

The scope of this document is the implementation of the resource management layer, covering the set of available facilities required to set up the virtualized environment.

The objective of the current deliverable is to provide a full description of all the virtualization-related facilities. Hence, the description covers:

·  The virtualization layer, including a description of the new features that have been developed for XtratuM in this project.

·  The support for bare, Linux and OSEK run-time environments.

·  The device virtualization layer.

·  Other resource management facilities for which a solution has to be provided as a consequence of the virtualization architecture, such as audio forwarding.

1.2  Document Outline

The rest of the document is structured as follows: Section 1 introduces the scope and objectives of the document. Section 2 describes the virtualization layer and the XtratuM Development Kit distributed to the partners. Section 3 is dedicated to the guest support on top of XtratuM. Section 4 describes the device virtualization solution adopted for OVERSEE. Section 5 covers the secure I/O partition and the remaining resource management facilities. Finally, Section 6 concludes the document.

2  Separation Layer

The XtratuM hypervisor has been chosen as the separation layer. This hypervisor virtualizes the underlying hardware and provides a spatially and temporally isolated runtime environment for applications of a different nature, ranging from single-threaded applications to entire operating systems.

The design of XtratuM makes it feasible to run highly critical real-time applications alongside general-purpose operating systems (GPOS) with no real-time constraints. In addition to providing real-time capabilities, the hypervisor keeps each execution environment spatially isolated. This makes the hypervisor suitable for the OVERSEE project, whose main concern is security.

2.1  System Resources Management

2.1.1  Time Isolation: Scheduling

Scheduling under the ARINC-653 standard must be strictly deterministic. Following this approach, and in order to achieve determinism, XtratuM uses a static table that defines the scheduling plan. Time windows are allocated statically in the configuration file, and at run time each partition is executed only inside its predefined time slot(s).
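As an illustration of this idea, a cyclic plan can be viewed as a static table of time slots, each bound to one partition. The following C sketch is purely illustrative: the type and field names are hypothetical and are not taken from the XtratuM sources or its configuration schema.

    /* Illustrative only: a cyclic plan seen as a static table of time slots.
     * Type and field names are hypothetical, not the actual XtratuM structures. */
    struct sched_slot {
        unsigned int partition_id; /* partition executed in this slot            */
        unsigned int start_ms;     /* offset from the start of the major frame   */
        unsigned int duration_ms;  /* length of the time window                  */
    };

    /* A 50 ms major frame: two partitions plus a 10 ms hole left unassigned,
     * which could later be declared as a spare slot (see section 2.1.1.2). */
    static const struct sched_slot cyclic_plan[] = {
        { .partition_id = 0, .start_ms = 0,  .duration_ms = 20 },
        { .partition_id = 1, .start_ms = 20, .duration_ms = 20 },
        /* 40-50 ms: no partition scheduled */
    };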

This policy is necessary in the context of avionics systems, where a failure to meet a deadline may be fatal. However, within the OVERSEE project, the software expected to run is significantly less critical. The strict scheduling policies of XtratuM are neither disregarded nor affected, but several mechanisms have been added to increase scheduling flexibility: first, the ability to define several scheduling plans and, second, the possibility of dynamically scheduling partitions inside certain allowed time windows.

2.1.1.1  Multiple scheduling plans

The first mechanism is multi-plan scheduling. The mechanism is not novel, in the sense that it is already covered by the ARINC-653 standard; nevertheless, XtratuM support for multi-plan scheduling was implemented during the development phase of the OVERSEE project.

As far as the ARINC-653 standard is concerned, the main reasons to consider the multi-plan mechanism are:

  1. Partitions will certainly have different CPU needs while the system is booting. For example, a partition based on an operating system will probably need more CPU time to boot all of its facilities than a bare partition not based on an operating system.
  2. Hardware failures may force the migration of some partitions from one processor to another. The host processor may then need to sacrifice some of its non-critical partitions to accept the extra burden of the tasks migrated from the failing processor. This implies changing the scheduling plan at run time.

The OVERSEE platform will have to face the first case but is unlikely to face the second. Moreover, the definition of multiple scheduling plans is intended to cover the different CPU needs of each use case; for example, there is no need to schedule the e-call partition if an e-call is not taking place. Therefore, there will be a scheduling plan for each of the possible operating modes of the OVERSEE node.

The ability to change the scheduling plan is reserved for system partitions, which are considered trusted. However, the system partition responsible for changing the plan will not necessarily be able to detect that a plan change is needed; this information may reside in some non-system partition. Consequently, the system needs a request procedure by which non-system partitions inform the relevant system partition of a change in the operating mode.

Therefore, a "Cyclic Scheduling Plan Manager" was developed, which will be executed in an OVERSEE system partition and taking the decision concerning a change of the cyclic scheduling plan, based on the requests for plan changes from other partitions and a policy specified in the <scheduling.xml> file. Figure 1 depicts the general concept, based on sampling channels for the plan change requests, the <scheduling.xml> file and the "Cyclic Scheduling Plan Manager". After the "Cyclic Scheduling Plan Manager" decided to switch to a new plan he invokes an XtratuM hyper call to set the new plan and provide a new set of the partition priorities to the "Spare Time Scheduling Manager".

Figure 1: OVERSEE scheduling management
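The following C sketch illustrates this request flow under several assumptions: the message layout and the helpers policy_allows_switch() and notify_spare_manager() are hypothetical application code, and the hypercall names follow the XtratuM 3.x style (XM_read_sampling_message, XM_switch_sched_plan) although their exact names and signatures in the XtratuM version used by OVERSEE may differ.

    #include <stdint.h>
    #include <xm.h>   /* XtratuM partition API: sampling ports and hypercalls */

    /* Hypothetical message layout sent by a non-system partition over a
     * plan change request sampling channel. */
    struct plan_change_request {
        uint32_t requested_plan;   /* operating mode the partition asks for */
    };

    /* Hypothetical helpers (application code, not part of XtratuM). */
    int  policy_allows_switch(uint32_t plan_id);   /* policy from scheduling.xml */
    void notify_spare_manager(uint32_t plan_id);   /* push new partition priorities
                                                      to the Spare Time Scheduling
                                                      Manager */

    /* Cyclic Scheduling Plan Manager (system partition): read the latest
     * request, check it against the policy and switch the cyclic plan. */
    void plan_manager_step(int request_port)
    {
        struct plan_change_request req;
        uint32_t flags, prev_plan;

        if (XM_read_sampling_message(request_port, &req, sizeof(req), &flags) <= 0)
            return;                                     /* no (new) request       */

        if (!policy_allows_switch(req.requested_plan))
            return;                                     /* request denied by policy */

        XM_switch_sched_plan(req.requested_plan, &prev_plan); /* set the new plan  */
        notify_spare_manager(req.requested_plan);
    }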

2.1.1.2  Dynamic partition scheduling

Even within a specific operating mode, the CPU needs of the partitions change dynamically. The obvious example is the HMI partition: user actions are unpredictable and each of them induces some amount of work on the system. Mainly focused on the user experience, we have developed a mechanism by which dynamic partition scheduling is possible.

This mechanism must be considered an extension of the basic cyclic scheduler of XtratuM. In order to preserve determinism, the cyclic scheduler prevails over the dynamic scheduler: the system integrator explicitly defines some time slots where dynamic scheduling is allowed. From now on, these time slots are referred to as spare slots. Dynamic scheduling is therefore confined to controlled slots so that critical tasks are not affected. It can be said that the cyclic scheduler is the master scheduler while the dynamic scheduler is the slave.

One of the main concerns in the design of XtratuM is simplicity. A dynamic scheduling policy may not be simple, and thus the dynamic scheduling is not performed by XtratuM itself: it is not XtratuM that decides which partition to schedule next inside the spare slots. Instead, we have adopted an existing, though not widespread, solution known as Application Defined Scheduling [1]. This solution was originally developed to allow a user task to define a scheduling policy, the motivation being to let a POSIX based system use scheduling policies not provided by POSIX.

Unlike [1], the implemented mechanism is intended to reduce the hypervisor burden while allowing dynamic scheduling, in order to maximize CPU usage. A good example of a use case where dynamic scheduling is useful involves the security services partition (SecS). A user action may trigger a decryption operation in the SecS. The SecS is idle while it is waiting for a request, and the user partition may be idle while it is waiting for the request to be serviced. These idle times can be reassigned so that they are better used by partitions that have useful work to do.

The dynamic plan is computed by a partition known as the spare host. Partitions that want some extra CPU time must therefore send a request to the spare host. The requests are sent via the hypervisor communication mechanisms, in this case a sampling port.

Finally, the results of this work have been published at a Spanish national workshop [2].

2.1.1.2.1  Mechanism and interface with XtratuM

When designing the scheduling plan of an XtratuM based system, some time slots may remain in which no partition has been scheduled. This may be a deliberate design decision, or simply a consequence of the design leaving holes in the plan where no useful work is performed. These holes are the ones used for dynamic scheduling, and they are explicitly declared in the configuration file.

The implemented mechanism is based on the use of a second-level scheduling plan, or spare plan. This scheduling plan is computed dynamically by the spare host. From the point of view of XtratuM, this solution is simple enough and does not introduce extra scheduling burden on the hypervisor. The spare plan is simply a table formed of [partition, duration] pairs.
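As an illustration, the table of [partition, duration] pairs could be laid out as in the following sketch; the structure and field names are illustrative and are not taken from the XtratuM sources.

    #include <stdint.h>

    #define SPARE_PLAN_MAX_ENTRIES 16   /* illustrative limit */

    /* One entry of the spare plan: a [partition, duration] pair. */
    struct spare_entry {
        uint32_t partition_id;   /* partition to run during spare time            */
        uint32_t duration_us;    /* CPU time granted to it within the spare slots */
    };

    /* The spare plan handed over by the spare host to the hypervisor. */
    struct spare_plan {
        uint32_t nr_entries;
        struct spare_entry entries[SPARE_PLAN_MAX_ENTRIES];
    };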

XtratuM handles this plan using a simple state machine. After booting, the hypervisor has no spare plan. Through a specific hypercall, the spare host (and only the spare host, which is assumed to be a trusted partition) can send a new spare plan to the hypervisor. At the hypervisor level, this plan is checked in order to avoid storing badly formed spare plans. Once the hypervisor has a valid plan stored, the spare host has to launch the plan in order to start scheduling on spare time; currently, this is done by putting the spare host partition into the idle state.
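A minimal sketch of the spare host side of this state machine is shown below. XM_idle_self() is the XtratuM hypercall used to move the calling partition to the idle state; the plan submission hypercall is named XM_set_spare_plan() here purely for illustration, since the actual name and signature are defined by the XtratuM version used in OVERSEE.

    #include <xm.h>   /* XtratuM partition API */

    /* Hypothetical hypercall name, used for illustration only. */
    extern int XM_set_spare_plan(const struct spare_plan *plan, unsigned int size);

    /* Spare host: hand the dynamically computed plan to the hypervisor and,
     * if it is accepted, go idle so that the spare plan is launched. */
    void spare_host_step(const struct spare_plan *plan)
    {
        if (XM_set_spare_plan(plan, sizeof(*plan)) < 0)
            return;          /* plan rejected by the hypervisor (badly formed) */

        XM_idle_self();      /* yield the remaining slot time; the hypervisor now
                                schedules the partitions listed in the spare plan */
    }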