Work Package 2

Middleware design and evaluation

A modular, design-driven approach means identifying the set of functionalities that lies at the heart of the different applications and devising a proper middleware between the P2P network and the applications, enabling a dynamic environment for specific service design and provisioning. We have identified four key fields of intervention, described in the following tasks. The aim of this WP is the design, analysis and fast prototyping (simulations or local implementations) of algorithms and protocols that solve or improve the identified functions. The results of this WP will be used in WP3 for software design and implementation.

Task 2.1: SIP-based approach to P2P signaling

All recently developed P2P platforms (for file sharing, but not only: just think of Skype) have their own proprietary signaling protocols and service primitives. This results in very low reuse of existing protocols and in overlapping functionalities, with waste of resources and poor interoperability between different P2P systems. At the same time, IETF standards already seem to support all the primitive functions required by a P2P signaling system. In particular, the Session Initiation Protocol (SIP), developed within the IETF as the signaling protocol for IP telephony, provides primitives for setting up, maintaining and tearing down sessions between two or more terminals, and it can become an important and promising standard enabler for P2P networking. However, the current SIP architecture is still centralized, as depicted in Fig. 4, and few functions are available for P2P-style service primitives (such as resource sharing and searching).


Fig. 4: Centralized SIP architecture

A possible solution lies in turning the current centralized SIP architecture into a distributed P2P system. Ongoing research activities in this area focus mainly on the definition of a P2P architecture for IP telephony services, riding on the success of P2P telephony with Skype. Profiles, instead, plans to address a richer set of P2P applications, such as real-time audio/video multi-conferencing, distributed video streaming, content distribution, etc. In order to provide such a general platform, further enhancements to SIP need to be defined. The objective is to define a signaling platform (based on SIP) for the next generation of P2P systems. We will call this enhanced protocol “P2P-SIPpro”, where “pro” stands for PROfiles.


Fig. 5: Distributed SIP architecture for P2P networking

The technical approach starts from existing P2P signaling middleware and re-maps as much as possible of its functionality into P2P-SIPpro. The P2P-SIPpro protocol will be a new middleware with a richer set of functions. It will not include all P2P service functions, probably leaving very high-level primitives to application-specific layers that can be integrated on top of P2P-SIPpro. To this purpose, JXTA is a promising candidate, as its modular structure allows the different functions to be easily separated. Fig. 6 depicts the resulting overall signaling architecture.


Fig. 6: Example of overlay P2P signaling architecture
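
To make the idea concrete, the sketch below shows one possible shape of a P2P-SIPpro resource-publication request built on plain SIP syntax. Since P2P-SIPpro will only be defined within this task, the method choice, the custom "P2P-Resource-*" headers and the addresses used here are purely illustrative assumptions, not the protocol specification.

```python
# Illustrative only: a possible shape for a P2P-SIPpro resource-publication
# request. The custom "P2P-Resource-*" headers and the addressing scheme are
# assumptions made for this sketch, not the protocol specification.
import hashlib

def build_publish(resource_name: str, peer_uri: str, call_id: str) -> str:
    """Build a SIP-style PUBLISH announcing that peer_uri shares a resource."""
    # Hash the resource name into the key used for overlay lookups (assumption).
    key = hashlib.sha1(resource_name.encode("utf-8")).hexdigest()
    lines = [
        "PUBLISH sip:overlay.profiles.example SIP/2.0",
        f"From: <{peer_uri}>",
        "To: <sip:overlay.profiles.example>",
        f"Call-ID: {call_id}",
        "CSeq: 1 PUBLISH",
        f"P2P-Resource-Key: {key}",            # hypothetical extension header
        f"P2P-Resource-Name: {resource_name}",  # hypothetical extension header
        "Content-Length: 0",
        "",
        "",
    ]
    return "\r\n".join(lines)

if __name__ == "__main__":
    print(build_publish("lecture-01.mp4", "sip:alice@192.0.2.10", "a84b4c76e66710"))
```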

The advantages of using SIP are:

  1. it is already used for time-sensitive content distribution applications, and application terminals are expected to implement a SIP stack;
  2. it is simple, flexible and easily extensible;
  3. it already addresses several basic functionalities that a P2P application has to deal with, such as NAT and/or firewall traversal, peer addressing, and session/capability negotiation.

An important aspect to be taken into account during protocol design is security. Peer authentication, data integrity protection and confidentiality are fundamental issues that will be addressed specifically in T2.4, in tight cooperation with this task.

Task 2.2: Overlay network setup and maintenance

Different applications should coexist on top of the same overlay network. From this point of view, the P2P application middleware can be seen as a stack of layers providing a framework for building distributed applications. In fact, the distribution architecture can be placed in a separate layer with respect to the overlay network, as shown in Fig. 7, even if performance, QoS and security parameters should be carefully evaluated at every level in order to provide reliable and efficient services.


Fig. 7: Topological separation between IP, Overlay and Distribution topologies
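
To illustrate the layering in Fig. 7, the minimal sketch below separates an abstract overlay interface from a distribution layer that only interacts with it. The class and method names are assumptions made for this example, not part of the Profiles middleware.

```python
# A minimal sketch of the layer separation discussed above. The interfaces and
# names (OverlayLayer, DistributionLayer, route/publish) are illustrative
# assumptions, not the Profiles middleware specification.
from abc import ABC, abstractmethod


class OverlayLayer(ABC):
    """Overlay maintenance: membership and key-based routing."""

    @abstractmethod
    def join(self, peer_id: str) -> None: ...

    @abstractmethod
    def leave(self, peer_id: str) -> None: ...

    @abstractmethod
    def route(self, key: str, payload: bytes) -> None: ...


class DistributionLayer:
    """Distribution topology built on top of any OverlayLayer implementation."""

    def __init__(self, overlay: OverlayLayer) -> None:
        self.overlay = overlay  # the two layers only meet at this interface

    def publish(self, content_key: str, chunk: bytes) -> None:
        # How chunks are mapped to keys is a distribution-layer decision;
        # how the message reaches the responsible peer is an overlay decision.
        self.overlay.route(content_key, chunk)
```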

The P2P overlay network must be designed to support applications while guaranteeing dependability in the face of:

  • Changing conditions of the underlying network;
  • Peers connecting to and leaving the overlay;
  • Changes in the trust relationships among peers.

In order to achieve the goals above, the activity in this task will proceed as follows:

  1. Development of mathematical models using Markovian analysis, fluid models, and stochastic random graphs to model the P2P overlay networks and their interaction with applications. In particular, these models must be oriented towards the design of dynamic strategies for resource management, routing, and topology maintenance, and must therefore describe the time evolution of the P2P nodes and its effects on such strategies (a minimal fluid-model example is sketched below);
  2. Formulation of dynamic allocation and routing problems as distributed optimal control problems, with constraints on the maximum employed resources as well as on the minimum allowed performance. The selected performance metrics must account for possible incentives for cooperative users;
  3. Development of resource allocation and routing algorithms and detailed simulations thereof taking into account the application contexts singled out in WP1;
  4. Evaluation of the proposed algorithms by means of performance metrics that account for the user’s satisfaction with respect to the application under investigation;
  5. Comparison with other existing middleware (e.g., JXTA);
  6. Evaluation of DHT-based overlays, which map nodes and resources to keys and provide key-based routing (see the sketch below). A common framework that generalizes the main addressing and discovery goals should also be studied for general-purpose applications.

UniFi and UniTo RUs will cooperate to reach all these goals.
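
As an illustration of the fluid models mentioned in step 1, a minimal (purely illustrative) description tracks only the peer population N(t), assuming peers arrive at rate lambda and each peer departs at rate mu:

```latex
% Minimal fluid model of the peer population (illustrative assumption):
% arrivals at rate \lambda, per-peer departure rate \mu.
\frac{dN(t)}{dt} = \lambda - \mu\,N(t),
\qquad
\lim_{t\to\infty} N(t) = \frac{\lambda}{\mu}
```

For step 6, the sketch below illustrates the key-based routing idea behind DHT overlays: nodes and resources are hashed onto the same identifier ring, and each resource key is stored at its clockwise successor node. The ring size, hash truncation and naming used here are assumptions made for readability, not design choices of the project.

```python
# A toy sketch of key-based routing in DHT overlays: nodes and resources are
# hashed onto the same identifier ring, and a resource key is stored at its
# clockwise successor node. Ring size and hashing are illustrative assumptions.
import hashlib
from bisect import bisect_right

RING_BITS = 32  # small ring for readability

def ring_id(name: str) -> int:
    digest = hashlib.sha1(name.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** RING_BITS)

def successor(node_ids: list[int], key_id: int) -> int:
    """Return the first node clockwise from key_id (wrapping around the ring)."""
    idx = bisect_right(node_ids, key_id)
    return node_ids[idx % len(node_ids)]

nodes = sorted(ring_id(f"peer-{i}") for i in range(8))
key = ring_id("lecture-01.mp4")
print(f"key {key} is stored at node {successor(nodes, key)}")
```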

Task 2.3: Distribution architecture and topology

Initial works on P2P content distribution systems considered the number of hops between source and destinations in the overlay graph as the metric of interest, often assuming a tree-based distribution architecture. However, the real measure of interest is the delivery time: the number of hops maps faithfully onto it only in homogeneous scenarios, while in heterogeneous ones it may not be a good representation. Extending the performance analysis to dynamic, heterogeneous cases is far from trivial. The specific task of defining and building the distribution topology must take into account the modelling activity in WP1; however, it is a function that can, and should, be largely independent of the specific application. The UniTn RU will use sophisticated mathematical tools such as Stochastic Graph Processes (SGP) to analyze the distribution process. The use of SGP leads to a dynamic description of the system in terms of Master Equations (the differential form of the Chapman-Kolmogorov equations describing the evolution of a stochastic chain), a well-known mathematical tool in physics and biochemistry. SGP have rarely, if at all, been used in the study of networks, but they appear to be particularly suited to the study of P2P systems, accounting in an abstract yet representative way both for the details of the distribution protocol and for the impairments introduced by the Internet. Fig. 8 describes the process of modelling a specific distribution protocol: the associated Master Equations are used to evaluate the performance, while from the SGP description the design of improved protocols and procedures can be derived.


Fig. 8: SGP and Master Equations in modeling and analysis of distribution processes
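
For reference, the Master Equation mentioned above can be stated in its generic textbook form (no project-specific assumptions): for the probability P_s(t) of finding the system in state s at time t, with W(s' -> s) the transition rate from state s' to state s,

```latex
% Generic Master Equation: gain minus loss terms for the probability of state s.
\frac{dP_s(t)}{dt}
  = \sum_{s' \neq s} \Big[\, W_{s' \to s}\, P_{s'}(t) \;-\; W_{s \to s'}\, P_s(t) \,\Big]
```

In the SGP setting, the state s would encode the instantaneous overlay and distribution graph, while the transition rates would capture both the actions of the distribution protocol and the impairments introduced by the Internet.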

In parallel with the above activity, two other research lines will be led by UniTo. The first one deals with finding appropriate solutions for the distribution of “flash-patches.” Flash-patches are pieces of critical software distributed in response to the appearance in the Internet of some malware (destructive software such as viruses and worms). The key point is that flash-patches should propagate faster than the malware. This can be achieved, since the malware propagates following a random process, while the flash-patch can follow an optimized distribution process. The second line is related to network coding and erasure codes applied to the digital fountain approach. These techniques have proved useful in increasing the throughput of overlay networks used for content distribution. However, the actual gain depends on the correlation between the topology and the coding strategy. For this reason, studying coding techniques that are partially independent of the application, yet correlated with the distribution architecture and with the overlay topology, may prove a successful approach.
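
As a toy illustration of the digital fountain approach mentioned above, the sketch below generates encoded symbols as random XOR combinations of equal-size source blocks, so that a receiver can recover the content from any sufficiently large set of symbols. The uniform degree choice is a deliberate simplification (real LT/Raptor codes use carefully designed degree distributions), and the function names are ours.

```python
# Toy sketch of the digital-fountain idea: encoded symbols are random XOR
# combinations of source blocks. The uniform degree choice below is a
# simplification for illustration, not a production fountain code.
import random

def fountain_symbols(blocks: list[bytes], count: int, seed: int = 0):
    """Yield (index_list, xor_payload) pairs; indices say which blocks were mixed."""
    rng = random.Random(seed)
    n = len(blocks)
    for _ in range(count):
        degree = rng.randint(1, n)           # simplified degree distribution
        chosen = rng.sample(range(n), degree)
        payload = bytes(len(blocks[0]))      # assumes equal-size blocks
        for i in chosen:
            payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
        yield sorted(chosen), payload

if __name__ == "__main__":
    source = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
    for idx, sym in fountain_symbols(source, count=6):
        print(idx, sym.hex())
```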

Task 2.4: Privacy and security provisioning

The role of T2.4 is the modeling and design of protocols and algorithms for privacy and security suitable for implementation in a P2P context. Starting from the application-needs modeling phase of T1.3, T2.4 will provide the theoretical building blocks required to bring security and privacy to applications. On the one hand, there is the need to study encryption algorithms compatible with the application requirements of T1.3; on the other hand, difficulties related to privacy (up to anonymity) and to data integrity can arise. A specific research subject will be the evaluation of flexible and high-performance encryption functions; these functions must use keys generated during the access phase and guarantee the privacy of point-to-point or end-to-end communications. A different research line will be the investigation of forms of transitive trust or reputation-based metrics for the creation of autonomous trust groups. By means of encryption, anonymity can also be guaranteed, if necessary, via suitable protocols and without interfering with data integrity. To this end, it is important to guarantee the robustness of key/value pairs, which can be provided by digital signature algorithms. In fact, a key requirement of a P2P network is to prevent a misbehaving user from modifying the key/value pairs relative to other users, thus corrupting the communication. These digital signature algorithms also rely on keys that must be negotiated by flexible and high-performance schemes. In particular, public/private key encryption algorithms, despite their flexibility, can be too computationally expensive for real-time communications.
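
As an illustration of the key/value integrity requirement (a minimal sketch, not the scheme T2.4 will select), a peer can sign the records it publishes so that other peers can reject tampered entries. The example below uses Ed25519 signatures from the third-party cryptography package; the record layout and helper names are assumptions made for this sketch.

```python
# Minimal sketch of signed key/value records in a P2P overlay, using Ed25519
# signatures from the third-party `cryptography` package. Record layout and
# helper names are illustrative assumptions, not the T2.4 design.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_record(priv: Ed25519PrivateKey, key: bytes, value: bytes) -> bytes:
    # Sign the concatenation so neither the key nor the value can be swapped
    # independently (a real scheme would use a proper canonical encoding).
    return priv.sign(key + b"\x00" + value)

def verify_record(pub: Ed25519PublicKey, key: bytes, value: bytes, sig: bytes) -> bool:
    try:
        pub.verify(sig, key + b"\x00" + value)
        return True
    except InvalidSignature:
        return False

priv = Ed25519PrivateKey.generate()
pub = priv.public_key()
sig = sign_record(priv, b"peer:alice", b"sip:alice@192.0.2.10")
assert verify_record(pub, b"peer:alice", b"sip:alice@192.0.2.10", sig)
assert not verify_record(pub, b"peer:alice", b"sip:mallory@203.0.113.7", sig)
```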