Current Research Interests

J. Liebeherr

Network Protocols for the Post-Internet Era. Peer-to-peer networks – collections of computing devices that self-organize without central control – have proven to be a disruptive technology with the potential to revolutionize information systems. The ability to create large networks on-the-fly has enabled new application services in support of content distribution, file sharing, video streaming, and interactive teleconferencing. But could the role of peer networking be even greater? Is it conceivable that peer network protocols become the foundation of a new architecture that is based entirely on the concepts of self-organizing networks? Can peer network protocols evolve into a follow-on technology to the Internet protocols? In our research, we address these questions by determining the potential and the fundamental limits of the peer networking approach. We envision a network architecture characterized by the coexistence of virtually unlimited numbers of peer networks that can quickly grow to arbitrarily large sizes and adapt to changes in the number of peers and in the substrate network. Such an architecture must be capable of supporting a much broader set of platforms than the Internet protocols support today.

The main vehicle for developing, evaluating, and deploying solutions is an overlay network system, called HyperCast, that my research group has developed over the past several years. HyperCast is an open-source software system (about 100,000 lines of code) for self-organizing application-layer overlay networks. Initially conceived for the empirical evaluation of large-scale reliable multicasting, the software has evolved into a programming platform for application-layer overlay networks that accommodates different types of overlay topologies and permits a variety of message semantics. HyperCast consists of a set of software-based protocols that support the ad hoc creation of peer groups for point-to-point, point-to-multipoint, and multipoint-to-point delivery among peers. HyperCast has recently been enhanced with protocols for mobile ad hoc networks. The HyperCast software has been used to build complex software systems, for example, a situation awareness system for supporting emergency responders, video streaming systems, and a sensor network application for area protection.
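One of the overlay topologies supported by HyperCast, and the subject of the 1999 paper cited below, is a logical hypercube in which each peer maintains only a logarithmic number of neighbors. The following sketch illustrates the basic neighbor rule; it is a simplification (plain integer labels, static membership, and a hypothetical class name), whereas the actual protocol uses Gray-code labels and repairs the hypercube incrementally as peers join and leave.

    // Minimal sketch of the neighbor rule in a logical hypercube overlay.
    // Simplification: node labels are plain integers 0..n-1.
    import java.util.ArrayList;
    import java.util.List;

    public class HypercubeNeighbors {

        // Neighbors of 'label' are the labels that differ in exactly one bit.
        static List<Integer> neighbors(int label, int n) {
            List<Integer> result = new ArrayList<>();
            for (int k = 0; (1 << k) < n; k++) {
                int nb = label ^ (1 << k);   // flip bit k
                if (nb < n) {                // skip labels no node currently holds
                    result.add(nb);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(neighbors(5, 12));   // prints [4, 7, 1]
        }
    }

Since every peer tracks at most about log2(n) neighbors, the per-node control overhead grows only logarithmically with the size of the overlay, which is what makes very large peer groups practical.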

A critical part of future research efforts lies in moving peer networking closer to the hardware, possibly embedding it directly in the processors of mobile devices, routers, and switches, which could avoid some of the performance penalties of current peer networks. With a state-of-the-art peer network system available, we can explore how to exploit peer networking technologies in the lower layers of the system architecture. An open question is the viability of the existing HyperCast system as a platform for the interconnection of sensors and mobile devices. Also open are the design principles of protocols that can support networks of arbitrarily large sizes, meet or exceed the throughput and delay performance of existing networks, satisfy security requirements in a mobile and dynamically changing environment, and demonstrate the benefits of the architecture in the context of demanding real-time applications. Recent experiments have shown that a HyperCast overlay network with 10,000 overlay sockets running on 100 Internet hosts can be built in less than one minute. The goal for the next two years is to build significantly larger peer networks of up to one million nodes.

Ultimately, the research aims to enable new advanced technology applications. For example, an array of thousands of sensor devices in northern Ontario that monitor environmental factors could form a peer network that connects to the backbone infrastructure of CA*net4, Canada’s optical research network, and enable climate researchers at the University of Toronto to monitor and visualize the sensors’ measurements in real time.

Publications:
  • “An Overlay Approach to Data Security in Ad-Hoc Networks”, J. Liebeherr and G. Dong, Ad Hoc Networks Journal (to appear).
  • “Programming Overlay Networks with Overlay Sockets”, J. Liebeherr, J. Wang, G. Zhang, Proceedings of 5th COST 264 International Workshop on Networked Group Communications (NGC 2003), Munich, Germany. Pages 242–253, September 2003.
  • “Application-Layer Multicast with Delaunay Triangulations”, J. Liebeherr, M. Nahas, W. Si, IEEE Journal on Selected Areas in Communications, Special Issue on Network Support for Multicast Communication, 20(8):1472–1488, October 2002.
  • “HyperCast: A Protocol for Maintaining Multicast Group Members in a Logical Hypercube Topology”, J. Liebeherr, T. K. Beam, Proceedings of First International Workshop on Networked Group Communication, Pisa, Italy, Springer Verlag, LNCS 1736, Pages 72–89, November 1999.

Statistical Network Calculus. We believe that the development of revolutionary new networks is hampered by the lack of methods for evaluating the performance of radically different designs of network architectures, protocols, and applications.

We assert that a stochastic network calculus, that is, a probabilistic extension of the network calculus conceived in the 1990s, can lead to simple models and fast computational methods for evaluating communication networks that are very different from the networks and protocols in use today. The long-term goal of our research is to develop new theoretical concepts and algorithms for predicting the delay and throughput performance of future networks. Towards this goal, we focus on the following directions (a brief formal sketch of the calculus follows the list):
  • Development of stochastic network calculus theory: We explore the capabilities and limitations of the stochastic network calculus approach and its applications to general network topologies. 
  • Applications: We investigate applications of stochastic network calculus techniques to understand the role of scheduling in high-speed networks, to verify service level agreements between network service providers, and to analyze feedback-based buffer management and congestion control algorithms.
  • Development of computational methods: We develop computational algorithms for the stochastic network calculus that can be applied without requiring familiarity with the theory of the calculus itself.
  • Relation to other analytical approaches: We attempt to leverage and integrate results from related approaches for modeling and analyzing networks with statistical multiplexing, such as queueing networks, linear systems theory, and network decomposition theory.
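For concreteness, the following standard formulation sketches the objects that the statistical calculus builds on (the notation is illustrative). The basic operator is the (min,+) convolution

    (f \otimes g)(t) \;=\; \inf_{0 \le s \le t} \bigl\{ f(s) + g(t-s) \bigr\} .

A node with arrival process A offers a deterministic service curve S if its departures satisfy D(t) \ge (A \otimes S)(t) for all t, and a tandem of H such nodes offers the end-to-end service curve S_1 \otimes S_2 \otimes \cdots \otimes S_H. The statistical calculus relaxes this sample-path guarantee to an effective service curve S^{\varepsilon} that may be violated with probability at most \varepsilon,

    P\bigl\{ D(t) \ge (A \otimes S^{\varepsilon})(t) \bigr\} \;\ge\; 1 - \varepsilon ,

from which delay and backlog bounds holding with probability 1 - \varepsilon follow in the same way that deterministic bounds follow from S.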
Publications:
  • A Network Service Curve Approach for the Stochastic Analysis of Networks, F. Ciucu, A. Burchard, and J. Liebeherr, ACM Sigmetrics '05, June 2005 (Best Student Paper Award). Journal version in IEEE Transactions on Information Theory, 52(6):2300–2312, June 2006.
  • Statistical Per-Flow Service Bounds in a Network with Aggregate Provisioning, J. Liebeherr, S. D. Patek, and A. Burchard, Proceedings of IEEE Infocom 2003. 
  • A Min-Plus Calculus for End-to-End Statistical Service Guarantees, A. Burchard, J. Liebeherr, and S. D. Patek, IEEE Transactions on Information Theory, 52(9):4105–4114, September 2006.
  • Statistical Service Assurances for Traffic Scheduling Algorithms, R. Boorstyn, A. Burchard, J. Liebeherr, C. Oottamakorn, IEEE Journal on Selected Areas in Communications. Special Issue on Internet QoS, December 2000.

Past Research


  • Deterministic Service Guarantees. A significant part of my research efforts in the 1990s addressed traffic control algorithms for networks with deterministic service guarantees. This work included the derivation of best possible, that is, necessary and sufficient, admission control tests for the Earliest-Deadline-First (EDF) and Static Priority (SP) scheduling algorithms, and a proof of the optimality of EDF for a deterministic service. Other work included the development of new scheduling algorithms (RPQ, RPQ+) that approximate the optimal EDF scheme arbitrarily closely, but with a lower implementation complexity than EDF. I also worked on traffic characterization methods for compressed video, such as MPEG video. With these characterizations, and using actual MPEG video traces as benchmark traffic, it became feasible to determine the maximum utilization at which deterministic service guarantees can be maintained for MPEG video traffic.
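In simplified form (illustrative notation, ignoring packetization effects), the EDF admission test reads as follows: flows j with arrival envelopes A*_j and delay bounds d_j are schedulable on a link of capacity C if and only if

    \sum_j A_j^{*}(t - d_j) \;\le\; C\, t \qquad \text{for all } t \ge 0 ,

with the convention A*_j(s) = 0 for s < 0. For piecewise-linear envelopes, admission control reduces to checking this inequality at a finite set of points.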
Publications:
  • Priority Queue Schedulers with Approximate Sorting in Output Buffered Switches, J. Liebeherr, D. E. Wrege, IEEE Journal on Selected Areas in Communications, Special Issue on Next Generation IP Switches and Routers, June 1999.
  • Exact Admission Control in Continuous-Media Networks with Bounded Delay Services, J. Liebeherr, D. E. Wrege, D. Ferrari, IEEE/ACM Transactions on Networking, December 1996.
  • Deterministic Delay Bounds for VBR Video in Packet-Switching Networks: Fundamental Limits and Practical Tradeoffs, D.E. Wrege, E.W. Knightly, H. Zhang, J. Liebeherr, IEEE/ACM Transactions on Networking, June 1996.

  • Class-based Service Guarantees. Since the late 1990s, Internet researchers have developed an interest in class-based QoS architectures that support service guarantees for traffic aggregates rather than for individual flows. Class-based services accept weaker service guarantees in exchange for reduced complexity. In his PhD research, Nicolas Christin explored the limits of such a service, raising the question: What are the strongest service guarantees that can be given in a class-based service architecture? The main contribution of this work is a new traffic control algorithm called JoBS (Joint Buffer Management and Scheduling), which makes scheduling and buffer management decisions in a single step. This is realized by running a feedback control loop at the link-level transmission queue. Nicolas Christin implemented JoBS in the FreeBSD Unix kernel and showed that it is viable to run in modern networks. The implementation of JoBS has been integrated into the ALTQ and KAME packages, which are distributed with FreeBSD, NetBSD, OpenBSD, and BSD/OS.
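The sketch below conveys the flavor of such a feedback loop. All names and the crude multiplicative control law are hypothetical simplifications for illustration; the actual JoBS algorithm is described in the publications below.

    // Illustrative sketch of a JoBS-style feedback loop at a transmission queue.
    public class JobsStyleController {
        private final double linkRate;      // total link rate (bytes/s)
        private final double[] rate;        // per-class allocated service rates
        private final double[] backlog;     // per-class backlog (bytes)
        private final double[] delayRatio;  // target delay of class c relative to class 0

        public JobsStyleController(double linkRate, double[] delayRatio) {
            this.linkRate = linkRate;
            this.delayRatio = delayRatio;
            this.rate = new double[delayRatio.length];
            this.backlog = new double[delayRatio.length];
            java.util.Arrays.fill(rate, linkRate / delayRatio.length);
        }

        public void onArrival(int c, int bytes) { backlog[c] += bytes; }

        // Predicted delay of class c: time to drain its backlog at its current rate.
        private double predictedDelay(int c) { return backlog[c] / rate[c]; }

        // One iteration of the feedback loop: nudge each class toward its target
        // delay ratio, then rescale so the allocation stays feasible on the link.
        public void adjustRates() {
            for (int c = 1; c < rate.length; c++) {
                double error = predictedDelay(c) - delayRatio[c] * predictedDelay(0);
                rate[c] *= (error > 0) ? 1.05 : 0.95;   // too slow: raise its rate
            }
            double sum = 0;
            for (double r : rate) sum += r;
            for (int c = 0; c < rate.length; c++) rate[c] *= linkRate / sum;
        }

        // JoBS also drops traffic when no feasible allocation can meet an absolute
        // delay bound; a crude sufficient test: even the full link is too slow.
        public boolean mustDrop(int c, double maxDelay) {
            return backlog[c] / linkRate > maxDelay;
        }
    }

Making the rate and drop decisions in one loop, rather than in separate scheduling and buffer management modules, is the design point that distinguishes this class of algorithms.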

More Information:

Publications:

  • “Enhancing Class-Based Service Architectures with Adaptive Rate Allocation and Dropping Mechanisms”, N. Christin, J. Liebeherr, and T. F. Abdelzaher, IEEE/ACM Transactions on Networking (to appear).
  • “The QoSbox: A PC-Router for Quantitative Service Differentiation in IP Networks”, N. Christin and J. Liebeherr, Computer Networks (to appear).
  • “A QoS Architecture for Quantitative Service Differentiation”, N. Christin, J. Liebeherr, IEEE Communications Magazine, Special Issue on Scalability in IP-Oriented Networks, 41(6):38–45, June 2003.
  • “A Quantitative Assured Forwarding Service”, N. Christin, J. Liebeherr, T. Abdelzaher, Proceedings of IEEE Infocom 2002, New York, Pages 864–873, June 2002.

  • MAC Protocols. Since my PhD thesis research, I have maintained a strong interest in media access control (MAC) protocols for local and metropolitan area networks. I have worked on two media access protocols, Distributed Queue Dual Bus (DQDB) and Hybrid Fiber-Coax (HFC), in particular on the design and evaluation of mechanisms that improve fairness or provide priority access in these protocols. Realizing fairness and service differentiation in MAC protocols continues to pose difficult research problems, as evidenced by the recently proposed MAC protocols Resilient Packet Ring (IEEE 802.17) and WiFi (IEEE 802.11e). In my PhD thesis, I presented and analyzed a media access protocol that quickly reaches a fair distribution of bandwidth. A follow-up study showed that, in the presence of multi-priority traffic, the IEEE 802.6 standard could not provide preemptive priorities, and proposed a solution to overcome this problem.
    The work on HFC networks ("cable modem networks") has had an impact on the IEEE 802.14 standardization committee. The multi-priority media access protocol developed by Mark Corner together with researchers at NIST became the foundation for the priority scheme of the IEEE 802.14 standard effort. (The IEEE 802.14 standard was never completed, since the competing DOCSIS effort became the de facto standard for HFC networks.)
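As background for the fairness and priority problems discussed above, the sketch below models the textbook single-priority DQDB counter rules (the class name and method decomposition are hypothetical; the multi-priority schemes studied in this work maintain such counters per priority level).

    // Simplified model of DQDB's distributed queue for a single priority level.
    public class DqdbStation {
        private int rq = 0;              // outstanding downstream requests
        private int cd = 0;              // empty slots owed to downstream stations
        private boolean hasSegment = false;

        // Request bit observed on the reverse bus: a downstream station
        // has joined the distributed queue.
        public void onRequestBit() { rq++; }

        // This station queues a segment: all requests counted so far are ahead
        // of us; later requests count behind us. (The real protocol also sets
        // a request bit on the reverse bus at this point.)
        public void enqueueSegment() {
            hasSegment = true;
            cd = rq;
            rq = 0;
        }

        // An empty slot passes on the forward bus; returns true if this
        // station uses the slot to transmit its segment.
        public boolean onEmptySlot() {
            if (!hasSegment) {
                if (rq > 0) rq--;        // slot will serve a downstream request
                return false;
            }
            if (cd > 0) {                // still owed to stations ahead of us
                cd--;
                return false;
            }
            hasSegment = false;          // our turn: transmit in this slot
            return true;
        }
    }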

Publications: 

    • A Priority Scheme for the IEEE 802.14 MAC Protocol for Hybrid Fiber-Coax Networks, M. D. Corner, N. Golmie, J. Liebeherr, C. Bisdikian, D. Su, IEEE/ACM Transactions on Networking, 8(2):200–211, April 2000.
    • An Effective Scheme for Pre-Emptive Priorities in Dual Bus Metropolitan Area Networks, J. Liebeherr, I. F. Akyildiz, A. N. Tantawi, Proceedings of ACM Sigcomm '92, August 1992.

  • Telecollaboration Tools. In the mid-1990s, we experimented extensively with MBONE tools and multimedia collaboration tools. We developed a system, called the grounds-wide tele-tutoring system (gwTTS), for tele-tutoring on a campus network. The gwTTS software was later commercialized by the Virginia company Litton-Fibercom (CAMVision 7610 Distance Learning Controller). Paco Hope developed an (unfortunately unpublished) tele-orchestra application for MIDI data over IP multicast.

More Information:

  • gwTTS website (1996)

Publication:
      • “An Interactive Distance Learning Application with Hybrid ATM/IP Networking”, J. Liebeherr, S. R. Brown, R. Albertson, Multimedia Tools and Applications, 11(2):211–229, June 2000.

8/2004