Filter
  • Architecture
  • Dissertation
  • Distributed Systems
  • Embedded Systems
  • Energy Systems
  • Graphics
  • Low Power
  • Low Power Computing
  • Machine Learning
  • Measurement
  • Memory
  • Networking
  • Operating Systems
  • Programming Languages
  • Rust
  • Science
  • Security
  • Simulation
  • Virtual Worlds
  • Wireless Networks

2024

Privacy-Preserving Control of Partitioned Energy Resources
Evan Laufer, Philip Levis, and Ram Rajagopal
Published in Proceedings of the ACM SIGEnergy Workshop on Cybersecurity and Privacy of Energy Systems, June 2024
Abstract
Distributed energy resources are an increasingly important part of the electric grid. We examine the problem of partitioning a distributed energy resource among many users while providing privacy to them. In this model, clients can send requests to a server; the server can verify that the requests are valid and aggregate them, but it cannot see the actual values in the requests. Without privacy, each user is forced to reveal their daily schedule or energy use. Energy resources add a novel challenge that prior systems do not address: they require verifying limits on private power (a rate over time) and energy (a sum) values. Furthermore, the cryptographic mechanisms must run on embedded energy control systems. We describe Weft, a novel cryptographic system that verifies both power (rate) and energy (integral) constraints on private client values and aggregates them. The key insight behind the approach is to rely on additively homomorphic secret shares, which allow servers to compute sums from rates. We present three cryptographic proof systems with different trade-offs for embedded systems: bit-splitting proofs minimize memory use, sorting proofs minimize computation, and commitment proofs minimize network communication. Using bit-splitting proofs, it takes an IoT client using a Cortex-M microcontroller 4 minutes of compute time to privately control its share of an energy resource for a day at 20s granularity.
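To make the secret-sharing insight concrete, here is a minimal Rust sketch (not from the paper; the modulus, two-server setup, and names are invented for illustration) showing how additive shares of per-timestep power values can be aggregated across clients and summed over time into shares of total energy.

// Minimal sketch of additive secret sharing over a prime field, illustrating
// how per-timestep power shares also yield energy (the sum over time).
// The modulus, two-server setup, and function names are illustrative.

const P: u64 = 2_305_843_009_213_693_951; // 2^61 - 1, a Mersenne prime

fn add(a: u64, b: u64) -> u64 { ((a as u128 + b as u128) % P as u128) as u64 }
fn sub(a: u64, b: u64) -> u64 { add(a, P - b % P) }

// Split a value into two additive shares: value = s0 + s1 (mod P).
fn share(value: u64, blind: u64) -> (u64, u64) {
    (blind % P, sub(value, blind))
}

fn main() {
    // Two clients, each with a private power schedule (W) at three timesteps.
    let client_power = [[100u64, 250, 0], [50u64, 50, 300]];
    // In practice the blinds come from a CSPRNG; fixed here for brevity.
    let blinds = [[7u64, 11, 13], [17u64, 19, 23]];

    // Each server holds one share per client per timestep.
    let mut server0 = [0u64; 3];
    let mut server1 = [0u64; 3];
    for (c, schedule) in client_power.iter().enumerate() {
        for (t, &p) in schedule.iter().enumerate() {
            let (s0, s1) = share(p, blinds[c][t]);
            server0[t] = add(server0[t], s0); // aggregate across clients
            server1[t] = add(server1[t], s1);
        }
    }

    // Homomorphism: adding shares over time gives shares of total energy.
    let e0 = server0.iter().fold(0, |acc, &s| add(acc, s));
    let e1 = server1.iter().fold(0, |acc, &s| add(acc, s));
    let total_energy = add(e0, e1);
    assert_eq!(total_energy, 100 + 250 + 0 + 50 + 50 + 300);

    // Per-timestep aggregate power is recovered the same way.
    let t0 = add(server0[0], server1[0]);
    assert_eq!(t0, 150);
    println!("aggregate energy = {total_energy}, aggregate power at t0 = {t0}");
}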
BibTeX entry
@inproceedings{ laufer-energysp24 ,
  author    = "Evan Laufer, Philip Levis, and Ram Rajagopal",
  title     = "{Privacy-Preserving Control of Partitioned Energy Resources}",
  booktitle = "{Proceedings of the ACM SIGEnergy Workshop on Cybersecurity and Privacy of Energy Systems}",
  year      = {2024},
  month     = {June}
}
        
ALTO: An Efficient Network Orchestrator for Compound AI Systems
Keshav Santhanam, Deepti Raghavan, Muhammad Shahir Rahman, Thejas Venkatesh, Neha Kunjal, Pratiksha Thaker, Philip Levis, and Matei Zaharia
Published in Proceedings of The 4th Workshop on Machine Learning and Systems (EuroMLSys), April 2024
Abstract
We present ALTO, a network orchestrator for efficiently serving compound AI systems such as pipelines of language models. ALTO leverages an optimization opportunity specific to generative language models: streaming intermediate outputs from the language model to downstream stages. We highlight two challenges that emerge while serving these applications at scale: handling stages that are stateful across partial outputs, and handling the variable amounts of text that language models can produce. To address these challenges, we motivate the need for an aggregation-aware routing interface and distributed prompt-aware scheduling. ALTO’s partial output streaming increases throughput by up to 3× for a fixed latency target of 4 seconds/request and reduces tail latency by 1.8× compared to a baseline serving approach, on a complex chatbot verification pipeline.
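A minimal Rust sketch of the partial-output streaming idea (this is not ALTO's API; the channel, token granularity, and stage names are invented for illustration): the downstream stage consumes tokens as they arrive instead of waiting for the complete model output.

// Sketch of partial-output streaming between two pipeline stages (not ALTO's
// actual API): an upstream "generator" streams tokens over a channel and a
// downstream stage consumes them as they arrive instead of waiting for the
// full response.
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // Upstream stage: stands in for a language model emitting tokens.
    let generator = thread::spawn(move || {
        for token in ["The", "answer", "is", "42", "."] {
            thread::sleep(Duration::from_millis(20)); // simulated decode time
            tx.send(token.to_string()).unwrap();
        }
        // Dropping tx closes the stream.
    });

    // Downstream stage: aggregates partial output and can act early,
    // e.g. route or verify once enough text has arrived.
    let mut partial = String::new();
    for token in rx {
        if !partial.is_empty() {
            partial.push(' ');
        }
        partial.push_str(&token);
        println!("downstream saw partial output: {partial:?}");
    }

    generator.join().unwrap();
}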
BibTeX entry
@inproceedings{ alto-euromlsys24 ,
  author    = "Keshav Santhanam, Deepti Raghavan, Muhammad Shahir Rahman, Thejas Venkatesh, Neha Kunjal, Pratiksha Thaker, Philip Levis, and Matei Zaharia.",
  title     = "{ALTO: An Efficient Network Orchestrator for Compound AI Systems}",
  booktitle = "{Proceedings of The 4th Workshop on Machine Learning and Systems (EuroMLSys)}",
  year      = {2024},
  month     = {April}
}
        

2023

A Case Against CXL Memory Pooling
Philip Levis, Kun Lin, and Amy Tai
Published in Proceedings of the Twenty-Second ACM Workshop on Hot Topics in Networks (HotNets), November 2023
Abstract
Compute Express Link (CXL) is a replacement for PCIe. With much lower latency than PCIe and hardware support for cache coherence, programs can efficiently access remote memory over CXL. These capabilities have opened the possibility of CXL memory pools in datacenter and cloud networks, consisting of a large pool of memory that multiple machines share. Recent work argues memory pools could reduce memory needs and datacenter costs. In this paper, we argue that three problems preclude CXL memory pools from being useful or promising: cost, complexity, and utility. The cost of a CXL pool will outweigh any savings from reducing RAM. CXL has substantially higher latency than main memory, enough so that using it will require substantial rewriting of network applications in complex ways. Finally, from analyzing two production traces from Google and Azure Cloud, we find that modern servers are large relative to most VMs; even simple VM packing algorithms strand little memory, undermining the main incentive behind pooling. Despite recent research interest, as long as these three properties hold, CXL memory pools are unlikely to be a useful technology for datacenter or cloud systems.
BibTeX entry
@inproceedings{ cxl-hotnets23 ,
  author    = "Philip Levis, Kun Lin, and Amy Tai",
  title     = "{A Case Against CXL Memory Pooling}",
  booktitle = "{Proceedings of the Twenty-Second ACM Workshop on Hot Topics in Networks (HotNets)}",
  year      = {2023},
  month     = {November}
}
        
Cornflakes: Zero-Copy Serialization for Microsecond-Scale Networking
Deepti Raghavan, Shreya Ravi, Gina Yuan, Pratiksha Thaker, Sanjari Srivastava, Micah Murray, Pedro Henrique Penna, Amy Ousterhout, Philip Levis, Matei Zaharia, and Irene Zhang
Published in Proceedings of The 29th ACM Symposium on Operating Systems Principles, October 2023
Abstract
Data serialization is critical for many datacenter applications, but the memory copies required to move application data into packets are costly. Recent zero-copy APIs expose NIC scatter-gather capabilities, raising the possibility of offloading this data movement to the NIC. However, as the memory coordination required for scatter-gather adds bookkeeping overhead, scatter-gather is not always useful. We describe Cornflakes, a hybrid serialization library stack that uses scatter-gather for serialization when it improves performance and falls back to memory copies otherwise. We have implemented Cornflakes within a UDP and TCP networking stack, across Mellanox and Intel NICs. On a Twitter cache trace, Cornflakes achieves 15.4% higher throughput than prior software approaches on a custom key-value store and 8.8% higher throughput than Redis serialization within Redis.
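A minimal Rust sketch of the hybrid policy (the types, threshold, and names are invented for illustration and are not Cornflakes' interface): small fields are copied into a serialization buffer, while large fields are left in application memory and described as scatter-gather segments for the NIC.

// Sketch of a hybrid copy-vs-scatter-gather serialization policy (types,
// threshold, and names are illustrative, not Cornflakes' interface): small
// fields are copied into a serialization buffer, large fields are referenced
// in place so the NIC can gather them with zero copies.

/// One entry the NIC would gather when transmitting.
enum Segment<'a> {
    Copied(Vec<u8>),    // field copied into a serialization buffer
    ZeroCopy(&'a [u8]), // field left in application memory
}

/// Below this size, copying is cheaper than the per-entry bookkeeping
/// that scatter-gather requires. The exact value is workload-dependent.
const COPY_THRESHOLD: usize = 512;

fn serialize<'a>(fields: &[&'a [u8]]) -> Vec<Segment<'a>> {
    fields
        .iter()
        .map(|&f| {
            if f.len() < COPY_THRESHOLD {
                Segment::Copied(f.to_vec())
            } else {
                Segment::ZeroCopy(f)
            }
        })
        .collect()
}

fn main() {
    let key: &[u8] = b"user:1234";
    let value = vec![0u8; 4096]; // a large cached object
    let segments = serialize(&[key, value.as_slice()]);
    for (i, s) in segments.iter().enumerate() {
        match s {
            Segment::Copied(buf) => println!("segment {i}: copied {} bytes", buf.len()),
            Segment::ZeroCopy(buf) => println!("segment {i}: zero-copy reference to {} bytes", buf.len()),
        }
    }
}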
BibTeX entry
@inproceedings{ cornflakes-sosp23 ,
  author    = "Deepti Raghavan, Shreya Ravi, Gina Yuan, Pratiksha Thaker, Sanjari Srivastava, Micah Murray, Pedro Henrique Penna, Amy Ousterhout, Philip Levis, Matei Zaharia, and Irene Zhang",
  title     = "{Cornflakes: Zero-Copy Serialization for Microsecond-Scale Networking}",
  booktitle = "{Proceedings of The 29th ACM Symposium on Operating Systems Principles}",
  year      = {2023},
  month     = {October}
}
        

2022

Software defined grid energy storage
Sonia Martin, Nicholas Mosier, Obi Nnorom, Yancheng Ou, Liana Patel, Oskar Triebe, Gustavo Cezar, Philip Levis, Ram Rajagopal
Published in BuildSys '22: Proceedings of the 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, December 2022
Abstract
Today, consumer battery installations are isolated, physical devices. Virtual power plants (VPPs) allow consumer devices to aggregate for grid services, but they are vertically integrated, vendor-controlled systems (e.g., Tesla’s VPP). Consumer batteries are therefore unable to participate in energy markets or other grid services outside what their vendor provides. We describe a software system that provides software control of multiple, networked battery energy storage systems in the electric grid. The system introduces two new ideas that enable flexible and dependable management of energy storage. The first is a virtual battery, which can either partition a battery or aggregate multiple batteries. The second is a reservation-based API which allows asynchronous control of batteries to meet contractual guarantees in a safe and dependable manner. Virtual batteries and a reservation-based API address the unique challenges of achieving high and efficient utilization of energy storage systems, including heterogeneity of battery systems such as varying C-rates, participation in energy markets, utility bill management systems, community resource sharing, and reliability. Using a testbed comprised of sonnen Inc. storage units installed in several homes and a lab, we demonstrate that virtualized batteries can seamlessly replace physical batteries, flexibly manage energy storage resources, isolate multiple clients using a shared battery, and create new energy storage applications.
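A minimal Rust sketch of the two ideas (the fields, units, and method names are invented for illustration and are not the system's API): a virtual battery holds a slice of a physical battery's power and energy capacity, and a reservation is admitted only if it fits both the power (rate) and energy (integral) budget.

// Sketch of a virtual battery with a reservation-style interface (field and
// method names, units, and limits are illustrative, not the system's API):
// a physical battery's capacity is partitioned among virtual batteries, and
// a reservation is admitted only if it fits within the virtual battery's
// remaining power and energy budget.

struct VirtualBattery {
    power_limit_w: f64,   // maximum charge/discharge rate for this partition
    energy_limit_wh: f64, // energy capacity allotted to this partition
    reserved_wh: f64,     // energy already committed to reservations
}

struct Reservation {
    power_w: f64,
    duration_h: f64,
}

impl VirtualBattery {
    /// Admit the reservation if it violates neither the power (rate) nor the
    /// energy (integral) budget of this virtual battery.
    fn reserve(&mut self, r: &Reservation) -> Result<(), &'static str> {
        let energy_wh = r.power_w * r.duration_h;
        if r.power_w > self.power_limit_w {
            return Err("exceeds power limit");
        }
        if self.reserved_wh + energy_wh > self.energy_limit_wh {
            return Err("exceeds remaining energy");
        }
        self.reserved_wh += energy_wh;
        Ok(())
    }
}

fn main() {
    // A 10 kWh, 5 kW battery split into two equal virtual batteries; this is one half.
    let mut vb = VirtualBattery { power_limit_w: 2_500.0, energy_limit_wh: 5_000.0, reserved_wh: 0.0 };
    assert!(vb.reserve(&Reservation { power_w: 2_000.0, duration_h: 2.0 }).is_ok());
    // A second 2 kW x 1 h request would exceed the 5 kWh budget (4 kWh + 2 kWh).
    assert!(vb.reserve(&Reservation { power_w: 2_000.0, duration_h: 1.0 }).is_err());
    println!("committed: {} Wh of {} Wh", vb.reserved_wh, vb.energy_limit_wh);
}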
BibTeX entry
@inproceedings{ martin22battery ,
  author    = "Sonia Martin, Nicholas Mosier, Obi Nnorom, Yancheng Ou, Liana Patel, Oskar Triebe, Gustavo Cezar, Philip Levis, Ram Rajagopal",
  title     = "{Software defined grid energy storage}",
  booktitle = "{ BuildSys '22: Proceedings of the 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation}",
  year      = {2022},
  month     = {December}
}
        
GRIP: A Graph Neural Network Accelerator Architecture
Kevin Kiningham, Philip Levis, and Christopher Ré
Published in IEEE Transactions on Computers, September 2022
Best Paper Award, IEEE TC 2023
Abstract
We present GRIP, a graph neural network accelerator architecture designed for low-latency inference. Accelerating GNNs is challenging because they combine two distinct types of computation: arithmetic-intensive vertex-centric operations and memory-intensive edge-centric operations. GRIP splits GNN inference into three edge- and vertex-centric execution phases that can be implemented in hardware. GRIP specializes each unit for the unique computational structure found in each phase. For vertex-centric phases, GRIP uses a high-performance matrix multiply engine coupled with a dedicated memory subsystem for weights to improve reuse. For edge-centric phases, GRIP uses multiple parallel prefetch and reduction engines to alleviate the irregularity in memory accesses. Finally, GRIP supports several GNN optimizations, including an optimization called vertex-tiling that increases the reuse of weight data. We evaluate GRIP by performing synthesis and place and route for a 28 nm implementation capable of executing inference for several widely-used GNN models (GCN, GraphSAGE, G-GCN, and GIN). Across several benchmark graphs, it reduces 99th percentile latency by a geometric mean of 17X and 23X compared to a CPU and GPU baseline, respectively, while drawing only 5 W.
BibTeX entry
@inproceedings{ grip-ieeecomputer22 ,
  author    = "Kevin Kiningham, Philip Levis, and Christopher Ré",
  title     = "{GRIP: A Graph Neural Network Accelerator Architecture}",
  booktitle = "{IEEE TRANSACTIONS ON COMPUTERS}",
  year      = {2022},
  month     = {September}
}
        
Tighten Rust’s Belt: Shrinking Embedded Rust Binaries
Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, Johnathan Van Why
Published in Proceedings of the 23rd ACM SIGPLAN/SIGBED International Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES 2022), June 2022
Abstract
Rust is a promising programming language for embedded software, providing low-level primitives and performance similar to C/C++ alongside type safety, memory safety, and modern high-level language features. We find naive use of Rust leads to binaries much larger than their C equivalents. For flash-constrained embedded microcontrollers, this is prohibitive. We find four major causes of this growth: monomorphization, inefficient derivations, implicit data structures, and missing compiler optimizations. We present a set of embedded Rust programming principles which reduce Rust binary sizes. We apply these principles to an industrial Rust firmware application, reducing size by 76kB (19%), and an open source Rust OS kernel binary, reducing size by 23kB (26%). We explore compiler optimizations that could further shrink embedded Rust.
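One of the causes named above, monomorphization, can be illustrated with a small sketch (invented for illustration; not code from the paper or from Tock): the generic logger is duplicated for every concrete writer type it is used with, while the dyn-trait version is compiled once and dispatches through a vtable, trading a little runtime cost for code size.

// Sketch of the monomorphization trade-off named in the abstract (the example
// is illustrative, not from the paper or Tock): log_generic is compiled once
// per concrete writer type it is used with, growing the binary, while log_dyn
// is compiled once and dispatches through a vtable.
use std::fmt::Write;

// Monomorphized: one copy of this function per W used in the program.
fn log_generic<W: Write>(w: &mut W, msg: &str) {
    let _ = writeln!(w, "[app] {msg}");
}

// Not monomorphized: a single copy, called through a trait object.
fn log_dyn(w: &mut dyn Write, msg: &str) {
    let _ = writeln!(w, "[app] {msg}");
}

fn main() {
    let mut a = String::new();
    let mut b = String::new();
    log_generic(&mut a, "generic path"); // instantiates log_generic::<String>
    log_dyn(&mut b, "single shared copy");
    print!("{a}{b}");
}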
BibTeX entry
@inproceedings{ rust-lctes22 ,
  author    = "Hudson Ayers, Evan Laufer, Paul Mure, Jaehyeon Park, Eduardo Rodelo, Thea Rossman, Andrey Pronin, Philip Levis, Johnathan Van Why",
  title     = "{Tighten Rust's Belt: Shrinking Embedded Rust Binaries}",
  booktitle = "{Proceedings of the 23rd ACM SIGPLAN/SIGBED International Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES 2022)}",
  year      = {2022},
  month     = {June}
}
        
Tiered Trust for Useful Embedded Systems Security
Hudson Ayers, Prabal Dutta, Philip Levis, Amit Levy, Pat Pannuto, Johnathan Van Why, Jean-Luc Watson
Published in Proceedings of the 15th European Workshop on Systems Security (EuroSec), April 2022
Abstract
Traditional embedded systems rely on custom C code deployed in a monolithic firmware image. In these systems, all code must be trusted completely, as any code can directly modify memory or hardware registers. More recently, some embedded OSes have improved security by separating userspace applications from the kernel, using strong hardware isolation in the form of a memory protection unit (MPU). Unfortunately, this design requires either a large trusted computing base (TCB) containing all OS services, or moving many OS services into userspace. The large TCB approach offers no protection against seemingly-correct backdoored code, discouraging the use of kernel code produced by others and complicating security audits. OS services in userspace come at a cost to usability and efficiency. We posit that a model enabling two tiers of trust for kernel code is better suited to modern embedded software practices. In this paper, we present the threat model of the Tock Operating System, which is based on this idea. We compare this threat model to existing security approaches, and show how it provides useful guarantees to different stakeholders.
BibTeX entry
@inproceedings{ eurosec22-tock ,
  author    = "Hudson Ayers, Prabal Dutta, Philip Levis, Amit Levy, Pat Pannuto, Johnathan Van Why, Jean-Luc Watson",
  title     = "{Tiered Trust for Useful Embedded Systems Security}",
  booktitle = "{Proceedings of the 15th European Workshop on Systems Security (EuroSec)}",
  year      = {2022},
  month     = {April}
}
        
Towards Retina-Quality VR Video Streaming: 15 ms Could Save You 80% of Your Bandwidth
Luke Hsiao, Brooke Krajancich, Philip Levis, Gordon Wetzstein and Keith Winstein
Published in Computer Communication Review 52:1, January 2022
Abstract
Virtual reality systems today cannot yet stream immersive, retina-quality virtual reality video over a network. One of the greatest challenges to this goal is the sheer data rates required to transmit retina quality video frames at high resolutions and frame rates. Recent work has leveraged the decay of visual acuity in human perception in novel gaze-contingent video compression techniques. In this paper, we show that reducing the motion-to-photon latency of a system itself is a key method for improving the compression ratio of gaze-contingent compression. Our key finding is that a client and streaming server system with sub-15ms latency can achieve 5x better compression than traditional techniques while also using simpler software algorithms than previous work.
BibTeX entry
@inproceedings{ hsiao22-retina ,
  author    = "Luke Hsiao, Brooke Krajancich, Philip Levis, Gordon Wetzstein and Keith Winstein",
  title     = "{Towards Retina-Quality VR Video Streaming: 15 ms Could Save You 80% of Your Bandwidth}",
  booktitle = "{Computer Communication Review 52:1}",
  year      = {2022},
  month     = {January}
}
        

2021

Receiving Colliding LoRa Packets with Hard Information Iterative Decoding
Raejoon Jung and Philip Levis
Published in Proceedings of the IEEE Conference and Exhibition on Global Telecommunications (GLOBECOM), December 2021
Abstract
This paper presents symbol querying and symbol SIC, two techniques which allow LoRa receivers to recover colliding packets. A symbol querying receiver allows the demodulator and channel decoder to jointly search for the correct set of symbols during a collision. By operating in the frequency domain, both symbol querying and symbol SIC greatly limit the search space of possible packets, allowing for efficient implementations. Experimental results show that these techniques allow LoRa to elevate error detection to correction and outperform a BICM-ID receiver, receiving 3.8x more frames than a traditional LoRa receiver in a low SINR setting.
BibTeX entry
@inproceedings{ lorawan-globecom21 ,
  author    = "Raejoon Jung and Philip Levis",
  title     = "{Receiving Colliding LoRa Packets with Hard Information Iterative Decoding}",
  booktitle = "{Proceedings of the IEEE Conference and Exhibition on Global Telecommunications (GLOBECOM)}",
  year      = {2021},
  month     = {December}
}
        
Clamor: Extending Functional Cluster Computing Frameworks with Fine-Grained Remote Memory Access
Pratiksha Thaker, Hudson Ayers, Deepti Raghavan, Ning Niu, Philip Levis, and Matei Zaharia
Published in Proceedings of the ACM Symposium on Cloud Computing (SoCC 2021), November 2021
Abstract
We propose Clamor, a functional cluster computing framework that adds support for fine-grained, transparent access to global variables for distributed, data-parallel tasks. Clamor targets workloads that perform sparse accesses and updates within the bulk synchronous parallel execution model, a setting where the standard technique of broadcasting global variables is highly inefficient. Clamor implements a novel dynamic replication mechanism in order to enable efficient access to popular data regions on the fly, and tracks fine-grained dependencies in order to retain the lineage-based fault tolerance model of systems like Spark. Clamor can integrate with existing Rust and C++ libraries to transparently distribute programs on the cluster. We show that Clamor is competitive with Spark in simple functional workloads and can improve performance significantly compared to custom systems on workloads that sparsely access large global variables: from 5X for sparse logistic regression to over 100X on distributed geospatial queries.
BibTeX entry
@inproceedings{ thaker21-clamor ,
  author    = "Pratiksha Thaker, Hudson Ayers, Deepti Raghavan, Ning Niu, Philip Levis, and Matei Zaharia ",
  title     = "{Clamor: Extending Functional Cluster Computing Frameworks with Fine-Grained Remote Memory Access}",
  booktitle = "{Proceedings of the ACM Symposium on Cloud Computing (SoCC 2021)}",
  year      = {2021},
  month     = {November}
}
        
Breakfast of Champions: Towards Zero-Copy Serialization with NIC Scatter-Gather
Deepti Raghavan, Philip Levis, Matei Zaharia, and Irene Zhang
Published in Proceedings of the 18th Workshop on Hot Topics in Operating Systems (HotOS XVIII), June 2021
Abstract
Microsecond I/O will make data serialization a major bottleneck for datacenter applications. Serialization is fundamentally about data movement: serialization libraries coalesce and flatten in-memory data structures into a single transmittable buffer. CPU-based serialization approaches will hit a performance limit due to data movement overheads and be unable to keep up with modern networks. We observe that widely deployed NICs possess scatter-gather capabilities that can be re-purposed to accelerate serialization’s core task of coalescing and flattening in-memory data structures. It is possible to build a completely zero-copy, zero-allocation serialization library with commodity NICs. Doing so introduces many research challenges, including using the hardware capabilities efficiently for a wide variety of non-uniform data structures, making application memory available for zero-copy I/O, and ensuring memory safety.
BibTeX entry
@inproceedings{ raghavan21-hotos ,
  author    = "Deepti Raghavan, Philip Levis, Matei Zaharia, and Irene Zhang",
  title     = "{Breakfast of Champions: Towards Zero-Copy Serialization with NIC Scatter-Gather}",
  booktitle = "{Proceedings of the 18th Workshop on Hot Topics in Operating Systems (HotOS XVIII)}",
  year      = {2021},
  month     = {June}
}
        
Power Clocks: Dynamic Multi-Clock Management for Embedded Systems
Holly Chiang, Hudson Ayers, Daniel B. Giffin, Amit Levy and Philip Levis
Published in Proceedings of the International Conference on Embedded Wireless Systems and Networks (EWSN 2021), February 2021
Abstract
This paper presents Power Clocks, a kernel-based dynamic clock management system that reduces active energy use in embedded microcontrollers by changing the clock based on ongoing computation and I/O requests. In Power Clocks, kernel hardware drivers asynchronously request clocks, providing a set of constraints (e.g., maximum speed), which the kernel uses to dynamically choose the most efficient clock. To select a clock, Power Clocks makes use of the observation that though slower clocks use less power and are suited for fixed time I/O operations, faster clocks use less energy per clock tick, making them optimal for pure computation. Using Power Clocks, a networked sensing application consumes 27% less energy than the best static clock, and within 3% of an optimal hand-tuned dynamic clock strategy. Power Clocks provides similar energy savings even when there are multiple applications.
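A minimal Rust sketch of the selection rule described above (the clock table, constraint form, and names are invented for illustration and are not the Power Clocks API): fixed-time I/O picks the slowest clock that satisfies the drivers' constraints, while pure computation picks the fastest allowed clock, since faster clocks use less energy per cycle.

// Sketch of constraint-driven clock selection (the clock table, constraint
// form, and names are illustrative, not the Power Clocks API): I/O-bound work
// picks the slowest clock that satisfies all constraints, while compute-bound
// work picks the fastest allowed clock.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Clock {
    hz: u32,
}

struct Constraint {
    min_hz: u32,
    max_hz: u32,
}

enum Workload {
    FixedTimeIo, // e.g., waiting on a bus transfer
    Compute,     // pure computation, finishes sooner on a faster clock
}

fn select_clock(clocks: &[Clock], constraints: &[Constraint], w: Workload) -> Option<Clock> {
    let mut feasible: Vec<Clock> = clocks
        .iter()
        .copied()
        .filter(|c| constraints.iter().all(|k| c.hz >= k.min_hz && c.hz <= k.max_hz))
        .collect();
    feasible.sort_by_key(|c| c.hz);
    match w {
        Workload::FixedTimeIo => feasible.first().copied(), // slowest adequate clock
        Workload::Compute => feasible.last().copied(),      // fastest allowed clock
    }
}

fn main() {
    let clocks = [Clock { hz: 32_768 }, Clock { hz: 1_000_000 }, Clock { hz: 48_000_000 }];
    // A peripheral driver that needs at least 1 MHz; nothing caps the maximum below 48 MHz.
    let constraints = [Constraint { min_hz: 1_000_000, max_hz: 48_000_000 }];
    assert_eq!(select_clock(&clocks, &constraints, Workload::FixedTimeIo), Some(Clock { hz: 1_000_000 }));
    assert_eq!(select_clock(&clocks, &constraints, Workload::Compute), Some(Clock { hz: 48_000_000 }));
    println!("clock selection sketch OK");
}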
BibTeX entry
@inproceedings{ chiang21-clocks ,
  author    = "Holly Chiang, Hudson Ayers, Daniel B. Giffin, Amit Levy and Philip Levis",
  title     = "{Power Clocks: Dynamic Multi-Clock Management for Embedded Systems}",
  booktitle = "{Proceedings of the International Conference on Embedded Wireless Systems and Networks (EWSN 2021)}",
  year      = {2021},
  month     = {February}
}
        

2020

Creating Hardware Component Knowledge Bases with Training Data Generation and Multi-task Learning
Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis
Published in ACM Transactions on Embedded Computing Systems (TECS), September 2020
Abstract
Hardware component databases are vital resources in designing embedded systems. Since creating these databases requires hundreds of thousands of hours of manual data entry, they are proprietary, limited in the data they provide, and have random data entry errors. We present a machine learning based approach for creating hardware component databases directly from datasheets. Extracting data directly from datasheets is challenging because: (1) the data is relational in nature and relies on non-local context, (2) the documents are filled with technical jargon, and (3) the datasheets are PDFs, a format that decouples visual locality from locality in the document. Addressing this complexity has traditionally relied on human input, making it costly to scale. Our approach uses a rich data model, weak supervision, data augmentation, and multi-task learning to create these knowledge bases in a matter of days. We evaluate the approach on datasheets of three types of components and achieve an average quality of 77 F1 points, a quality comparable to existing human-curated knowledge bases. We perform application studies that demonstrate the extraction of multiple data modalities including numerical properties and images. We show how different sources of supervision such as heuristics and human labels have distinct advantages that can be utilized together to improve knowledge base quality. Finally, we present a case study to show how this approach changes the way practitioners create hardware component knowledge bases.
BibTeX entry
@inproceedings{ tecs20hack ,
  author    = "Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis",
  title     = "{Creating Hardware Component Knowledge Bases with Training Data Generation and Multi-task Learning}",
  booktitle = "{ACM Transactions on Embedded Computing Systems (TECS)}",
  year      = {2020},
  month     = {September}
}
        
Approximate Partition Selection for Big-Data Workloads using Summary Statistics
Kexin Rong, Yao Lu, Peter Bailis, Srikanth Kandula, Philip Levis
Published in Proceedings of the VLDB Endowment, July 2020
Abstract
Many big-data clusters store data in large partitions that support access at a coarse, partition-level granularity. As a result, approximate query processing via row-level sampling is inefficient, often requiring reads of many partitions. In this work, we seek to answer queries quickly and approximately by reading a subset of the data partitions and combining partial answers in a weighted manner without modifying the data layout. We illustrate how to efficiently perform this query processing using a set of precomputed summary statistics, which inform the choice of partition subset and weights. We develop novel means of using the statistics to assess the similarity and importance of partitions. Our experiments on several datasets and data layouts demonstrate that to achieve the same relative error compared to uniform partition sampling, our techniques offer from 2.7x to 70x reduction in the number of partitions read, and the statistics stored per partition require fewer than 100KB.
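A minimal Rust sketch of the weighted combination (the partitions, inclusion probabilities, and statistic are invented for illustration; the paper's contribution is in how summary statistics choose the subset and weights): each sampled partition's partial answer is scaled by the inverse of its inclusion probability, a Horvitz-Thompson style estimate.

// Sketch of answering an aggregate from a subset of partitions by weighting
// each partial answer by the inverse of its inclusion probability. The
// partitions, probabilities, and statistic here are made up for illustration.

struct Partition {
    partial_sum: f64, // precomputed summary statistic for this partition
    inclusion_p: f64, // probability this partition is chosen for reading
    sampled: bool,    // whether we actually read it for this query
}

/// Unbiased estimate of the total sum using only the sampled partitions.
fn estimate_sum(partitions: &[Partition]) -> f64 {
    partitions
        .iter()
        .filter(|p| p.sampled)
        .map(|p| p.partial_sum / p.inclusion_p)
        .sum()
}

fn main() {
    let partitions = vec![
        Partition { partial_sum: 1_000.0, inclusion_p: 1.0, sampled: true }, // important partition, always read
        Partition { partial_sum: 480.0, inclusion_p: 0.5, sampled: true },
        Partition { partial_sum: 510.0, inclusion_p: 0.5, sampled: false },  // skipped this time
        Partition { partial_sum: 95.0, inclusion_p: 0.25, sampled: false },
    ];
    let exact: f64 = partitions.iter().map(|p| p.partial_sum).sum();
    let approx = estimate_sum(&partitions);
    println!("exact = {exact}, estimate from sampled partitions = {approx}");
}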
BibTeX entry
@inproceedings{ approx-vldb20 ,
  author    = "Kexin Rong, Yao Lu, Peter Bailis, Srikanth Kandula, Philip Levis",
  title     = "{Approximate Partition Selection for Big-Data Workloads using Summary Statistics}",
  booktitle = "{Proceedings of the VLDB Endowment}",
  year      = {2020},
  month     = {July}
}
        
POSH: A Data-Aware Shell
Deepti Raghavan, Sadjad Fouladi, Philip Levis, and Matei Zaharia
Published in Proceedings of the 2020 USENIX Annual Technical Conference (ATC '20), June 2020
Abstract
We present POSH, a framework that accelerates shell applications with I/O-heavy components, such as data analytics with command-line utilities. Remote storage such as networked filesystems can severely limit the performance of these applications: data makes a round trip over the network for relatively little computation at the client. Reducing the data movement by moving the code to the data can improve performance. POSH automatically optimizes unmodified I/O-intensive shell applications running over remote storage by offloading the I/O-intensive portions to proxy servers closer to the data. A proxy can run directly on a storage server, or on a machine closer to the storage layer than the client. POSH intercepts shell pipelines and uses metadata called annotations to decide where to run each command within the pipeline. We address three principal challenges that arise: an annotation language that allows POSH to understand which files a command will access, a scheduling algorithm that places commands to minimize data movement, and a system runtime to execute a distributed schedule but retain local semantics. We benchmark POSH on real shell pipelines such as image processing, network security analysis, log analysis, distributed system debugging, and git. We find that POSH provides speedups ranging from 1.6x to 15x compared to NFS, without requiring any modifications to the applications.
BibTeX entry
@inproceedings{ posh-atc20 ,
  author    = "Deepti Raghavan, Sadjad Fouladi, Philip Levis, and Matei Zaharia",
  title     = "{POSH: A Data-Aware Shell}",
  booktitle = "{Proceedings of the 2020 USENIX Annual Technical Conference (ATC '20)}",
  year      = {2020},
  month     = {June}
}
        
Design Considerations for Low Power Internet Protocols
Hudson Ayers, Paul Crews, Hubert Teo, Conor McAvity, Amit Levy, and Philip Levis
Published in Proceedings of the 16th Annual International Conference on Distributed Computing in Sensor Systems (DCOSS 2020), May 2020
Abstract
Low-power wireless networks provide IPv6 connectivity through 6LoWPAN, a set of standards to aggressively compress IPv6 packets over small maximum transfer unit (MTU) links such as 802.15.4. The entire purpose of IP was to interconnect different networks, but we find that different 6LoWPAN implementations fail to reliably communicate with one another. These failures are due to stacks implementing different subsets of the standard out of concern for code size. We argue that this failure stems from 6LoWPAN’s design, not implementation, and is due to applying traditional Internet protocol design principles to low-power networks. We propose three design principles for Internet protocols on low-power networks, designed to prevent similar failures in the future. These principles are based around the importance of providing flexible tradeoffs between code size and energy efficiency. We apply these principles to 6LoWPAN and show that the modified protocol provides a wide range of implementation strategies while allowing implementations with different strategies to reliably communicate.
BibTeX entry
@inproceedings{ lowpan-dcoss20 ,
  author    = "Hudson Ayers, Paul Crews, Hubert Teo, Conor McAvity, Amit Levy, and Philip Levis",
  title     = "{Design Considerations for Low Power Internet Protocols}",
  booktitle = "{Proceedings of the 16th Annual International Conference on Distributed Computing in Sensor Systems (DCOSS 2020)}",
  year      = {2020},
  month     = {May}
}
        
Accelerating Distributed Graphical Fluid Simulations with Micro‐partitioning
Hang Qu, Omid Mashayekhi, Chinmayee Shah, and Philip Levis
Published in COMPUTER GRAPHICS FORUM, May 2020
Abstract
Graphical fluid simulations are CPU-bound. Parallelizing simulations on hundreds of cores in the computing cloud would make them faster, but requires evenly balancing load across nodes. Good load balancing depends on manual decisions from experts, which are time-consuming and error prone, or dynamic approaches that estimate and react to future load, which are non-deterministic and hard to debug. This paper proposes Birdshot scheduling, an automatic and purely static load balancing algorithm whose performance is close to expert decisions and reactive algorithms without their difficulty or complexity. Birdshot scheduling’s key insight is to leverage the high-latency, high-throughput, full bisection bandwidth of cloud computing nodes. Birdshot scheduling splits the simulation domain into many micro-partitions and statically assigns them to nodes randomly. Analytical results show that randomly assigned micro-partitions balance load with high probability. The high-throughput network easily handles the increased data transfers from micro-partitions, and full bisection bandwidth allows random placement with no performance penalty. Overlapping the communications and computations of different micro-partitions masks latency. Experiments with particle-level set, SPH, FLIP and explicit Eulerian methods show that Birdshot scheduling speeds up simulations by a factor of 2-3, and can outperform reactive scheduling algorithms. Birdshot scheduling performs within 21% of state-of-the-art dynamic methods that require running a second, parallel simulation. Unlike speculative algorithms, Birdshot scheduling is purely static: it requires no controller, runtime data collection, partition migration or support for these operations from the programmer.
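A minimal Rust sketch of the core idea (the load model and the tiny PRNG are invented for illustration; this is not the paper's implementation): assigning many micro-partitions to nodes uniformly at random concentrates per-node load near the mean even when individual partitions are heavily skewed.

// Sketch of Birdshot-style static assignment: many micro-partitions, each
// assigned to a node uniformly at random, so per-node load concentrates near
// the mean. The load model and the tiny PRNG below are illustrative (a real
// system would use a proper RNG); this is not the paper's implementation.

/// Minimal xorshift PRNG so the sketch needs no external crates.
struct XorShift64(u64);
impl XorShift64 {
    fn next(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
}

fn main() {
    let nodes = 16usize;
    let micro_partitions = 4096usize; // many more partitions than nodes
    let mut rng = XorShift64(0x9E3779B97F4A7C15);

    // Skewed per-partition load: a few partitions are much heavier.
    let load: Vec<u64> = (0..micro_partitions)
        .map(|i| if i % 64 == 0 { 100 } else { 1 })
        .collect();

    // Purely static, random assignment of micro-partitions to nodes.
    let mut node_load = vec![0u64; nodes];
    for &l in &load {
        let n = (rng.next() % nodes as u64) as usize;
        node_load[n] += l;
    }

    let max = *node_load.iter().max().unwrap();
    let mean = node_load.iter().sum::<u64>() as f64 / nodes as f64;
    println!("mean load per node = {mean:.1}, max = {max} (imbalance = {:.2}x)", max as f64 / mean);
}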
BibTeX entry
@inproceedings{ qu-cgf20 ,
  author    = "Hang Qu, Omid Mashayekhi, Chinmayee Shah, and Philip Levis",
  title     = "{Accelerating Distributed Graphical Fluid Simulations with Micro‐partitioning}",
  booktitle = "{COMPUTER GRAPHICS FORUM}",
  year      = {2020},
  month     = {May}
}
        
GReTA: Hardware Optimized Graph Processing for GNNs
Kevin Kiningham, Philip Levis, and Christopher Ré
Published in Proceedings of the Workshop on Resource-Constrained Machine Learning (ReCoML 2020), March 2020
Abstract
Graph Neural Networks (GNNs) are a type of deep neural network that can learn directly from irregular graph-structured data. However, GNN inference relies on sparse operations that are difficult to implement on accelerator architectures optimized for dense, fixed-sized computation. This paper proposes GReTA (Gather, Reduce, Transform, Activate), a graph processing abstraction designed to be efficient for an accelerator to execute and flexible enough to implement GNN inference. We demonstrate GReTA’s advantages by designing and synthesizing a custom accelerator ASIC for GReTA and implementing several GNN models (GCN, GraphSAGE, G-GCN, and GIN). Across several benchmark graphs, our implementation reduces 97th percentile latency by a geometric mean of 15× and 21× compared to a CPU and GPU baseline respectively.
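A minimal Rust sketch of the four phases over an edge list (the graph, feature sizes, and the particular reduce/transform/activate functions are invented for illustration; GReTA itself is a hardware abstraction, not this code): gather pulls neighbor features along edges, reduce accumulates them per destination vertex, transform applies a weight matrix, and activate applies the nonlinearity.

// Sketch of a Gather/Reduce/Transform/Activate pass over an edge list.
// Each phase maps onto the role described in the abstract; the graph and
// functions are illustrative only.

fn main() {
    // Tiny graph: edges are (src, dst) pairs; each vertex has a 2-wide feature.
    let edges = [(0usize, 1usize), (2, 1), (1, 0)];
    let features = [[1.0f32, 0.0], [0.5, 0.5], [0.0, 2.0]];
    let weight = [[1.0f32, -1.0], [2.0, 0.5]]; // 2x2 learned weight matrix

    let num_vertices = features.len();
    let mut reduced = vec![[0.0f32; 2]; num_vertices];

    // Gather + Reduce: pull each source's feature along its edges and
    // accumulate (here: sum) into the destination vertex.
    for &(src, dst) in &edges {
        let gathered = features[src];
        for k in 0..2 {
            reduced[dst][k] += gathered[k];
        }
    }

    // Transform + Activate: multiply by the weight matrix, then apply ReLU.
    let output: Vec<[f32; 2]> = reduced
        .iter()
        .map(|v| {
            let mut out = [0.0f32; 2];
            for row in 0..2 {
                for col in 0..2 {
                    out[row] += weight[row][col] * v[col];
                }
            }
            out.map(|x| x.max(0.0)) // activation
        })
        .collect();

    println!("updated vertex features: {output:?}");
}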
BibTeX entry
@inproceedings{ greta-recoml20 ,
  author    = "Kevin Kiningham, Philip Levis, and Christopher Ré",
  title     = "{GReTA: Hardware Optimized Graph Processing for GNNs}",
  booktitle = "{Proceedings of the Workshop on Resource-Constrained Machine Learning (ReCoML 2020)}",
  year      = {2020},
  month     = {March}
}
        
Learning in situ: a randomized experiment in video streaming
Francis Y. Yan, Hudson Ayers, Chenzhi Zhu, Sadjad Fouladi, James Hong, Keyi Zhang, Philip Levis, Keith Winstein
Published in Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI '20), February 2020
Community Service Award, IRTF Applied Networking Research Prize
Abstract
We describe the results of a randomized controlled trial of video-streaming algorithms for bitrate selection and network prediction. Over the last year, we have streamed 38.6 years of video to 63,508 users across the Internet. Sessions are randomized in blinded fashion among algorithms. We found that in this real-world setting, it is difficult for sophisticated or machine-learned control schemes to outperform a “simple” scheme (buffer-based control), notwithstanding good performance in network emulators or simulators. We performed a statistical analysis and found that the heavy-tailed nature of network and user behavior, as well as the challenges of emulating diverse Internet paths during training, present obstacles for learned algorithms in this setting. We then developed an ABR algorithm that robustly outperformed other schemes, by leveraging data from its deployment and limiting the scope of machine learning only to making predictions that can be checked soon after. The system uses supervised learning in situ, with data from the real deployment environment, to train a probabilistic predictor of upcoming chunk transmission times. This module then informs a classical control policy (model predictive control). To support further investigation, we are publishing an archive of data and results each week, and will open our ongoing study to the community. We welcome other researchers to use this platform to develop and validate new algorithms for bitrate selection, network prediction, and congestion control.
BibTeX entry
@inproceedings{ puffer-nsdi20 ,
  author    = "Francis Y. Yan, Hudson Ayers, Chenzhi Zhu, Sadjad Fouladi, James Hong, Keyi Zhang, Philip Levis, Keith Winstein",
  title     = "{Learning in situ: a randomized experiment in video streaming}",
  booktitle = "{Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI '20)}",
  year      = {2020},
  month     = {February}
}
        

2019

Falcon – A Flexible Architecture For Accelerating Cryptography
Kevin Kiningham, Philip Levis, Mark Anderson, Dan Boneh, Mark Horowitz, and Maurice Shih
Published in Proceedings of the 16th IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS '19), November 2019
Abstract
Internet of Things (IoT) devices, once deployed, must remain secure for their entire lifetime, which can be as long as 20 years. Over this lifetime, devices must be able to update which ciphers they use to meet evolving security requirements. However, devices cannot rely on software updates for their cryptography because software implementations consume too much energy. At the same time, fixed function hardware accelerators such as an AES engine cannot support new ciphers. This paper presents Falcon, a hardware architecture for accelerating a broad range of cryptography on energy limited devices. Rather than accelerate a fixed set of current ciphers, Falcon provides a general execution engine that accelerates dominant and emerging ciphers, such as AES, ChaCha, SHA-256, RSA, ECC with Curve25519, as well as post-quantum ciphers such as R-LWE. For cryptography, Falcon provides the flexibility of software while reducing the energy consumption of cryptography by 5-60x compared to software. This reduction makes it feasible for IoT applications to upgrade the ciphers they use after deployment, allowing them to keep up to date with security best practices without reducing their deployment lifetime or reducing the application workload. In an application monitoring the temperature of sensitive medical supplies in hospitals, Falcon doubles the deployment lifetime (2.2x).
BibTeX entry
@inproceedings{ kiningham19falcon ,
  author    = "Kevin Kiningham, Philip Levis, Mark Anderson, Dan Boneh, Mark Horowitz, and Maurice Shih",
  title     = "{Falcon -- A Flexible Architecture For Accelerating Cryptography}",
  booktitle = "{Proceedings of the The 16th IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS '19)}",
  year      = {2019},
  month     = {November}
}
        
Challenge: Unlicensed LPWANs Are Not Yet the Path to Ubiquitous Connectivity
Branden Ghena, Joshua Adkins, Longfei Shangguan, Kyle Jamieson, Philip Levis, and Prabal Dutta
Published in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (MobiCom '19), October 2019
Abstract
Low-power wide-area networks (LPWANs) are a compelling answer to the networking challenges faced by many Internet of Things devices. Their combination of low power, long range, and deployment ease has motivated a flurry of research, including exciting results on backscatter and interference cancellation that further lower power budgets and increase capacity. But despite the interest, we argue that unlicensed LPWAN technologies can only serve a narrow class of Internet of Things applications due to two principal challenges: capacity and coexistence. We propose a metric, bit flux, to describe networks and applications in terms of throughput over a coverage area. Using bit flux, we find that the combination of low bit rate and long range restricts the use case of LPWANs to sparse sensing applications. Furthermore, this lack of capacity leads networks to use as much available bandwidth as possible, and a lack of coexistence mechanisms causes poor performance in the presence of multiple, independently-administered networks. We discuss a variety of techniques and approaches that could be used to address these two challenges and enable LPWANs to achieve the promise of ubiquitous connectivity.
BibTeX entry
@inproceedings{ ghena-mobicom19 ,
  author    = "Branden Ghena, Joshua Adkins, Longfei Shangguan, Kyle Jamieson, Philip Levis, and Prabal Dutta",
  title     = "{Challenge: Unlicensed LPWANs Are Not Yet the Path to Ubiquitous Connectivity}",
  booktitle = "{Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (MobiCom '19)}",
  year      = {2019},
  month     = {October}
}
        
Automating the Generation of Hardware Component Knowledge Bases
Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis
Published in Proceedings of the 20th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES '19), June 2019
Abstract
Hardware component databases are critical resources in designing embedded systems. Since generating these databases requires hundreds of thousands of hours of manual data entry, they are proprietary, limited in the data they provide, and have many random data entry errors. We present a machine-learning based approach for automating the generation of component databases directly from datasheets. Extracting data directly from datasheets is challenging because: (1) the data is relational in nature and relies on non-local context, (2) the documents are filled with technical jargon, and (3) the datasheets are PDFs, a format that decouples visual locality from locality in the document. The proposed approach uses a rich data model and weak supervision to address these challenges. We evaluate the approach on datasheets of three classes of hardware components and achieve an average quality of 75 F1 points which is comparable to existing human-curated knowledge bases. We perform two application studies that demonstrate the extraction of multiple data modalities such as numerical properties and images. We show how different sources of supervision such as heuristics and human labels have distinct advantages which can be utilized together within a single methodology to automatically generate hardware component knowledge bases.
BibTeX entry
@inproceedings{ hack-lctes19 ,
  author    = "Luke Hsiao, Sen Wu, Nicholas Chiang, Christopher Ré, and Philip Levis",
  title     = "{Automating the Generation of Hardware Component Knowledge Bases}",
  booktitle = "{Proceedings of the 20th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES '19)}",
  year      = {2019},
  month     = {June}
}
        
Rehashing Kernel Evaluation in High Dimensions
Paris Siminelakis, Kexin Rong, Peter Bailis, Moses Charikar, and Philip Levis
Published in Proceedings of the 36th International Conference on Machine Learning (ICML '19), June 2019
Abstract
Kernel methods are effective but do not scale well to large scale data, especially in high dimensions where the geometric data structures used to accelerate kernel evaluation suffer from the curse of dimensionality. Recent theoretical advances have proposed fast kernel evaluation algorithms leveraging hashing techniques with worst-case asymptotic improvements. However, these advances are largely confined to the theoretical realm due to concerns such as super-linear preprocessing time and diminishing gains in non-worst case datasets. In this paper, we close the gap between theory and practice by addressing these challenges via provable and practical procedures for adaptive sample size selection, preprocessing time reduction, and refined variance bounds that quantify the data-dependent performance of random sampling and hashing-based kernel evaluation methods. Our experiments show that these new tools offer up to 10x improvement in evaluation time on a range of synthetic and real-world datasets.
BibTeX entry
@inproceedings{ siminelakis19icml ,
  author    = "Paris Siminelakis, Kexin Rong, Peter Bailis, Moses Charikar, and Philip Levis",
  title     = "{Rehashing Kernel Evaluation in High Dimensions}",
  booktitle = "{Proceedings of the 36th International Conference on Machine Learning (ICML '19)}",
  year      = {2019},
  month     = {June}
}
        

2018

Smart Contracts for Machine-to-Machine Communication: Possibilities and Limitations
Yuichi Hanada, Luke Hsiao, Philip Levis
Published in 2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS), October 2018
Abstract
Blockchain technologies, such as smart contracts, present a unique interface for machine-to-machine communication that provides a secure, append-only record that can be shared without trust and without a central administrator. We study the possibilities and limitations of using smart contracts for machine-to-machine communication by designing, implementing, and evaluating AGasP, an application for automated gasoline purchases. We find that using smart contracts allows us to directly address the challenges of transparency, longevity, and trust in IoT applications. However, real-world applications using smart contracts must address their important trade-offs, such as performance, privacy, and the challenge of ensuring they are written correctly.
BibTeX entry
@inproceedings{ iotais18agasp ,
  author    = "Yuichi Hanada, Luke Hsiao, Philip Levis",
  title     = "{Smart Contracts for Machine-to-Machine Communication: Possibilities and Limitations}",
  booktitle = "{2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS)}",
  year      = {2018},
  month     = {October}
}
        
Pantheon: the training ground for Internet congestion-control research
Francis Y. Yan, Jestin Ma, Greg D. Hill, Deepti Raghavan, Riad S. Wahby, Philip Levis, and Keith Winstein
Published in Proceedings of the 2018 USENIX Annual Technical Conference (ATC), July 2018
Best Paper Award, ATC 2018
Abstract
Internet transport algorithms are foundational to the performance of network applications. But a number of practical challenges make it difficult to evaluate new ideas and algorithms in a reproducible manner. We present the Pantheon, a system that addresses this by serving as a community “training ground” for research on Internet transport protocols and congestion control (https://pantheon.stanford.edu). It allows network researchers to benefit from and contribute to a common set of benchmark algorithms, a shared evaluation platform, and a public archive of results. We present three results showing the Pantheon’s value as a research tool. First, we describe a measurement study from more than a year of data, indicating that congestion-control schemes vary dramatically in their relative performance as a function of path dynamics. Second, the Pantheon generates calibrated network emulators that capture the diverse performance of real Internet paths. These enable reproducible and rapid experiments that closely approximate real-world results. Finally, we describe the Pantheon’s contribution to developing new congestion-control schemes, two of which were published at USENIX NSDI 2018, as well as data-driven neural-network-based congestion-control schemes that can be trained to achieve good performance over the real Internet.
BibTeX entry
@inproceedings{ pantheon-atc18 ,
  author    = "Francis Y. Yan, Jestin Ma, Greg D. Hill, Deepti Raghavan, Riad S. Wahby, Philip Levis, and Keith Winstein",
  title     = "{Pantheon: the training ground for Internet congestion-control research}",
  booktitle = "{Proceedings of the 2018 USENIX Annual Technical Conference (ATC)}",
  year      = {2018},
  month     = {July}
}
        
Distributing and Load Balancing Sparse Fluid Simulations
Chinmayee Shah, David Hyde, Hang Qu, and Philip Levis
Published in Proceedings of the 17th Annual Symposium on Computer Animation (SCA 2018), July 2018
Abstract
This paper describes a general algorithm and a system for load balancing sparse fluid simulations. Automatically distributing sparse fluid simulations efficiently is challenging because the computational load varies across the simulation domain and time. A key challenge with load balancing is that optimal decision making requires knowing the fluid distribution across partitions for future time steps, but computing this state for an arbitrary simulation requires running the simulation itself. The key insight of this paper is that it is possible to predict future load by running a low resolution simulation in parallel. This paper describes a system design and techniques to automatically distribute and load balance sparse fluid simulations, and presents speculative load balancing, a general technique to effectively balance future load using information about future load distribution obtained via a low resolution simulation. We mathematically formulate the problem of load balancing over multiple time-steps and present a polynomial time algorithm to compute an approximate solution to it. Our experimental results show that distributing and speculatively load balancing sparse FLIP simulations over 8 nodes speeds them up by 5.5x to 7.2x, and that speculative load balancing generates assignments that perform within 20% of optimal.
BibTeX entry
@inproceedings{ sca18-shah ,
  author    = "Chinmayee Shah, David Hyde, Hang Qu, and Philip Levis",
  title     = "{Distributing and Load Balancing Sparse Fluid Simulations}",
  booktitle = "{Proceedings of the 17th Annual Symposium on Computer Animation (SCA 2018)}",
  year      = {2018},
  month     = {July}
}
        
Locality-Sensitive Hashing for Earthquake Detection: A Case Study of Scaling Data-Driven Science
Kexin Rong, Clara E. Yoon, Karianne J. Bergen, Hashem Elezabi, Peter Bailis, Philip Levis, Gregory C. Beroza
Published in Proceedings of the VLDB Endowment, Volume 11, No. 11, July 2018
Abstract
In this work, we report on a novel application of Locality Sensitive Hashing (LSH) to seismic data at scale. Based on the high waveform similarity between reoccurring earthquakes, our application identifies potential earthquakes by searching for similar time series segments via LSH. However, a straightforward implementation of this LSH-enabled application has difficulty scaling beyond 3 months of continuous time series data measured at a single seismic station. As a case study of a data-driven science workflow, we illustrate how domain knowledge can be incorporated into the workload to improve both the efficiency and result quality. We describe several end-to-end optimizations of the analysis pipeline from pre-processing to post-processing, which allow the application to scale to time series data measured at multiple seismic stations. Our optimizations enable an over 100x speedup in the end-to-end analysis pipeline. This improved scalability enabled seismologists to perform seismic analysis on more than ten years of continuous time series data from over ten seismic stations, and has directly enabled the discovery of 597 new earthquakes near the Diablo Canyon nuclear power plant in California and 6123 new earthquakes in New Zealand.
BibTeX entry
@inproceedings{ vldb18earthquakes ,
  author    = "Kexin Rong, Clara E. Yoon, Karianne J. Bergen, Hashem Elezabi, Peter Bailis, Philip Levis, Gregory C. Beroza",
  title     = "{Locality-Sensitive Hashing for Earthquake Detection:A Case Study of Scaling Data-Driven Science}",
  booktitle = "{Proceedings of the VLDB Endowment, Volume 11, No. 11}",
  year      = {2018},
  month     = {July}
}
        
Fonduer: Knowledge Base Construction from Richly Formatted Data
Sen Wu, Luke Hsiao, Xiao Cheng, Braden Hancock, Theodoros Rekatsinas, Philip Levis, and Christopher Ré
Published in Proceedings of 2018 ACM SIGMOD/PODS Conference on the Management of Data, June 2018
Abstract
We focus on knowledge base construction (KBC) from richly formatted data. In contrast to KBC from text or tabular data, KBC from richly formatted data aims to extract relations conveyed jointly via textual, structural, tabular, and visual expressions. We introduce Fonduer, a machine-learning-based KBC system for richly formatted data. Fonduer presents a new data model that accounts for three challenging characteristics of richly formatted data: (1) prevalent document-level relations, (2) multimodality, and (3) data variety. Fonduer uses a new deep-learning model to automatically capture the representation (i.e., features) needed to learn how to extract relations from richly formatted data. Finally, Fonduer provides a new programming model that enables users to convert domain expertise, based on multiple modalities of information, to meaningful signals of supervision for training a KBC system. Fonduer-based KBC systems are in production for a range of use cases, including at a major online retailer. We compare Fonduer against state-of-the-art KBC approaches in four different domains. We show that Fonduer achieves an average improvement of 41 F1 points on the quality of the output knowledge base—and in some cases produces up to 1.87× the number of correct entries—compared to expert-curated public knowledge bases. We also conduct a user study to assess the usability of Fonduer’s new programming model. We show that after using Fonduer for only 30 minutes, non-domain experts are able to design KBC systems that achieve on average 23 F1 points higher quality than traditional machine-learning-based KBC approaches.
BibTeX entry
@inproceedings{ fonduer-sigmod18 ,
  author    = "Sen Wu, Luke Hsiao, Xiao Cheng, Braden Hancock, Theodoros Rekatsinas, Philip Levis, and Christopher Ré",
  title     = "{Fonduer: Knowledge Base Construction from Richly Formatted Data}",
  booktitle = "{Proceedings of 2018 ACM SIGMOD/PODS Conference on the Management of Data}",
  year      = {2018},
  month     = {June}
}
        
Decoupling the Control Plane from Program Control Flow for Flexibility and Performance in Cloud Computing
Hang Qu, Omid Mashayekhi, Chinmayee Shah, Philip Levis
Published in Proceedings of the 13th European Conference on Computer Systems (EuroSys '18), April 2018
Abstract
Existing cloud computing control planes do not scale to more than a few hundred cores, while frameworks without control planes scale but take seconds to reschedule a job. We propose an asynchronous control plane for cloud computing systems, in which a central controller can dynamically reschedule jobs but worker nodes never block on communication with the controller. By decoupling control plane traffic from program control flow in this way, an asynchronous control plane can scale to run millions of computations per second while being able to reschedule computations within milliseconds. We show that an asynchronous control plane can match the scalability and performance of TensorFlow and MPI-based programs while rescheduling individual tasks in milliseconds. Scheduling an individual task takes 1us, such that a 1,152 core cluster can schedule over 120 million tasks/second and this scales linearly with the number of cores. The ability to schedule huge numbers of tasks allows jobs to be divided into very large numbers of tiny tasks, whose improved load balancing can speed up computations 2.1 - 2.3x.
BibTeX entry
@inproceedings{ canary-eurosys18 ,
  author    = "Hang Qu, Omid Mashayekhi, Chinmayee Shah, Philip Levis",
  title     = "{Decoupling the Control Plane from Program Control Flow for Flexibility and Performance in Cloud Computing}",
  booktitle = "{Proceedings of the 13th European Conference on Computer Systems (EuroSys '18)}",
  year      = {2018},
  month     = {April}
}
        
Don’t Talk Unless I Say So! Securing the Internet of Things With Default-Off Networking
James Hong, Amit Levy, Laurynas Riliskis, and Philip Levis
Published in Proceedings of the 3rd ACM/IEEE International Conference on Internet of Things Design and Implementation (IoTDI), April 2018
Abstract
The Internet of Things (IoT) is changing the way we interact with everyday objects. “Smart” devices will reduce energy use, keep our homes safe, and improve our health. However, as recent attacks have shown, these devices also create tremendous security vulnerabilities in our computing networks. Securing all of these devices is a daunting task. In this paper, we argue that IoT device communications should be default-off and desired network communications must be explicitly enabled. Unlike traditional networked applications or devices like a web browser or PC, IoT applications and devices serve narrowly defined purposes and do not require access to all services in the network. Our proposal, Bark, a policy language and runtime for specifying and enforcing minimal access permissions in IoT networks, exploits this fact. Bark phrases access control policies in terms of natural questions (who, what, where, when, and how) and transforms them into transparently enforceable rules for IoT application protocols. Bark can express detailed rules such as “Let the lights see the luminosity of the bedroom sensor at any time” and “Let a device at my front door, if I approve it, unlock my smart lock for 30 seconds” in a way that is presentable and explainable to users. We implement Bark for Wi-Fi/IP and Bluetooth Low Energy (BLE) networks and evaluate its efficacy on several example applications and attacks.
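A minimal Rust sketch of a who/what/where/when/how rule and the default-off check (field names, the time representation, and the example devices are invented for illustration and are not Bark's syntax): a request is allowed only if some rule explicitly matches it.

// Sketch of a default-off, who/what/where/when/how access rule (field names,
// the time representation, and the example devices are illustrative, not
// Bark's actual policy syntax): communication is denied unless some rule
// explicitly matches the request.

struct Rule {
    who: &'static str,    // requesting device
    what: &'static str,   // service or attribute being accessed
    where_: &'static str, // target device
    when: (u8, u8),       // allowed hours of day: inclusive start, exclusive end
    how: &'static str,    // verb: "read", "write", "unlock", ...
}

struct Request<'a> {
    who: &'a str,
    what: &'a str,
    where_: &'a str,
    hour: u8,
    how: &'a str,
}

fn allowed(rules: &[Rule], r: &Request) -> bool {
    rules.iter().any(|rule| {
        rule.who == r.who
            && rule.what == r.what
            && rule.where_ == r.where_
            && rule.how == r.how
            && r.hour >= rule.when.0
            && r.hour < rule.when.1
    })
}

fn main() {
    // "Let the lights read the luminosity of the bedroom sensor at any time."
    let rules = [Rule { who: "lights", what: "luminosity", where_: "bedroom-sensor", when: (0, 24), how: "read" }];

    let ok = Request { who: "lights", what: "luminosity", where_: "bedroom-sensor", hour: 22, how: "read" };
    let blocked = Request { who: "thermostat", what: "luminosity", where_: "bedroom-sensor", hour: 22, how: "read" };

    assert!(allowed(&rules, &ok));
    assert!(!allowed(&rules, &blocked)); // default-off: no matching rule, so denied
    println!("policy sketch OK");
}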
BibTeX entry
@inproceedings{ bark-iotdi18 ,
  author    = "James Hong, Amit Levy, Laurynas Riliskis, and Philip Levis",
  title     = "{Don't Talk Unless I Say So! Securing the Internet of Things With Default-Off Networking}",
  booktitle = "{Proceedings of the 3rd ACM/IEEE International Conference on Internet of Things Design and Implementation (IoTDI)}",
  year      = {2018},
  month     = {April}
}
        
Tethys: Collecting Sensor Data Without Infrastructure or Trust
Holly Chiang, James Hong, Kevin Kiningham, Laurynas Riliskis, Philip Levis, and Mark Horowitz
Published in Proceedings of the 3rd ACM/IEEE International Conference on Internet of Things Design and Implementation (IoTDI), April 2018
Abstract
Careful resource monitoring is necessary to understand usage patterns and set conservation goals in an institutional setting. Sensor systems provide data to measure consumption and evaluate the effectiveness of active interventions. However, deploying sensing systems can be difficult when infrastructure support is limited. This paper describes the process of designing Tethys, a wireless water flow sensor that collects data at per-fixture granularity without dependence on existing infrastructure and trusted gateways. Rather than rely on electrical infrastructure, Tethys implements energy harvesting to allow for long term deployment. To avoid dependence on existing network infrastructure, Tethys crowdsources the data collection process to residents’ smartphones acting as gateways. These gateways are untrusted and unreliable, so Tethys implements end-to-end reliability and security between the sensing device and a cloud backend. We present initial findings from a deployment in undergraduate residential halls. Our results demonstrate that Tethys can capture meaningful patterns in shower use. For instance, visible water conservation signs are statistically correlated with shorter mean shower length (p < 0.05) and are a potential area for future studies.
BibTeX entry
@inproceedings{ tethys-iotdi18 ,
  author    = "Holly Chiang, James Hong, Kevin Kiningham, Laurynas Riliskis, Philip Levis, and Mark Horowitz",
  title     = "{Tethys: Collecting Sensor Data Without Infrastructure or Trust}",
  booktitle = "{ Proceedings of the 3rd ACM/IEEE International Conference on Internet of Things Design and Implementation (IoTDI)}",
  year      = {2018},
  month     = {April}
}
        
Automatically Distributing Eulerian and Hybrid Fluid Simulations in the Cloud
Omid Mashayekhi, Chinmayee Shah, Hang Qu, Andrew Lim, and Philip Levis
Published in ACM Transactions on Graphics 37, 2, Article 24, April 2018
Abstract
Distributing a simulation across many machines can drastically speed up computations and increase detail. The computing cloud provides tremendous computing resources, but weak service guarantees force programs to manage significant system complexity: nodes, networks, and storage occasionally perform poorly or fail. We describe Nimbus, a system that automatically distributes grid-based and hybrid simulations across cloud computing nodes. The main simulation loop is sequential code and launches distributed computations across many cores. The simulation on each core runs as if it is stand-alone: Nimbus automatically stitches these simulations into a single, larger one. To do this efficiently, Nimbus introduces a four-layer data model that translates between the contiguous, geometric objects used by simulation libraries and the replicated, fine-grained objects managed by its underlying cloud computing runtime. Using PhysBAM particle level set fluid simulations, we demonstrate that Nimbus can run higher detail simulations faster, distribute simulations on up to 512 cores, and run enormous simulations (1024³ cells). Nimbus automatically manages these distributed simulations, balancing load across nodes and recovering from failures. Implementations of PhysBAM water and smoke simulations as well as an open source heat-diffusion simulation show that Nimbus is general and can support complex simulations. Nimbus can be downloaded from https://nimbus.stanford.edu.
BibTeX entry
@article{ nimbus-tog18 ,
  author    = "Omid Mashayekhi, Chinmayee Shah, Hang Qu, Andrew Lim, and Philip Levis",
  title     = "{Automatically Distributing Eulerian and Hybrid Fluid Simulations in the Cloud}",
  booktitle = "{ACM Transactions on Graphics 37, 2, Article 24}",
  year      = {2018},
  month     = {April}
}
        

2017

Multiprogramming a 64 kB Computer Safely and Efficiently
Amit Levy and Bradford Campbell and Branden Ghena and Daniel Giffin and Pat Pannuto and Prabal Dutta and Philip Levis
Published in Proceedings of the 26th ACM Symposium on Operating Systems Principles (SOSP 2017), October 2017
Abstract
Low-power microcontrollers lack some of the hardware features and memory resources that enable multiprogrammable systems. Accordingly, microcontroller-based operating systems have not provided important features like fault isolation, dynamic memory allocation, and flexible concurrency. However, an emerging class of embedded applications are software platforms, rather than single purpose devices, and need these multiprogramming features. Tock, a new operating system for low-power platforms, takes advantage of limited hardware protection mechanisms as well as the type-safety features of the Rust programming language to provide a multiprogramming environment for microcontrollers. Tock isolates software faults, provides memory protection, and efficiently manages memory for dynamic application workloads written in any language. It achieves this while retaining the dependability requirements of long-running applications.
BibTeX entry
@inproceedings{ levy17-tock ,
  author    = "Amit Levy and Bradford Campbell and Branden Ghena and Daniel Giffin and Pat Pannuto and Prabal Dutta and Philip Levis",
  title     = "{Multiprogramming a 64 kB Computer Safely and Efficiently}",
  booktitle = "{Proceedings of the 26th ACM Symposium on Operating Systems Principles (SOSP 2017)}",
  year      = {2017},
  month     = {October}
}
        
The Case for Writing a Kernel in Rust
Amit Levy, Bradford Campbell, Branden Ghena, Pat Pannuto, Prabal Dutta, Philip Levis
Published in Proceedings of the Eighth ACM SIGOPS Asia-Pacific Workshop on Systems (APSys 2017), September 2017
Abstract
Decades of research have attempted to add safety mechanisms to operating system kernels, but this effort has failed in most practical systems. In particular, solutions that sacrifice performance have been generally avoided. However, isolation techniques in modern languages can provide safety while avoiding performance issues. Moreover, utilizing a type-safe language with no garbage collector or other runtime services avoids what would otherwise be some of the largest sections of the trusted code base. We report on our experiences in writing a resource efficient embedded kernel in Rust, finding that only a small set of unsafe abstractions are necessary in order to form common kernel building blocks. Further, we argue that Rust’s choice to avoid runtime memory management by using a linear type system will enable the next generation of safe operating systems.
BibTeX entry
@inproceedings{ levy17-rust ,
  author    = "Amit Levy, Bradford Campbell, Branden Ghena, Pat Pannuto, Prabal Dutta, Philip Levis",
  title     = "{The Case for Writing a Kernel in Rust}",
  booktitle = "{Proceedings of the Eighth ACM SIGOPS Asia-Pacific Workshop on Systems (APSys 2017)}",
  year      = {2017},
  month     = {September}
}
        
Execution Templates: Caching Control Plane Decisions for Strong Scaling of Data Analytics
Omid Mashayekhi, Hang Qu, Chinmayee Shah, and Philip Levis
Published in Proceedings of the 2017 USENIX Annual Technical Conference (USENIX ATC '17), July 2017
Abstract
Control planes of cloud frameworks trade off between scheduling granularity and performance. Centralized systems schedule at task granularity, but only schedule a few thousand tasks per second. Distributed systems schedule hundreds of thousands of tasks per second but changing the schedule is costly. We present execution templates, a control plane abstraction that can schedule hundreds of thousands of tasks per second while supporting fine-grained, per-task scheduling decisions. Execution templates leverage a program’s repetitive control flow to cache blocks of frequently-executed tasks. Executing a task in a template requires sending a single message. Large-scale scheduling changes install new templates, while small changes apply edits to existing templates. Evaluations of execution templates in Nimbus, a data analytics framework, find that they provide the fine-grained scheduling flexibility of centralized control planes while matching the strong scaling of distributed ones. Execution templates support complex, real-world applications, such as a fluid simulation with a triply nested loop and data dependent branches.
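A minimal sketch of the caching idea follows, assuming a hypothetical worker interface; the Template and Worker classes and their method names are illustrative, not Nimbus's actual control plane.
# Illustrative sketch of control-plane caching in the spirit of execution
# templates; the Template/Worker classes and method names are hypothetical.
class Template:
    def __init__(self, task_fns):
        self.task_fns = task_fns          # ordered block of task functions

class Worker:
    def __init__(self):
        self.templates = {}

    def install(self, template_id, template):
        # Large scheduling change: install (or replace) a cached template.
        self.templates[template_id] = template

    def edit(self, template_id, index, task_fn):
        # Small scheduling change: patch one task inside an existing template.
        self.templates[template_id].task_fns[index] = task_fn

    def execute(self, template_id, params):
        # Steady state: one message ("run template T with these params")
        # launches the whole cached block of tasks.
        for task in self.templates[template_id].task_fns:
            task(params)

w = Worker()
w.install("loop_body", Template([lambda p: print("advect", p),
                                 lambda p: print("project", p)]))
for step in range(3):
    w.execute("loop_body", {"dt": 0.01, "step": step})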
BibTeX entry
@inproceedings{ mashayekhi17templates ,
  author    = "Omid Mashayekhi, Hang Qu, Chinmayee Shah, and Philip Levis",
  title     = "{Execution Templates: Caching Control Plane Decisions for Strong Scaling of Data Analytics}",
  booktitle = "{Proceedings of the 2017 USENIX Annual Technical Conference (USENIX ATC '17)}",
  year      = {2017},
  month     = {July}
}
        
Trust but Verify: Auditing Secure Internet of Things Devices
Judson Wilson, Riad S. Wahby, Henry Corrigan-Gibbs, Dan Boneh, Philip Levis and Keith Winstein
Published in Proceedings of the 15th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2017), June 2017
Abstract
Internet-of-Things devices often collect and transmit sensitive information like camera footage, health monitoring data, or whether someone is home. These devices protect data in transit with end-to-end encryption, typically using TLS connections between devices and associated cloud services. But these TLS connections also prevent device owners from observing what their own devices are saying about them. Unlike in traditional Internet applications, where the end user controls one end of a connection (e.g., their web browser) and can observe its communication, Internet-of-Things vendors typically control the software in both the device and the cloud. As a result, owners have no way to audit the behavior of their own devices, leaving them little choice but to hope that these devices are transmitting only what they should. This paper presents TLS–Rotate and Release (TLS-RaR), a system that allows device owners to authorize devices, called auditors, to decrypt and verify recent TLS traffic without compromising future traffic. Unlike prior work, TLS-RaR requires no changes to TLS’s wire format or cipher suites, and it allows the device’s owner to conduct a surprise inspection of recent traffic, without prior notice to the device that its communications will be audited.
BibTeX entry
@inproceedings{ mobisys17tlsrar ,
  author    = "Judson Wilson, Riad S. Wahby, Henry Corrigan-Gibbs, Dan Boneh, Philip Levis and Keith Winstein",
  title     = "{Trust but Verify: Auditing Secure Internet of Things Devices}",
  booktitle = "{Proceedings of the The 15th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2017)}",
  year      = {2017},
  month     = {June}
}
        

2016

Robust, low-cost, auditable random number generation for embedded system security
Ben Lampert, Riad S. Wahby, Shane Leonard and Philip Levis
Published in The 14th ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2016
Abstract
This paper presents an architecture for a discrete, high entropy hardware random number generator. Because it is constructed out of simple hardware components, its operation is transparent and auditable. Using avalanche noise, a nondeterministic physical phenomenon, the circuit is inherently probabilistic and resists adversarial control. Further, because it compares the outputs from two matched noise sources, it rejects environmental disturbances like RF energy and power supply ripple. The resulting hardware produces more than 0.98 bits of entropy per sample, is inexpensive, has a small footprint, and can be disabled to conserve power when not in use.
BibTeX entry
@inproceedings{ rng-sensys16 ,
  author    = "Ben Lampert, Riad S. Wahby, Shane Leonard and Philip Levis",
  title     = "{Robust, low-cost, auditable random number generation for embedded system security}",
  booktitle = "{The 14th ACM Conference on Embedded Networked Sensor Systems (SenSys)}",
  year      = {2016},
  month     = {November}
}
        
Beetle: Flexible Communication for Bluetooth Low Energy
Amit Levy, James Hong, Laurynas Riliskis, Philip Levis, and Keith Winstein
Published in Proceedings of the 14th International Conference on Mobile Systems, Applications and Services (MobiSys), June 2016
Abstract
The next generation of computing peripherals will be low-power ubiquitous computing devices such as door locks, smart watches, and heart rate monitors. Bluetooth Low Energy is a primary protocol for connecting such peripherals to mobile and gateway devices. Current operating system support for Bluetooth Low Energy forces peripherals into vertical application silos. As a result, simple, intuitive applications such as opening a door with a smart watch or simultaneously logging and viewing heart rate data are impossible. We present Beetle, a new hardware interface that virtualizes peripherals at the application layer, allowing safe access by multiple programs without requiring the operating system to understand hardware functionality, fine-grained access control to peripheral device resources, and transparent access to peripherals connected over the network. We describe a series of novel applications that are impossible with existing abstractions but simple to implement with Beetle.
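As a rough sketch of application-layer mediation (the ACL structure and all names below are hypothetical, not Beetle's interface), fine-grained access control over GATT operations might look like this:
# Hypothetical sketch of application-layer mediation of BLE peripherals,
# in the spirit of Beetle; the ACL format and names are illustrative only.
ACL = {
    # (application, peripheral, characteristic) -> set of permitted operations
    ("door-app",  "smart-lock", "unlock"):     {"write"},
    ("watch-app", "heart-rate", "hr-measure"): {"read", "notify"},
    ("logger",    "heart-rate", "hr-measure"): {"read"},
}

def mediate(app, peripheral, characteristic, op):
    """Allow an application's GATT operation only if the ACL permits it."""
    return op in ACL.get((app, peripheral, characteristic), set())

print(mediate("watch-app", "heart-rate", "hr-measure", "notify"))  # True
print(mediate("door-app",  "heart-rate", "hr-measure", "read"))    # False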
BibTeX entry
@inproceedings{ beetle-mobisys16 ,
  author    = "Amit Levy, James Hong, Laurynas Riliskis, Philip Levis, and Keith Winstein",
  title     = "{Beetle: Flexible Communication for Bluetooth Low Energy}",
  booktitle = "{Proceedings of the 14th International Conference on Mobile Systems, Applications and Services (MobiSys)}",
  year      = {2016},
  month     = {June}
}
        
Ebb: A DSL for Physical Simulation on CPUs and GPUs
Gilbert Louis Bernstein, Chinmayee Shah, Crystal Lemire, Zachary DeVito, Matthew Fisher, Philip Levis, Pat Hanrahan
Published in ACM Transactions on Graphics (TOG), Volume 35 Issue 2, May 2016
Abstract
Designing programming environments for physical simulation is challenging because simulations rely on diverse algorithms and geometric domains. These challenges are compounded when we try to run efficiently on heterogeneous parallel architectures. We present Ebb, a Domain-Specific Language (DSL) for simulation, that runs efficiently on both CPUs and GPUs. Unlike previous DSLs, Ebb uses a three-layer architecture to separate (1) simulation code, (2) definition of data structures for geometric domains, and (3) runtimes supporting parallel architectures. Different geometric domains are implemented as libraries that use a common, unified, relational data model. By structuring the simulation framework in this way, programmers implementing simulations can focus on the physics and algorithms for each simulation without worrying about their implementation on parallel computers. Because the geometric domain libraries are all implemented using a common runtime based on relations, new geometric domains can be added as needed, without specifying the details of memory management, mapping to different parallel architectures, or having to expand the runtime’s interface. We evaluate Ebb by comparing it to several widely used simulations, demonstrating comparable performance to handwritten GPU code where available, and surpassing existing CPU performance optimizations by up to 9× when no GPU code exists.
BibTeX entry
@article{ ebb-tog2016 ,
  author    = "Gilbert Louis Bernstein, Chinmayee Shah, Crystal Lemire, Zachary DeVito, Matthew Fisher, Philip Levis, Pat Hanrahan",
  title     = "{Ebb: A DSL for Physical Simulation on CPUs and GPUs}",
  booktitle = "{ACM Transactions on Graphics (TOG), Volume 35 Issue 2}",
  year      = {2016},
  month     = {May}
}
        
CESEL: Securing a Mote for 20 Years
Kevin Kiningham, Mark Horowitz, Philip Levis, and Dan Boneh
Published in Proceedings of the 13th European conference on Wireless sensor networks (EWSN 2016), February 2016
Abstract
Embedded wireless sensors, once deployed, may remain in active use for decades. At the same time, as motes come to dominate both the number of hosts and data traffic of the Internet, their security will become fundamental to general Internet security. This paper argues that the next generation of embedded networked sensor devices (“motes”) should consider this tension in their basic design and be designed to remain secure for 20 years in a rapidly changing and evolving security and cryptographic landscape. The key insight in this paper is that the economics of modern system-on-a-chip (SoC) designs provides ample space for hardware accelerators and cryptographic engines. A next generation mote can therefore include many such co-processors and features at almost no production cost. The paper describes an initial design for what hardware security support such a device should have, focusing on five hardware primitives: an atomic, unique counter, a random number generator based on physical entropy, additional instructions to accelerate symmetric ciphers, an elliptic curve accelerator, and support for modular polynomial multiplication used in post-quantum cryptographic signing algorithms. We call this architecture CESEL.
BibTeX entry
@inproceedings{ cesel-nextmote ,
  author    = "Kevin Kiningham, Mark Horowitz, Philip Levis, and Dan Boneh",
  title     = "{CESEL: Securing a Mote for 20 Years}",
  booktitle = "{Proceedings of the 13th European conference on Wireless sensor networks (EWSN 2016)}",
  year      = {2016},
  month     = {February}
}
        

2015

Ravel: Programming IoT Applications as Distributed Models, Views, and Controllers
Laurynas Riliskis, James Hong, and Philip Levis
Published in Proceedings of the 2015 International Workshop on Internet of Things towards Applications (IoT-App'15), November 2015
Abstract
Embedded sensor networks are a promising technology to improve our lives through home and industrial automation, health monitoring, and sensing and actuation in agriculture. Fitness trackers, thermostats, and door locks are just a few examples of Internet of Things devices that have already become part of our everyday life. Despite advances in sensors, microcontrollers, signal processing, networking, and programming languages, developing an Internet of Things application remains a laborious task. Many of these complex distributed systems share a 3-tier architecture consisting of embedded nodes, gateways that connect an embedded network to the wider Internet, and data services in servers or the cloud. Yet IoT applications are developed for each tier separately, and the developer must then amalgamate these distinct applications together. This paper proposes a novel approach for programming applications across the 3 tiers using a distributed extension of the Model-View-Controller architecture. We add a new primitive, a space, which contains the properties and implementation of a particular tier. Writing applications in this architecture affords numerous advantages: automatic model synchronization, data transport, and energy efficiency.
BibTeX entry
@inproceedings{ ravel-riliskis-iotapp15 ,
  author    = "Laurynas Riliskis, James Hong, and Philip Levis",
  title     = "{Ravel: Programming IoT Applications as Distributed Models, Views, and Controllers}",
  booktitle = "{Proceedings of the 2015 International Workshop on Internet of Things towards Applications (IoT-App'15)}",
  year      = {2015},
  month     = {November}
}
        
Ownership is Theft: Experiences Building an Embedded OS in Rust
Amit Levy, Michael P Andersen, Bradford Campbell, David Culler, Prabal Dutta, Branden Ghena, Philip Levis and Pat Pannuto
Published in Proceedings of the 8th Workshop on Programming Languages and Operating Systems (PLOS 2015), October 2015
Abstract
Rust, a new systems programming language, provides compile-time memory safety checks to help eliminate runtime bugs that manifest from improper memory management. This feature is advantageous for operating system development, and especially for embedded OS development, where recovery and debugging are particularly challenging. However, embedded platforms are highly event-based, and Rust’s memory safety mechanisms largely presume threads. In our experience developing an operating system for embedded systems in Rust, we have found that Rust’s ownership model prevents otherwise safe resource sharing common in the embedded domain, conflicts with the reality of hardware resources, and hinders using closures for programming asynchronously. We describe these experiences and how they relate to memory safety as well as illustrate our workarounds that preserve the safety guarantees to the largest extent possible. In addition, we draw from our experience to propose a new language extension to Rust that would enable it to provide better memory safety tools for event-driven platforms.
BibTeX entry
@inproceedings{ levy-plos15-tock ,
  author    = "Amit Levy, Michael P Andersen, Bradford Campbell, David Culler, Prabal Dutta, Branden Ghena, Philip Levis and Pat Pannuto",
  title     = "{Ownership is Theft: Experiences Building an Embedded OS in Rust}",
  booktitle = "{Proceedings of the 8th Workshop on Programming Languages and Operating Systems (PLOS 2015)}",
  year      = {2015},
  month     = {October}
}
        
Instance-Aware Simplification of 3D Polygonal Meshes
Tahir Azim, Ewen Cheslack-Postava and Philip Levis
Published in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), June 2015
Abstract
Virtual worlds and games increasingly deliver 3D meshes over the Internet using instanced file formats, such as COLLADA. However, existing simplification algorithms do not account for instancing in their inputs, operating instead on triangle soups or indexed triangle meshes. This makes them unsuitable for highly instanced meshes, since expanding an instanced mesh to an indexed triangle mesh and then simplifying it can result in a larger output file size. The high cost of delivering these larger files over the Internet results in long network latencies and low performance. This paper presents instance-aware simplification (IAS), an algorithm designed to efficiently simplify instanced 3D meshes. To ensure smaller output file sizes, IAS incorporates the existing compression of mesh instancing into its cost metrics. Unlike existing algorithms, IAS strictly reduces file size as it simplifies, so that IAS-simplified meshes of a given file size have higher visual quality than meshes simplified using existing algorithms. On highly instanced models, IAS results in simplified versions that are orders of magnitude smaller than existing algorithms for a given triangle count.
BibTeX entry
@inproceedings{ icme15ias ,
  author    = "Tahir Azim, Ewen Cheslack-Postava and Philip Levis",
  title     = "{Instance-Aware Simplification of 3D Polygonal Meshes}",
  booktitle = "{Proceedings of the IEEE International Conference on Multimedia and Expo (ICME)}",
  year      = {2015},
  month     = {June}
}
        

2014

System Architecture Support for Green Enterprise Computing
Maria Kazandjieva, Chinmayee Shah, Ewen Cheslack-Postava, Behram Mistree, Philip Levis
Published in Proceedings 5th International Green Computing Conference (IGCC), November 2014
Abstract
This paper proposes a novel system for enterprise computing that reduces energy consumption without sacrificing performance or putting devices to sleep. It uses a hybrid architecture composed of multiple device classes and it runs an application in the most energy-efficient location. Our prototype, called Anyware, provides desktop-class performance while reducing energy consumption by 75% through a combination of lightweight clients and a small number of servers. Providing such an elastic system invisibly to end users requires solving many challenges, including deciding where to run applications while simultaneously making it appear they all run locally. Anyware’s hybrid design suggests a new way to think about using the modern spectrum of personal computing devices.
BibTeX entry
@inproceedings{ igcc14kazandjieva ,
  author    = "Maria Kazandjieva, Chinmayee Shah, Ewen Cheslack-Postava, Behram Mistree, Philip Levis",
  title     = "{System Architecture Support for Green Enterprise Computing}",
  booktitle = "{Proceedings 5th International Green Computing Conference (IGCC)}",
  year      = {2014},
  month     = {November}
}
        

2013

CTP: An Efficient, Robust, and Reliable Collection Tree Protocol for Wireless Sensor Networks
Omprakash Gnawali, Rodrigo Fonseca, Kyle Jamieson, Maria Kazandjieva, David Moss, and Philip Levis
Published in ACM Transactions on Sensor Networks (TOSN), November 2013
Abstract
We describe CTP, a collection routing protocol for wireless sensor networks. CTP uses three techniques to provide efficient, robust, and reliable routing in highly dynamic network conditions. First, CTP’s link estimator accurately estimates link qualities by using feedback from both the data and control planes, using information from multiple layers through narrow, platform-independent interfaces. Second, CTP uses the Trickle algorithm to time the control traffic, sending few beacons in stable topologies yet quickly adapting to changes. Finally, CTP actively probes the topology with data traffic, quickly discovering and fixing routing failures. Through experiments on 13 different testbeds, encompassing 7 platforms, 6 link layers, and multiple densities and frequencies, and detailed observations of a long-running sensor network application that uses CTP, we study how these three techniques contribute to CTP’s overall performance.
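A simplified sketch of the first technique follows, using a generic EWMA-based ETX estimator rather than CTP's exact estimator; the constants, window size, and update rule are illustrative assumptions.
# Simplified sketch of a link-quality (ETX) estimator that, like CTP's,
# blends feedback from the data plane (acked unicasts) with control-plane
# beacon statistics via an exponentially weighted moving average.
# The constants and update rule are illustrative, not CTP's exact estimator.
class LinkEstimator:
    ALPHA = 0.9   # weight given to the previous estimate

    def __init__(self):
        self.etx = 1.0          # expected transmissions per delivered packet
        self.tx = 0             # unicast transmissions since last update
        self.acked = 0          # of those, how many were acknowledged

    def report_unicast(self, acked: bool):
        # Data-plane input: did this unicast get link-layer acked?
        self.tx += 1
        self.acked += int(acked)
        if self.tx >= 5:                      # update in small windows
            window_etx = self.tx / max(self.acked, 1)
            self.etx = self.ALPHA * self.etx + (1 - self.ALPHA) * window_etx
            self.tx = self.acked = 0

    def report_beacon_prr(self, prr: float):
        # Control-plane input: beacon reception ratio -> rough ETX estimate.
        if prr > 0:
            self.etx = self.ALPHA * self.etx + (1 - self.ALPHA) * (1.0 / prr)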
BibTeX entry
@article{ ctp-tosn14 ,
  author    = "Omprakash Gnawali, Rodrigo Fonseca, Kyle Jamieson, Maria Kazandjieva, David Moss, and Philip Levis",
  title     = "{CTP: An Efficient, Robust, and Reliable Collection Tree Protocol for Wireless Sensor Networks}",
  booktitle = "{ACM Transactions on Sensor Networks (TOSN)}",
  year      = {2013},
  month     = {November}
}
        
Measuring and Analyzing the Energy Use of Enterprise Computing Systems
Maria Kazandjieva, Omprakash Gnawali, Philip Levis, and Christos Kozyrakis
Published in Journal of Sustainable Computing, February 2013
Abstract
Until now, green computing research has largely relied on few, short-term power measurements to characterize the energy use of enterprise computing. This paper brings new and comprehensive power datasets through Powernet, a hybrid sensor network that monitors the power and utilization of the IT systems in a large academic building. Over more than two years, we have collected power data from 250+ individual computing devices and have monitored a subset of CPU and network loads. This dense, long-term monitoring allows us to extrapolate the data to a detailed breakdown of electricity use across the building’s computing systems. Our datasets provide an opportunity to examine data analysis and methodology techniques used in green computing research. We show that power variability both between similar devices and over time for a single device can lead to cost or savings estimates that are off by 15-20%. Extending the coverage of measured devices and the duration (to at least one month) significantly reduces errors. Lastly, our experiences with collecting data and the subsequent analysis lead to a better understanding of how one should go about power characterization studies. We provide several methodology guidelines for the green computing community. The data from the Powernet deployment can be found at http://sing.stanford.edu/maria/powernet.
BibTeX entry
@article{ kazandjieva2013measuring ,
  author    = "Maria Kazandjieva, Omprakash Gnawali, Philip Levis, and Christos Kozyrakis",
  title     = "{Measuring and Analyzing the Energy Use of Enterprise Computing Systems}",
  booktitle = "{Journal of Sustainable Couputing}",
  year      = {2013},
  month     = {February}
}
        
Long-term modification of cortical synapses improves sensory perception
Robert C Froemke, Ioana Carcea, Alison J Barker, Kexin Yuan, Bryan A Seybold, Ana Raquel O Martins, Natalya Zaika, Hannah Bernstein, Megan Wachs, Philip A Levis, Daniel B Polley, Michael M Merzenich and Christoph E Schreiner
Published in Nature Neuroscience 16, 79-88, January 2013
Abstract
Synapses and receptive fields of the cerebral cortex are plastic. However, changes to specific inputs must be coordinated within neural networks to ensure that excitability and feature selectivity are appropriately configured for perception of the sensory environment. We induced long-lasting enhancements and decrements to excitatory synaptic strength in rat primary auditory cortex by pairing acoustic stimuli with activation of the nucleus basalis neuromodulatory system. Here we report that these synaptic modifications were approximately balanced across individual receptive fields, conserving mean excitation while reducing overall response variability. Decreased response variability should increase detection and recognition of near-threshold or previously imperceptible stimuli. We confirmed both of these hypotheses in behaving animals. Thus, modification of cortical inputs leads to wide-scale synaptic changes, which are related to improved sensory perception and enhanced behavioral performance.
BibTeX entry
@article{ froemke2012long ,
  author    = "Robert C Froemke, Ioana Carcea, Alison J Barker, Kexin Yuan, Bryan A Seybold, Ana Raquel O Martins, Natalya Zaika, Hannah Bernstein, Megan Wachs, Philip A Levis, Daniel B Polley, Michael M Merzenich and Christoph E Schreiner",
  title     = "{Long-term modification of cortical synapses improves sensory perception}",
  booktitle = "{Nature Neuroscience 16, 79-88}",
  year      = {2013},
  month     = {January}
}
        

2012

Beyond Full Duplex Wireless
Kannan Srinivasan, Steven Hong, Mayank Jain, Jung Il Choi, Jeff Mehlman, Sachin Katti, and Philip Levis
Published in Asilomar Conference on Signals, Systems and Computers, November 2012
Abstract
Recent work has shown the possibility of implementing full-duplex wireless radios using commodity hardware. We discuss the possibility of extending full-duplex designs to support multiple input, multiple output (MIMO) systems. We explore how such a design could lead to a rethinking of wireless networks. We discuss various applications of full-duplex radios and the gains possible with those applications. We also discuss some of the challenges present in getting such radios and their applications to be a part of production networks.
BibTeX entry
@inproceedings{ asilomar12srinivasan ,
  author    = "Kannan Srinivasan, Steven Hong, Mayank Jain, Jung Il Choi, Jeff Mehlman, Sachin Katti, and Philip Levis",
  title     = "{Beyond Full Duplex Wireless}",
  booktitle = "{Asilomar Conference on Signals, Systems and Computers}",
  year      = {2012},
  month     = {November}
}
        
Experiences from a Decade of TinyOS Development
Philip Levis
Published in Proceedings of the 10th Symposium on Operating System Design and Implementation (OSDI), October 2012
Abstract
When first written in 2000, TinyOS’s users were a handful of academic computer science researchers. A decade later, TinyOS averages 25,000 downloads a year, is in many commercial products, and remains a platform used for a great deal of sensor network, low-power systems, and wireless research. We focus on how technical and social decisions influenced this success, sometimes in surprising ways. As TinyOS matured, it evolved language extensions to help experts write efficient, robust systems. These extensions revealed insights and novel programming abstractions for embedded software. Using these abstractions, experts could build increasingly complex systems more easily than with other operating systems, making TinyOS the dominant choice. This success, however, came at a long-term cost. System design decisions that seem good at first can have unforeseen and undesirable implications that play out over the span of years. Today, TinyOS is a stable, self-contained ecosystem that is discouraging to new users. Other systems, such as Arduino and Contiki, by remaining more accessible, have emerged as better solutions for simpler embedded sensing applications.
BibTeX entry
@inproceedings{ osdi12levis ,
  author    = "Philip Levis",
  title     = "{Experiences from a Decade of TinyOS Development}",
  booktitle = "{Proceedings of the 10th Symposium on Operating System Design and Implementation (OSDI)}",
  year      = {2012},
  month     = {October}
}
        
RFC 6719 - The Minimum Rank with Hysteresis Objective Function
Omprakash Gnawali and Philip Levis
Published in Internet Engineering Task Force (IETF), Request for Comments: 6719, September 2012
Abstract
The Routing Protocol for Low-Power and Lossy Networks (RPL) constructs routes by using Objective Functions that optimize or constrain the routes it selects and uses. This specification describes the Minimum Rank with Hysteresis Objective Function (MRHOF), an Objective Function that selects routes that minimize a metric, while using hysteresis to reduce churn in response to small metric changes. MRHOF works with additive metrics along a route, and the metrics it uses are determined by the metrics that the RPL Destination Information Object (DIO) messages advertise.
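A sketch of hysteresis-based parent selection in this spirit follows; the threshold value, metric scaling, and function names are illustrative rather than normative values from RFC 6719.
# Sketch of hysteresis-based parent selection in the spirit of MRHOF.
# A node keeps its current preferred parent unless a candidate's path cost
# is better by more than a switch threshold; the threshold value and the
# function names here are illustrative, not normative values from RFC 6719.
SWITCH_THRESHOLD = 192   # e.g. ETX scaled by 128 (assumed scaling)

def select_parent(current, candidates, path_cost):
    """current: current parent id (or None); candidates: iterable of ids;
    path_cost: dict mapping id -> additive path metric through that node."""
    best = min(candidates, key=lambda c: path_cost[c])
    if current is None or current not in path_cost:
        return best
    # Hysteresis: only switch when the improvement exceeds the threshold.
    if path_cost[current] - path_cost[best] > SWITCH_THRESHOLD:
        return best
    return current

print(select_parent("A", ["A", "B"], {"A": 640, "B": 560}))  # keeps "A"
print(select_parent("A", ["A", "B"], {"A": 640, "B": 384}))  # switches to "B"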
BibTeX entry
@inproceedings{ rfc6719 ,
  author    = "Omprakash Gnawali and Philip Levis",
  title     = "{RFC 6719 - The Minimum Rank with Hysteresis Objective Function}",
  booktitle = "{Internet Engineering Task Force (IETF), Request for Comments: 6719}",
  year      = {2012},
  month     = {September}
}
        
Unsupervised Conversion of 3D models for Interactive Metaverses
Jeff Terrace, Ewen Cheslack-Postava, Philip Levis and Michael Freedman
Published in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), July 2012
Abstract
A virtual-world environment becomes a truly engaging platform when users have the ability to insert 3D content into the world. However, arbitrary 3D content is often not optimized for real-time rendering, limiting the ability of clients to display large scenes consisting of hundreds or thousands of objects. We present the design and implementation of an automatic, unsupervised conversion process that transforms 3D content into a format suitable for real-time rendering while minimizing loss of quality. The resulting progressive format includes a base mesh, allowing clients to quickly display the model, and a progressive portion for streaming additional detail as desired. Sirikata, an open virtual world platform, has processed over 700 models using this method.
BibTeX entry
@inproceedings{ icme12terrace ,
  author    = "Jeff Terrace, Ewen Cheslack-Postava, Philip Levis and Michael Freedman",
  title     = "{Unsupervised Conversion of 3D models for Interactive Metaverses}",
  booktitle = "{Proceedings of the IEEE International Conference on Multimedia and Expo (ICME)}",
  year      = {2012},
  month     = {July}
}
        
A Scalable Server for 3D Metaverses
Ewen Cheslack-Postava, Tahir Azim, Behram Mistree, Daniel Horn, Jeff Terrace, Philip Levis, and Michael Freedman
Published in Proceedings of the USENIX Annual Technical Conference (ATC), June 2012
Abstract
Metaverses are three-dimensional virtual worlds where anyone can add and script new objects. Metaverses today, such as Second Life, are dull, lifeless, and stagnant because users can see and interact with only a tiny region around them, rather than a large and immersive world. Current metaverses impose this distance restriction on visibility and interaction in order to scale to large worlds, as the restriction avoids appreciable shared state in underlying distributed systems. We present the design and implementation of the Sirikata metaverse server. The Sirikata server scales to support large, complex worlds, even as it allows users to see and interact with the entire world. It achieves both goals simultaneously by leveraging properties of the real world and 3D environments in its core systems, such as a novel distributed data structure for virtual object queries based on visible size. We evaluate core services in isolation as well as part of the entire system, demonstrating that these novel designs do not sacrifice performance. Applications developed by Sirikata users support our claim that removing the distance restriction enables new, compelling applications that are infeasible in today’s metaverses.
BibTeX entry
@inproceedings{ atc12ewen ,
  author    = "Ewen Cheslack-Postava, Tahir Azim, Behram Mistree, Daniel Horn, Jeff Terrace, Philip Levis, and Michael Freedman",
  title     = "{A Scalable Server for 3D Metaverses}",
  booktitle = "{Proceedings of the USENIX Annual Technical Conference (ATC)}",
  year      = {2012},
  month     = {June}
}
        
Green Enterprise Computing Data: Assumptions and Realities
Maria Kazandjieva, Brandon Heller, Omprakash Gnawali, Philip Levis, and Christos Kozyrakis
Published in Proceedings of the Third International Green Computing Conference (IGCC), June 2012
Abstract
Until now, green computing research has largely relied on few, short-term power measurements to characterize the energy use of enterprise computing. This paper brings new and comprehensive power datasets through Powernet, a hybrid sensor network that monitors the power and utilization of the IT systems in a large academic building. Over more than two years, we have collected power data from 250+ individual computing devices and have monitored a subset of CPU and network loads. This dense, long-term monitoring allows us to extrapolate the data to a detailed breakdown of electricity use across the building’s computing systems. Our datasets provide an opportunity to examine assumptions commonly made in green computing. We show that power variability both between similar devices and over time for a single device can lead to cost or savings estimates that are off by 15-20%. Extending the coverage of measured devices and the duration (to at least one month) significantly reduces errors. Lastly, our experiences with collecting data and the subsequent analysis lead to a better understanding of how one should go about power characterization studies. We provide several methodology guidelines for future green computing research.
BibTeX entry
@inproceedings{ igcc12kazandjieva ,
  author    = "Maria Kazandjieva, Brandon Heller, Omprakash Gnawali, Philip Levis, and Christos Kozyrakis",
  title     = "{Green Enterprise Computing Data: Assumptions and Realities}",
  booktitle = "{Proceedings of the Third International Green Computing Conference (IGCC)}",
  year      = {2012},
  month     = {June}
}
        

2011

Buffer Sizing in Wireless Mesh Networks
Kamran Jamshaid, Basem Shihada, Li Xia, and Philip Levis
Published in Proceedings of the 2011 IEEE Eighth International Conference on Mobile Ad-Hoc and Sensor Systems, pp. 272-281, October 2011
Abstract
We analyze the problem of buffer sizing for TCP flows in 802.11-based Wireless Mesh Networks. Our objective is to maintain high network utilization while providing low queueing delays. The problem is complicated by the time-varying capacity of the wireless channel as well as the random access mechanism of 802.11 MAC protocol. While arbitrarily large buffers can maintain high network utilization, this results in large queueing delays. Such delays may affect TCP stability characteristics, and also increase queueing delays for other flows (including real-time flows) sharing the buffer. In this paper we propose sizing link buffers collectively for a set of nodes within mutual interference range called the ‘collision domain’. We aim to provide a buffer just large enough to saturate the available capacity of the bottleneck collision domain that limits the carrying capacity of the network. This neighborhood buffer is distributed over multiple nodes that constitute the network bottleneck; a transmission by any of these nodes fully utilizes the available spectral resource for the duration of the transmission. We show that sizing routing buffers collectively for this bottleneck allows us to have small buffers (as low as 2 - 3 packets) at individual nodes without any significant loss in network utilization. We propose heuristics to determine these buffer sizes in WMNs. Our results show that we can reduce the end-to-end delays by 6× to 10× at the cost of losing roughly 5% of the network capacity achievable with large buffers.
BibTeX entry
@inproceedings{ jamshaid2011 ,
  author    = "Kamran Jamshaid, Basem Shihada, Li Xia, and Philip Levis",
  title     = "{Buffer Sizing in Wireless Mesh Networks}",
  booktitle = "{Proceedings of the 2011 IEEE Eighth International Conference on Mobile Ad-Hoc and Sensor Systems p272-281}",
  year      = {2011},
  month     = {October}
}
        
Emerson: Accessible Scripting for Applications in an Extensible Virtual World
Behram Mistree, Bhupesh Chandra, Ewen Cheslack-Postava, Philip Levis, and David Gay
Published in Proceedings of the ACM international conference on Object oriented programming systems languages and applications (Onward), October 2011
Abstract
This paper presents Emerson, a new programming system for scripting objects in user-extensible virtual worlds such as Second Life, Active Worlds, Open Wonderland, etc. Emerson’s primary goal is to make it easy for novice programmers to write and deploy interesting applications. Scripting applications for these worlds is difficult due to two characteristics: the worlds must scale to millions of users and are therefore distributed, and there is no central authority or design so interaction is mostly between mutually untrusting applications. To simplify scripting for novices, Emerson employs two abstractions: multi-presencing and execution sandboxes. Multi-presencing allows a single program to centrally control what seem to be many distributed geometric objects. Execution sandboxes allow safely running application code provided by another object, borrowing the execution and deployment model of modern web applications. Emerson itself is implemented as a scripting plugin for the Sirikata open source virtual world platform. We evaluate the benefits of its design by describing several application examples. Through these examples, we explore the interactions between sandboxing and multi-presencing as well as their implications and discuss potential future authentication mechanisms that would make secure in-world application development more accessible.
BibTeX entry
@inproceedings{ onward11bmistree ,
  author    = "Behram Mistree, Bhupesh Chandra, Ewen Cheslack-Potava, Philip Levis, and David Gay",
  title     = "{Emerson: Accessible Scripting for Applications in an Extensible Virtual World}",
  booktitle = "{Proceedings of the ACM international conference on Object oriented programming systems languages and applications (Onward)}",
  year      = {2011},
  month     = {October}
}
        
Practical, Real-time, Full-Duplex Wireless
Mayank Jain, Jung Il Choi, Taemin Kim, Dinesh Bharadia, Siddharth Seth, Kannan Srinivasan, Philip Levis, Sachin Katti and Prasun Sinha
Published in Proceedings of the 17th Annual International Conference on Mobile Computing and Networking (Mobicom 2011), September 2011
Abstract
This paper presents a full duplex radio design using signal inversion and adaptive cancellation. Signal inversion uses a simple design based on a balanced/unbalanced (Balun) transformer. This new design, unlike prior work, supports wideband and high power systems. In theory, this new design has no limitation on bandwidth or power. In practice, we find that the signal inversion technique alone can cancel at least 45dB across a 40MHz bandwidth. Further, combining signal inversion cancellation with cancellation in the digital domain can reduce self-interference by up to 73dB for a 10MHz OFDM signal. This paper also presents a full duplex medium access control (MAC) design and evaluates it using a testbed of 5 prototype full duplex nodes. Full duplex reduces packet losses due to hidden terminals by up to 88%. Full duplex also mitigates unfair channel allocation in AP-based networks, increasing fairness from 0.85 to 0.98 while improving downlink throughput by 110% and uplink throughput by 15%. These experimental results show that a redesign of the wireless network stack to exploit full duplex capability can result in significant improvements in network performance.
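To put the cancellation numbers in perspective, a quick back-of-the-envelope calculation (the 73 dB figure comes from the abstract; the transmit power and noise floor are assumed values, not numbers from the paper):
# Back-of-the-envelope: residual self-interference after cancellation.
# The 73 dB figure is from the abstract; the 20 dBm transmit power and
# -90 dBm noise-floor numbers are assumptions for illustration.
tx_power_dbm = 20          # assumed transmit power
cancellation_db = 73       # combined analog + digital cancellation (abstract)
residual_dbm = tx_power_dbm - cancellation_db
print(residual_dbm)        # -53 dBm of residual self-interference
# A -53 dBm residual still sits above an assumed -90 dBm noise floor, which
# is why received signals must be strong enough to tolerate the remainder.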
BibTeX entry
@inproceedings{ mobicom11jain ,
  author    = "Mayank Jain, Jung Il Choi, Taemin Kim, Dinesh Bharadia, Siddharth Seth, Kannan Srinivasan, Philip Levis, Sachin Katti and Prasun Sinha",
  title     = "{Practical, Real-time, Full-Duplex Wireless}",
  booktitle = "{Proceedings of the 17th Annual International Conference on Mobile Computing and Networking (Mobicom 2011)}",
  year      = {2011},
  month     = {September}
}
        
Single Channel Full-Duplex Wireless Radios
Mayank Jain
Published in Doctoral Dissertation, August 2011
Abstract
Wireless networking is fast becoming the primary method for people to connect to the Internet and with each other. The available wireless spectrum is increasingly loaded, with users demanding higher performance and reliability from their wireless connections. This dissertation proposes single channel full-duplex as a new paradigm for wireless system design that can mitigate some of the throughput and reliability problems of today’s wireless systems. With full-duplex wireless radios, a node can receive and transmit data at the same time, without using multiple wireless channels. At the physical layer, this capability can double the available throughput at a node. Further, sending and receiving at the same time allows a wireless node to exchange control messages while receiving data, making real-time feedback schemes possible. Such feedback, assumed to be impossible till now, allows network researchers to rethink the way wireless networks are designed. This dissertation shows that with the extra feedback channel available through full-duplexing, reliability in existing wireless networks can be significantly improved. Motivated by the promise that full-duplex wireless holds, this dissertation explores the challenges in implementing such radios and how these radios could influence the design of wireless networking stacks. The primary challenge in implementing a full-duplex wireless system is removing the self-interference from a node’s transmit antenna to its receive antenna. This dissertation discusses various analog and digital techniques to cancel this self-interference. It presents an adaptive full-duplex wireless system that combines analog and digital self-interference cancellation to remove up to 73dB of self-interference. Exploiting full-duplexing to the fullest extent requires a redesign of higher layers, especially the MAC and network layers. This dissertation presents a full-duplex MAC layer implementation to show how full-duplexing can be leveraged to improve wireless reliability and performance. This implementation reduces hidden terminal losses by up to 88% and significantly improves network fairness and throughput. The main contribution of this dissertation is to motivate full-duplex radios as a direction for research on future wireless systems. It shows that designing full-duplex systems, while challenging, is feasible. It also presents many ideas on how full-duplexing could be used to improve a variety of wireless systems, including multi-hop data networks, cellular systems, and wireless LANs.
BibTeX entry
@inproceedings{ mayank-thesis ,
  author    = "Mayank Jain",
  title     = "{Single Channel Full-Duplex Wireless Radios}",
  booktitle = "{Doctoral Dissertation}",
  year      = {2011},
  month     = {August}
}
        
Energy Management in Mobile Devices with the Cinder Operating System
Arjun Roy, Stephen Rumble, Ryan Stutsman, Philip Levis, David Mazieres, and Nickolai Zeldovich
Published in Proceedings of the European Conference on Computer Systems (EuroSys 2011), April 2011
Abstract
We argue that controlling energy allocation is an increasingly useful and important feature for operating systems, especially on mobile devices. We present two new low-level abstractions in the Cinder operating system, reserves and taps, which store and distribute energy for application use. We identify three key properties of control - isolation, delegation, and subdivision - and show how using these abstractions can achieve them. We also show how the architecture of the HiStar information-flow control kernel lends itself well to energy control. We prototype and evaluate Cinder on a popular smartphone, the Android G1.
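A minimal sketch of the reserve and tap abstractions follows; the class names, units (joules and watts), and transfer policy are chosen for illustration and are not Cinder's actual kernel interface.
# Minimal sketch of energy reserves and taps in the spirit of Cinder.
# A reserve stores a quantity of energy; a tap moves energy between two
# reserves at a fixed rate. Class names and units are illustrative.
class Reserve:
    def __init__(self, joules):
        self.joules = joules

    def consume(self, amount):
        # Isolation: an application may only draw energy its reserve holds.
        if amount > self.joules:
            raise RuntimeError("reserve exhausted")
        self.joules -= amount

class Tap:
    def __init__(self, src: Reserve, dst: Reserve, rate_w):
        self.src, self.dst, self.rate_w = src, dst, rate_w

    def tick(self, seconds):
        # Delegation/subdivision: transfer at most rate_w * seconds.
        amount = min(self.rate_w * seconds, self.src.joules)
        self.src.joules -= amount
        self.dst.joules += amount

battery = Reserve(10_000.0)          # root reserve (whole battery)
radio_app = Reserve(0.0)             # per-application reserve
Tap(battery, radio_app, rate_w=0.5).tick(seconds=60)   # grant 30 J
radio_app.consume(5.0)
print(battery.joules, radio_app.joules)   # 9970.0 25.0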
BibTeX entry
@inproceedings{ eurosys11roy ,
  author    = "Arjun Roy, Stephen Rumble, Ryan Stutsman, Philip Levis, David Mazieres, and Nickolai Zeldovich",
  title     = "{Energy Management in Mobile Devices with the Cinder Operating System}",
  booktitle = "{Proceedings of the European Conference on Computer Systems (EuroSys 2011)}",
  year      = {2011},
  month     = {April}
}
        
RFC 6206: The Trickle Algorithm
Philip Levis and Thomas Clausen and Jonathan Hui and Omprakash Gnawali and JeongGil Ko
Published in Internet Engineering Task Force (IETF), Request for Comments: 6206, March 2011
Abstract
The Trickle algorithm allows nodes in a lossy shared medium (e.g., low-power and lossy networks) to exchange information in a highly robust, energy efficient, simple, and scalable manner. Dynamically adjusting transmission windows allows Trickle to spread new information on the scale of link-layer transmission times while sending only a few messages per hour when information does not change. A simple suppression mechanism and transmission point selection allow Trickle’s communication rate to scale logarithmically with density. This document describes the Trickle algorithm and considerations in its use.
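A compact sketch of the algorithm as specified in RFC 6206 follows; the default parameter values below are arbitrary, since the RFC leaves Imin, Imax, and k to the protocol using Trickle.
# Sketch of the Trickle algorithm as described in RFC 6206.
# Parameters: Imin (smallest interval), Imax (number of doublings of Imin),
# and k (redundancy constant). Values below are arbitrary for illustration.
import random

class Trickle:
    def __init__(self, imin=1.0, imax_doublings=8, k=2):
        self.imin = imin
        self.imax = imin * (2 ** imax_doublings)
        self.k = k
        self.interval = imin
        self._start_interval()

    def _start_interval(self):
        self.c = 0                                        # consistency counter
        self.t = random.uniform(self.interval / 2, self.interval)  # fire time

    def hear_consistent(self):
        self.c += 1                                       # count redundancy

    def hear_inconsistent(self):
        if self.interval > self.imin:                     # reset to spread
            self.interval = self.imin                     # new info quickly
            self._start_interval()

    def at_time_t(self, send):
        if self.c < self.k:                               # suppression check
            send()

    def interval_expired(self):
        self.interval = min(self.interval * 2, self.imax) # back off when quiet
        self._start_interval()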
BibTeX entry
@inproceedings{ rfc6206 ,
  author    = "Philip Levis and Thomas Clausen and Jonathan Hui and Omprakash Gnawali and JeongGil Ko",
  title     = "{RFC 6206: The Trickle Algorithm}",
  booktitle = "{Internet Engineering Task Force (IETF), Request for Comments: 6206}",
  year      = {2011},
  month     = {March}
}
        

2010

A high-resolution human contact network for infectious disease transmission
Marcel Salathe, Maria Kazandjieva, Jung Woo Lee, Philip Levis, Marcus W. Feldman and James H. Jones
Published in Proceedings of the National Academy of Sciences (PNAS), December 2010
Abstract
The most frequent infectious diseases in humans and those with the highest potential for rapid pandemic spread are usually transmitted via droplets during close proximity interactions (CPIs). Despite the importance of this transmission route, very little is known about the dynamic patterns of CPIs. Using wireless sensor network technology, we obtained high-resolution data of CPIs during a typical day at an American high school, permitting the reconstruction of the social network relevant for infectious disease transmission. At 94% coverage, we collected 762,868 CPIs at a maximal distance of 3 m among 788 individuals. The data revealed a high-density network with typical small-world properties and a relatively homogeneous distribution of both interaction time and interaction partners among subjects. Computer simulations of the spread of an influenza-like disease on the weighted contact graph are in good agreement with absentee data during the most recent influenza season. Analysis of targeted immunization strategies suggested that contact network data are required to design strategies that are significantly more effective than random immunization. Immunization strategies based on contact network data were most effective at high vaccination coverage.
BibTeX entry
@article{ salathe2010high ,
  author    = "Marcel Salathe, Maria Kazandjieva, Jung Woo Lee, Philip Levis, Marcus W. Feldman and James H. Jones",
  title     = "{A high-resolution human contact network for infectious disease transmission}",
  booktitle = "{Proceedings of the National Academy of Sciences (PNAS)}",
  year      = {2010},
  month     = {December}
}
        
Granting Silence to Avoid Wireless Collisions
Jung Il Choi, Mayank Jain, Maria A. Kazandjieva, and Philip Levis
Published in Proceedings of the 18th IEEE International Conference on Network Protocols (ICNP), October 2010
Abstract
We describe grant-to-send, a novel collision avoidance algorithm for wireless mesh networks. Rather than announce packets it intends to send, a node using grant-to-send announces packets it expects to hear others send. We present evidence that inverting collision avoidance in this way greatly improves wireless mesh performance. Evaluating four protocols from 802.11 meshes and 802.15.4 sensor networks, we find that grant-to-send matches or outperforms CSMA and RTS/CTS in all cases. For example, in a 4-hop UDP flow, grant-to-send can achieve 96% of the theoretical maximum throughput while maintaining a 99.9% packet delivery ratio. Grant-to-send is also general enough to replace protocol-specific collision avoidance mechanisms common to sensor network protocols. Grant-to-send is simple. For example, incorporating it into 802.11 requires only 11 lines of driver code and no hardware changes. Furthermore, as it reuses existing 802.11 mechanisms, grant-to-send inter-operates with current networks and can be incrementally deployed.
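A toy sketch of the inverted announcement idea follows: a receiver grants the medium to the sender it expects to hear, and overhearing neighbors defer. The frame fields and timing below are invented for illustration and are not the protocol's actual packet format.
# Illustrative sketch of inverted collision avoidance in the spirit of
# grant-to-send: a receiver announces the packet it expects to hear, and
# neighbors that overhear the grant stay silent for its duration.
class Node:
    def __init__(self, name):
        self.name = name
        self.silent_until = 0.0

    def overhear_grant(self, grant, now):
        # Any node that hears a grant addressed to someone else defers.
        if grant["granted_to"] != self.name:
            self.silent_until = max(self.silent_until, now + grant["duration"])

    def may_send(self, now):
        return now >= self.silent_until

receiver_grant = {"granted_to": "B", "duration": 0.004}   # expect B's packet
a, b = Node("A"), Node("B")
for n in (a, b):
    n.overhear_grant(receiver_grant, now=10.0)
print(a.may_send(10.002), b.may_send(10.002))  # False True: only B transmits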
BibTeX entry
@inproceedings{ choi2010granting ,
  author    = "Jung Il Choi, Mayank Jain, Maria A. Kazandjieva, and Philip Levis",
  title     = "{Granting Silence to Avoid Wireless Collisions}",
  booktitle = "{Proceedings of the 18th IEEE International Conference on Network Protocols (ICNP)}",
  year      = {2010},
  month     = {October}
}
        
A Case for Evaluating Sensor Network Protocols Concurrently
Omprakash Gnawali, Leonidas Guibas, and Philip Levis
Published in Proceedings of the Fifth ACM International Workshop on Wireless Network Testbeds, Experimental evaluation and Characterization (WiNTECH), September 2010
Abstract
Researchers typically evaluate and compare protocols on testbeds by running them one at a time. This methodology ignores the variation in link qualities and wireless environment across these experiments. These variations can introduce significant noise in the results. Evaluating two protocols concurrently, however, suffers from inter-protocol interactions. These interactions can perturb performance even under very light load, especially for timing-sensitive protocols. We argue that the benefits of running protocols concurrently greatly outweigh the disadvantages. Protocols rarely run in isolation in real networks, and so considering such interactions is valuable. Although the wireless environment is still uncontrolled, concurrent evaluations make comparisons fair and more statistically sound. Through experiments on two testbeds, we make the case for evaluating and comparing low data-rate sensor network protocols by running them concurrently.
BibTeX entry
@inproceedings{ gnawali2010case ,
  author    = "Omprakash Gnawali, Leonidas Guibas, and Philip Levis",
  title     = "{A Case for Evaluating Sensor Network Protocols Concurrently}",
  booktitle = "{Proceedings of the Fifth ACM International Workshop on Wireless Network Testbeds, Experimental evaluation and Characterization (WiNTECH)}",
  year      = {2010},
  month     = {September}
}
        
Whirlpool Routing for Mobility
Jung Woo Lee, Branislav Kusy, Tahir Azim, Philip Levis, and Basem Shihada
Published in Proceedings of the Eleventh ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), September 2010
Abstract
We present the Whirlpool Routing Protocol (WARP), which efficiently routes data to a node moving within a static mesh. The key insight in WARP’s design is that data traffic can use an existing routing gradient to efficiently probe the topology, repair the routing gradient, and communicate these repairs to nearby nodes. Using simulation, controlled testbeds, and real mobility experiments, we find that using the data plane for topology maintenance is highly effective due to the incremental nature of mobility updates. WARP leverages the fact that converging flows at a destination make the destination have the region of highest traffic. We provide a theoretical basis for WARP’s behavior, defining an ‘update area’ in which the topology must adjust when a destination moves. As long as packets arrive at a destination before it moves outside of the update area, WARP can repair the topology using the data plane. Compared to existing protocols, such as DYMO and HYPER, WARP’s packet drop rate is up to 90% lower while sending up to 90% fewer packets.
BibTeX entry
@inproceedings{ lee2010whirlpool ,
  author    = "Jung Woo Lee, Branislav Kusy, Tahir Azim, Philip Levis, and Basem Shihada",
  title     = "{Whirlpool Routing for Mobility}",
  booktitle = "{ Proceedings of the Eleventh ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)}",
  year      = {2010},
  month     = {September}
}
        
The k-factor: Inferring Protocol Performance Using Inter-Link Reception Correlation
Kannan Srinivasan, Mayank Jain, Jung Il Choi, Tahir Azim, Edward S Kim, Philip Levis and Bhaskar Krishnamachari
Published in Proceedings of the 16th Annual International Conference on Mobile Computing and Networking (Mobicom 2010), September 2010
Best paper award, Mobicom 2010
Abstract
This paper explores metrics that capture to what degree packet reception on different links is correlated. Specifically, it explores metrics that shed light on when and why opportunistic routing and network coding protocols perform well (or badly). It presents a new metric, k, that, unlike existing widely used metrics, has no bias based on the packet reception ratios of links. This lack of bias makes k a better predictor of the performance of opportunistic routing and network coding protocols. Comparing Deluge and Rateless Deluge, Deluge’s network coding counterpart, we find that k can predict which of the two is best suited for a given environment. For example, irrespective of the packet reception ratios of the links, if the average k of the link pairs is very high (close to 1.0), then using a protocol that does not code works better than using a network coding protocol. Measuring k on several 802.15.4 and 802.11 testbeds, we find that it varies significantly across network topologies and link layers. k can thus serve as a metric for characterizing what kind of network is present and for deciding which protocols to use in that network.
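For readers who want a feel for the kind of measurement the metric is computed from, here is a small Python sketch that computes a plain Pearson correlation of per-packet reception outcomes on two links. Note that this is only a stand-in: the paper's k metric is defined specifically to remove the bias that packet reception ratios introduce, which a raw correlation does not.

from statistics import mean

def reception_correlation(link_a, link_b):
    """link_a, link_b: lists of 0/1 reception outcomes for the same broadcasts."""
    pa, pb = mean(link_a), mean(link_b)
    cov = mean((a - pa) * (b - pb) for a, b in zip(link_a, link_b))
    var_a = mean((a - pa) ** 2 for a in link_a)
    var_b = mean((b - pb) ** 2 for b in link_b)
    if var_a == 0 or var_b == 0:
        return 0.0
    return cov / (var_a ** 0.5 * var_b ** 0.5)

# Two receivers overhearing the same 12 broadcasts (1 = received).
link_a = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0]
link_b = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0]
print(f"correlation = {reception_correlation(link_a, link_b):.2f}")
# Highly correlated links (values near 1) leave little diversity for
# opportunistic routing or network coding to exploit.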
BibTeX entry
@inproceedings{ srinivasan2010kappa ,
  author    = "Kannan Srinivasan, Mayank Jain, Jung Il Choi, Tahir Azim, Edward S Kim, Philip Levis and Bhaskar Krishnamachari",
  title     = "{The k-factor: Inferring Protocol Performance Using Inter-Link Reception Correlation}",
  booktitle = "{Proceedings of the 16th Annual International Conference on Mobile Computing and Networking (Mobicom 2010)}",
  year      = {2010},
  month     = {September}
}
        
Achieving Single Channel, Full Duplex Wireless Communication
Jung Il Choi, Mayank Jain, Kannan Srinivasan, Philip Levis and Sachin Katti
Published in Proceedings of the 16th Annual International Conference on Mobile Computing and Networking (Mobicom 2010), September 2010
SIGMOBILE 2021 Test of Time Award
Abstract
This paper discusses the design of a single channel full-duplex wireless transceiver. The design uses a combination of RF and baseband techniques to achieve full-duplexing with minimal effect on link reliability. Experiments on real nodes show the full-duplex prototype achieves median performance that is within 8% of an ideal full-duplexing system. This paper presents Antenna Cancellation, a novel technique for self-interference cancellation. In conjunction with existing RF interference cancellation and digital baseband interference cancellation, antenna cancellation achieves the amount of self-interference cancellation required for full-duplex operation. The paper also discusses potential MAC and network gains with full-duplexing. It suggests ways in which a full-duplex system can solve some important problems with existing wireless systems including hidden terminals, loss of throughput due to congestion, and large end-to-end delays.
BibTeX entry
@inproceedings{ choi2010achieving ,
  author    = "Jung Il Choi, Mayank Jain, Kannan Srinivasan, Philip Levis and Sachin Katti",
  title     = "{Achieving Single Channel, Full Duplex Wireless Communication}",
  booktitle = "{Proceedings of the 16th Annual International Conference on Mobile Computing and Networking (Mobicom 2010)}",
  year      = {2010},
  month     = {September}
}
        
Emerson: Scripting for Federated Virtual Worlds
Bhupesh Chandra, Ewen Cheslack-Postava, Behram Mistree, Philip Levis, and David Gay
Published in Proceedings of the 15th International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia, Educational & Serious Games (CGAMES), July 2010
Abstract
We introduce Emerson, a scripting language for virtual worlds that are seamless, scalable, and federated. These worlds present a number of unique challenges. Most importantly, scripts that specify the behavior of the world are distributed across many hosts and users may generate and host scripts. These constraints imply features not common in other systems, such as frequent use of asynchronous message passing for basic interaction between entities, a lack of trust between entities inhabiting the world, and live-editing of entities in the world. Emerson addresses these challenges with three core design concepts: entity-based isolation and concurrency, an event driven model with concise and expressive pattern matching to find handlers for messages, and strong support for example-based programming within the live virtual environment. Our prototype implementation of Emerson, based on the V8 JavaScript engine, demonstrates that a variety of applications can be easily written in Emerson in a live system.
BibTeX entry
@inproceedings{ chandra2010emerson ,
  author    = "Bhupesh Chandra, Ewen Cheslack-Postava, Behram Mistree, Philip Levis, and David Gay",
  title     = "{Emerson: Scripting for Federated Virtual Worlds}",
  booktitle = "{Proceedings of the 15th International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia, Educational & Serious Games (CGAMES)}",
  year      = {2010},
  month     = {July}
}
        
Experiences in Measuring a Human Contact Network for Epidemiology Research
Maria Kazandjieva, Jung Woo Lee, Marcel Salathe, Marcus W. Feldman, James H. Jones, and Philip Levis
Published in Proceedings of the ACM Workshop on Hot Topics in Embedded Networked Sensors (HotEmNets), June 2010
Abstract
This paper discusses our experience in designing and deploying a 994-node sensor network to measure the social contact network of a high school over one typical day. The system aims to capture interactions of human subjects for the study of infectious disease spread. We describe unique challenges posed by a large-scale network that is heavily affected by humans. We present techniques to address challenges such as frequent node reboots and global timestamps. The end result of the deployment is a dataset of 792 traces which can be used to calculate the school population’s contact network and the rough location where interactions occurred.
BibTeX entry
@inproceedings{ kazandjieva2010experiences ,
  author    = "Maria Kazandjieva, Jung Woo Lee, Marcel Salathe, Marcus W. Feldman, James H. Jones, and Philip Levis",
  title     = "{Experiences in Measuring a Human Contact Network for Epidemiology Research}",
  booktitle = "{Proceedings of the ACM Workshop on Hot Topics in Embedded Networked Sensors (HotEmNets)}",
  year      = {2010},
  month     = {June}
}
        
Physically-Based Models of Low-Power Wireless Links using Signal Power Simulation
Tal Rusak and Philip Levis
Published in Computer Networks: The International Journal of Computer and Telecommunications Networking (COMNET), March 2010
Invited paper, Best paper at MSWiM 2008
Abstract
We propose deriving wireless simulation models from experimental traces of radio signal strength. Because experimental traces have holes due to packet losses, we explore two algorithms for filling the gaps in lossy experimental traces. Using completed traces, we apply the closest-fit pattern matching (CPM) algorithm, originally designed for modeling external interference, to model signal strength. We compare the observed link behavior using our models with that of the experimental packet trace. Our approach results in more accurate packet reception ratios (PRR) than current simulation methods, reducing the absolute error in PRR by up to about 0.3 in the experiments we present. We also find that using CPM for signal strength improves simulation of packet burstiness, reducing the Kantorovich-Wasserstein (KW) distance of conditional packet delivery functions (CPDFs) by a factor of about three for intermediate links. Our model reduces the factor of error in the number of parent changes in the standard TinyOS collection protocol (CTP) by an order of magnitude or more as compared to a real signal power trace in two simple test scenarios. We show that our methods are robust to the sampling frequency of the learning deployment and are thus generally applicable for simulating arbitrary applications without a pre-determined packet transmission frequency. These improvements give low-power wireless network simulators a better capability to capture real-world dynamics and edge conditions that protocol designers typically must wait until deployment to detect.
BibTeX entry
@article{ comnet09rusak ,
  author    = "Tal Rusak and Philip Levis",
  title     = "{Physically-Based Models of Low-Power Wireless Links using Signal Power Simulation}",
  booktitle = "{Computer Networks: The International Journal of Computer and Telecommunications Networking (COMNET)}",
  year      = {2010},
  month     = {March}
}
        
An Empirical Study of Low Power Wireless
Kannan Srinivasan, Prabal Dutta, Arsalan Tavakoli, Philip Levis
Published in ACM Transactions on Sensor Networks, February 2010
Abstract
We present empirical measurements of the packet delivery performance of the latest sensor platforms: MicaZ and Telos motes. In this paper, we present observations that have implications for a set of common assumptions protocol designers make while designing sensornet protocols, specifically MAC and network-layer protocols. We first distill these common assumptions into a conceptual model and show how our observations support or dispute these assumptions. We also present case studies of protocols that do not make these assumptions. Understanding the implications of these observations for the conceptual model can improve future protocol designs.
BibTeX entry
@article{ srinivasan2010empirical ,
  author    = "Kannan Srinivasan, Prabal Dutta, Arsalan Tavakoli, Philip Levis",
  title     = "{An Empirical Study of Low Power Wireless}",
  booktitle = "{ACM Transactions on Sensor Networks}",
  year      = {2010},
  month     = {February}
}
        

2009

Surviving Sensor Network Software Faults
Yang Chen, Omprakash Gnawali, Maria Kazandjieva, Philip Levis, and John Regehr
Published in Proceedings of the 22nd ACM Symposium on Operating System Principles (SOSP), November 2009
Abstract
We describe Neutron, a version of the TinyOS operating system that efficiently recovers from memory safety bugs. Where existing schemes reboot an entire node on an error, Neutron’s compiler and runtime extensions divide programs into recovery units and reboot only the faulting unit. The TinyOS kernel itself is a recovery unit: a kernel safety violation appears to applications as the processor being unavailable for 10-20 milliseconds. Neutron further minimizes safety violation cost by supporting precious state that persists across reboots. Application data, time synchronization state, and routing tables can all be declared as precious. Neutron’s reboot sequence conservatively checks that precious state is not the source of a fault before preserving it. Together, recovery units and precious state allow Neutron to reduce a safety violation’s cost to time synchronization by 94% and to a routing protocol by 99.5%. Neutron also protects applications from losing data. Neutron provides this recovery on the very limited resources of a tiny, low-power microcontroller.
BibTeX entry
@inproceedings{ sosp09chen ,
  author    = "Yang Chen, Omprakash Gnawali, Maria Kazandjieva, Philip Levis, and John Regehr",
  title     = "{Surviving Sensor Network Software Faults}",
  booktitle = "{Proceedings of the 22nd ACM Symposium on Operating System Principles (SOSP)}",
  year      = {2009},
  month     = {November}
}
        
Energy Dumpster Diving
Maria Kazandjieva, Brandon Heller, Philip Levis, and Christos Kozyrakis
Published in Second Workshop on Power Aware Computing (HotPower), November 2009
Abstract
Power data alone cannot identify sources of energy inefficiency. However, correlating power data with utilization statistics can reveal where power is used well and where it is wasted. We describe a sensing infrastructure, PowerNet, that monitors power and utilization in a building environment. The deployment includes both wired and wireless sensors and covers offices, networking closets, and server racks. We present PowerNet’s architecture, then generate initial insights from each monitored environment. Analyzing PowerNet data traces identifies contexts where electricity consumption can be reduced without cost, and others which call for rethinking system designs altogether.
BibTeX entry
@inproceedings{ hotpower09kaz ,
  author    = "Maria Kazandjieva, Brandon Heller, Philip Levis, and Christos Kozyrakis",
  title     = "{Energy Dumpster Diving}",
  booktitle = "{Second Workshop on Power Aware Computing (HotPower)}",
  year      = {2009},
  month     = {November}
}
        
Collection Tree Protocol
Omprakash Gnawali, Rodrigo Fonseca, Kyle Jamieson, David Moss, and Philip Levis
Published in Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2009
SenSys 2022 Test of Time Award
Abstract
This paper presents and evaluates two principles for wireless routing protocols. The first is datapath validation: data traffic quickly discovers and fixes routing inconsistencies. The second is adaptive beaconing: extending the Trickle algorithm to routing control traffic reduces route repair latency and sends fewer beacons. We evaluate datapath validation and adaptive beaconing in CTP Noe, a sensor network tree collection protocol. We use 12 different testbeds ranging in size from 20-310 nodes, comprising seven platforms, and six different link layers, on both interference-free and interference-prone channels. In all cases, CTP Noe delivers > 90% of packets. Many experiments achieve 99.9%. Compared to standard beaconing, CTP Noe sends 73% fewer beacons while reducing topology repair latency by 99.8%. Finally, when using low-power link layers, CTP Noe has duty cycles of 3% while supporting aggregate loads of 30 packets/minute.
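The adaptive-beaconing idea can be sketched as a Trickle-style timer that backs off while the topology looks consistent and resets when the data path detects a problem. The Python below is a hedged illustration; the interval bounds and reset condition are placeholders, not CTP Noe's actual constants or full rule set.

import random

class AdaptiveBeaconTimer:
    """Trickle-style beacon timer in the spirit of adaptive beaconing: the
    interval doubles while the topology looks consistent and snaps back to the
    minimum when the data path detects a problem.  The 64 ms / 500 s bounds
    are illustrative, not CTP's actual constants."""

    I_MIN = 0.064      # seconds
    I_MAX = 500.0

    def __init__(self):
        self.interval = self.I_MIN

    def next_beacon_delay(self):
        # Fire at a random point in the second half of the interval (Trickle-style).
        delay = random.uniform(self.interval / 2, self.interval)
        self.interval = min(self.interval * 2, self.I_MAX)  # back off when quiet
        return delay

    def datapath_inconsistency(self):
        # Called by the forwarding path, e.g. when a received data packet
        # advertises a route cost no greater than our own (a possible loop).
        self.interval = self.I_MIN

timer = AdaptiveBeaconTimer()
print([round(timer.next_beacon_delay(), 3) for _ in range(5)])  # growing delays
timer.datapath_inconsistency()                                   # problem detected
print(round(timer.next_beacon_delay(), 3))                       # fast repair beacon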
BibTeX entry
@inproceedings{ sensys09gnawali ,
  author    = "Omprakash Gnawali, Rodrigo Fonseca, Kyle Jamieson, David Moss, and Philip Levis",
  title     = "{Collection Tree Protocol}",
  booktitle = "{Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys)}",
  year      = {2009},
  month     = {November}
}
        
The Case for a Network Protocol Isolation Layer
Jung Il Choi, Maria Kazandjieva, Mayank Jain, and Philip Levis
Published in Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2009
Abstract
Network protocols are typically designed and tested individually. In practice, however, applications use multiple protocols concurrently. This discrepancy can lead to failures from unanticipated interactions between protocols. In this paper, we argue that sensor network communication stacks should have an isolation layer, whose purpose is to make each protocol’s perception of the wireless channel independent of what other protocols are running. We identify two key mechanisms the isolation layer must provide: shared collision avoidance and fair channel allocation. We present an example design of an isolation layer that builds on the existing algorithms of grant-to-send and fair queueing. However, the complexities of wireless make these mechanisms insufficient by themselves. We therefore propose two new mechanisms that address these limitations: channel decay and fair cancellation. Incorporating these new mechanisms reduces the increase in end-to-end delivery cost associated with concurrently operating two protocols by more than 60%. The isolation layer improves median protocol fairness from 0.52 to 0.96 in Jain’s fairness index. Together, these results show that using an isolation layer makes protocols more efficient and robust.
BibTeX entry
@inproceedings{ sensys09choi ,
  author    = "Jung Il Choi, Maria Kazandjieva, Mayank Jain, and Philip Levis",
  title     = "{The Case for a Network Protocol Isolation Layer}",
  booktitle = "{Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys)}",
  year      = {2009},
  month     = {November}
}
        
TOSThreads: Thread-Safe and Non-Invasive Preemption in TinyOS
Kevin Klues, Chieh-Jan Mike Liang, Jeongyeup Paek, Razvan Musaloiu-E., Philip Levis, Andreas Terzis, and Ramesh Govindan
Published in Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2009
Abstract
Many threads packages have been proposed for programming wireless sensor platforms. However, many sensor network operating systems still choose to provide an event driven model, due to efficiency concerns. We present TOSThreads, a threads package for TinyOS that combines the ease of a threaded programming model with the efficiency of an event-based kernel. TOSThreads is backwards compatible with existing TinyOS code, supports an evolvable, thread-safe kernel API, and enables flexible application development through dynamic linking and loading. In TOSThreads, TinyOS code runs at a higher priority than application threads and all kernel operations are invoked only via message passing, never directly, ensuring thread-safety while enabling maximal concurrency. The TOSThreads package is non-invasive; it does not require any large-scale changes to existing TinyOS code. We demonstrate that TOSThreads context switches and system calls introduce an overhead of less than 0.92% and that dynamic linking and loading takes as little as 90 ms for a representative sensing application. We compare different programming models built using TOSThreads, including standard C with blocking system calls and a reimplementation of Tenet. Additionally, we demonstrate that TOSThreads is able to run computationally intensive tasks without adversely affecting the timing of critical OS services.
BibTeX entry
@inproceedings{ sensys09klues ,
  author    = "Kevin Klues, Chieh-Jan Mike Liang, Jeongyeup Paek, Razvan Musaloiu-E., Philip Levis, Andreas Terzis, and Ramesh Govindan",
  title     = "{TOSThreads: Thread-Safe and Non-Invasive Preemption in TinyOS}",
  booktitle = "{Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys)}",
  year      = {2009},
  month     = {November}
}
        
Starburst SSD: An Efficient Protocol for Selective Dissemination
Tahir Azim, Philip Levis and Qasim Mansoor
Published in Proceedings of the IEEE International Conference on Communications (ICC), August 2009
Abstract
We present Starburst, a routing-based protocol designed to efficiently disseminate data items to small subsets within a sensor network. Starburst constructs a routing hierarchy to enable fast, efficient and reliable dissemination to nodes in a sensor network that satisfy data-specific predicates. The protocol is based on the idea that when only a few nodes need an update, it is more efficient and much faster to route to those nodes directly. When every node needs an update, algorithms such as Trickle are more efficient. Starburst therefore dynamically determines what portion of nodes need an update and locally adapts its delivery policy accordingly. We also present dynamic beacon selection algorithms which enable scalability and fault tolerance in Starburst. We have implemented and evaluated Starburst on top of both the BVR and S4 routing protocols with promising results. Our simulations show that Starburst reduces both the transmission cost and latency of existing dissemination protocols by at least 50% for small subsets of nodes, and performs no worse than them for larger sets. Finally, tests on the Motelab testbed validate our simulation results.
BibTeX entry
@inproceedings{ icc09azim ,
  author    = "Tahir Azim, Philip Levis and Qasim Mansoor",
  title     = "{Starburst SSD: An Efficient Protocol for Selective Dissemination}",
  booktitle = "{Proceedings of the IEEE International Conference on Communications (ICC)}",
  year      = {2009},
  month     = {August}
}
        
Apprehending Joule Thieves with Cinder
Stephen Rumble, Ryan Stutsman, Philip Levis, David Mazieres, and Nickolai Zeldovich
Published in Proceedings of the First ACM SIGCOMM Workshop on Networking, Systems, Applications on Mobile Handhelds (MobiHeld), August 2009
Abstract
Energy is the critical limiting resource to mobile computing devices. Correspondingly, an operating system must track, provision, and ration how applications consume energy. The emergence of third-party application stores and marketplaces makes this concern even more pressing. A third-party application must not deny service through excessive, unforeseen energy expenditure, whether accidental or malicious. Previous research has shown promise in tracking energy usage and rationing it to meet device lifetime goals, but such mechanisms and policies are still nascent, especially regarding user interaction. We argue for a new operating system, called Cinder, which builds on top of the HiStar OS. Cinder’s energy awareness is based on hierarchical capacitors and task profiles. We introduce and explore these abstractions, paying particular attention to the ways in which policies could be generated and enforced in a dynamic system.
BibTeX entry
@inproceedings{ mobiheld09rumble ,
  author    = "Stephen Rumble, Ryan Stutsman, Philip Levis, David Mazieres, and Nickolai Zeldovich",
  title     = "{Apprehending Joule Thieves with Cinder}",
  booktitle = "{Proceedings of the First ACM SIGCOMM Workshop on Networking, Systems, Applications on Mobile Handhelds (MobiHeld)}",
  year      = {2009},
  month     = {August}
}
        
Burstiness and scaling in the structure of low-power wireless links
Tal Rusak and Philip Levis
Published in ACM SIGMOBILE Mobile Computing and Communications Review, January 2009
Abstract
We observe that low-power wireless links have non-trivial time-scaling characteristics at both the physical- and link-layers. Packet reception rate (PRR) analysis shows that links are bursty rather than constant, i.e., their reception quality varies greatly from the overall packet reception rate at different times. Furthermore, this variation is seen at many timescales. We provide a possible explanation for burstiness using wavelet analysis of RSSI traces from a variety of wireless links. We show that these traces can be considered as consistent with statistical self-similarity but not with long range dependence. Using the variance in RSSI, we suggest a way to easily characterize when scaling occurs. Finally, while current simulators do not capture scaling, we propose and validate a possible modeling technique for network links that conforms to scaling phenomena.
BibTeX entry
@article{ sigmobile09rusak ,
  author    = "Tal Rusak and Philip Levis",
  title     = "{Burstiness and scaling in the structure of low-power wireless links}",
  booktitle = "{ACM SIGMOBILE Mobile Computing and Communications Review}",
  year      = {2009},
  month     = {January}
}
        

2008

Quanto: Tracking Energy in Networked Embedded Systems
Rodrigo Fonseca, Prabal Dutta, Philip Levis, and Ion Stoica
Published in Proceedings of the Eighth USENIX Symposium on Operating System Design and Implementation (OSDI), December 2008
Abstract
We present Quanto, a network-wide time and energy profiler for embedded network devices. By combining well-defined interfaces for hardware power states, fast high-resolution energy metering, and causal tracking of programmer-defined activities, Quanto can map how energy and time are spent on nodes and across a network. Implementing Quanto on the TinyOS operating system required modifying under 350 lines of code and adding 1275 new lines. We show that being able to take fine-grained energy consumption measurements as fast as reading a counter allows developers to precisely quantify the effects of low-level system implementation decisions, such as using DMA versus direct bus operations, or the effect of external interference on the power draw of a low duty-cycle radio. Finally, Quanto is lightweight enough that it has a minimal effect on system behavior: each sample takes 100 CPU cycles and 12 bytes of RAM.
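The core bookkeeping idea (charge the energy consumed in each power state to the activity that was running) can be illustrated with a few lines of Python. The class, method names, and power numbers below are invented for illustration; Quanto's real interfaces are TinyOS components backed by hardware energy metering.

class EnergyAccountant:
    def __init__(self):
        self.ledger = {}          # activity name -> joules
        self.activity = "idle"
        self.power_w = 0.0
        self.last_t = 0.0

    def _charge(self, now):
        # Charge the energy since the last change to the current activity.
        self.ledger[self.activity] = (self.ledger.get(self.activity, 0.0)
                                      + self.power_w * (now - self.last_t))
        self.last_t = now

    def set_power_state(self, now, watts):
        self._charge(now)         # close out the old power state before switching
        self.power_w = watts

    def set_activity(self, now, name):
        self._charge(now)         # attribute energy so far to the old activity
        self.activity = name

acct = EnergyAccountant()
acct.set_activity(0.00, "sample-sensor")
acct.set_power_state(0.00, 0.003)     # MCU active: 3 mW
acct.set_activity(0.02, "send-packet")
acct.set_power_state(0.02, 0.060)     # radio TX: 60 mW
acct.set_power_state(0.03, 0.000)     # everything back to sleep
acct.set_activity(0.03, "idle")
print({k: round(v * 1e3, 3) for k, v in acct.ledger.items()})  # millijoules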
BibTeX entry
@inproceedings{ osdi08fonseca ,
  author    = "Rodrigo Fonseca, Prabal Dutta, Philip Levis, and Ion Stoica",
  title     = "{Quanto: Tracking Energy in Networked Embedded Systems}",
  booktitle = "{Proceedings of the Eighth USENIX Symposium on Operating System Design and Implementation (OSDI)}",
  year      = {2008},
  month     = {December}
}
        
The β-factor: Measuring Wireless Link Burstiness
Kannan Srinivasan, Maria Kazandjieva, Saatvik Agarwal, and Philip Levis
Published in Proceedings of the 6th ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2008
Abstract
Measuring 802.15.4 reception in three testbeds, we find that most intermediate links are bursty: they shift between poor and good delivery. We present a metric to measure this link burstiness and name it β. We find that link burstiness affects protocol performance and that β can predict the effects. We show that measuring β allows us to reason about how long a protocol should pause after encountering a packet failure to reduce its transmission cost. We find that using β as a guide to setting a single constant in a standard sensor network data collection protocol reduces its average transmission cost by 15%. In addition to data from 802.15.4 testbeds, we examine traces from 802.11b networks and find that β has broader relevance in the wireless domain.
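A simple way to see what "bursty" means in a packet trace is to contrast the delivery probability immediately after a success with the probability immediately after a failure; on an independent (Bernoulli) link the two coincide, while on a bursty link they diverge. The Python below shows only that toy contrast, not the paper's β metric, which is defined via conditional packet delivery functions and the Kantorovich-Wasserstein distance.

import random

def conditional_prr(trace):
    """trace: list of 0/1 reception outcomes for consecutive packets."""
    after_success = [b for a, b in zip(trace, trace[1:]) if a == 1]
    after_failure = [b for a, b in zip(trace, trace[1:]) if a == 0]
    return (sum(after_success) / len(after_success),
            sum(after_failure) / len(after_failure))

random.seed(0)
bursty = ([1] * 20 + [0] * 20) * 4                       # long good and bad runs, PRR = 0.5
independent = [int(random.random() < 0.5) for _ in range(2000)]
for name, trace in [("bursty", bursty), ("independent", independent)]:
    p_s, p_f = conditional_prr(trace)
    print(f"{name:11s} PRR={sum(trace)/len(trace):.2f}  "
          f"P(rx | prev rx)={p_s:.2f}  P(rx | prev loss)={p_f:.2f}")
# A large gap between the two conditional probabilities indicates burstiness,
# suggesting that pausing briefly after a failure can reduce transmission cost.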
BibTeX entry
@inproceedings{ sensys08srinivasan ,
  author    = "Kannan Srinivasan, Maria Kazandjieva, Saatvik Agarwal, and Philip Levis",
  title     = "{The B-factor: Measuring Wireless Link Burstiness}",
  booktitle = "{Proceedings of the 6th ACM Conference on Embedded Networked Sensor Systems (SenSys)}",
  year      = {2008},
  month     = {November}
}
        
Investigating a Physically-Based Signal Power Model for Robust Wireless Link Simulation
Tal Rusak and Philip Levis
Published in Proceedings of the 11th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), October 2008
Best paper award, MSWiM 2008.
Abstract
We propose deriving wireless simulation models from experimental traces of radio signal strength. Because experimental traces have holes due to packet losses, we explore two algorithms for filling the gaps in lossy experimental traces. Using completed traces, we apply the closest-fit pattern matching (CPM) algorithm, originally designed for modeling external interference, to model signal strength. We compare the observed link behavior using our models with that of the experimental packet trace. Our approach results in more accurate packet reception ratios than current simulation methods, reducing the absolute error in PRR by up to about 30%. We also find that using CPM for signal strength improves simulation of packet burstiness, reducing the Kantorovich-Wasserstein (KW) distance of conditional packet delivery functions (CPDFs) by a factor of about 3 for intermediate links. These improvements give TOSSIM, a standard sensor network simulator, a better capability to capture real-world dynamics and edge conditions that protocol designers typically must wait until deployment to detect.
BibTeX entry
@inproceedings{ mswim08rusak ,
  author    = "Tal Rusak and Philip Levis",
  title     = "{Investigating a Physically-Based Signal Power Model for Robust Wireless Link Simulation}",
  booktitle = "{Proceedings of the 11th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM)}",
  year      = {2008},
  month     = {October}
}
        
The Emergence of a Networking Primitive in Wireless Sensor Networks
Philip Levis, Eric Brewer, David Culler, David Gay, Samuel Madden, Neil Patel, Joe Polastre, Scott Shenker, Robert Szewczyk, and Alec Woo
Published in Communications of the ACM, Volume 51, Issue 7, July 2008
Abstract
The wireless sensor network community approached networking abstractions as an open question, allowing answers to emerge with time and experience. The Trickle algorithm has become a basic mechanism used in numerous protocols and systems. Trickle brings nodes to eventual consistency quickly and efficiently while remaining remarkably robust to variations in network density, topology, and dynamics. Instead of flooding a network with packets, Trickle uses a “polite gossip” policy to control send rates so each node hears just enough packets to stay consistent. This simple mechanism enables Trickle to scale to thousand-fold changes in network density, reach consistency in seconds, and require only a few bytes of state yet impose a maintenance cost of a few sends an hour. Originally designed for disseminating new code, experience has shown Trickle to have much broader applicability, including route maintenance and neighbor discovery. This paper provides an overview of the research challenges wireless sensor networks face, describes the Trickle algorithm, and outlines several ways it is used today.
BibTeX entry
@article{ cacm08levis ,
  author    = "Philip Levis, Eric Brewer, David Culler, David Gay, Samuel Madden, Neil Patel, Joe Polastre, Scott Shenker, Robert Szewczyk, and Alec Woo",
  title     = "{The Emergence of a Networking Primitive in Wireless Sensor Networks}",
  booktitle = "{Communications of the ACM, Volume 51, Issue 7}",
  year      = {2008},
  month     = {July}
}
        
Data Discovery and Dissemination with DIP
Kaisen Lin and Philip Levis
Published in Proceedings of the Seventh International Conference on Information Processing in Wireless Sensor Networks (IPSN), April 2008
Abstract
We present DIP, a data discovery and dissemination protocol for wireless networks. Prior approaches, such as Trickle or SPIN, have overheads that scale linearly with the number of data items. For T items, DIP can identify new items with O(log(T)) packets while maintaining a O(1) detection latency. To achieve this performance in a wide spectrum of network configurations, DIP uses a hybrid approach of randomized scanning and tree-based directed searches. By dynamically selecting which of the two algorithms to use, DIP outperforms both in terms of transmissions and speed. Simulation and testbed experiments show that DIP sends 20-60% fewer packets than existing protocols and can be 200% faster, while only requiring O(log(log(T))) additional state per data item.
BibTeX entry
@inproceedings{ ipsn08lin ,
  author    = "Kaisen Lin and Philip Levis",
  title     = "{Data Discovery and Dissemination with DIP}",
  booktitle = "{Proceedings of the Seventh International Conference on Information Processing in Wireless Sensor Networks (IPSN)}",
  year      = {2008},
  month     = {April}
}
        

2007

Visibility: A New Metric for Protocol Design
Megan Wachs, Jung Il Choi, Jung Woo Lee, Kannan Srinivasan, Zhe Chen, Mayank Jain and Philip Levis
Published in Proceedings of the Fifth ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2007
Abstract
This paper proposes a new sensornet protocol design goal: visibility. Visibility into behaviors at the network level will simplify debugging and ease the development process. We argue that increasing visibility is the responsibility of the network protocols themselves, and not solely the responsibility of existing debugging tools. We describe a quantitative visibility metric to evaluate and compare protocols, where visibility is defined as the energy cost of diagnosing the cause of a behavior in a protocol. The design and evaluation of the Pull Collection Protocol, a novel multi-hop collection protocol, is an example of how to design for visibility without sacrificing throughput or node-level fairness. We also describe our optimizations for an existing protocol, Deluge, to increase its visibility and efficiency.
BibTeX entry
@inproceedings{ sensys07wachs ,
  author    = "Megan Wachs, Jung Il Choi, Jung Woo Lee, Kannan Srinivasan, Zhe Chen, Mayank Jain and Philip Levis",
  title     = "{Visibility: A New Metric for Protocol Design}",
  booktitle = "{Proceedings of the Fifth ACM Conference on Embedded Networked Sensor Systems (SenSys)}",
  year      = {2007},
  month     = {November}
}
        
Four-Bit Wireless Link Estimation
Rodrigo Fonseca, Omprakash Gnawali, Kyle Jamieson, and Philip Levis
Published in Proceedings of the Sixth Workshop on Hot Topics in Networks (HotNets VI), November 2007
Abstract
We consider the problem of estimating link quality in an ad-hoc wireless mesh. We argue that estimating links well requires combining information from the network, link, and physical layers. We propose narrow, protocol-independent interfaces for the layers, which in total provide four bits of information: 1 from the physical layer, 1 from the link layer, and 2 from the network layer. We present a link estimator design with these interfaces that reduces packet delivery costs by up to 44% over current approaches and maintains a 99% delivery ratio over large, multihop testbeds.
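To make the division of labor concrete, here is a hedged Python sketch of a neighbor table driven by the four bits: the white bit (physical layer) and compare bit (network layer) gate insertion, the ack bit (link layer) feeds a smoothed delivery estimate, and the pin bit protects routing-critical neighbors from eviction. The table size, smoothing constant, and method names are invented; this is not the TinyOS four-bit estimator code.

class FourBitEntry:
    def __init__(self):
        self.quality = 0.0     # smoothed ack success (1.0 = perfect)
        self.pinned = False    # network layer: do not evict (e.g. current parent)

class FourBitEstimator:
    ALPHA = 0.9                # illustrative smoothing constant

    def __init__(self, table_size=10):
        self.table = {}
        self.size = table_size

    def on_ack(self, neighbor, acked):
        """Link layer: update the delivery estimate from the ack bit."""
        e = self.table.setdefault(neighbor, FourBitEntry())
        e.quality = self.ALPHA * e.quality + (1 - self.ALPHA) * (1.0 if acked else 0.0)

    def on_beacon(self, neighbor, white_bit, compare_bit):
        """Physical/network layers: consider inserting a new neighbor only if the
        channel was clean (white bit) and routing says it is worth having (compare bit)."""
        if neighbor in self.table or not (white_bit and compare_bit):
            return
        if len(self.table) < self.size:
            self.table[neighbor] = FourBitEntry()
        else:
            victim = min((n for n, e in self.table.items() if not e.pinned),
                         key=lambda n: self.table[n].quality, default=None)
            if victim is not None:
                del self.table[victim]
                self.table[neighbor] = FourBitEntry()

    def pin(self, neighbor):
        """Network layer: the pin bit keeps a routing-critical neighbor in the table."""
        if neighbor in self.table:
            self.table[neighbor].pinned = True

est = FourBitEstimator(table_size=2)
est.on_beacon("n1", white_bit=True, compare_bit=True)
for acked in (True, True, False, True):
    est.on_ack("n1", acked)
est.pin("n1")
print(round(est.table["n1"].quality, 3))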
BibTeX entry
@inproceedings{ hotnets07fonseca ,
  author    = "Rodrigo Fonseca, Omprakash Gnawali, Kyle Jamieson, and Philip Levis",
  title     = "{Four-Bit Wireless Link Estimation}",
  booktitle = "{Proceedings of the Sixth Workshop on Hot Topics in Networks (HotNets VI)}",
  year      = {2007},
  month     = {November}
}
        
Integrating Concurrency Control and Energy Management in Device Drivers
Kevin Klues, Vlado Handziski, Chenyang Lu, Adam Wolisz, David Culler, David Gay, and Philip Levis
Published in Proceedings of the 21st ACM Symposium on Operating System Principles (SOSP), October 2007
Abstract
Energy management is a critical concern in wireless sensornets. Despite its importance, sensor network operating systems today provide minimal energy management support, requiring applications to explicitly manage system power states. To address this problem, we present ICEM, a device driver architecture that enables simple, energy efficient wireless sensornet applications. The key insight behind ICEM is that the most valuable information an application can give the OS for energy management is its concurrency. Using ICEM, a low-rate sensing application requires only a single line of energy management code and has an efficiency within 1.6% of a hand-tuned implementation. ICEM’s effectiveness questions the assumption that sensornet applications must be responsible for all power management and sensornets cannot have a standardized OS with a simple API.
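The flavor of the approach (the OS infers when a device can be powered down from the application's outstanding requests rather than from explicit power calls) can be sketched in a few lines of Python. The class and behavior below are a simplified illustration, not ICEM's split-phase nesC power locks.

class SharedSensorDriver:
    """Powers the device up for the first pending request and back down only
    when no client has work outstanding; clients never touch power states."""

    def __init__(self):
        self.pending = []
        self.powered = False

    def request_sample(self, client):
        self.pending.append(client)
        if not self.powered:
            self.powered = True
            print("device: power on")

    def sample_done(self):
        client = self.pending.pop(0)
        print(f"deliver reading to {client}")
        if not self.pending:          # last outstanding request: shut the device down
            self.powered = False
            print("device: power off")

drv = SharedSensorDriver()
drv.request_sample("logger")      # powers the device on
drv.request_sample("alarm")       # piggybacks on the same power-on
drv.sample_done()
drv.sample_done()                 # queue empty, device powered off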
BibTeX entry
@inproceedings{ sosp07klues ,
  author    = "Kevin Klues, Vlado Handziski, Chenyang Lu, Adam Wolisz, David Culler, David Gay, and Philip Levis",
  title     = "{Integrating Concurrency Control and Energy Management in Device Drivers}",
  booktitle = "{Proceedings of the 21st ACM Symposium on Operating System Principles (SOSP)}",
  year      = {2007},
  month     = {October}
}
        
Opening the Sensornet Black Box
Jung Il Choi, Jung Woo Lee, Megan Wachs, and Philip Levis
Published in Proceedings of the International Workshop on Wireless Sensornet Architecture (WWSNA), April 2007
Abstract
We argue that the principal cause of sensornet deployment and development difficulty is an inability to observe a network’s internal operation. We further argue that this lack of visibility is due to the activity and resource constraints enforced by limited energy. We present the Mote Network (MNet) architecture, which elevates visibility to be its dominant design principle. We propose a quantitative metric for network visibility and explain why network isolation and fairness are critical concerns. We describe the Fair Waiting Protocol (FWP), MNet’s single-hop protocol and show how its fairness and isolation can improve throughput and efficiency. We present the Pull Collection Protocol as a case study in designing multihop protocols in the architecture.
BibTeX entry
@inproceedings{ wwsna07choi ,
  author    = "Jung Il Choi, Jung Woo Lee, Megan Wachs, and Philip Levis",
  title     = "{Opening the Sensornet Black Box}",
  booktitle = "{Proceedings of the International Workshop on Wireless Sensornet Architecture (WWSNA)}",
  year      = {2007},
  month     = {April}
}
        
Improving Wireless Simulation Through Noise Modeling
HyungJune Lee, Alberto Cerpa, and Philip Levis
Published in Proceedings of the Sixth International Conference on Information Processing in Wireless Sensor Networks (IPSN), April 2007
Abstract
We propose modeling environmental noise in order to efficiently and accurately simulate wireless packet delivery. We measure noise traces in many different environments and propose three algorithms to simulate noise from these traces. We evaluate applying these algorithms to signal-to-noise curves in comparison to existing simulation approaches used in EmStar, TOSSIM, and ns2. We measure simulation accuracy using the Kantorovich-Wasserstein distance on conditional packet delivery functions. We demonstrate that using a closest-fit pattern matching (CPM) noise model can capture complex temporal dynamics which existing approaches do not, increasing packet simulation fidelity by a factor of 2 for good links, a factor of 1.5 for bad links, and a factor of 5 for intermediate links. As our models are derived from real-world traces, they can be generated for many different environments.
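The closest-fit pattern matching idea can be sketched as a conditional table learned from a measured noise trace: record, for each recent window of readings, the distribution of the next reading, then sample synthetic noise from that table. The Python below is an illustrative reduction with invented parameters (k = 3, whole-dBm buckets, a synthetic "measured" trace); it also omits the closest-fit lookup the real algorithm uses for unseen histories.

import random
from collections import defaultdict, Counter

def learn_cpm(trace, k=3):
    """Map each k-reading history to a histogram of the reading that followed it."""
    table = defaultdict(Counter)
    for i in range(len(trace) - k):
        table[tuple(trace[i:i + k])][trace[i + k]] += 1
    return table

def generate_cpm(table, seed_history, length, k=3):
    out = list(seed_history)
    for _ in range(length):
        counts = table.get(tuple(out[-k:]))
        if counts is None:                    # unseen history: crude fallback
            counts = random.choice(list(table.values()))
        values, weights = zip(*counts.items())
        out.append(random.choices(values, weights=weights)[0])
    return out[k:]

random.seed(1)
# Toy "measured" noise: a quiet floor near -98 dBm with occasional bursts of
# interference near -80 dBm (e.g. 802.11 traffic).
measured = []
for _ in range(2000):
    if measured and measured[-1] > -90:
        measured.append(-80 + random.randint(-2, 2) if random.random() < 0.7 else -98)
    else:
        measured.append(-80 if random.random() < 0.05 else -98 + random.randint(-1, 1))

model = learn_cpm(measured, k=3)
print(generate_cpm(model, measured[:3], 20, k=3))   # 20 synthetic noise readings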
BibTeX entry
@inproceedings{ ipsn07lee ,
  author    = "HyungJune Lee, Alberto Cerpa, and Philip Levis",
  title     = "{Improving Wireless Simulation Through Noise Modeling}",
  booktitle = "{Proceedings of the Sixth International Conference on Information Processing in Wireless Sensor Networks (IPSN)}",
  year      = {2007},
  month     = {April}
}
        

2006

Some Implications of Low-Power Wireless to IP Routing
Kannan Srinivasan, Prabal Dutta, Arsalan Tavakoli, and Philip Levis
Published in Proceedings of the Fifth Workshop on Hot Topics in Networks (HotNets V), November 2006
Abstract
We examine and outline challenges in IPv6 routing over low-power wireless personal area networks (PANs). We present empirical measurements and analysis of an increasingly popular PAN link layer, 802.15.4. We show that over short periods 802.15.4 exhibits bimodal connectivity, but over longer periods has many intermediate links. We quantify how synchronous acknowledgments affect common low-power routing metrics, such as ETX. We identify metrics for detecting modal changes in link quality. We explore how these behaviors affect IP routing and IPv6 requirements, such as route selection and maintenance, sub-IP fragmentation and assembly, and packet scheduling.
BibTeX entry
@inproceedings{ hotnets06srinivasan ,
  author    = "Kannan Srinivasan, Prabal Dutta, Arsalan Tavakoli, and Philip Levis",
  title     = "{Some Implications of Low-Power Wireless to IP Routing}",
  booktitle = "{Proceedings of the Fifth Workshop on Hot Topics in Networks (HotNets V)}",
  year      = {2006},
  month     = {November}
}
        
RSSI Is Under-Appreciated
Kannan Srinivasan and Philip Levis
Published in Proceedings of the Third Workshop on Embedded Networked Sensors (EmNets), May 2006
Abstract
There is a general belief in the Wireless Sensor Network (WSN) community that the received signal strength indicator (RSSI) is a bad estimator of link quality. This belief stems from the many asymmetric links observed with older radios such as the CC1000 and TR1000. Newer radios based on the IEEE 802.15.4 standard, such as the CC2420, provide another parameter, the link quality indicator (LQI), which is believed to be a better indicator than RSSI. There has so far been no extensive evaluation of the CC2420 to validate this claim. We have conducted such an evaluation, and our preliminary results indicate that the RSSI of a given link varies very little over time. Our results also indicate that when the RSSI is above the sensitivity threshold (about -87 dBm), the packet reception rate (PRR) is at least 85%. Around this sensitivity threshold, however, the PRR is not well correlated with RSSI, possibly due to variations in local phenomena such as noise. LQI, on the other hand, varies over a wider range over time for a given link; however, the mean LQI computed over many packets has a better correlation with PRR.
BibTeX entry
@inproceedings{ emnets2006srinivasan ,
  author    = "Kannan Srinivasan and Philip Levis",
  title     = "{RSSI Is Under-Appreciated}",
  booktitle = "{Proceedings of the Third Workshop on Embedded Networked Sensors (EmNets)}",
  year      = {2006},
  month     = {May}
}
        

2005

A Unifying Link Abstraction for Wireless Sensor Networks
Joseph Polastre, Jonathan Hui, Philip Levis, Jerry Zhao, David Culler, Scott Shenker, and Ion Stoica
Published in Proceedings of the Third ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2005
Abstract
Recent technological advances and the continuing quest for greater efficiency have led to an explosion of link and network protocols for wireless sensor networks. These protocols embody very different assumptions about network stack composition and, as such, have limited interoperability. It has been suggested that, in principle, wireless sensor networks would benefit from a unifying abstraction (or ‘narrow waist’ in architectural terms), and that this abstraction should be closer to the link level than the network level. This paper takes that vague principle and turns it into practice, by proposing a specific unifying sensornet protocol (SP) that provides shared neighbor management and a message pool. The two goals of a unifying abstraction are generality and efficiency: it should be capable of running over a broad range of link-layer technologies and supporting a wide variety of network protocols, and doing so should not lead to a significant loss of efficiency. To investigate the extent to which SP meets these goals, we implemented SP (in TinyOS) on top of two very different radio technologies: B-MAC on mica2 and IEEE 802.15.4 on Telos. We also built a variety of network protocols on SP, including examples of collection routing, dissemination, and aggregation. Measurements show that these protocols do not sacrifice performance through the use of our SP abstraction.
BibTeX entry
@inproceedings{ sensys05polastre ,
  author    = "Joseph Polastre, Jonathan Hui, Philip Levis, Jerry Zhao, David Culler, Scott Shenker, and Ion Stoica",
  title     = "{A Unifying Link Abstraction for Wireless Sensor Networks}",
  booktitle = "{Proceedings of the Third ACM Conference on Embedded Networked Sensor Systems (SenSys)}",
  year      = {2005},
  month     = {November}
}
        
Towards a Sensor Network Architecture: Lowering the Waistline
David Culler, Prabal Dutta, Cheng Tien Eee, Rodrigo Fonseca, Jonathan Hui, Philip Levis, Joseph Polastre, Scott Shenker, Ion Stoica, Gilman Tolle, and Jerry Zhao
Published in Proceedings of the Tenth Workshop on Hot Topics in Operating Systems (HotOS X), September 2005
Abstract
It is the central tenet of this paper that the primary factor limiting research progress in sensornets today is not any specific technical challenge (though many remain, and deserve much further study) but is instead the lack of an overall sensor network architecture. Such an architecture would identify the essential services and their conceptual relationships. Such a decomposition would make it possible to compose components in a manner that promotes interoperability, transcends generations of technology, and allows innovation.
BibTeX entry
@inproceedings{ hotos05culler ,
  author    = "David Culler, Prabal Dutta, Cheng Tien Eee, Rodrigo Fonseca, Jonathan Hui, Philip Levis, Joseph Polastre, Scott Shenker, Ion Stoica, Gilman Tolle, and Jerry Zhao",
  title     = "{Towards a Sensor Network Architecture: Lowering the Waistline}",
  booktitle = "{Proceedings of the Tenth Workshop on Hot Topics in Operating Systems (HotOS X)}",
  year      = {2005},
  month     = {September}
}
        
Software Design Patterns for TinyOS
David Gay, Philip Levis, and David Culler
Published in Proceedings of the ACM SIGPLAN/SIGBED 2005 Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES), June 2005
Abstract
We present design patterns used by software components in the TinyOS sensor network operating system. They differ significantly from traditional software design patterns due to the constraints of sensor networks and TinyOS’s focus on static allocation and whole-program composition. We describe how nesC has evolved to support these design patterns by including a few simple language primitives and optimisations.
BibTeX entry
@inproceedings{ lctes05gay ,
  author    = "David Gay, Philip Levis, and David Culler",
  title     = "{Software Design Patterns for TinyOS}",
  booktitle = "{Proceedings of the ACM SIGPLAN/SIGBED 2005 Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES)}",
  year      = {2005},
  month     = {June}
}
        
Active Sensor Networks
Philip Levis, David Gay, and David Culler
Published in Proceedings of the Second USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI), March 2005
Abstract
We propose using application specific virtual machines (ASVMs) to reprogram deployed wireless sensor networks. ASVMs provide a way for a user to define an application-specific boundary between virtual code and the VM engine. This allows programs to be very concise (tens to hundreds of bytes), making program installation fast and inexpensive. Additionally, concise programs interpret few instructions, imposing very little interpretation overhead. We evaluate ASVMs against current proposals for network programming runtimes and show that ASVMs are more energy efficient by as much as 20%. We also evaluate ASVMs against hand built TinyOS applications and show that while interpretation imposes a significant execution overhead, the low duty cycles of realistic applications make the actual cost effectively unmeasurable.
BibTeX entry
@inproceedings{ nsdi05levis ,
  author    = "Philip Levis, David Gay, and David Culler",
  title     = "{Active Sensor Networks}",
  booktitle = "{Proceedings of the Second USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI)}",
  year      = {2005},
  month     = {March}
}
        

2004

The Firecracker Protocol
Philip Levis and David Culler
Published in Proceedings of the 11th ACM SIGOPS European Workshop, September 2004
Abstract
We propose the Firecracker protocol for data dissemination in wireless sensor networks. Firecracker uses a combination of routing and broadcasts to rapidly deliver a piece of data to every node in a network. To start dissemination, the data source sends data to distant points in the network. Once the data reaches its destinations, broadcast-based dissemination begins along the paths, like a string of firecrackers. By using an initial routing phase, Firecracker can disseminate at a faster rate than scalable broadcasts while sending fewer packets. The selection of points to route to has a large effect on performance, indicating possible requirements for any-to-any routing protocols in wireless sensor networks.
BibTeX entry
@inproceedings{ sigopseuro04levis ,
  author    = "Philip Levis and David Culler",
  title     = "{The Firecracker Protocol}",
  booktitle = "{Proceedings of the 11th ACM SIGOPS European Workshop}",
  year      = {2004},
  month     = {September}
}
        
Trickle: A Self-Regulating Algorithm for Code Propagation and Maintenance in Wireless Sensor Networks
Philip Levis, Neil Patel, David Culler, and Scott Shenker
Published in Proceedings of the First USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI), March 2004
Best paper award, NSDI 2004; Test of time award, NSDI 2004 (awarded 2014)
Abstract
We present Trickle, an algorithm for propagating and maintaining code updates in wireless sensor networks. Borrowing techniques from the epidemic/gossip, scalable multicast, and wireless broadcast literature, Trickle uses a ‘polite gossip’ policy, where motes periodically broadcast a code summary to local neighbors but stay quiet if they have recently heard a summary identical to theirs. When a mote hears an older summary than its own, it broadcasts an update. Instead of flooding a network with packets, the algorithm controls the send rate so each mote hears a small trickle of packets, just enough to stay up to date. We show that with this simple mechanism, Trickle can scale to thousand-fold changes in network density, propagate new code in the order of seconds, and impose a maintenance cost on the order of a few sends an hour.
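For concreteness, the sketch below captures the core of the ‘polite gossip’ rule in Python: broadcast at a random point in each interval unless enough identical summaries have already been heard, double the interval when things are consistent, and shrink it when an inconsistency appears. The constants are illustrative defaults, not the parameters evaluated in the paper.

import random

class Trickle:
    """A sketch of the polite-gossip timer.  i_min, i_max, and the redundancy
    constant k are illustrative defaults, not the paper's parameters."""

    def __init__(self, i_min=1.0, i_max=60.0, k=1):
        self.i_min, self.i_max, self.k = i_min, i_max, k
        self.interval = i_min
        self.counter = 0
        self._schedule()

    def _schedule(self):
        self.counter = 0
        # Broadcast at a random time in the second half of the current interval.
        self.t = random.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        # A neighbor advertised the same code summary: count it toward suppression.
        self.counter += 1

    def hear_inconsistent(self):
        # A neighbor has different (older or newer) code: reset to react quickly.
        if self.interval > self.i_min:
            self.interval = self.i_min
            self._schedule()

    def timer_fired(self):
        # Stay quiet if we already heard at least k identical summaries this interval.
        send = self.counter < self.k
        self.interval = min(self.interval * 2, self.i_max)
        self._schedule()
        return send

random.seed(0)
t = Trickle()
t.hear_consistent()          # a neighbor already gossiped the same summary
print(t.timer_fired())       # False: suppressed, and the interval doubles
t.hear_inconsistent()        # someone is out of date
print(t.timer_fired())       # True: broadcast to bring them up to date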
BibTeX entry
@inproceedings{ nsdi04levistrickle ,
  author    = "Philip Levis, Neil Patel, David Culler, and Scott Shenker",
  title     = "{Trickle: A Self-Regulating Algorithm for Code Propagation and Maintenance in Wireless Sensor Networks}",
  booktitle = "{Proceedings of the First USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI)}",
  year      = {2004},
  month     = {March}
}
        
The Emergence of Networking Abstractions and Techniques in TinyOS
Philip Levis, Sam Madden, David Gay, Joe Polastre, Robert Szewczyk, Alec Woo, Eric Brewer, and David Culler
Published in Proceedings of the First USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI), March 2004
Abstract
The constraints of sensor networks, an emerging area of network research, require new approaches in system design. We study the evolution of abstractions and techniques in TinyOS, a popular sensor network operating system. Examining CVS repositories of several research institutions that use TinyOS, we trace three areas of development: single-hop networking, multi-hop networking, and network services. We note common techniques and draw conclusions on the emerging abstractions as well as the novel constraints that have shaped them.
BibTeX entry
@inproceedings{ nsdi04levistinyos ,
  author    = "Philip Levis, Sam Madden, David Gay, Joe Polastre, Robert Szewczyk, Alec Woo, Eric Brewer, and David Culler",
  title     = "{The Emergence of Networking Abstractions and Techniques in TinyOS}",
  booktitle = "{Proceedings of the First USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI)}",
  year      = {2004},
  month     = {March}
}
        

2003

TOSSIM: Accurate and Scalable Simulation of Entire TinyOS Applications
Philip Levis, Nelson Lee, Matt Welsh, and David Culler
Published in Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2003
Abstract
Accurate and scalable simulation has historically been a key enabling factor for systems research. We present TOSSIM, a simulator for TinyOS wireless sensor networks. By exploiting the sensor network domain and TinyOS’s design, TOSSIM can capture network behavior at a high fidelity while scaling to thousands of nodes. By using a probabilistic bit error model for the network, TOSSIM remains simple and efficient, but expressive enough to capture a wide range of network interactions. Using TOSSIM, we have discovered several bugs in TinyOS, ranging from network bit-level MAC interactions to queue overflows in an ad-hoc routing protocol. Through these and other evaluations, we show that detailed, scalable sensor network simulation is possible.
BibTeX entry
@inproceedings{ sensys03levis ,
  author    = "Philip Levis, Nelson Lee, Matt Welsh, and David Culler",
  title     = "{TOSSIM: Accurate and Scalable Simulation of Entire TinyOS Applications}",
  booktitle = "{Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys)}",
  year      = {2003},
  month     = {November}
}
        
The nesC Language: A Holistic Approach to Network Embedded Systems
David Gay, Philip Levis, Robert von Behren, Matt Welsh, Eric Brewer, and David Culler
Published in Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation (PLDI), June 2003
Most influential PLDI 2003 paper (awarded 2013)
Abstract
We present nesC, a programming language for networked embedded systems that represent a new design space for application developers. An example of a networked embedded system is a sensor network, which consists of (potentially) thousands of tiny, low-power ‘motes,’ each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. nesC’s contribution is to support the special needs of this domain by exposing a programming model that incorporates event-driven execution, a flexible concurrency model, and component-oriented application design. Restrictions on the programming model allow the nesC compiler to perform whole-program analyses, including data-race detection (which improves reliability) and aggressive function inlining (which reduces resource consumption). nesC has been used to implement TinyOS, a small operating system for sensor networks, as well as several significant sensor applications. nesC and TinyOS have been adopted by a large number of sensor network research groups, and our experience and evaluation of the language shows that it is effective at supporting the complex, concurrent programming style demanded by this new class of deeply networked systems.
BibTeX entry
@inproceedings{ pldi03gay ,
  author    = "David Gay, Philip Levis, Robert von Behren, Matt Welsh, Eric Brewer, and David Culler",
  title     = "{The nesC Language: A Holistic Approach to Network Embedded Systems}",
  booktitle = "{Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation (PLDI)}",
  year      = {2003},
  month     = {June}
}
        

2002

Mate: A Tiny Virtual Machine for Sensor Networks
Philip Levis and David Culler
Published in Proceedings of the 10th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS X), October 2002
Abstract
Composed of tens of thousands of tiny devices with very limited resources (“motes”), sensor networks are subject to novel systems problems and constraints. The large number of motes in a sensor network means that there will often be some failing nodes; networks must be easy to repopulate. Often there is no feasible method to recharge motes, so energy is a precious resource. Once deployed, a network must be reprogrammable although physically unreachable, and this reprogramming can be a significant energy cost. We present Maté, a tiny communication-centric virtual machine designed for sensor networks. Maté’s high-level interface allows complex programs to be very short (under 100 bytes), reducing the energy cost of transmitting new programs. Code is broken up into small capsules of 24 instructions, which can self-replicate through the network. Packet sending and reception capsules enable the deployment of ad-hoc routing and data aggregation algorithms. Maté’s concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.
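A toy stack-based interpreter conveys the spirit of a capsule: a handful of one-byte instructions executed against a small operand stack, cheap enough to ship over the radio. The opcode names, encoding, and capsule-length check below are invented for illustration; Maté's actual instruction set and capsule format are defined in the paper.

PUSH, ADD, SENSE, SEND, HALT = range(5)

def run_capsule(capsule, sensor_value=42, max_len=24):
    # Capsules are kept tiny so they are cheap to transmit and self-replicate.
    assert len(capsule) <= max_len, "capsule too large"
    stack, pc = [], 0
    while pc < len(capsule):
        op = capsule[pc]
        if op == PUSH:
            pc += 1
            stack.append(capsule[pc])         # immediate operand follows the opcode
        elif op == ADD:
            stack.append(stack.pop() + stack.pop())
        elif op == SENSE:
            stack.append(sensor_value)        # read a (mock) sensor
        elif op == SEND:
            print("radio send:", stack.pop()) # hand the value to the network layer
        elif op == HALT:
            break
        pc += 1

# "Sample the sensor, add an offset of 7, transmit the result" in six bytes.
run_capsule([SENSE, PUSH, 7, ADD, SEND, HALT])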
BibTeX entry
@inproceedings{ asplos02levis ,
  author    = "Philip Levis and David Culler",
  title     = "{Mate: A Tiny Virtual Machine for Sensor Networks}",
  booktitle = "{Proceedings of the 10th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS X)}",
  year      = {2002},
  month     = {October}
}
        

2000

Policies for Dynamic Clock Scheduling
Dirk Grunwald, Philip Levis, Charles B. Morrey III, and Michael Neufeld
Published in Proceedings of the 4th Symposium on Operating System Design and Implementation (OSDI), October 2000
Abstract
Pocket computers are beginning to emerge that provide sufficient processing capability and memory capacity to run traditional desktop applications and operating systems on them. The increasing demand placed on these systems by software is competing against the continuing trend in the design of low-power microprocessors towards increasing the amount of computation per unit of energy. Consequently, in spite of advances in low-power circuit design, the microprocessor is likely to continue to account for a significant portion of the overall power consumption of pocket computers. This paper investigates clock scaling algorithms on the Itsy, an experimental pocket computer that runs a complete, functional multitasking operating system (a version of Linux 2.0.30). We implemented a number of clock scaling algorithms that are used to adjust the processor speed to reduce the power used by the processor. After testing these algorithms, we conclude that currently proposed algorithms consistently fail to achieve their goal of saving power while not causing user applications to change their interactive behavior.
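As a flavor of what an interval-based clock scaling policy looks like, the Python sketch below raises or lowers the clock based on the previous interval's CPU utilization. The thresholds, step size, and frequency range are invented for illustration and are not the specific policies evaluated in the paper.

def next_frequency(freq, utilization, f_min=60.0, f_max=200.0, step=20.0,
                   raise_at=0.9, lower_at=0.5):
    """utilization: fraction of the last interval the CPU was busy."""
    if utilization > raise_at:          # falling behind: speed up
        return min(freq + step, f_max)
    if utilization < lower_at:          # mostly idle: slow down to save power
        return max(freq - step, f_min)
    return freq                         # within the comfort band: hold

freq = 200.0
for util in [0.95, 0.2, 0.1, 0.1, 0.6, 0.97]:
    freq = next_frequency(freq, util)
    print(f"util={util:.2f} -> clock {freq:5.1f} MHz")
# Policies like this save power when workloads are steady, but as the paper
# observes, they can also degrade interactive behavior under bursty loads.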
BibTeX entry
@inproceedings{ osdi00grunwald ,
  author    = "Dirk Grunwald, Philip Levis, Charles B. Morrey III, and Michael Neufeld",
  title     = "{Policies for Dynamic Clock Scheduling}",
  booktitle = "{Proceedings of the 4th Symposium on Operating System Design and Implementation (OSDI)}",
  year      = {2000},
  month     = {October}
}