Net2WG/Notes/20061109

Meeting notes for November 9, 2006

Attendance: Rodrigo, Phil, Arsalan, Om
Notes: Rodrigo

Rodrigo: How was the feedback at Sensys?

Phil: It’s hard to say precisely. It was the end of a long day and there were not many comments. It will be more interesting in the next couple of weeks after the release, once people start using it. My feeling is that this might end up in an arms race. In Matt Welsh’s group, Geoff has done good work on a collection protocol that’s very reliable.

Om: They’ve also released the Volcano code.

P: Matt wants to know how to check stuff into the main tree. The current T2 policy is that working groups are responsible for parts of the tree.

What happened a lot in T1 was that the policy was much more open: people would check code into the main tree and then not keep it up to date with the rest of the system. We are trying to avoid this.

Om: Documentation is also different, and much better now.

Om: Joe wasn’t there. Is that also going to be an arms race?

P: Probably not. With Matt, this is a beneficial arms race. If lots of people start tweaking the collection protocols to get the best performance, that’s a good sign.

R: One good thing we tried to do is to define the protocol and interfaces. People could use them, or make a good case for why they are not sufficient or adequate. This makes for better maintainability.

Phil: Right, there are possibilities for new code. The authors could demonstrate that (1) it solves a different significant problem, (2) it has different tradeoffs, or (3) it has the same tradeoffs and uses the same protocol, but is an improvement.

P: I was at the IETF yesterday, in the IP over low-power meshes working group (http://www.ietf.org/html.charters/6lowpan-charter.html). Interesting stuff going on there.

P: One related OSDI question: where is the code for the Berkeley NLA?

Arsalan: NLA will be released in the next couple of weeks, for T1.

Phil: It would be good to talk about directions from here on.

Rodrigo: Henri said he will stop taking part in the group, as he graduated and is moving away from TinyOS. I will probably be stepping down as chair in the next couple of months, as I am getting closer to finishing my program at Berkeley and my direction is diverging from the sensor network networking area.

R: One interesting direction is to see how other protocols fit with our current decomposition of the collection protocol. It is very similar in spirit to the decomposition in the NLA OSDI paper, and I think it is worth pursuing.

The other direction we also started is the enhanced link layer abstraction.

Arsalan: I got interesting feedback on that at Sensys. Kevin Klues had a poster on unified power management and showed interest; UVA likewise. I will make the TEP public on devel.

Phil: Send it to the interested groups first, and then open it up for broader discussion.

Phil: A student here worked on a protocol that sits between the data link and network layers, but it’s a different idea. It has a different packet interface than SP: much simpler, with an additional field, and it provides a grant-to-send abstraction. It may sit above SP in the future.

Phil: There’s also one remaining issue with collection: congestion. When we sketched the protocol there was a congestion bit. Per the protocol spec, if a node detects it is congested and its queue starts to fill up, it sets the bit. If other nodes hear this, they must not send to this node until they hear the bit cleared. This works if congestion is brief, but when it starts happening a lot, the cycle will just repeat.
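
A minimal sketch of that congestion-bit rule in C, to make the mechanics concrete. The threshold, header layout, and function names are assumptions for illustration, not the actual CTP implementation.

```c
#include <stdbool.h>
#include <stdint.h>

#define CONGESTION_THRESHOLD 9   /* assumed value for "queue starts to fill up" */

typedef struct {
    uint8_t congested;           /* the congestion bit carried in each header */
    /* ... other header fields ... */
} header_t;

static uint8_t queue_occupancy;  /* packets currently in our forwarding queue */
static bool    parent_congested; /* last bit overheard from our next hop */

/* Sender side: mark every outgoing header while our own queue is filling up. */
static void fill_header(header_t *h) {
    h->congested = (queue_occupancy >= CONGESTION_THRESHOLD);
}

/* Snooping side: remember the bit heard from the node we forward to. */
static void snoop_header(const header_t *h) {
    parent_congested = (h->congested != 0);
}

/* Forwarding side: hold traffic until the next hop clears its bit again. */
static bool may_send_to_parent(void) {
    return !parent_congested;
}
```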

R: Another related issue is one of the very few causes of packet drops in CTP: queue overflows. This could be solved with network layer acknowledgments. There would be two levels of acks: the link layer ack means "I understood your packet", while the network layer ack means "I can take your packet (in my queue)". Then you could eliminate this type of drop, at the cost of possibly halting traffic in parts of the network or in the source during disconnections.
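
A hypothetical sketch of that two-level acknowledgment, again in C. The enum, helper functions, and names are assumptions used to show the split between the two ack meanings; they are not an existing TinyOS or CTP API.

```c
#include <stdbool.h>

typedef enum { NET_ACCEPTED, NET_QUEUE_FULL } net_ack_t;

/* Assumed helpers, not real CTP calls. */
extern bool queue_enqueue(void *packet);   /* receiver: try to queue the packet */
extern void release_packet(void *packet);  /* sender: done with this packet */
extern void retry_later(void *packet);     /* sender: keep it and resend later */

/* Receiver: the radio has already link-layer-acked the frame ("I understood
 * your packet"); this decides the network-layer answer ("I can take it"). */
net_ack_t network_receive(void *packet) {
    return queue_enqueue(packet) ? NET_ACCEPTED : NET_QUEUE_FULL;
}

/* Sender: on NET_QUEUE_FULL the packet is held rather than dropped, which
 * eliminates queue-overflow drops but can stall traffic upstream of the
 * full queue, as noted above. */
void handle_network_ack(void *packet, net_ack_t ack) {
    if (ack == NET_ACCEPTED)
        release_packet(packet);
    else
        retry_later(packet);
}
```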

P: Then you have the problem of doubling the ack traffic, that's a very different design decision.

R: It's different, but not so much. It's something to think about in the future. A counterargument may be that some use cases value newer packets much more than older ones.

O: If you reduce the number of retries you can shift this balance.

P: Or you could do drop-head instead of drop-tail for the queues.
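
A small C sketch contrasting the two queue policies on a fixed-size ring buffer; the layout and names are illustrative, not CTP's actual queue component. Drop-head matches the point above about use cases that value newer packets over older ones.

```c
#include <stdbool.h>
#include <stdint.h>

#define QLEN 12   /* assumed queue capacity */

typedef struct {
    void    *slots[QLEN];
    uint8_t  head;    /* index of the oldest packet */
    uint8_t  count;   /* packets currently queued */
} queue_t;

/* Drop-tail: when the queue is full, the newest packet is the one lost. */
bool enqueue_drop_tail(queue_t *q, void *pkt) {
    if (q->count == QLEN)
        return false;                       /* reject the incoming packet */
    q->slots[(q->head + q->count) % QLEN] = pkt;
    q->count++;
    return true;
}

/* Drop-head: when the queue is full, evict the oldest packet to make room,
 * favoring fresh data over stale data. */
bool enqueue_drop_head(queue_t *q, void *pkt) {
    if (q->count == QLEN) {
        q->head = (q->head + 1) % QLEN;     /* discard the oldest packet */
        q->count--;
    }
    q->slots[(q->head + q->count) % QLEN] = pkt;
    q->count++;
    return true;
}
```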

R: There are subtle interactions and positive feedback loops going on. It depends on the depth of the queue, the rate mismatch, how fast the sources are sending, and how fast the signal propagates.

P: There are two ways we can go about this. 1. Rate limiting: we say that CTP is intended to provide good throughput, and then we have to worry about this problem. 2. We say no, it’s intended for very low-rate traffic, and then we shouldn’t worry too much.

R: In the reference implementation we could provide a modular way to plug different policies based on the requirements of the application. The other question is whether the congestion bit is a sufficient mechanism.
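
One way the reference implementation could expose that modularity, sketched in C with a hypothetical policy interface; none of these names come from the actual CTP code. An application that values freshness could plug in a more aggressive policy pair without touching the forwarding engine.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hook points the forwarding engine would consult. */
typedef struct {
    /* Should the outgoing header carry the congestion bit right now? */
    bool (*set_congestion_bit)(uint8_t queue_occupancy, uint8_t queue_capacity);
    /* May we transmit to a next hop currently advertising congestion? */
    bool (*may_send)(bool parent_congested);
} congestion_policy_t;

/* Example policy: set the bit above half occupancy, never send into congestion. */
static bool above_half(uint8_t occ, uint8_t cap) { return occ * 2 >= cap; }
static bool back_off(bool parent_congested)      { return !parent_congested; }

static const congestion_policy_t conservative_policy = {
    .set_congestion_bit = above_half,
    .may_send           = back_off,
};
```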

Phil: From the protocol standpoint, we have the bit, but implementations are free to set it however and whenever they want.

R: Fair enough. There’s a lot of room for different implementations of the same protocol.

Om: We could test and see how the queue size behaves. I think the mechanism will work if we have reasonably large queues.

P: With the current storage interface we could have very long, non-volatile queues. But even with Telos we could test with very long queues.

P: Next week: read the IFRC, Fusion, and EE papers.

Om: I’ll also test CTP with a queue size of 60, put some packet metadata in a queue, and see how the lengths behave.


R: Sounds great.