Video@Scale: The New Demands are the Old Challenges – Power and Packet Drops

The challenge of creating seamless video experiences on demand has been a long-sought and long-fought dream. Facebook’s video@scale brings specialists together to wrestle with the complexity of end-to-end technical tricks and user-level satisfaction, two concerns that are often at odds.

Lynne Jolitz at Facebook Video at Scale. (L. Jolitz).

The morning was a blitz of corner cases and tightly wound insights: the minutiae and complexity of video transmission, detecting dropped frames across different browsers’ decoders, up/down scaling of video quality on the fly, codec switching, video stream sizing, I-frame synchronization between different video codecs, which codec to use, network versus browser issues (which often look the same), and working around browser video correction.

But the two items I am going to focus on are the old hard chestnuts: power and packet drops.

Continue reading Video@Scale: The New Demands are the Old Challenges – Power and Packet Drops

The Security Frustrations of Apple’s “Personal” Personal Computer: Device Access, Two Factor ID, and 386BSD Role-Based Security

Image: smsglobal.com

Recently, a Facebook friend lamented that he could not access his iCloud mail from a device bound to his wife’s iCloud account. He also expressed frustration with the security mechanism Apple uses to control access to devices – in particular, two-factor authentication. His annoyance was honest and palpable, but the path to redemption unclear.

Tech people are often blind to the blockers that non-technical people face, because we’re used to working around the problem. Some of these blockers are poorly architected solutions. Others are poorly communicated ones. All in all, the security frustrations of Apple’s “personal” personal computer are compelling, real, and significant – and they do merit discussion.

Continue reading The Security Frustrations of Apple’s “Personal” Personal Computer: Device Access, Two Factor ID, and 386BSD Role-Based Security

Boom and Bust in IP Address Space Land

Dave Reed on e2e notes a very interesting item – ARIN has announced that migration to IPv6 is now mandatory for allocation of contiguous IP address space. “I still remember debating variable length addressing and source routing in the 1970’s TCP design days, and being told that 4 Thousand Million addresses would be enough for the life of the Internet,” Dave crows. But is this an accurate “read”? (I know Dave won’t mind the pun, as he’s heard it many times before.)

As I commented on e2e, I remember that debate as well. But the whole genesis of why 32 bits was good enough was an (underjustified) view of how networks would be used, rather than an understanding of how sparsely addresses would actually be allocated. Everybody knows hash tables work best mostly empty – the same may be true of address blocks, because they are allocated in routable units. But how does this really work?
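
To make the hash-table analogy concrete, here is a toy sketch in Python of what happens when every network is rounded up to a routable power-of-two block. The networks and host counts are made up purely for illustration, not real allocation data.

```python
import math

# Hypothetical networks and how many hosts each actually needs.
# Purely illustrative figures -- not real allocation data.
networks = {
    "campus A": 3_000,
    "regional ISP pool": 1_100_000,
    "branch office": 120,
    "lab network": 35_000,
}

total_allocated = 0
total_used = 0
for name, hosts in networks.items():
    # Round up to the nearest power-of-two block, CIDR-style
    # (ignoring network/broadcast addresses to keep the sketch simple).
    prefix_len = 32 - math.ceil(math.log2(hosts))
    block_size = 2 ** (32 - prefix_len)
    total_allocated += block_size
    total_used += hosts
    print(f"{name:18s} needs {hosts:>9,} hosts -> /{prefix_len} block of {block_size:,}")

print(f"\nutilization of allocated space: {total_used / total_allocated:.0%}")
```

Even at a single level, the rounding leaves a good chunk of each block empty, and real allocations nest hierarchically (registry to provider to customer), so the emptiness compounds. That is why a 32-bit space is exhausted long before anything like four billion hosts exist.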

The Minutiae of Getting a Flash Video to Play Right Every Time

OK – you’ve got it all together. The video is ready to download and play, it’s tested, we’ve watched it, the Flash works (or QuickTime, or whatever vintage you prefer). We watch customers watch it over and over. Things are going great. Then someone somewhere tries to download it over the web, and it fails. The refresh button is hit over and over, it continues to fail, and that disappointed person just gives up. Why didn’t it play?

Looking over the logs today provides a window into just how difficult it is to provide 24/7 perfect video streaming to any type of computer anywhere. These problems vex the biggest and smallest vendors alike, because they are rooted in architectural flaws so fundamental that the occasional failure is impossible to guard against.

Why Keep Alive “KeepAlive”?

Keepalive in TCP has always been controversial, since it blurs the difference between a dead connection and a moribund one – or, as Vadim Antonov puts it, “the knowledge that connectivity is lost”. Advocates, in contrast, believe that the effort of reclaiming resources needn’t be expended, and hence, as David Reed puts it, “there is no reason why a TCP connection should EVER time out merely because no one sends a packet over it.” Antonov expresses a very narrow affirmation of the value of retained state, which is not necessarily useful in the time required, while Reed expresses the reductionist philosophy that no effort should be expended without justification, even if the basis of the repudiation is inherently faulty. But is either truly getting to the heart of the issue? Is it truly important to cleave to the historical constraints of the Internet’s past philosophical design? Or should we consider the question in the context of what is relevant to the Internet today?

I don’t ask these questions frivolously, but with serious intent. While I am a student of history, and find the study of heritage very valuable in technical work (even patents require a love of reasoning over time), we should occasionally look at the world not as we would like it to be, or how it was, but how it is. Thus, I suspect the question should actually be “What is the point of having a long-lived TCP session with keepalive in the 21st century?” Is this not a security hole ripe for exploitation in an age of ubiquitous bandwidth and zombie machines? Is not the lack of security and credentials in the modern Internet the bane of both service providers and users? This is the heart of the issue.
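
For reference, here is a minimal sketch of what “keepalive” actually means to an application today. The timing values are arbitrary examples, and the TCP_KEEP* knobs shown are Linux-specific.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask the stack to probe this connection when it sits idle. Without this,
# an idle connection holds state indefinitely (Reed's position); with it,
# the stack eventually declares connectivity lost (Antonov's position).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning; the values are arbitrary examples.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)  # idle seconds before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)  # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up
```

The traditional default is two hours of silence before the first probe – a long time for unauthenticated connection state to linger in an age of zombie machines.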

TCP Protocols and Unfair Advantage – Being the Ultimate Pig on the Bandwidth Block

A little item from the Hamilton Institute at the National University of Ireland, Maynooth, on testing proposed TCP protocols for fairness in the stack.

According to Douglas Leith:
“In summary, we find that both Scalable-TCP and FAST-TCP consistently exhibit substantial unfairness, even when competing flows share identical network path characteristics. Scalable-TCP, HS-TCP, FAST-TCP and BIC-TCP all exhibit much greater RTT unfairness than does standard TCP, to the extent that long RTT flows may be completely starved of bandwidth. Scalable-TCP, HS-TCP and BIC-TCP all exhibit slow convergence and sustained unfairness following changes in network conditions such as the start-up of a new flow. FAST-TCP exhibits complex convergence behaviour.”

What does this mean? Simple. In order to get more for themselves, these approaches starve everyone else – the “pig at the trough” mentality. But what might work for a single flow in a carefully contrived test rig can immediately start to backfire once more complex “real world” flows are introduced.
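
To see why, here is a toy synchronized-loss model (a back-of-the-envelope sketch, not Leith’s test rig) comparing standard TCP’s additive-increase/multiplicative-decrease rule with a Scalable-TCP-style multiplicative-increase rule, for two flows that start out unequal on the same bottleneck. The capacity and starting windows are arbitrary.

```python
# Toy synchronized-loss model: two flows share one bottleneck, and both see
# a loss whenever their combined window exceeds capacity. Parameters are
# illustrative, not taken from the Hamilton Institute experiments.
CAPACITY = 1000.0  # packets the pipe can hold

def run(increase, decrease, rounds=2000, start=(50.0, 500.0)):
    w1, w2 = start
    for _ in range(rounds):
        w1, w2 = increase(w1), increase(w2)
        if w1 + w2 > CAPACITY:                    # synchronized loss event
            w1, w2 = decrease(w1), decrease(w2)
    return w1, w2

# Standard TCP: additive increase, multiplicative decrease (AIMD).
aimd = run(lambda w: w + 1.0, lambda w: w / 2.0)

# Scalable-TCP-style: multiplicative increase and decrease (MIMD).
mimd = run(lambda w: w * 1.01, lambda w: w * 0.875)

print("AIMD windows:", [round(w) for w in aimd])  # the two flows drift toward an even split
print("MIMD windows:", [round(w) for w in mimd])  # the early leader keeps its 10:1 advantage
```

Under AIMD the gap between the flows is halved at every loss event, so they converge toward fairness; under the multiplicative rule the ratio between them never changes, so whoever grabbed the bandwidth first keeps it – a stylized version of the sustained unfairness Leith describes.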

There have been concerns for years that these approaches could wreak havoc on the Internet if not carefully vetted. I’m pleased to see that someone is actually testing these proposed protocols for unfairness and their impact on network traffic. After 30 years of tuning the Internet, taking a hammer to it protocol-wise isn’t just bad science – it’s bad global policy.

Jitter, Jitter Everywhere, But Nary a Packet to Keep

I was looking over the end-to-end discussion on measuring jitter on voice calls on the backbone and came across this little gem: “Jitter – or more precise delay variance – is not important. Only the distribution is relevant”. Such dismissive treatment of a serious subject is all too commonplace, and it misses the point the other researcher was making.

The critic assumes “fixed playout buffer lengths (e.g. from 20 to 200ms)” to calculate overall delay. But do these buffer lengths take into account compressed versus uncompressed audio? If not, the model is faulty right there. The author admits his approach is “problematic”, but then assumes that “real-time adaptive playout scheduling” would be better. At that point the measurement mechanism becomes part of the measurement, and you end up measuring the adaptation instead of the unmodified delay – which doesn’t help the researcher looking at jitter and delay measurements for voice.

But there is a more fundamental disconnect here between our voice-jitter researcher and his jitter-is-irrelevant nemesis: jitter does matter for some communications – it just depends on what problem is being solved. And it is careful definition of the problem that leads to dialogue.
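
For reference, the way voice work usually quantifies this is with a smoothed interarrival jitter estimator along the lines of RTP (RFC 3550), computed passively from timestamps rather than from the playout buffer. The sample timings below are made up for illustration.

```python
def interarrival_jitter(send_times, recv_times):
    """Smoothed interarrival jitter in the spirit of RFC 3550:
    J += (|D| - J) / 16, where D is the change in one-way transit time
    between consecutive packets (any fixed clock offset cancels out of D)."""
    j = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            j += (d - j) / 16.0
        prev_transit = transit
    return j

# Made-up example: packets sent every 20 ms, arriving with variable delay (ms).
send = [0, 20, 40, 60, 80, 100]
recv = [55, 77, 99, 114, 140, 163]
print(f"estimated jitter: {interarrival_jitter(send, recv):.2f} ms")
```

Because the estimator uses only differences in per-packet transit time, it sidesteps the problem of the adaptive playout machinery becoming part of the measurement.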

ACM, Turing and the Internet

Vint Cerf and Bob Kahn got a well-deserved dinner and party in San Francisco courtesy of the ACM. A collection of Internet “who’s who”, lots of wine and speeches, and most importantly, their coveted Turing Award. The award was announced several months ago. As Vint noted in an email reply to the Internet Society a month ago (try to take notes during an awards dinner – it can’t be done), “What is most satisfying about the Turing Award is that it is the first time this award has recognized contributions to computer networking. Bob and I hope that this will open the award to recognize many others who have contributed so much to the development and continued evolution and use of the Internet.”

So congratulations to Vint and Bob. I’m sure we are all very pleased that they have been honored with the Turing Award this year. They both deserve it – their work has changed our world!

Checksums and Rethinking Old Optimization Habits

More war stories on checksum failures over the years. Craig Partridge recalls “some part of BBN” experienced an NFS checksum issue and that it “took a while for the corruption of the filesystem to become visible…errors are infrequent enough that NIC (or switch, or whatever, …) testing doesn’t typically catch them. So bit rot is slow and subtle — and when you find it, much has been trashed (especially if one ignores early warning signs, such as large compilations occasionally failing with unrepeatable loading / compilation errors)”. Craig is absolutely right – this was exactly the case with the Sunbox project I described as well as the datacenter mirror example (see Checksums – Don’t Leave the Server Without Them). Too much damage too late. As implicit dependence on reliability increases, the value of checksums becomes very clear.

With the early deep space probes, engineers learned the hard way the importance of always providing enough redundancy and error correction, because a single bit error might be the one that destroys the spacecraft’s ability to communicate. At least one spacecraft was lost for precisely this reason: reliability was optimized out to gain a slightly greater data rate, a corruption error got through, and the spacecraft was gone (and this has happened more than once).

We’re reaching a point where we have to think seriously about whether an “optimization” is really valuable, since, as Craig notes, you may not notice the problem until it is too late. In this age of ubiquitous computing, with plentiful processing power, memory, and network bandwidth, we should be focused on increased reliability and integrity, but the old habits of a more parsimonious age die hard.

Another recent example of ignoring the value of checksums is reflected in the ‘fasttrack’ problems of incorrect toll billing. But that’s another story…

Checksums – Don’t Leave the Server Without Them

Lloyd Wood, commenting on an e2e post recently, was asked why UDP has an end-to-end checksum on the packet at all, since it doesn’t do retransmissions, and whether it should be turned off. Lloyd noted UDP “could have the checksum turned off, which proved disastrous for a number of applications, subtly corrupted filing systems which didn’t have higher-level end2end checks”. Lloyd is exactly right here. But why would someone turn off UDP checksums in the first place? It doesn’t seem to make sense, does it?

It is often the case that people turn off UDP checksums to “buy” more performance by relying on the CRC of the Ethernet frame. So this is not a stupid question – it’s a very smart question, and a lot of smart people get fooled by the simplicity of the tradeoff. The performance motive for turning off checksums can now be obviated through intelligent NIC technologies like SiliconTCP and TOE, which calculate the checksum as the packet is being received.
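
For context, the check being thrown away is just the 16-bit one’s-complement sum of RFC 768/RFC 1071. A minimal sketch (omitting the pseudo-header a real UDP checksum also covers) shows both how cheap it is and what a single flipped bit does to it; the payload here is invented for illustration.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum in the style of RFC 1071. A real UDP
    checksum also covers a pseudo-header of IP addresses, protocol, and
    length -- omitted here to keep the sketch minimal."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return ~total & 0xFFFF

payload = bytearray(b"datacenter video frame 0001")
good = internet_checksum(bytes(payload))

payload[5] ^= 0x01                                   # flip one bit, as a flaky switch might
bad = internet_checksum(bytes(payload))

print(f"checksum before: 0x{good:04x}, after bit flip: 0x{bad:04x}")
```

A link-level CRC only protects the frame on one hop; corruption introduced inside a switch, router, or host memory can travel onward under a freshly computed, perfectly valid CRC, and only an end-to-end check like this one catches it.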

This is a surprisingly common problem in datacenters – sometimes the problem is a switch, sometimes a configuration error, sometimes a programming error in the application, and so forth. I most recently experienced it with an overheated Ethernet switch passing video on an internal network. Since we don’t have things like SiliconTCP in commodity switches yet, check that switch if you’re having problems. In the meantime, here are a few little datacenter horror stories to put in your pocket.