The Portability of LTFS

Effortlessly moving vast amounts of data

LTFS 3 Future: Segment 4


Transcript

Jay:
One of the areas that you spoke about was media and entertainment, and that's an area where Iron Mountain has a pretty big business, so I have some familiarity with it. One interesting thing about media and entertainment - and I wonder whether this is a trend where they're ahead of the market - is that LTFS on LTO has become a pretty standard format for them. You hear a lot about cameras recording LTFS onto LTO tape, and then the tape is taken from the camera to the editing system on LTO. Maybe it's pulled to a hard drive for the editing, but then the final cuts are done and stored on an LTFS-formatted LTO tape, and then it's kept on LTFS for an extended period of time so that if someone wants to go back and get the old cuts or the old data, it's all there. It's an interesting combination of the standard nature of it, but another element is the portability: the ability to capture large amounts of data on the tape and then move it from one system to another, where the camera manufacturer may be totally different from the video editing manufacturer, yet they all work together transparently because LTFS is the underlying technology. That is the nature of LTFS, but I wonder about the portability aspect of it. What kind of new business models or new benefits could it provide by allowing you to move very, very large volumes of data physically in a pretty easy way?

Michael:
That's a space that we're still trying to work out in the LTFS community. It's certainly true that before LTFS, the only real way to shift large quantities of data around was to write the data to a hard drive and then FedEx the hard drive across the country or across the world. In doing so you're really abusing hard drives, because they're not designed for that kind of use. The M&E customers that I've spoken to about this scenario frequently talked about writing three or four copies of the data to different hard drives and then sending the hard drives off, either in the same FedEx box or in separate FedEx boxes, just because they expected very high failure rates from shipping the data around. Hard drives are very sensitive mechanical instruments that ultimately don't like being dropped, don't like being abused, and don't like sitting on a shelf unpowered and then being powered up - which is the other part of the scenario of shipping data across the country to an editing bay, pulling the data off, working with it, and then putting the hard drive on the shelf and hoping it will still be a good device when you choose to access the data sometime in the future.

Part of the problem there is that a hard drive has the head and the reading mechanism baked into a single enclosure, so if there's a problem with the reading mechanism, your data's locked inside the drive. Tape media completely breaks that paradigm by building the reading mechanism and the head into the tape drive itself; the cartridge is just the media. The other important aspect that makes tape more reliable for data storage, in terms of device failure, is that a hard drive is very carefully engineered: you have the spinning platter, and the head is designed to fly very, very close to that platter but must never, ever touch it. In fact, modern hard drives typically have small wings on either side of the head which effectively ride on the small cushion of air that forms when the platter spins. Those wings keep the separation.

In contrast, a tape drive is designed so that the tape head is in constant contact with the tape media. The tape media is a flexible plastic, unlike the rigid metal components of a hard drive, so there's some tolerance for contact. Some failure cases are eliminated because contact is expected in tape drives, and if a tape drive fails, you can always eject the cartridge, put it in another tape drive, and still access the data. In terms of shipping cartridges, an LTO cartridge is still relatively fragile; you don't want to be dropping an LTO cartridge from shoulder height onto concrete, because you're likely to break something. But if you put a tape cartridge in one of the handy storage boxes that come with every cartridge, it's a lot more robust. So we have a number of people who are using LTO and LTFS as a data exchange format. It turns out that a sneaker net - a box of five or ten cartridges FedExed across the country - is much faster and much cheaper than any IP network that you could set up to transfer data over long distances. There's the potential for LTFS to bring back sneaker net as a viable way of shipping data around.

Jay:
It's funny you mention sneaker net and the benefit of portability. I had actually blogged about this a while ago, comparing the throughput of a truck carrying tapes - an Iron Mountain truck of course - versus a carrier pigeon, versus the cloud. What I found in the blog post, funnily enough, is that if you do the math, a truck can deliver something like twenty thousand gigabits per second of throughput, primarily because of the high density of tapes you can put in a truck and the relative speed of the trip. Compare that to an average WAN connection and you can see that, to your point, there's a lot of throughput in using tapes. In fact, people don't realize how much more efficient it can be: if you want to transfer amounts of data that exceed the typical bandwidth of most company WANs, you can actually use tape and get what equates to a pretty good bandwidth number, particularly compared to what you might get through other mediums.
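As a rough sketch of that "bandwidth of a truck" math (the cartridge capacity, cartridge count, and trip time below are hypothetical round numbers, not figures from the conversation), a Python back-of-envelope lands in the range Jay mentions:

    # "Bandwidth of a truck" back-of-envelope; all inputs are illustrative assumptions.
    CARTRIDGE_TB = 2.5               # assumed native capacity per LTO cartridge, in TB
    CARTRIDGES_PER_TRUCK = 100_000   # assumed number of cartridges packed into one truck
    TRIP_HOURS = 24                  # assumed door-to-door delivery time

    payload_bits = CARTRIDGES_PER_TRUCK * CARTRIDGE_TB * 1e12 * 8
    trip_seconds = TRIP_HOURS * 3600

    throughput_gbps = payload_bits / trip_seconds / 1e9
    print(f"Effective throughput: {throughput_gbps:,.0f} Gbit/s")
    # With these assumptions the truck delivers roughly 23,000 Gbit/s, on the order
    # of the twenty thousand gigabits per second mentioned above.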

Michael:
Exactly. At some of our tradeshows we've spoken to videographers in the sporting space who talked about having multiple camera locations at each game they cover, but only providing a primary feed back to their home office during the game. That primary feed would switch between cameras, so only the current live camera was being fed back to home base. They also talked about capturing the video from all of the cameras at the game in the recording truck, but then having to sit there for some period of time after the game to transmit all of the other unused video angles so that they had a complete capture of the game back at their home office.

LTFS introduces the possibility that, rather than sitting in your truck transmitting for however long it takes to send all of those additional camera angles, you could simply write them to a set of LTFS cartridges and ship the cartridges back to home base. At some point, the latency of shipping the cartridges back becomes significantly shorter than transmitting over a WAN or a dedicated satellite link.

Jay:
Yeah, and I think that's the interesting point. At some point it really becomes a question of math: how much bandwidth do you have, and how long a window do you have to send it? You can calculate that. If you think about it, if you're doing it with tape, they're probably copying a tape, putting it in an overnight envelope, and 24 hours later it would be at their central place - say in Connecticut, since I think ESPN's in Connecticut.

Well, there's a math question then: is it better to spend 24 hours and ship the tape overnight, or is it better to spend 24 hours shipping the data over the WAN? But you know what, you may never be able to achieve 24 hours if there's a lot of data. Your example of video data is a really good one, because in today's hi-def world, moving at some point to 4K and higher and higher resolutions, all of that chews up massive amounts of storage space and, hence, massive amounts of bandwidth if you tried to purely replicate it - which is interesting. That's part of why people think about LTFS with LTO as a way to transfer these amounts of data from the field to central editing areas, or central depots, or whatever it might be. It's sort of an interesting thing that I think people forget about, because data volumes are growing so rapidly and people sometimes think in the context of their own environment, like "Oh, I have a few hundred gigabytes, no problem, I can use the cloud to back it up, because I do." But what happens if you're generating terabytes and terabytes and terabytes, and how much bandwidth can you really have?
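A minimal sketch of that crossover calculation, with hypothetical numbers for the data size and WAN speed (neither figure comes from the conversation):

    # Overnight shipping vs. WAN transfer; data size and link speed are assumptions.
    DATA_TB = 10        # assumed size of the footage to move, in terabytes
    WAN_MBPS = 100      # assumed usable WAN bandwidth, in megabits per second
    SHIP_HOURS = 24     # overnight courier delivery window

    wan_hours = (DATA_TB * 1e12 * 8) / (WAN_MBPS * 1e6) / 3600
    print(f"WAN transfer: {wan_hours:,.0f} hours  vs  overnight shipment: {SHIP_HOURS} hours")
    # At these numbers the WAN needs roughly 222 hours (over nine days), so the
    # cartridges win; the crossover point depends entirely on data size and bandwidth.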

Michael:
You're exactly right, and the other thing to factor in there is the actual monetary cost. If it's going to take you 24 hours to FedEx a cartridge across the country, you have a flat rate there of less than a hundred bucks for a handful of cartridges, compared to multiple thousands, or tens of thousands, of dollars to maintain a high-speed data link that would allow you to do that transfer. If the data's going to take 24 hours to get there either way, and it's going to cost you a hundred dollars versus tens of thousands of dollars, then there's a pretty compelling argument there.
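To put rough numbers on that cost argument (the dollar figures and data size below are illustrative assumptions in the spirit of Michael's example, not quoted prices):

    # Cost framing for the same 24-hour delivery window; all figures are hypothetical.
    SHIPPING_COST = 100.0            # flat overnight rate for a small box of cartridges
    LINK_COST_PER_MONTH = 20_000.0   # assumed monthly cost of a dedicated high-speed link
    DATA_TB = 10                     # assumed size of the transfer, in terabytes

    print(f"Shipping: ${SHIPPING_COST / DATA_TB:,.0f} per TB for this one transfer")
    print(f"Dedicated link: ${LINK_COST_PER_MONTH:,.0f} per month, used or not")
    # If both paths land the data in about a day, the per-transfer economics favor the
    # cartridges unless the link is already paid for and kept busy with other traffic.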

Jay:
There is, and the other thing I would add is the nature of physics. The further away you go, the more latency you get, and higher latency can indirectly impact your throughput as well. So you can have the fastest line, but maybe you need an even faster one because of the increased latency you pick up going across a long distance - whether across the country, across the globe, or wherever it might go. Whereas, to your point, with shipping overnight - and a hundred dollars seems like a lot of money to me, but that's me thinking about it personally; compared to ten thousand it really isn't, and for a big company it's certainly very reasonable - guess what, you don't really have any latency issues per se anymore.
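The latency effect Jay describes can be made concrete with the classic single-stream TCP bound - throughput is at most the window size divided by the round-trip time; the window size and RTT used here are illustrative assumptions:

    # Why distance alone can cap throughput: a single TCP stream moves at most one
    # receive window of data per round trip. Window size and RTT are assumptions.
    WINDOW_BYTES = 64 * 1024    # a classic 64 KiB TCP window with no window scaling
    RTT_SECONDS = 0.150         # ~150 ms round trip on an intercontinental path

    max_throughput_mbps = WINDOW_BYTES * 8 / RTT_SECONDS / 1e6
    print(f"Max single-stream throughput: {max_throughput_mbps:.1f} Mbit/s")
    # About 3.5 Mbit/s regardless of how fast the underlying line is; bigger windows
    # and parallel streams help, but distance keeps working against the transfer.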

Michael:
Exactly.

The Speakers:

Michael Richmond
Jay Livens