Can a station broadcast mobile DTV and not OTA ATSC?

TonyT

DTVUSA Member
#1
My local news ran a story last night about Mobile DTV offering a variety of programming in the future. I thought to myself: how is ATSC M/H going to be any different from what's on TV now? Then a thought came to me: will there be mobile DTV broadcasters that aren't broadcasting OTA ATSC signals? Could it provide a cheaper way for entry-level or startup companies to enter the broadcast arena?
 

Trip

Moderator, Webmaster of: Rabbit Ears
Staff member
#2
The ATSC-M/H signal is a part of an existing ATSC signal. It's impossible to transmit M/H without the original signal.

Plus, the FCC requires at least one standard SD video feed.

- Trip
 

Piggie

Super Moderator
#3
The ATSC-M/H signal is a part of an existing ATSC signal. It's impossible to transmit M/H without the original signal.

Plus, the FCC requires at least one standard SD video feed.

- Trip
One could do that and still leave room for five mobile channels, without making the local channel too bad. One of the mobile channels would probably be a network, which they could also put on the main feed.

But moreover, on what channel? Most places are close to running out of channels.
 

Trip

Moderator, Webmaster of: Rabbit Ears
Staff member
#4
The ATSC-M/H spec only allows for 8 groups, which is 7.33 Mbps, dedicated to mobile. (Not sure why they're crippling it like that.) That's probably 3 or 4 video streams.

- Trip
 

jsmar

DTVUSA Jr. Member
#5
The ATSC-M/H spec only allows for 8 groups, which is 7.33 Mbps, dedicated to mobile. (Not sure why they're crippling it like that.) That's probably 3 or 4 video streams.
- Trip
I believe this is incorrect. It took me a while to figure out where you may have gone wrong. I believe you are looking at the max number of groups per parade, which is 8. But as far as I can tell, you can have up to 112 parades. Of course that would only be feasible if each parade's PRC (parade repetition cycle) was 7, and each of those 112 (16 * 7) parades only used one group per subframe.

If you are looking for the maximum M/H bandwidth, it would be about double the number above. There are 16 slots per M/H subframe, and each subframe contains 156 TS packets. If the slot contains an M/H group then the M/H group uses 118 of the 156 packets, or 75.64% of the available bandwidth. There is no limitation on the number of slots that can contain an M/H group, i.e. all 16 slots can contain M/H groups.
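For concreteness, the arithmetic above can be checked in a few lines. This is just a sketch of the bandwidth math; the 19.39 Mbps figure is the nominal 8-VSB payload rate, and these are gross (pre-M/H-FEC) numbers:

```python
TS_RATE_MBPS = 19.39        # nominal ATSC 8-VSB payload rate
SLOTS_PER_SUBFRAME = 16
PACKETS_PER_SLOT = 156
PACKETS_PER_GROUP = 118     # TS packets an M/H group consumes in a slot

def mh_rate_mbps(groups_per_subframe):
    """Gross bandwidth consumed by M/H for a given number of groups per subframe."""
    used = groups_per_subframe * PACKETS_PER_GROUP
    total = SLOTS_PER_SUBFRAME * PACKETS_PER_SLOT
    return TS_RATE_MBPS * used / total

print(round(mh_rate_mbps(8), 2))    # 7.33 -- the "8 group" figure quoted above
print(round(mh_rate_mbps(16), 2))   # 14.67 -- all 16 slots, "about double"
print(round(100 * PACKETS_PER_GROUP / PACKETS_PER_SLOT, 2))  # 75.64 (% of a slot)
```

So Trip's 7.33 Mbps corresponds to 8 groups per subframe, and filling all 16 slots roughly doubles it.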

I would also note that the number of groups can change from frame to frame (note: not subframe to subframe; each subframe within a frame must contain the same group layout), so categorizing an entire ATSC stream as containing a "number of groups" is probably not correct. That said, I am not sure whether, in actual use, there will be any parades with a PRC value other than 1 (perhaps not for video, but I can easily see low-bandwidth data doing this). For example, if they wanted to allocate an average of 3.5 groups per subframe to a parade, I'm not sure whether they could set the number of groups for the parade to 7 but set the PRC to 2 (i.e., the parade would consume 7 slots per subframe in every other frame), or whether setting the PRC to a value other than 1 is only for low-bandwidth services that want to consume less than one group per subframe on average. I guess that would probably depend on the buffering standards for M/H video, which I have not looked at.
 

TVTom51

DTVUSA Member
#6
I believe this is incorrect. It took me a while to figure out where you may have gone wrong. I believe you are looking at the max number of groups per parade, which is 8. But as far as I can tell, you can have up to 112 parades. Of course that would only be feasible if each parade's PRC (parade repetition cycle) was 7, and each of those 112 (16 * 7) parades only used one group per subframe.

If you are looking for the maximum M/H bandwidth, it would be about double the number above. There are 16 slots per M/H subframe, and each subframe contains 156 TS packets. If the slot contains an M/H group then the M/H group uses 118 of the 156 packets, or 75.64% of the available bandwidth. There is no limitation on the number of slots that can contain an M/H group, i.e. all 16 slots can contain M/H groups.

I would note also that the number of groups can change from frame to frame (note - not subframe to subframe, i.e. each subframe within a frame must contain the same group layout), so categorizing an entire ATSC stream as containing a "number of groups" is probably not correct, although I am not sure in actual use if there will be any parades with a PRC value other than 1 (perhaps not for video, but I can easily see low bandwidth data doing this). I'm not sure if for example they wanted to allocate an average of 3.5 groups per subframe to a parade if they could set the number of groups for the parade to 7, but set the PRC to 2 (i.e. the parade would consume 7 slots per subframe in every other frame), or if setting the PRC to a value other than 1 is only for low bandwidth services that want to consume less than 1 group per subframe on average. I guess that would probably depend on the buffering standards for M/H video, which I have not looked at.
When you say parade, what does that mean?
 

jsmar

DTVUSA Jr. Member
#7
When you say parade, what does that mean?
It's from the A/153 (ATSC M/H) standard, but I think it is terrible terminology. My guess is that the committee thought it was funny, because a parade can contain one or two "ensembles" (think musically). Each ensemble encapsulates an IP datagram stream. What I am not sure of yet is whether there will be only one "service" (i.e., video) per IP stream or multiple services per IP stream, or, in the other direction, whether a service can span multiple streams from different parades. In the IP world packets can traverse multiple routes, so I would imagine that it is possible, although I don't know if that would be supported.

I've been concentrating on the very low levels of the standard right now, but I plan to try to fully understand the higher levels eventually.
 

TVTom51

DTVUSA Member
#8
It's from the A/153 (ATSC M/H) standard, but I think it is terrible terminology. My guess is that the committee thought it was funny because a parade can contain one or two "ensembles" (think musically). Each ensemble encapsulates an IP datagram stream. What I am not sure of yet is whether or not there will be only one "service" (i.e. video) per IP stream, or multiple services per IP stream, or in the other direction, can a service span multiple streams from different parades. In the IP world packets can traverse multiple routes, so I would imagine that it is possible, although I don't know if that would be supported.

I've been concentrating on the very low levels of the standard right now, but I plan to try to fully understand the higher levels eventually.
Appreciate the breakdown. :thumb:
 

jsmar

DTVUSA Jr. Member
#9
Hmm, I'm used to the AVS Forum where I can edit my posts as appropriate. It appears that there is a fairly short time limit on editing posts in this forum.

Anyway, I wanted to add something to my post above regarding setting the number of groups for a parade to 7 and the PRC to 2. That was just an example; it could also be done by alternating the number of groups between 3 and 4 every other frame. I've seen some posts from Trip (sorry Trip, I'm not trying to pick on you ;)) indicating that the number of groups is completely static. That may be true in current practice, but it is not what is in the standard. The number of groups for each parade can be changed dynamically from frame to frame (but not subframe to subframe). In fact, the standard specifically enables this: it signals the number of groups for the current frame during the first two subframes, then switches to signaling, in advance, the number of groups for the next frame the parade will appear in during the 3rd, 4th, and 5th subframes.
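To make the averaging concrete, here's a trivial sketch of the two equivalent allocations mentioned above (7 groups with PRC=2, versus alternating 3 and 4 groups every frame). This is just the arithmetic from my description, not anything taken from the standard's signaling tables:

```python
def avg_groups_per_subframe(nog, prc):
    # A parade carrying NoG groups appears in only one frame out of every PRC frames,
    # so its long-run average slot consumption per subframe is NoG / PRC.
    return nog / prc

print(avg_groups_per_subframe(7, 2))   # 3.5 -- NoG=7, PRC=2
print(sum([3, 4]) / 2)                 # 3.5 -- alternating NoG=3 and NoG=4 per frame
```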
 

Trip

Moderator, Webmaster of: Rabbit Ears
Staff member
#10
I'd rather learn what's right than continue to be incorrect. :)

I've seen Harris, I think (though it could have been someone else), specify that 8-NoG limit as well, or that seemed to be the implication. They talked about dedicating 7.33 Mbps to mobile and the rest to "high quality HD" (a laugh, I know), but it's certainly possible that I misinterpreted.

Of course, in practice, I've seen no station go over 6 NoG so far, and I don't think I've seen a station change the NoG. I had assumed WHUT might test with 8, given they only had a single SD stream plus UpdateTV, but they didn't.

I'm glad someone understands that spec, because I sure don't. And really, I don't have the time to pick through it. My only question is, will you be writing software to decode it when you're done? ;)

- Trip
 

jsmar

DTVUSA Jr. Member
#11
What I am not sure of yet is whether or not there will be only one "service" (i.e. video) per IP stream, or multiple services per IP stream, or in the other direction, can a service span multiple streams from different parades. In the IP world packets can traverse multiple routes, so I would imagine that it is possible, although I don't know if that would be supported.
So, I've been concentrating on Part 2 of the standard, but I just took a quick look at Part 3, which answers the above question. Yes, you can have multiple services per ensemble, and that will actually happen most of the time in practice. The standard also allows a service to span multiple ensembles, in which case the ensembles form an "M/H Multiplex"; however, that is not likely to happen anytime soon, because the standard notes that it would require parallel RS decoding, and 1st-generation M/H devices will most likely not have that capability. So no 14 Mbps MPEG-4 1080p60 video in the near future :).

Here are two relevant quotes from Part 3 of the standard:
Normally, a single M/H Service is completely contained within a single M/H Ensemble. However, there may be situations where it is desirable to have an M/H Service that has components in multiple M/H Ensembles. A receiver must have the ability to decode multiple RS Frames concurrently in order to properly render such services, since a receiver needs to decode one sequence of RS Frames for each M/H Ensemble that it accesses concurrently.

It can facilitate rendering of services that span multiple Ensembles if all the IP datagrams in the ensembles have the same IP protocol version, and the UDP/IP addresses of the UDP/IP streams in the multiple Ensembles have been coordinated to avoid UDP/IP address collisions, so that a device can treat the Ensembles as a single IP subnet being accessed through a single network interface. A set of Ensembles in which the IP versions and UDP/IP addresses have been coordinated in this way is called an M/H Multiplex.
Note: It is recommended to assume that the receiver devices deployed in the initial M/H market have the ability to decode only a single sequence of RS Frames concurrently. Therefore, it is recommended to restrict each service to a single Ensemble for initial ATSC-M/H broadcasts.
 

jsmar

DTVUSA Jr. Member
#12
I've seen I think it was Harris (though it could have been someone else) specify that 8 NoG limit as well, or that seemed to be the implication. They talked about dedicating 7.33 Mbps to mobile and then the rest to "high quality HD" (a laugh, I know) but it's certainly possible that I misinterpreted.
It's possible that that limit existed in the Harris MPH submittal before it was merged to create the ATSC M/H standard.
Of course, in practice, I've seen no station go over 6 NoG so far, and I don't think I've seen a station change the NoG. I had assumed WHUT might test with 8, given they only had a single SD stream plus UpdateTV, but they didn't.
Yeah, it takes the ninth slot to be allocated before a slot in the lower half of a VSB field gets used (slots are allocated 1 in 4, then 1 in 2 before going to every slot). Since the ATSC randomizer is synced to the beginning of the field, I have to adjust for such a case. But I suspect I'll never actually get to test that situation.
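As a sketch, the "1 in 4, then 1 in 2, then every slot" ordering described above can be generated like this. The exact allocation table lives in A/153 Part 2; this reconstruction from the prose, and the assumption that odd slot numbers land in the lower half of a VSB field, are mine:

```python
def slot_allocation_order(slots=16):
    """Order in which the 16 subframe slots get used as groups are added:
    every 4th slot first (0,4,8,12), then the remaining even slots
    (2,6,10,14), then the odd slots."""
    order = []
    for start in (0, 2, 1, 3):
        order.extend(range(start, slots, 4))
    return order

order = slot_allocation_order()
print(order)     # [0, 4, 8, 12, 2, 6, 10, 14, 1, 5, 9, 13, 3, 7, 11, 15]
print(order[8])  # 1 -- the ninth slot allocated is the first odd-numbered one
```

Under the odd-slots-are-lower-half assumption, this matches the observation that the ninth allocated slot is the first one to fall in the lower half of a field.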
I'm glad someone understands that spec, because I sure don't. And really, I don't have the time to pick through it. My only question is, will you be writing software to decode it when you're done? ;)
I was going to send you an update via email, but I thought perhaps this may be of interest to others in this forum. I found and joined this forum because there didn't seem to be an equivalent forum on the AVS forum. I'm not sure the level of detail that will follow will be of interest to others in this forum, but I figured I'd do it once, and let people tell me if this is at an inappropriate level.

Yes, I've already mentioned elsewhere that I was going to try to decode mobile DTV data, and I implied it above. I've already started writing code, because there is no better way to learn a standard than to actually try to implement it. You can read the same thing over and over again and not really understand what it is saying. When you try to implement it, you also find the holes (i.e., the stuff that is not mentioned and may require reading between the lines).

I assume you are referring to the ability to decode mobile DTV using a Legacy DTV decoder chip that has no knowledge of the Mobile DTV standard. Sure, that does interest me, but I'm not sure how useful this ability will be. My primary goal is being able to get detailed technical information out of the mobile stream, not to get video, although I may do something in that area to satisfy my curiosity.

First, you've been assuming that the mobile data obtained via a legacy decoder would actually be decodable, whereas I have had my doubts. I noticed a few statements elsewhere that seemed to indicate that this ability was actually a goal of the standard, but it clearly is not. The actual goal was just that mobile DTV data not interfere with the function of legacy receivers in any way.

I wasn't even sure the data would show up and not be dropped by a legacy decoder, but then you had TSReader output showing mobile DTV data with either pid 0x1eee or pid 0x1ff9. So that interested me, and as you know, I pursued getting a capture from someone else, since there are no stations in my area currently broadcasting mobile DTV. I was able to get some captures from the Washington D.C. area.

But again, just because the legacy decoders were providing data didn't mean that it would be useful or complete. As I read the standard in more detail I went back and forth regarding whether I thought it would be possible or not. Let me give some details why I thought that it may not be possible.

Mobile DTV data is not meant to go through the standard ATSC decoding process, because at the transmitting side, a Mobile DTV-aware exciter splits the mobile data out and handles it differently. After the data traverses the Studio-to-Transmitter Link, mobile DTV data follows a different path than the standard TS main services. The first step in the normal TS main service chain is to randomize the data. Mobile DTV data bypasses this step, because it was randomized earlier in the process, and the system doesn't want to harm the "all powerful" mobile training sequences.

The next step is Reed-Solomon encoding. Normal TS main data goes through a standard encoding process where the 187 data bytes in each packet are appended with 20 bytes of parity data. Mobile data rearranges every packet in a group (i.e., a different pattern for each of the 118 packets in that group) and does a non-standard (i.e., non-systematic) Reed-Solomon encoding where the 20 parity bytes are placed in different, and not even consecutive, locations in the packet. So the resulting 207-byte packet has real data after the 187th byte. Once again, it's all about finding holes for the training sequences, and also not breaking legacy receivers.

Next we reach the ATSC interleaver. Both TS main and Mobile DTV data go through this, but the Mobile DTV data was "de-interleaved" in a complementary fashion earlier, so the interleaver is actually restoring the data to the order that a Mobile DTV decoder would like to see it.
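That "pre-de-interleave at the studio, re-interleave in the modulator" trick only works because the interleaver is exactly invertible. A generic convolutional byte interleaver/de-interleaver pair shows the idea; the B=52 branches with delays growing by M=4 bytes are the A/53 parameters as I recall them (treat them as assumptions):

```python
from collections import deque

def _conv(delays):
    """Convolutional (de)interleaver: round-robin over branches, where
    branch i is a FIFO shift register of the given delay in bytes."""
    regs = [deque([0] * d) for d in delays]
    state = {"i": 0}
    def step(byte):
        i = state["i"]
        state["i"] = (i + 1) % len(regs)
        if not regs[i]:
            return byte          # zero-delay branch passes straight through
        regs[i].append(byte)
        return regs[i].popleft()
    return step

def make_interleaver(B=52, M=4):
    return _conv([i * M for i in range(B)])

def make_deinterleaver(B=52, M=4):
    return _conv([(B - 1 - i) * M for i in range(B)])

# Round trip: after a fixed flush delay of B*(B-1)*M bytes, the
# de-interleaver reproduces the interleaver's input exactly.
il, dl = make_interleaver(), make_deinterleaver()
D = 52 * 51 * 4
data = [i % 256 for i in range(D + 64)]
out = [dl(il(b)) for b in data]
print(out[D:] == data[:64])   # True
```

The fixed, known round-trip delay is what lets the M/H framing pre-compensate so the on-air interleaver output lands where the mobile decoder wants it.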

The next stage is trellis encoding, and the Mobile DTV-aware modified trellis encoder has the ability to support initialization sequences so that certain outputs can be guaranteed (once again, we gotta get those training sequences through unmolested). The remaining steps (pilot insertion, 8-VSB modulation, etc.) are common.

So, now let's look at how an ATSC legacy decoder is going to handle this. First, it appears that the trellis decoding is not an issue, i.e. the extra support for initialization doesn't affect the decoding side, so we are OK there. At this point a mobile DTV decoder would take the data as is and not follow the legacy path. The legacy decoder is going to de-interleave the data, but that is reversible, so we're OK there. It's the next step that really made me think all hope was lost for a while.

A legacy decoder thinks that every packet has 187 bytes of data followed by 20 bytes of RS parity. So it error-checks/corrects the data accordingly and then just DROPS the 20 bytes of what it thinks is RS parity. Oops. For Mobile DTV, those 20 bytes contain valid data. It wasn't even clear to me that the mobile RS encoder was following the same algorithm, since it was placing parity distributed throughout the packet. Luckily, the "do no harm" goal bails us out here. At first I wasn't sure why it would make any difference if the legacy decoders detected a bunch of parity errors, since they weren't going to decode those packets anyway. But then I realized that all those parity errors would affect the S/N quality meters on legacy tuners, making people think they were getting bad reception when they really were not, since only the M/H packets would appear to be bad. So, even though the parity is distributed throughout the packet, it has to satisfy the legacy decoders. Which means that those 20 bytes of valid data are recoverable by just re-encoding the 187 bytes of data to generate the 20 legacy parity bytes, which are actually M/H data.
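The re-encoding step is just ordinary systematic RS(207,187) encoding over the 187 surviving bytes. A sketch, using the t=10 code over GF(256) with field polynomial x^8+x^4+x^3+x^2+1 and generator roots alpha^0..alpha^19 as I read A/53 (verify those against the standard before relying on this):

```python
# GF(256) log/antilog tables for field polynomial 0x11D.
EXP = [0] * 512
LOG = [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11D          # x^8 + x^4 + x^3 + x^2 + 1
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]  # wraparound so products never index out of range

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_parity(data, nroots=20):
    """Parity of a systematic RS codeword: data(x)*x^nroots mod g(x)."""
    g = [1]
    for i in range(nroots):
        g = poly_mul(g, [1, EXP[i]])   # g(x) *= (x + alpha^i)
    msg = list(data) + [0] * nroots
    for i in range(len(data)):         # polynomial long division
        c = msg[i]
        if c:
            for j in range(1, len(g)):
                msg[i + j] ^= gf_mul(g[j], c)
    return msg[-nroots:]

def poly_eval(p, x):
    r = 0
    for c in p:
        r = gf_mul(r, x) ^ c
    return r
```

Self-check: any 187-byte payload plus its `rs_parity` forms a 207-byte codeword that evaluates to zero at every generator root, which is exactly the property a legacy decoder's parity check enforces.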

The next step in the decoding process is "de-randomizing". But note that the mobile DTV data was never randomized in the first place, so the legacy decoder is now randomizing it when it shouldn't. The randomizing process involves XOR'ing the data with a pseudo-random sequence that is synchronized with the beginning of a VSB field. That requires some information that the hardware decoder has (where the VSB field starts) which is not available once the data is delivered from the decoder. Luckily, each M/H slot can contain only 118 out of 156 packets as mobile data, and those 118 packets are consecutive. If all the packets could be mobile data, then we'd receive a stream of mobile data and potentially have no idea where a slot starts. But given that there always has to be non-mobile data between each 118-packet sequence, we can easily determine where packet 0 is. However, that is only half the problem, because packet 0 can be in the top half or the bottom half of a VSB field, and the pseudo-random sequence will be different for the two. As I mentioned above, we may never see mobile data from the bottom half in practice, but I have to allow for the possibility. In this case the known training data becomes useful. Packet 0 has 4 known training bytes at known locations within the packet. Since there are only two possible random sequences that can be used to encode the packet, I can simply try both and check the results against the known values. I can eventually extend this to extracting sync even when someone prefilters the transport stream and only provides the mobile DTV data. In that case I probably want to match more than 4 bytes, but some packets have 18-20 bytes of training data for me to compare against, which will allow me to detect and synchronize the stream.
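The "try both sequences against the training bytes" step can be sketched generically. Everything concrete here is made up for illustration: the two candidate sequences stand in for the real A/53 randomizer output at the two possible field offsets, and the training positions/values are invented (the real ones come from A/153):

```python
def pick_phase(packet, candidates, training):
    """Return the index of the candidate randomizer sequence whose XOR
    with the packet reproduces every known training byte, or None."""
    for idx, seq in enumerate(candidates):
        if all(packet[pos] ^ seq[pos] == want for pos, want in training.items()):
            return idx
    return None

# Illustration with made-up sequences and training positions/values:
seq_a = [(7 * i + 3) % 256 for i in range(207)]      # stand-in: top-half phase
seq_b = [(13 * i + 5) % 256 for i in range(207)]     # stand-in: bottom-half phase
training = {10: 0xA5, 50: 0x5A, 100: 0xFF, 150: 0x00}
packet = [0] * 207
for pos, val in training.items():
    packet[pos] = val ^ seq_b[pos]                   # packet was scrambled with seq_b
print(pick_phase(packet, [seq_a, seq_b], training))  # 1
```

With more known training bytes (the 18-20 byte packets mentioned above), the same matching can also re-acquire sync on a prefiltered stream, at lower false-match risk.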

In summary, I've only recently confirmed that it will be possible to decode M/H data from a legacy decoder. Note that I can't just take the data from the legacy decoder and continue decoding it. I first have to undo all the damage that the legacy decoder did in order to recover the M/H data before I can actually start to decode.

This all has some implications. First, hopefully no one expects to be able to receive M/H data with a legacy decoder while traveling. You will need a real M/H decoder to do that. If you can't receive the TS main services (i.e., the normal HD/SD channels being broadcast in the same data stream), then you're probably not going to be able to get the M/H data. Sure, the M/H data has more error-correcting code, but if the legacy decoder fails to recover the data using the standard RS code (which a mobile DTV decoder doesn't even use), you will not be able to recover the 20 data bytes that the legacy decoder dropped on the floor, probably making it almost impossible for those better RS codes to be effective. Besides, receiving data while moving is mostly about locking the signal in the first place, which is something a mobile DTV decoder does with the aid of those training sequences distributed throughout the group, which a legacy decoder knows nothing about. You can't recover what you can't receive in the first place.

Note also that there are three levels of mobile RS codes, and the top level, which adds 48 parity bytes, probably can't be decoded in realtime with the latest Pentium chips. I'm not going to even try. I'm going to leverage the only useful thing that the legacy decoder did for me, and that was recovering data via the legacy RS code (with 20 parity bytes effectively). I'm going to assume the data is valid beyond that point and just "turbo decode" the mobile RS coding by just dropping those 24/36/48 parity bytes. Perhaps I might be persuaded otherwise later if the "digital cliff" for this case is wider than I think, but it's a real low priority for me at this point. I'm certainly not going to write hand tuned assembly language to attempt real time decoding of this stuff.

Finally, even if I succeed in writing software that can decode this stuff, I'm not sure how useful it will be to others, or even whether I will be allowed to distribute it. I work for a software company that "owns" my intellectual property, i.e., I can be fired for writing software and distributing it without the company's permission. So before I can provide anything I am going to need to get that permission. The process for releasing open source software is at least somewhat reasonable, but I can't say I'm looking forward to going through it.

Also, as I think I shared with Trip previously, I own an HD HomeRun and write all my software for Linux. I don't usually write software for Windows, although I have on occasion ported some command line stuff to Windows. So if you're hoping for Windows software you may be out of luck (although perhaps someone who cares enough can port whatever I produce if I get permission to distribute it).

The other issue is that there is no Video for Linux driver for the HDHomeRun, since it is a network tuner and not a USB/PCI tuner. So most of the software I've written is only useful for HDHomeRun owners, since I program it using Silicon Dust's proprietary API. I can probably port the software to a Video for Linux interface, but I would need access to a Video for Linux compatible tuner. Perhaps Trip's loaner program might be useful here :). The software will be able to process raw TS files, so although cumbersome, you could capture (either on Windows or Linux) and then post process on Linux with my software (at least that is a goal).

Note that I also have no intention of actually writing video/audio decoders or incorporating such libraries in my software. If I even go as far in the decoding process to actually extract video/audio data (which is not a high priority for me, although I am certainly curious about the actual content) I intend to just extract the services and either save them in a file in a format that some media player could use, or stream it directly to a compatible media player (VLC for example).

So, the current status of my software is at the point of having restored the M/H data. It syncs and de-randomizes the data, then RS encodes the data to recover the missing 20 bytes, and then re-interleaves it so that it can now be decoded. Luckily I've been able to verify that the software is working correctly due to the known training sequences, some of which go through the 20 "parity" bytes, so I know that I am properly recovering real data and my understanding of what is going on matches the standard.

So, anyone other than Trip actually interested in this level of detail? What is the interest level in being able to extract mobile DTV programming with your non moving Linux desktop/laptop using a legacy DTV tuner?
 

Trip

Moderator, Webmaster of: Rabbit Ears
Staff member
#13
If you want a loaner tuner to toy with, let me know. I can have one sent to you at some point in the near-ish future.

(I'm much too tired from moving into school today to comment on the rest of that exhaustive message.)

- Trip
 

Boo-Ray

DTVUSA Member
#16
Potentially you would be able to choose a mobile program you wanted to watch, and either have it streamed to media player like VLC or extracted to a file in a format that could be played by a media player.
Understood. I'd be interested in the overall breakdown of ATSC-M/H, but I have no interest in recording and/or watching on a media player. I think we've gone a bit off-subject from the OP, though.
 

t_newt

DTVUSA Rookie
#17
jsmar, what you are doing is cool--I like the way ATSC-M/H could be used to receive television in weak signal areas, for people who've lost over-the-air due to ATSC conversion.

I just happen to have a WinTV-HVR1800 sitting around. If you are interested I could send it to you. I'd rather it be used by someone doing something cool than just sitting around collecting dust.
 

jsmar

DTVUSA Jr. Member
#19
jsmar, what you are doing is cool--I like the way ATSC-M/H could be used to receive television in weak signal areas, for people who've lost over-the-air due to ATSC conversion.
Thanks. Just to be clear, what I am doing won't be useful to that class of people; i.e., if I succeed, my software would only facilitate reception of ATSC M/H programming for those who are able to get good reception of normal ATSC programming with their legacy receivers. In order to take advantage of the increased reception ability of ATSC M/H, you will need a true ATSC M/H receiver.

I just happen to have a WinTV-HVR1800 sitting around. If you are interested I could send it to you. I'd rather it be used by someone doing something cool than just sitting around collecting dust.
Thanks, but I won't actually need a tuner like that unless I succeed in the first phase of decoding the ATSC M/H data.
 

otaota

DTVUSA Rookie
#20
So, anyone other than Trip actually interested in this level of detail? What is the interest level in being able to extract mobile DTV programming with your non moving Linux desktop/laptop using a legacy DTV tuner?
Yes, you've got other interested parties listening! Thanks for sharing.

I also try to follow the ATSC discussions and it's great to hear from someone actually doing some real implementation.

Sooner or later we'll see a finalized A/153 standard and chipsets hit the market, so I'm guessing the M/H programming streams will be readily available then. I think the value of the work you're doing is mostly in:

1) independent understanding and clarification of the standard (all the real stuff you need to know about implementation that is not written in the spec). I'm sure there will eventually be the need for many updates and addendums to the standard to fill in these missing details.

2) a diagnostics/monitoring tool. You will be able to provide information about the M/H structures at a much lower level than what is probably exposed by commercial M/H chipsets. It may be possible to put together a poor man's protocol analyzer as an alternative to the rare and expensive equivalents from the likes of Harris or Rohde & Schwarz.



Have you ever looked into the USRP project? There are already several people working on ATSC decoding with that platform and M/H is likely to be tackled soon, if not already.

Cheers,
Chuck
 